*** pgsql/doc/src/FAQ/Attic/FAQ_DEV.html	2005/11/05 01:36:42	1.85.4.7
--- pgsql/doc/src/FAQ/Attic/FAQ_DEV.html	2008/04/22 09:26:36	1.85.4.8
Last updated: Thu Oct 27 09:48:14 EDT 2005

Current maintainer: Bruce Momjian (pgman@candle.pha.pa.us)
The most recent version of this document can be viewed at http://www.postgresql.org/files/documentation/faqs/FAQ_DEV.html.
Download the code and have a look around. See 1.7.

Subscribe to and read the pgsql-hackers mailing list (often termed 'hackers'). This is where the major contributors and core members of the project discuss development.

PostgreSQL is developed mostly in the C programming language. It also makes use of Yacc and Lex.

The source code is targeted at most of the popular Unix platforms and the Windows environment (XP, Windows 2000, and up).

Most developers make use of the open source development tool chain. If you have contributed to open source software before, you will probably be familiar with these tools. They include: GCC (http://gcc.gnu.org), GDB (www.gnu.org/software/gdb/gdb.html), autoconf (www.gnu.org/software/autoconf/), and GNU make (www.gnu.org/software/make/make.html).

Developers using this tool chain on Windows make use of MinGW (see http://www.mingw.org/).

Some developers use compilers from other software vendors with mixed results.
Developers who are regularly rebuilding the source often pass the --enable-depend flag to configure. The result is that when you make a modification to a C header file, all files that depend upon that file are also rebuilt.

You can learn more about these features by consulting the archives, the SQL standards and the recommended texts (see 1.10).

Send an email to pgsql-hackers with a proposal for what you want to do (assuming your contribution is not trivial). Working in isolation is not advisable because others might be working on the same TODO item, or you might have misunderstood the TODO item. In the email, discuss both the internal implementation method you plan to use, and any user-visible changes (new syntax, etc). For complex patches, it is important to get community feedback on your proposal before starting work. Failure to do so might mean your patch is rejected.
A web site is maintained for patches awaiting review, http://momjian.postgresql.org/cgi-bin/pgpatches, and those that are being kept for the next release, http://momjian.postgresql.org/cgi-bin/pgpatches2.

Generate the patch in contextual diff format. If you are unfamiliar with this, you might find the script src/tools/makediff/difforig useful.
Ensure that your patch is generated against the most recent version of the code. If it is a patch adding new functionality, the most recent version is CVS HEAD; if it is a bug fix, this will be the most recent version of the branch which suffers from the bug (for more on branches in PostgreSQL, see 1.15).

Finally, submit the patch to pgsql-patches@postgresql.org. It will be reviewed by other contributors to the project and will be either accepted or sent back for further work. Also, please try to include documentation changes as part of the patch. If you can't do that, let us know and we will manually update the documentation when the patch is applied.
Other than documentation in the source tree itself, you can find some papers/presentations discussing the code at http://www.postgresql.org/developer.

There are several ways to obtain the source tree. Occasional developers can just get the most recent source tree snapshot from ftp://ftp.postgresql.org.

Regular developers might want to take advantage of anonymous access to our source code management system. The source tree is currently hosted in CVS. For details of how to obtain the source from CVS see http://developer.postgresql.org/docs/postgres/cvs.html.

Basic system testing

The easiest way to test your code is to ensure that it builds against the latest version of the code and that it does not generate compiler warnings.
You are well advised to pass --enable-cassert to configure. This will turn on assertions within the source, which often catch bugs that would otherwise cause data corruption or segmentation violations. This generally makes debugging much easier.
Then, perform run time testing via psql.

Regression test suite
The next step is to test your changes against the existing regression test suite. To do this, issue "make check" in the root directory of the source tree. If any tests fail, investigate.

If you've deliberately changed existing behavior, this change might cause a regression test failure but not any actual regression. If so, you should also patch the regression test suite.
Other run time testing

Some developers make use of tools such as valgrind (http://valgrind.kde.org) for memory testing, gprof (which comes with the GNU binutils suite) and oprofile (http://oprofile.sourceforge.net/) for profiling, and other related tools.

What about unit testing, static analysis, model checking...?

There have been a number of discussions about other testing frameworks and some developers are exploring these ideas.
Keep in mind the Makefiles do not have the proper dependencies for include files. You have to do a make clean and then another make. If you are using GCC you can use the --enable-depend option of configure to have the compiler compute the dependencies automatically.

First, all the files in the src/tools directory are designed for developers.
    RELEASE_CHANGES changes we have to make for each release
    backend         description/flowchart of the backend directories
    ccsym           find standard defines made by your compiler
    copyright       fixes copyright notices
    entab           converts tabs to spaces, used by pgindent
    find_static     finds functions that could be made static
    find_typedef    finds typedefs in the source code
    find_badmacros  finds macros that use braces incorrectly
    fsync           a script to provide information about the cost of cache
                    syncing system calls
    make_ctags      make vi 'tags' file in each directory
    make_diff       make *.orig and diffs of source
    make_etags      make emacs 'etags' files
    make_keywords   make comparison of our keywords and SQL'92
    make_mkid       make mkid ID files
    pgcvslog        used to generate a list of changes for each release
    pginclude       scripts for adding/removing include files
    pgindent        indents source files
    pgtest          a semi-automated build system
    thread          a thread testing script
In src/include/catalog:
    unused_oids     a script which generates unused OIDs for use in system
                    catalogs
    duplicate_oids  finds duplicate OIDs in system catalog definitions

If you point your browser at the tools/backend/index.html file, you will see a few paragraphs describing the data flow, the backend components in a flow chart, and a description of the shared memory area. You can click on any flowchart box to see a description. If you then click on the directory name, you will be taken to the source directory, to browse the actual source code behind it. We also have several README files in some source directories to describe the function of the module. The browser will display these when you enter the directory also. The tools/backend directory is also contained on our web page under the title How PostgreSQL Processes a Query.
Second, you really should have an editor that can handle tags, so you can tag a function call to see the function definition, and then tag inside that function to see an even lower-level function, and then back out twice to return to the original function. Most editors support this via tags or etags files.

Third, you need to get id-utils from ftp://ftp.gnu.org/gnu/id-utils/

By running tools/make_mkid, an archive of source symbols can be created that can be rapidly queried.

Some developers make use of cscope, which can be found at http://cscope.sf.net/. Others use glimpse, which can be found at http://webglimpse.net/.

tools/make_diff has tools to create patch diff files that can be applied to the distribution. This produces context diffs, which is our preferred format.
Our standard format is to indent each code level with one tab, where each tab is four spaces. You will need to set your editor to display tabs as four spaces:

    vi in ~/.exrc:
        set tabstop=4
        set sw=4
    more:
        more -x4
    less:
        less -x4
    emacs:
        M-x set-variable tab-width

        or

        (c-add-style "pgsql"
            '("bsd"
                (indent-tabs-mode . t)
                (c-basic-offset . 4)
                (tab-width . 4)
                (c-offsets-alist .
                    ((case-label . +)))
            )
            nil )   ; t = set this style, nil = don't

        (defun pgsql-c-mode ()
          (c-mode)
          (c-set-style "pgsql")
        )

        and add this to your autoload list (modify file path in macro):

        (setq auto-mode-alist
          (cons '("\\`/home/andrew/pgsql/.*\\.[chyl]\\'" . pgsql-c-mode)
                auto-mode-alist))
    or
        /*
         * Local variables:
         * tab-width: 4
         * c-indent-level: 4
         * c-basic-offset: 4
         * End:
         */
pgindent is run on all source files just before each beta test period. It auto-formats all source files to make them consistent. Comment blocks that need specific line breaks should be formatted as block comments, where the comment starts as /*------. These comments will not be reformatted in any way.
pginclude contains scripts used to add needed #include's to include files, and remove unneeded #include's.
When adding system types, you will need to assign OIDs to them. There is also a script called unused_oids in pgsql/src/include/catalog that shows the unused OIDs.
I have four good books: An Introduction to Database Systems, by C.J. Date, Addison-Wesley; A Guide to the SQL Standard, by C.J. Date, et al., Addison-Wesley; Fundamentals of Database Systems, by Elmasri and Navathe; and Transaction Processing, by Jim Gray, Morgan Kaufmann.

There is also a database performance site, with a handbook on-line written by Jim Gray, at http://www.benchmarkresources.com.
The files configure and configure.in are part of the GNU autoconf package. Configure allows us to test for various capabilities of the OS, and to set variables that can then be tested in C programs and Makefiles. Autoconf is installed on the PostgreSQL main server. To add options to configure, edit configure.in, and then run autoconf to generate configure.

When configure is run by the user, it tests various OS capabilities, stores those in config.status and config.cache, and modifies a list of *.in files. For example, if there exists a Makefile.in, configure generates a Makefile that contains substitutions for all @var@ parameters found by configure.

When you need to edit files, make sure you don't waste time modifying files generated by configure. Edit the *.in file, and re-run configure to recreate the needed file. If you run make distclean from the top-level source directory, all files derived by configure are removed, so you see only the files contained in the source distribution.
There are a variety of places that need to be modified to add a new port. First, start in the src/template directory. Add an appropriate entry for your OS. Also, use src/config.guess to add your OS to src/template/.similar. You shouldn't match the OS version exactly. The configure test will look for an exact OS version number, and if not found, find a match without the version number. Edit src/configure.in to add your new OS. (See the configure item above.) You will need to run autoconf, or patch src/configure too.

Then, check src/include/port and add your new OS file, with appropriate values. Hopefully, there is already locking code in src/include/storage/s_lock.h for your CPU. There is also a src/makefiles directory for port-specific Makefile handling. There is a backend/port directory if you need special files for your OS.

There is always a temptation to use the newest operating system features as soon as they arrive. We resist that temptation.
First, we support 15+ operating systems, so any new feature has to be well established before we will consider it. Second, most new whiz-bang features don't provide dramatic improvements. Third, they usually have some downside, such as decreased reliability or additional code required. Therefore, we don't rush to use new features but rather wait for the feature to be established, then ask for testing to show that a measurable improvement is possible.
As an example, threads are not currently used in the backend code because:

So, we are not ignorant of new features. It is just that we are cautious about their adoption. The TODO list often contains links to discussions showing our reasoning in these areas.
This was written by Lamar Owen:

2001-05-03

As to how the RPMs are built -- to answer that question sanely requires me to know how much experience you have with the whole RPM paradigm. 'How is the RPM built?' is a multifaceted question. The obvious simple answer is that I maintain:

I then download and build on as many different canonical distributions as I can -- currently I am able to build on Red Hat 6.2, 7.0, and 7.1 on my personal hardware. Occasionally I receive opportunity from certain commercial enterprises such as Great Bridge and PostgreSQL, Inc. to build on other distributions.

I test the build by installing the resulting packages and running the regression tests. Once the build passes these tests, I upload to the postgresql.org ftp server and make a release announcement. I am also responsible for maintaining the RPM download area on the ftp site.

You'll notice I said 'canonical' distributions above. That simply means that the machine is as stock 'out of the box' as practical -- that is, everything (except select few programs) on these boxen are installed by RPM; only official Red Hat released RPMs are used (except in unusual circumstances involving software that will not alter the build -- for example, installing a newer non-RedHat version of the Dia diagramming package is OK -- installing Python 2.1 on the box that has Python 1.5.2 installed is not, as that alters the PostgreSQL build). The RPM as uploaded is built to as close to out-of-the-box pristine as is possible. Only the standard released 'official to that release' compiler is used -- and only the standard official kernel is used as well.

For a time I built on Mandrake for RedHat consumption -- no more. Nonstandard RPM building systems are worse than useless. Which is not to say that Mandrake is useless! By no means is Mandrake useless -- unless you are building Red Hat RPMs -- and Red Hat is useless if you're trying to build Mandrake or SuSE RPMs, for that matter. But I would be foolish to use 'Lamar Owen's Super Special RPM Blend Distro 0.1.2' to build for public consumption! :-)

I _do_ attempt to make the _source_ RPM compatible with as many distributions as possible -- however, since I have limited resources (as a volunteer RPM maintainer) I am limited as to the amount of testing said build will get on other distributions, architectures, or systems.

And, while I understand people's desire to immediately upgrade to the newest version, realize that I do this as a side interest -- I have a regular, full-time job as a broadcast engineer/webmaster/sysadmin/Technical Director which occasionally prevents me from making timely RPM releases. This happened during the early part of the 7.1 beta cycle -- but I believe I was pretty much on the ball for the Release Candidates and the final release.

I am working towards a more open RPM distribution -- I would dearly love to more fully document the process and put everything into CVS -- once I figure out how I want to represent things such as the spec file in a CVS form. It makes no sense to maintain a changelog, for instance, in the spec file in CVS when CVS does a better job of changelogs -- I will need to write a tool to generate a real spec file from a CVS spec-source file that would add version numbers, changelog entries, etc to the result before building the RPM. IOW, I need to rethink the process -- and then go through the motions of putting my long RPM history into CVS one version at a time so that version history information isn't lost.

As to why all these files aren't part of the source tree, well, unless there was a large cry for it to happen, I don't believe it should. PostgreSQL is very platform-agnostic -- and I like that. Including the RPM stuff as part of the Official Tarball (TM) would, IMHO, slant that agnostic stance in a negative way. But maybe I'm too sensitive to that. I'm not opposed to doing that if that is the consensus of the core group -- and that would be a sneaky way to get the stuff into CVS :-). But if the core group isn't thrilled with the idea (and my instinct says they're not likely to be), I am opposed to the idea -- not to keep the stuff to myself, but to not hinder the platform-neutral stance. IMHO, of course.

Of course, there are many projects that DO include all the files necessary to build RPMs from their Official Tarball (TM).

This was written by Tom Lane:

2001-05-07

If you just do basic "cvs checkout", "cvs update", "cvs commit", then you'll always be dealing with the HEAD version of the files in CVS. That's what you want for development, but if you need to patch past stable releases then you have to be able to access and update the "branch" portions of our CVS repository. We normally fork off a branch for a stable release just before starting the development cycle for the next release.

The first thing you have to know is the branch name for the branch you are interested in getting at. To do this, look at some long-lived file, say the top-level HISTORY file, with "cvs status -v" to see what the branch names are. (Thanks to Ian Lance Taylor for pointing out that this is the easiest way to do it.) Typical branch names are:
    REL7_1_STABLE
    REL7_0_PATCHES
    REL6_5_PATCHES
OK, so how do you do work on a branch? By far the best way is to create a separate checkout tree for the branch and do your work in that. Not only is that the easiest way to deal with CVS, but you really need to have the whole past tree available anyway to test your work. (And you *better* test your work. Never forget that dot-releases tend to go out with very little beta testing --- so whenever you commit an update to a stable branch, you'd better be doubly sure that it's correct.)

Normally, to checkout the head branch, you just cd to the place you want to contain the toplevel "pgsql" directory and say
    cvs ... checkout pgsql
To get a past branch, you cd to wherever you want it and say

    cvs ... checkout -r BRANCHNAME pgsql
For example, just a couple days ago I did
    mkdir ~postgres/REL7_1
    cd ~postgres/REL7_1
    cvs ... checkout -r REL7_1_STABLE pgsql
and now I have a maintenance copy of 7.1.*.
When you've done a checkout in this way, the branch name is "sticky": CVS automatically knows that this directory tree is for the branch, and whenever you do "cvs update" or "cvs commit" in this tree, you'll fetch or store the latest version in the branch, not the head version. Easy as can be.

So, if you have a patch that needs to apply to both the head and a recent stable branch, you have to make the edits and do the commit twice, once in your development tree and once in your stable branch tree. This is kind of a pain, which is why we don't normally fork the tree right away after a major release --- we wait for a dot-release or two, so that we won't have to double-patch the first wave of fixes.

There are three versions of the SQL standard: SQL-92, SQL:1999, and SQL:2003. They are endorsed by ANSI and ISO. Draft versions can be downloaded from:

Some SQL standards web pages are:

Many technical questions held by those new to the code have been answered on the pgsql-hackers mailing list - the archives of which can be found at http://archives.postgresql.org/pgsql-hackers/.
If you cannot find discussion of your particular question, feel free to put it to the list.
Major contributors also answer technical questions, including questions about development of new features, on IRC at irc.freenode.net in the #postgresql channel.
PostgreSQL website development is discussed on the pgsql-www@postgresql.org mailing list. There is a project page where the source code is available at http://gborg.postgresql.org/project/pgweb/projdisplay.php; the code for the next version of the website is under the "portal" module. You will also find code for the "techdocs" website if you would like to contribute to that. A temporary todo list for current website development issues is available at http://xzilla.postgresql.org/todo
You first need to find the tuples (rows) you are interested in. There are two ways. First, SearchSysCache() and related functions allow you to query the system catalogs. This is the preferred way to access system tables, because the first call to the cache loads the needed rows, and future requests can return the results without accessing the base table. The caches use system table indexes to look up tuples. A list of available caches is located in src/backend/utils/cache/syscache.c. src/backend/utils/cache/lsyscache.c contains many column-specific cache lookup functions.

The rows returned are cache-owned versions of the heap rows. Therefore, you must not modify or delete the tuple returned by SearchSysCache(). What you should do is release it with ReleaseSysCache() when you are done using it; this informs the cache that it can discard that tuple if necessary. If you neglect to call ReleaseSysCache(), then the cache entry will remain locked in the cache until end of transaction, which is tolerable but not very desirable.
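As a rough illustration, here is a minimal sketch of a syscache lookup on pg_class. The exact call signature varies between PostgreSQL releases (older ones take the cache identifier plus four key Datums; newer ones provide SearchSysCache1() and friends, and GETSTRUCT moved between headers), so treat this as a pattern rather than an exact recipe:

    #include "postgres.h"
    #include "access/htup_details.h"    /* GETSTRUCT(); access/htup.h on older releases */
    #include "catalog/pg_class.h"
    #include "utils/syscache.h"

    /* Sketch: look up a relation's pg_class row by OID via the syscache. */
    static int
    get_relnatts_example(Oid relid)
    {
        HeapTuple   tuple;
        int         relnatts;

        /* RELOID is the cache keyed on pg_class.oid */
        tuple = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
        if (!HeapTupleIsValid(tuple))
            elog(ERROR, "cache lookup failed for relation %u", relid);

        relnatts = ((Form_pg_class) GETSTRUCT(tuple))->relnatts;

        /* Tell the cache we are done with its copy of the row. */
        ReleaseSysCache(tuple);

        return relnatts;
    }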
If you can't use the system cache, you will need to retrieve the data directly from the heap table, using the buffer cache that is shared by all backends. The backend automatically takes care of loading the rows into the buffer cache.

Open the table with heap_open(). You can then start a table scan with heap_beginscan(), then use heap_getnext() and continue as long as HeapTupleIsValid() returns true. Then do a heap_endscan(). Keys can be assigned to the scan. No indexes are used, so all rows are going to be compared to the keys, and only the valid rows returned.

You can also use heap_fetch() to fetch rows by block number/offset. While scans automatically lock/unlock rows from the buffer cache, with heap_fetch(), you must pass a Buffer pointer, and ReleaseBuffer() it when completed.
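The following is a hedged sketch of such a sequential scan over pg_class. The scan API has changed considerably across releases (the snapshot argument, the scan descriptor type, and the later table_open()/table_beginscan() wrappers), so check the headers of the version you are working with before copying this:

    #include "postgres.h"
    #include "access/heapam.h"
    #include "access/htup_details.h"    /* GETSTRUCT(); access/htup.h on older releases */
    #include "catalog/pg_class.h"
    #include "utils/snapmgr.h"          /* older releases used SnapshotNow instead */

    /* Sketch: walk every row of pg_class and look at each tuple. */
    static void
    scan_pg_class_example(void)
    {
        Relation      rel;
        HeapScanDesc  scan;
        HeapTuple     tuple;

        rel = heap_open(RelationRelationId, AccessShareLock);
        scan = heap_beginscan(rel, GetActiveSnapshot(), 0, NULL);

        while (HeapTupleIsValid(tuple = heap_getnext(scan, ForwardScanDirection)))
        {
            Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple);

            /* process one row; e.g. classForm->relname, classForm->relnatts */
            (void) classForm;
        }

        heap_endscan(scan);
        heap_close(rel, AccessShareLock);
    }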
Once you have the row, you can get data that is common to all tuples, like t_self and t_oid, by merely accessing the HeapTuple structure entries. If you need a table-specific column, you should take the HeapTuple pointer, and use the GETSTRUCT() macro to access the table-specific start of the tuple. You then cast the pointer as a Form_pg_proc pointer if you are accessing the pg_proc table, or Form_pg_type if you are accessing pg_type. You can then access the columns by using a structure pointer:

    ((Form_pg_class) GETSTRUCT(tuple))->relnatts
You must not directly change live tuples in this way. The best way is to use heap_modifytuple() and pass it your original tuple, and the values you want changed. It returns a palloc'ed tuple, which you pass to heap_replace(). You can delete tuples by passing the tuple's t_self to heap_destroy(). You use t_self for heap_update() too. Remember, tuples can be either system cache copies, which might go away after you call ReleaseSysCache(), or read directly from disk buffers, which go away when you heap_getnext(), heap_endscan(), or ReleaseBuffer(), in the heap_fetch() case. Or it may be a palloc'ed tuple, that you must pfree() when finished.
Table, column, type, function, and view names are stored in system tables in columns of type Name. Name is a fixed-length, null-terminated type of NAMEDATALEN bytes. (The default value for NAMEDATALEN is 64 bytes.)

    typedef struct nameData
    {
        char        data[NAMEDATALEN];
    } NameData;
    typedef NameData *Name;
Table, column, type, function, and view names that come into the backend via user queries are stored as variable-length, null-terminated character strings.
Many functions are called with both types of names, e.g. heap_open(). Because the Name type is null-terminated, it is safe to pass it to a function expecting a char *. Because there are many cases where on-disk names (Name) are compared to user-supplied names (char *), there are many cases where Name and char * are used interchangeably.
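As a small, hedged illustration of that interchangeability, the sketch below uses the NameStr() macro, which simply yields the data member of a NameData, so the Name column can be handed to ordinary string functions:

    #include "postgres.h"
    #include "access/htup_details.h"    /* GETSTRUCT(); access/htup.h on older releases */
    #include "catalog/pg_class.h"

    /* Sketch: a Name column can be treated as an ordinary C string. */
    static bool
    is_relation_named(HeapTuple tuple, const char *wanted)
    {
        Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple);

        /* NameStr() returns classForm->relname.data, a NUL-terminated char array */
        return strcmp(NameStr(classForm->relname), wanted) == 0;
    }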
We do this because this allows a consistent way to pass data inside the backend in a flexible way. Every node has a NodeTag which specifies what type of data is inside the Node. Lists are groups of Nodes chained together as a forward-linked list.

Here are some of the List manipulation commands:
lfirst(i), lfirst_int(i), lfirst_oid(i)
    return the data (a pointer, integer and OID respectively) at list element i.

lnext(i)
    return the next list element after i.

foreach(i, list)
    loop through list, assigning each list element to i. It is important to note that i is a ListCell *, not the data in the List element. You need to use lfirst(i) to get at the data. Here is a typical code snippet that loops through a List containing Var *'s and processes each one:

        List        *list;
        ListCell    *i;

        foreach(i, list)
        {
            Var *var = lfirst(i);

            /* process var here */
        }

lcons(node, list)
    add node to the front of list, or create a new list with node if list is NIL.

lappend(list, node)
    add node to the end of list. This is more expensive than lcons.

nconc(list1, list2)
    Concatenate list2 onto the end of list1.

length(list)
    return the length of the list.

nth(i, list)
    return the i'th element in list.

lconsi, ...
    There are integer versions of these: lconsi, lappendi, etc. Also versions for OID lists: lconso, lappendo, etc.
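For instance, building and walking a List with these primitives might look like the sketch below. The helper names have shifted over the years (newer releases spell some of them list_length(), list_nth(), list_concat(), and the String/Value node changed in recent versions), so treat the names as illustrative rather than exact:

    #include "postgres.h"
    #include "nodes/pg_list.h"
    #include "nodes/value.h"        /* makeString(), strVal(); layout varies by version */

    /* Sketch: build a List of String nodes and walk it. */
    static List *
    build_column_list_example(void)
    {
        List       *cols = NIL;
        ListCell   *lc;

        cols = lappend(cols, makeString(pstrdup("id")));    /* append to the end */
        cols = lcons(makeString(pstrdup("name")), cols);    /* push onto the front */

        foreach(lc, cols)
        {
            Value  *v = (Value *) lfirst(lc);

            elog(DEBUG1, "column: %s", strVal(v));
        }

        return cols;
    }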
You can print nodes easily inside gdb. First, to disable output truncation when you use the gdb print command:

    (gdb) set print elements 0

Instead of printing values in gdb format, you can use the next two commands to print out List, Node, and structure contents in a verbose format that is easier to understand. Lists are unrolled into nodes, and nodes are printed in detail. The first prints in a short format, and the second in a long format:

    (gdb) call print(any_pointer)
    (gdb) call pprint(any_pointer)

The output appears in the postmaster log file, or on your screen if you are running a backend directly without a postmaster.
The structures passed around through the parser, rewriter, optimizer, and executor require quite a bit of support. Most structures have support routines in src/backend/nodes used to create, copy, read, and output those structures (in particular, the files copyfuncs.c and equalfuncs.c). Make sure you add support for your new field to these files. Find any other places the structure might need code for your new field. mkid is helpful with this (see 1.9).
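To give a feel for what that support looks like, here is a hedged sketch in the style of copyfuncs.c and equalfuncs.c. The node type MyNode and the fields existingField and myNewField are hypothetical; the COPY_SCALAR_FIELD/COMPARE_SCALAR_FIELD macros are the convention those files use for simple scalar members, but check the real routines for the node you are changing:

    /* Sketch for copyfuncs.c: the node's _copy routine */
    static MyNode *
    _copyMyNode(const MyNode *from)
    {
        MyNode     *newnode = makeNode(MyNode);

        COPY_SCALAR_FIELD(existingField);
        COPY_SCALAR_FIELD(myNewField);      /* don't forget the new field */

        return newnode;
    }

    /* Sketch for equalfuncs.c: the node's _equal routine */
    static bool
    _equalMyNode(const MyNode *a, const MyNode *b)
    {
        COMPARE_SCALAR_FIELD(existingField);
        COMPARE_SCALAR_FIELD(myNewField);

        return true;
    }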
palloc() and pfree() are used in place of malloc() and free() because we find it easier to automatically free all memory allocated when a query completes. This assures us that all memory that was allocated gets freed even if we have lost track of where we allocated it. There are special non-query contexts that memory can be allocated in. These affect when the allocated memory is freed by the backend.
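A brief sketch of the difference this makes in practice; MemoryContextSwitchTo() is the standard way to allocate in a different context, and the longer-lived context chosen here (TopMemoryContext) is just an example:

    #include "postgres.h"
    #include "utils/memutils.h"     /* TopMemoryContext */

    /* Sketch: per-query vs. longer-lived allocations. */
    static void
    allocation_example(void)
    {
        /*
         * Allocated in the current (normally per-query) context: freed
         * automatically when that context is reset, even without pfree().
         */
        char   *scratch = palloc(128);

        snprintf(scratch, 128, "temporary work area");
        pfree(scratch);             /* optional; the context reset would free it anyway */

        /* For data that must outlive the query, switch to a longer-lived context. */
        {
            MemoryContext oldcxt = MemoryContextSwitchTo(TopMemoryContext);
            char   *persistent = pstrdup("survives until backend exit or explicit pfree");

            MemoryContextSwitchTo(oldcxt);
            pfree(persistent);      /* the caller is responsible for freeing this one */
        }
    }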
ereport() is used to send messages to the front-end, and optionally terminate the current query being processed. The first parameter is an ereport level of DEBUG (levels 1-5), LOG, INFO, NOTICE, ERROR, FATAL, or PANIC. NOTICE prints on the user's terminal and the postmaster logs. INFO prints only to the user's terminal and LOG prints only to the server logs. (These can be changed from postgresql.conf.) ERROR prints in both places, and terminates the current query, never returning from the call. FATAL terminates the backend process. The remaining parameters of ereport are a printf-style set of parameters to print.

ereport(ERROR) frees most memory and open file descriptors so you don't need to clean these up before the call.
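A typical call looks roughly like this sketch; errcode() and errmsg() are the usual auxiliary functions, and the specific SQLSTATE chosen is just an example:

    #include "postgres.h"

    /* Sketch: report an error and abort the current query. */
    static void
    check_divisor(int divisor)
    {
        if (divisor == 0)
            ereport(ERROR,
                    (errcode(ERRCODE_DIVISION_BY_ZERO),
                     errmsg("division by zero is not allowed here")));

        /* ereport(ERROR) does not return, so no cleanup code is needed after it. */
    }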
Normally, transactions cannot see the rows they modify. This allows UPDATE foo SET x = x + 1 to work correctly. However, there are cases where a transaction needs to see rows affected in previous parts of the transaction. This is accomplished using a Command Counter. Incrementing the counter allows transactions to be broken into pieces so each piece can see rows modified by previous pieces. CommandCounterIncrement() increments the Command Counter, creating a new part of the transaction.
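In backend C code the pattern is roughly the following sketch; the surrounding insert/update calls are left out because their helper names vary by version, but CommandCounterIncrement() itself is declared in access/xact.h:

    #include "postgres.h"
    #include "access/xact.h"        /* CommandCounterIncrement() */

    /*
     * Sketch: after modifying rows earlier in the same transaction, bump the
     * command counter so later code in this transaction can see the changes.
     */
    static void
    make_prior_changes_visible(void)
    {
        /* ... insert or update some tuples here ... */

        CommandCounterIncrement();

        /*
         * Scans and syscache lookups performed from here on will see the rows
         * modified above.
         */
    }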
First, try running configure with the --enable-cassert option; many assert()s monitor the progress of the backend and halt the program when something unexpected occurs.
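In the backend these checks are written with the Assert() macro, which compiles to a no-op unless --enable-cassert was used. A sketch of typical usage, with a hypothetical function and invariant:

    #include "postgres.h"

    /* Sketch: sanity-check an internal invariant; only active with --enable-cassert. */
    static int
    checked_index_example(int index, int nitems)
    {
        /*
         * If a caller violates this assumption, an assert-enabled build aborts
         * here with a file/line report instead of silently corrupting memory.
         */
        Assert(index >= 0 && index < nitems);

        return index;
    }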
The postmaster has a -d option that allows even more detailed information to be reported. The -d option takes a number that specifies the debug level. Be warned that high debug level values generate large log files.

If the postmaster is not running, you can actually run the postgres backend from the command line, and type your SQL statement directly. This is recommended only for debugging purposes. If you have compiled with debugging symbols, you can use a debugger to see what is happening. Because the backend was not started from postmaster, it is not running in an identical environment and locking/backend interaction problems might not be duplicated.
If the postmaster is running, start psql in one window, then find the PID of the postgres process used by psql using SELECT pg_backend_pid(). Use a debugger to attach to the postgres PID. You can set breakpoints in the debugger and issue queries from the other window. If you are looking to find the location that is generating an error or log message, set a breakpoint at errfinish.
If you are debugging postgres startup, you can set PGOPTIONS="-W n", then start psql. This will cause startup to delay for n seconds so you can attach to the process with the debugger, set any breakpoints, and continue through the startup sequence.
You can also compile with profiling to see what functions are taking execution time. The backend profile files will be deposited in the pgsql/data directory. The client profile file will be put in the client's current directory. Linux requires a compile with -DLINUX_PROFILE for proper profiling.
The developer FAQ can be found on the PostgreSQL wiki:

http://wiki.postgresql.org/wiki/Development_information