Bison
Table of Contents
Introduction  1
2 Examples  27
2.1 Reverse Polish Notation Calculator  27
2.1.1 Declarations for rpcalc  27
2.1.2 Grammar Rules for rpcalc  28
2.1.2.1 Explanation of input  28
2.1.2.2 Explanation of line  29
2.1.2.3 Explanation of exp  29
2.1.3 The rpcalc Lexical Analyzer  30
2.1.4 The Controlling Function  31
2.1.5 The Error Reporting Routine  32
2.1.6 Running Bison to Make the Parser  32
2.1.7 Compiling the Parser Implementation File  32
2.2 Infix Notation Calculator: calc  33
2.3 Simple Error Recovery  35
2.4 Location Tracking Calculator: ltcalc  35
2.4.1 Declarations for ltcalc  35
2.4.2 Grammar Rules for ltcalc  36
Bibliography  230
Introduction
Bison is a general-purpose parser generator that converts an annotated context-free gram-
mar into a deterministic LR or generalized LR (GLR) parser employing LALR(1), IELR(1)
or canonical LR(1) parser tables. Once you are proficient with Bison, you can use it to
develop a wide range of language parsers, from those used in simple desk calculators to
complex programming languages.
Bison is upward compatible with Yacc: all properly-written Yacc grammars ought to
work with Bison with no change. Anyone familiar with Yacc should be able to use Bison
with little trouble. You need to be fluent in C, C++ or Java programming in order to use
Bison or to understand this manual.
We begin with tutorial chapters that explain the basic concepts of using Bison and show
three explained examples, each building on the last. If you don’t know Bison or Yacc, start
by reading these chapters. Reference chapters follow, which describe specific aspects of
Bison in detail.
Bison was written originally by Robert Corbett. Richard Stallman made it Yacc-
compatible. Wilfred Hansen of Carnegie Mellon University added multi-character string
literals and other features. Since then, Bison has grown more robust and evolved many
other new features thanks to the hard work of a long list of volunteers. For details, see the
THANKS and ChangeLog files included in the Bison distribution.
This edition corresponds to version 3.7.6 of Bison.
Preamble
The GNU General Public License is a free, copyleft license for software and other kinds of
works.
The licenses for most software and other practical works are designed to take away your
freedom to share and change the works. By contrast, the GNU General Public License is
intended to guarantee your freedom to share and change all versions of a program—to make
sure it remains free software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to any other work
released this way by its authors. You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General
Public Licenses are designed to make sure that you have the freedom to distribute copies
of free software (and charge for them if you wish), that you receive source code or can get
it if you want it, that you can change the software or use pieces of it in new free programs,
and that you know you can do these things.
To protect your rights, we need to prevent others from denying you these rights or asking
you to surrender the rights. Therefore, you have certain responsibilities if you distribute
copies of the software, or if you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for a fee, you
must pass on to the recipients the same freedoms that you received. You must make sure
that they, too, receive or can get the source code. And you must show them these terms so
they know their rights.
Developers that use the GNU GPL protect your rights with two steps: (1) assert copy-
right on the software, and (2) offer you this License giving you legal permission to copy,
distribute and/or modify it.
For the developers’ and authors’ protection, the GPL clearly explains that there is no
warranty for this free software. For both users’ and authors’ sake, the GPL requires that
modified versions be marked as changed, so that their problems will not be attributed
erroneously to authors of previous versions.
Some devices are designed to deny users access to install or run modified versions of the
software inside them, although the manufacturer can do so. This is fundamentally incom-
patible with the aim of protecting users’ freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to use, which is pre-
cisely where it is most unacceptable. Therefore, we have designed this version of the GPL
to prohibit the practice for those products. If such problems arise substantially in other
domains, we stand ready to extend this provision to those domains in future versions of the
GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents. States should not
allow patents to restrict development and use of software on general-purpose computers, but
in those that do, we wish to avoid the special danger that patents applied to a free program
could make it effectively proprietary. To prevent this, the GPL assures that patents cannot
be used to render the program non-free.
The precise terms and conditions for copying, distribution and modification follow.
The “System Libraries” of an executable work include anything, other than the work as
a whole, that (a) is included in the normal form of packaging a Major Component, but
which is not part of that Major Component, and (b) serves only to enable use of the
work with that Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A “Major Component”,
in this context, means a major essential component (kernel, window system, and so
on) of the specific operating system (if any) on which the executable work runs, or a
compiler used to produce the work, or an object code interpreter used to run it.
The “Corresponding Source” for a work in object code form means all the source code
needed to generate, install, and (for an executable work) run the object code and to
modify the work, including scripts to control those activities. However, it does not
include the work’s System Libraries, or general-purpose tools or generally available
free programs which are used unmodified in performing those activities but which are
not part of the work. For example, Corresponding Source includes interface definition
files associated with source files for the work, and the source code for shared libraries
and dynamically linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those subprograms
and other parts of the work.
The Corresponding Source need not include anything that users can regenerate auto-
matically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of copyright on the
Program, and are irrevocable provided the stated conditions are met. This License ex-
plicitly affirms your unlimited permission to run the unmodified Program. The output
from running a covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your rights of fair use
or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without
conditions so long as your license otherwise remains in force. You may convey covered
works to others for the sole purpose of having them make modifications exclusively
for you, or provide you with facilities for running those works, provided that you
comply with the terms of this License in conveying all material for which you do not
control copyright. Those thus making or running the covered works for you must do
so exclusively on your behalf, under your direction and control, on terms that prohibit
them from making any copies of your copyrighted material outside their relationship
with you.
Conveying under any other circumstances is permitted solely under the conditions
stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
3. Protecting Users’ Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological measure under
any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty
adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention
of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of
technological measures to the extent such circumvention is effected by exercising rights
under this License with respect to the covered work, and you disclaim any intention
to limit operation or modification of the work as a means of enforcing, against the
work’s users, your or third parties’ legal rights to forbid circumvention of technological
measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program’s source code as you receive it, in any
medium, provided that you conspicuously and appropriately publish on each copy an
appropriate copyright notice; keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code; keep intact all
notices of the absence of any warranty; and give all recipients a copy of this License
along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer
support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to produce it from
the Program, in the form of source code under the terms of section 4, provided that
you also meet all of these conditions:
a. The work must carry prominent notices stating that you modified it, and giving a
relevant date.
b. The work must carry prominent notices stating that it is released under this Li-
cense and any conditions added under section 7. This requirement modifies the
requirement in section 4 to “keep intact all notices”.
c. You must license the entire work, as a whole, under this License to anyone who
comes into possession of a copy. This License will therefore apply, along with any
applicable section 7 additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no permission to license
the work in any other way, but it does not invalidate such permission if you have
separately received it.
d. If the work has interactive user interfaces, each must display Appropriate Legal
Notices; however, if the Program has interactive interfaces that do not display
Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which
are not by their nature extensions of the covered work, and which are not combined
with it such as to form a larger program, in or on a volume of a storage or distribution
medium, is called an “aggregate” if the compilation and its resulting copyright are
not used to limit the access or legal rights of the compilation’s users beyond what the
individual works permit. Inclusion of a covered work in an aggregate does not cause
this License to apply to the other parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms of sections 4 and
5, provided that you also convey the machine-readable Corresponding Source under
the terms of this License, in one of these ways:
a. Convey the object code in, or embodied in, a physical product (including a phys-
ical distribution medium), accompanied by the Corresponding Source fixed on a
durable physical medium customarily used for software interchange.
b. Convey the object code in, or embodied in, a physical product (including a physi-
cal distribution medium), accompanied by a written offer, valid for at least three
years and valid for as long as you offer spare parts or customer support for that
product model, to give anyone who possesses the object code either (1) a copy of
the Corresponding Source for all the software in the product that is covered by this
License, on a durable physical medium customarily used for software interchange,
for a price no more than your reasonable cost of physically performing this con-
veying of source, or (2) access to copy the Corresponding Source from a network
server at no charge.
c. Convey individual copies of the object code with a copy of the written offer to
provide the Corresponding Source. This alternative is allowed only occasionally
and noncommercially, and only if you received the object code with such an offer,
in accord with subsection 6b.
d. Convey the object code by offering access from a designated place (gratis or for
a charge), and offer equivalent access to the Corresponding Source in the same
way through the same place at no further charge. You need not require recipients
to copy the Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source may be on
a different server (operated by you or a third party) that supports equivalent
copying facilities, provided you maintain clear directions next to the object code
saying where to find the Corresponding Source. Regardless of what server hosts
the Corresponding Source, you remain obligated to ensure that it is available for
as long as needed to satisfy these requirements.
e. Convey the object code using peer-to-peer transmission, provided you inform other
peers where the object code and Corresponding Source of the work are being offered
to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the Cor-
responding Source as a System Library, need not be included in conveying the object
code work.
A “User Product” is either (1) a “consumer product”, which means any tangible per-
sonal property which is normally used for personal, family, or household purposes, or
(2) anything designed or sold for incorporation into a dwelling. In determining whether
a product is a consumer product, doubtful cases shall be resolved in favor of coverage.
For a particular product received by a particular user, “normally used” refers to a
typical or common use of that class of product, regardless of the status of the par-
ticular user or of the way in which the particular user actually uses, or expects or is
expected to use, the product. A product is a consumer product regardless of whether
the product has substantial commercial, industrial or non-consumer uses, unless such
uses represent the only significant mode of use of the product.
“Installation Information” for a User Product means any methods, procedures, autho-
rization keys, or other information required to install and execute modified versions of a
covered work in that User Product from a modified version of its Corresponding Source.
The information must suffice to ensure that the continued functioning of the modified
object code is in no case prevented or interfered with solely because modification has
been made.
If you convey an object code work under this section in, or with, or specifically for
use in, a User Product, and the conveying occurs as part of a transaction in which
the right of possession and use of the User Product is transferred to the recipient in
perpetuity or for a fixed term (regardless of how the transaction is characterized),
the Corresponding Source conveyed under this section must be accompanied by the
Installation Information. But this requirement does not apply if neither you nor any
third party retains the ability to install modified object code on the User Product (for
example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement
to continue to provide support service, warranty, or updates for a work that has been
modified or installed by the recipient, or for the User Product in which it has been
modified or installed. Access to a network may be denied when the modification itself
materially and adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with
this section must be in a format that is publicly documented (and with an implementa-
tion available to the public in source code form), and must require no special password
or key for unpacking, reading or copying.
7. Additional Terms.
“Additional permissions” are terms that supplement the terms of this License by mak-
ing exceptions from one or more of its conditions. Additional permissions that are
applicable to the entire Program shall be treated as though they were included in this
License, to the extent that they are valid under applicable law. If additional permis-
sions apply only to part of the Program, that part may be used separately under those
permissions, but the entire Program remains governed by this License without regard
to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any
additional permissions from that copy, or from any part of it. (Additional permissions
may be written to require their own removal in certain cases when you modify the
work.) You may place additional permissions on material, added by you to a covered
work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a covered
work, you may (if authorized by the copyright holders of that material) supplement
the terms of this License with terms:
a. Disclaiming warranty or limiting liability differently from the terms of sections 15
and 16 of this License; or
b. Requiring preservation of specified reasonable legal notices or author attributions
in that material or in the Appropriate Legal Notices displayed by works containing
it; or
c. Prohibiting misrepresentation of the origin of that material, or requiring that mod-
ified versions of such material be marked in reasonable ways as different from the
original version; or
d. Limiting the use for publicity purposes of names of licensors or authors of the
material; or
e. Declining to grant rights under trademark law for use of some trade names, trade-
marks, or service marks; or
f. Requiring indemnification of licensors and authors of that material by anyone who
conveys the material (or modified versions of it) with contractual assumptions
of liability to the recipient, for any liability that these contractual assumptions
directly impose on those licensors and authors.
All other non-permissive additional terms are considered “further restrictions” within
the meaning of section 10. If the Program as you received it, or any part of it, con-
tains a notice stating that it is governed by this License along with a term that is a
further restriction, you may remove that term. If a license document contains a further
restriction but permits relicensing or conveying under this License, you may add to a
covered work material governed by the terms of that license document, provided that
the further restriction does not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in the
relevant source files, a statement of the additional terms that apply to those files, or a
notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a sep-
arately written license, or stated as exceptions; the above requirements apply either
way.
8. Termination.
You may not propagate or modify a covered work except as expressly provided un-
der this License. Any attempt otherwise to propagate or modify it is void, and will
automatically terminate your rights under this License (including any patent licenses
granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a particular
copyright holder is reinstated (a) provisionally, unless and until the copyright holder
explicitly and finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means prior to 60 days
after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if
the copyright holder notifies you of the violation by some reasonable means, this is the
first time you have received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after your receipt of the
notice.
Termination of your rights under this section does not terminate the licenses of parties
who have received copies or rights from you under this License. If your rights have
been terminated and not permanently reinstated, you do not qualify to receive new
licenses for the same material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or run a copy of the
Program. Ancillary propagation of a covered work occurring solely as a consequence of
using peer-to-peer transmission to receive a copy likewise does not require acceptance.
However, nothing other than this License grants you permission to propagate or modify
any covered work. These actions infringe copyright if you do not accept this License.
Therefore, by modifying or propagating a covered work, you indicate your acceptance
of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically receives a license
from the original licensors, to run, modify and propagate that work, subject to this
License. You are not responsible for enforcing compliance by third parties with this
License.
An “entity transaction” is a transaction transferring control of an organization, or
substantially all assets of one, or subdividing an organization, or merging organizations.
If propagation of a covered work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever licenses to the work
the party’s predecessor in interest had or could give under the previous paragraph, plus
a right to possession of the Corresponding Source of the work from the predecessor in
interest, if the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or
affirmed under this License. For example, you may not impose a license fee, royalty, or
other charge for exercise of rights granted under this License, and you may not initiate
litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent
claim is infringed by making, using, selling, offering for sale, or importing the Program
or any portion of it.
11. Patents.
A “contributor” is a copyright holder who authorizes use under this License of the
Program or a work on which the Program is based. The work thus licensed is called
the contributor’s “contributor version”.
A contributor’s “essential patent claims” are all patent claims owned or controlled by
the contributor, whether already acquired or hereafter acquired, that would be infringed
by some manner, permitted by this License, of making, using, or selling its contributor
version, but do not include claims that would be infringed only as a consequence of
further modification of the contributor version. For purposes of this definition, “con-
trol” includes the right to grant patent sublicenses in a manner consistent with the
requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license
under the contributor’s essential patent claims, to make, use, sell, offer for sale, import
and otherwise run, modify and propagate the contents of its contributor version.
In the following three paragraphs, a “patent license” is any express agreement or com-
mitment, however denominated, not to enforce a patent (such as an express permission
to practice a patent or covenant not to sue for patent infringement). To “grant” such
a patent license to a party means to make such an agreement or commitment not to
enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the Corre-
sponding Source of the work is not available for anyone to copy, free of charge and under
the terms of this License, through a publicly available network server or other readily
accessible means, then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the patent license for this
particular work, or (3) arrange, in a manner consistent with the requirements of this
License, to extend the patent license to downstream recipients. “Knowingly relying”
means you have actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient’s use of the covered work in a country,
would infringe one or more identifiable patents in that country that you have reason
to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you convey,
or propagate by procuring conveyance of, a covered work, and grant a patent license
to some of the parties receiving the covered work authorizing them to use, propagate,
modify or convey a specific copy of the covered work, then the patent license you grant
is automatically extended to all recipients of the covered work and works based on it.
A patent license is “discriminatory” if it does not include within the scope of its cover-
age, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the
rights that are specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is in the business of
distributing software, under which you make payment to the third party based on the
extent of your activity of conveying the work, and under which the third party grants,
to any of the parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work conveyed by you (or
copies made from those copies), or (b) primarily for and in connection with specific
products or compilations that contain the covered work, unless you entered into that
arrangement, or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied license or
other defenses to infringement that may otherwise be available to you under applicable
patent law.
12. No Surrender of Others’ Freedom.
If conditions are imposed on you (whether by court order, agreement or otherwise) that
contradict the conditions of this License, they do not excuse you from the conditions
of this License. If you cannot convey a covered work so as to satisfy simultaneously
your obligations under this License and any other pertinent obligations, then as a
consequence you may not convey it at all. For example, if you agree to terms that
obligate you to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this License would
be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have permission to link or
combine any covered work with a work licensed under version 3 of the GNU Affero
General Public License into a single combined work, and to convey the resulting work.
The terms of this License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License, section 13,
concerning interaction through a network will apply to the combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of the GNU
General Public License from time to time. Such new versions will be similar in spirit
to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that
a certain numbered version of the GNU General Public License “or any later version”
applies to it, you have the option of following the terms and conditions either of that
numbered version or of any later version published by the Free Software Foundation.
If the Program does not specify a version number of the GNU General Public License,
you may choose any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU
General Public License can be used, that proxy’s public statement of acceptance of a
version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no
additional obligations are imposed on any author or copyright holder as a result of your
choosing to follow a later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PER-
MITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN
WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE
THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EX-
PRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE
OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFEC-
TIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR
CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO
MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE
LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, IN-
CIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR
INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO
LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUS-
TAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM
TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR
OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAM-
AGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided above cannot be given
local legal effect according to their terms, reviewing courts shall apply local law that
most closely approximates an absolute waiver of all civil liability in connection with
the Program, unless a warranty or assumption of liability accompanies a copy of the
Program in return for a fee.
You should have received a copy of the GNU General Public License
along with this program. If not, see https://www.gnu.org/licenses/.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short notice like this when it
starts in an interactive mode:
program Copyright (C) year name of author
This program comes with ABSOLUTELY NO WARRANTY; for details type ‘show w’.
This is free software, and you are welcome to redistribute it
under certain conditions; type ‘show c’ for details.
The hypothetical commands ‘show w’ and ‘show c’ should show the appropriate parts of
the General Public License. Of course, your program’s commands might be different; for a
GUI interface, you would use an “about box”.
You should also get your employer (if you work as a programmer) or school, if any, to
sign a “copyright disclaimer” for the program, if necessary. For more information on this,
and how to apply and follow the GNU GPL, see https://www.gnu.org/licenses/.
The GNU General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may consider it more
useful to permit linking proprietary applications with the library. If this is what you want
to do, use the GNU Lesser General Public License instead of this License. But first, please
read https://www.gnu.org/licenses/why-not-lgpl.html.
for C include ‘identifier’, ‘number’, ‘string’, plus one symbol for each keyword, operator
or punctuation mark: ‘if’, ‘return’, ‘const’, ‘static’, ‘int’, ‘char’, ‘plus-sign’, ‘open-brace’,
‘close-brace’, ‘comma’ and many more. (These tokens can be subdivided into characters,
but that is a matter of lexicography, not grammar.)
Here is a simple C function subdivided into tokens:
int /* keyword ‘int’ */
square (int x) /* identifier, open-paren, keyword ‘int’,
identifier, close-paren */
{ /* open-brace */
return x * x; /* keyword ‘return’, identifier, asterisk,
identifier, semicolon */
} /* close-brace */
The syntactic groupings of C include the expression, the statement, the declaration,
and the function definition. These are represented in the grammar of C by nonterminal
symbols ‘expression’, ‘statement’, ‘declaration’ and ‘function definition’. The full grammar
uses dozens of additional language constructs, each with its own nonterminal symbol, in
order to express the meanings of these four. The example above is a function definition;
it contains one declaration, and one statement. In the statement, each ‘x’ is an expression
and so is ‘x * x’.
Each nonterminal symbol must have grammatical rules showing how it is made out of
simpler constructs. For example, one kind of C statement is the return statement; this
would be described with a grammar rule which reads informally as follows:
A ‘statement’ can be made of a ‘return’ keyword, an ‘expression’ and a ‘semi-
colon’.
There would be many other rules for ‘statement’, one for each kind of statement in C.
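In Bison syntax, such a rule might be sketched as follows (the token name RETURN and the nonterminal names stmt and expr are illustrative, not taken from any particular grammar):
stmt:
  RETURN expr ';'
;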
One nonterminal symbol must be distinguished as the special one which defines a com-
plete utterance in the language. It is called the start symbol. In a compiler, this means a
complete input program. In the C language, the nonterminal symbol ‘sequence of definitions
and declarations’ plays this role.
For example, ‘1 + 2’ is a valid C expression—a valid part of a C program—but it is not
valid as an entire C program. In the context-free grammar of C, this follows from the fact
that ‘expression’ is not the start symbol.
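In a Bison grammar file the start symbol defaults to the nonterminal on the left-hand side of the first grammar rule; it can also be named explicitly with the %start declaration, as in this one-line sketch (the name program is hypothetical):
%start program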
The Bison parser reads a sequence of tokens as its input, and groups the tokens using the
grammar rules. If the input is valid, the end result is that the entire token sequence reduces
to a single grouping whose symbol is the grammar’s start symbol. If we use a grammar for
C, the entire input must be a ‘sequence of definitions and declarations’. If not, the parser
reports a syntax error.
Each grouping can also have a semantic value as well as its nonterminal symbol. For
example, in a calculator, an expression typically has a semantic value that is a number. In
a compiler for a programming language, an expression typically has a semantic value that
is a tree structure describing the meaning of the expression.
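In a Bison grammar this choice is reflected in the declaration of the semantic value type; for instance, a %union along the lines of the following sketch could hold either kind of value (the member names and the node type struct ast are hypothetical):
%union {
  double num;        /* a number, as in a calculator */
  struct ast *tree;  /* a syntax-tree node, as in a compiler */
}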
are never performed. When a reduction makes two parsers identical, causing them to merge,
Bison records both sets of semantic actions. Whenever the last two parsers merge, reverting
to the single-parser case, Bison resolves all the outstanding actions either by precedences
given to the grammar rules involved, or by performing both actions, and then calling a
designated user-defined function on the resulting values to produce an arbitrary merged
result.
If the input is syntactically incorrect, both branches fail and the parser reports a syntax
error as usual.
The effect of all this is that the parser seems to “guess” the correct branch to take, or in
other words, it seems to use more lookahead than the underlying LR(1) algorithm actually
allows for. In this example, LR(2) would suffice, but some cases that are not LR(k) for any k can also be handled this way.
In general, a GLR parser can take quadratic or cubic worst-case time, and the current
Bison parser even takes exponential time and space for some grammars. In practice, this
rarely happens, and for many grammars it is possible to prove that it cannot happen. The
present example contains only one conflict between two rules, and the type-declaration
context containing the conflict cannot be nested. So the number of branches that can exist
at any time is limited by the constant 2, and the parsing time is still linear.
Here is a Bison grammar corresponding to the example above. It parses a vastly simpli-
fied form of Pascal type declarations.
%token TYPE DOTDOT ID
%%
type_decl: TYPE ID '=' type ';' ;

type:
  '(' id_list ')'
| expr DOTDOT expr
;

id_list:
  ID
| id_list ',' ID
;

expr:
  '(' expr ')'
| expr '+' expr
| expr '-' expr
| expr '*' expr
| expr '/' expr
| ID
;
When used as a normal LR(1) grammar, Bison correctly complains about one
reduce/reduce conflict. In the conflicting situation the parser chooses one of the
alternatives, arbitrarily the one declared first. Therefore the following correct input is not
recognized:
type t = (a) .. b;
The parser can be turned into a GLR parser, while also telling Bison to be silent about the
one known reduce/reduce conflict, by adding these two declarations to the Bison grammar
file (before the first ‘%%’):
%glr-parser
%expect-rr 1
No change in the grammar itself is required. Now the parser recognizes all valid declarations,
according to the limited syntax above, transparently. In fact, the user does not even notice
when the parser splits.
So here we have a case where we can use the benefits of GLR, almost without disad-
vantages. Even in simple cases like this, however, there are at least two potential problems
to beware. First, always analyze the conflicts reported by Bison to make sure that GLR
splitting is only done where it is intended. A GLR parser splitting inadvertently may cause
problems less obvious than an LR parser statically choosing the wrong alternative in a con-
flict. Second, consider interactions with the lexer (see Section 7.1 [Semantic Info in Token
Kinds], page 132) with great care. Since a split parser consumes tokens without performing
any actions during the split, the lexer cannot obtain information via parser actions. Some
cases of lexer interactions can be eliminated by using GLR to shift the complications from
the lexer to the parser. You must check the remaining cases for correctness.
In our example, it would be safe for the lexer to return tokens based on their current
meanings in some symbol table, because no new symbols are defined in the middle of a type
declaration. Though it is possible for a parser to define the enumeration constants as they
are parsed, before the type declaration is completed, it actually makes no difference since
they cannot be used within the same enumerated type declaration.
%token TYPENAME ID
%right '='
%left '+'

%glr-parser

%%

prog:
  %empty
| prog stmt   { printf ("\n"); }
;
stmt:
  expr ';'  %dprec 1
| decl      %dprec 2
;

expr:
  ID               { printf ("%s ", $$); }
| TYPENAME '(' expr ')'
                   { printf ("%s <cast> ", $1); }
| expr '+' expr    { printf ("+ "); }
| expr '=' expr    { printf ("= "); }
;

decl:
  TYPENAME declarator ';'
                   { printf ("%s <declare> ", $1); }
| TYPENAME declarator '=' expr ';'
                   { printf ("%s <init-declare> ", $1); }
;

declarator:
  ID               { printf ("\"%s\" ", $1); }
| '(' declarator ')'
;
This models a problematic part of the C++ grammar—the ambiguity between certain dec-
larations and statements. For example,
T (x) = y+z;
parses as either an expr or a stmt (assuming that ‘T’ is recognized as a TYPENAME and
‘x’ as an ID). Bison detects this as a reduce/reduce conflict between the rules expr : ID
and declarator : ID, which it cannot resolve at the time it encounters x in the example
above. Since this is a GLR parser, it therefore splits the problem into two parses, one for
each choice of resolving the reduce/reduce conflict. Unlike the example from the previous
section (see Section 1.5.1 [Using GLR on Unambiguous Grammars], page 18), however,
neither of these parses “dies,” because the grammar as it stands is ambiguous. One of
the parsers eventually reduces stmt : expr ’;’ and the other reduces stmt : decl, after
which both parsers are in an identical state: they’ve seen ‘prog stmt’ and have the same
unprocessed input remaining. We say that these parses have merged.
At this point, the GLR parser requires a specification in the grammar of how to choose
between the competing parses. In the example above, the two %dprec declarations specify
that Bison is to give precedence to the parse that interprets the example as a decl, which
implies that x is a declarator. The parser therefore prints
"x" y z + T <init-declare>
The %dprec declarations only come into play when more than one parse survives. Con-
sider a different input string for this parser:
T (x) + y;
This is another example of using GLR to parse an unambiguous construct, as shown in
the previous section (see Section 1.5.1 [Using GLR on Unambiguous Grammars], page 18).
Here, there is no ambiguity (this cannot be parsed as a declaration). However, at the
time the Bison parser encounters x, it does not have enough information to resolve the
reduce/reduce conflict (again, between x as an expr or a declarator). In this case, no
precedence declaration is used. Again, the parser splits into two, one assuming that x is an
expr, and the other assuming x is a declarator. The second of these parsers then vanishes
when it sees +, and the parser prints
x T <cast> y +
Suppose that instead of resolving the ambiguity, you wanted to see all the possibilities.
For this purpose, you must merge the semantic actions of the two possible parsers, rather
than choosing one over the other. To do so, you could change the declaration of stmt as
follows:
stmt:
  expr ';' %merge <stmtMerge>
| decl     %merge <stmtMerge>
;
and define the stmtMerge function as:
static YYSTYPE
stmtMerge (YYSTYPE x0, YYSTYPE x1)
{
printf ("<OR> ");
return "";
}
with an accompanying forward declaration in the C declarations at the beginning of the
file:
%{
#define YYSTYPE char const *
static YYSTYPE stmtMerge (YYSTYPE x0, YYSTYPE x1);
%}
With these declarations, the resulting parser parses the first example as both an expr and
a decl, and prints
"x" y z + T <init-declare> x T <cast> y z + = <OR>
Bison requires that all of the productions that participate in any particular merge have
identical ‘%merge’ clauses. Otherwise, the ambiguity would be unresolvable, and the parser
will report an error during any parse that results in the offending merge.
1.5.3.2 YYERROR
Another Bison feature requiring special consideration is YYERROR (see Section 4.5 [Special
Features for Use in Actions], page 107), which you can invoke in a semantic action to
initiate error recovery. During deterministic GLR operation, the effect of YYERROR is the
same as its effect in a deterministic parser. The effect in a deferred action is similar,
but the precise point of the error is undefined; instead, the parser reverts to deterministic
operation, selecting an unspecified stack on which to continue with a syntax error. In
a semantic predicate (see Section 1.5.4 [Controlling a Parse with Arbitrary Predicates],
page 23) during nondeterministic parsing, YYERROR silently prunes the parse that invoked
the test.
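For instance, in a deterministic parser a rule might use YYERROR to reject input that is grammatically acceptable but semantically invalid, as in this sketch (it assumes integer semantic values; the rule and message text are purely illustrative):
exp:
  NUM
| exp '%' exp
    {
      if ($3 == 0)
        {
          yyerror ("modulus by zero");
          YYERROR;   /* start error recovery as if a syntax error had been seen */
        }
      else
        $$ = $1 % $3;
    }
;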
terministic parser, causes the stack in which it is reduced to die. In a deterministic parser,
it acts like YYERROR.
As the example shows, predicates otherwise look like semantic actions, and therefore
you must take them into account when determining the numbers to use for denoting the
semantic values of right-hand side symbols. Predicate actions, however, have no defined
value, and may not be given labels.
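The example referred to above is not reproduced in this excerpt; a semantic predicate is written with the ‘%?{...}’ construct, roughly as in the following sketch (new_syntax is assumed to be a flag set elsewhere, and id, new_args, old_args and f are assumed to be supplied by the rest of the grammar and its support code):
widget:
  %?{  new_syntax } "widget" id new_args  { $$ = f ($3, $4); }
| %?{ !new_syntax } "widget" id old_args  { $$ = f ($3, $4); }
;
Note that the predicate counts as a component of the rule, which is why the semantic values of id and new_args are referred to as $3 and $4.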
There is a subtle difference between semantic predicates and ordinary actions in nonde-
terministic mode, since the latter are deferred. For example, we could try to rewrite the
previous example as
widget:
{ if (!new_syntax) YYERROR; }
"widget" id new_args { $$ = f($3, $4); }
| { if (new_syntax) YYERROR; }
"widget" id old_args { $$ = f($3, $4); }
;
(reversing the sense of the predicate tests to cause an error when they are false). However,
this does not have the same effect if new_args and old_args have overlapping syntax.
Since the midrule actions testing new_syntax are deferred, a GLR parser first encounters
the unresolved ambiguous reduction for cases where new_args and old_args recognize the
same string before performing the tests of new_syntax. It therefore reports an error.
Finally, be careful in writing predicates: deferred actions have not been evaluated, so
that using them in a predicate will have undefined effects.
1.6 Locations
Many applications, like interpreters or compilers, have to produce verbose and useful error
messages. To achieve this, one must be able to keep track of the textual location, or location,
of each syntactic construct. Bison provides a mechanism for handling these locations.
Each token has a semantic value. In a similar fashion, each token has an associated
location, but the type of locations is the same for all tokens and groupings. Moreover, the
output parser is equipped with a default data structure for storing locations (see Section 3.5
[Tracking Locations], page 66, for more details).
Like semantic values, locations can be reached in actions using a dedicated set of con-
structs. In the example above, the location of the whole grouping is @$, while the locations
of the subexpressions are @1 and @3.
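The grouping the text refers to is not shown in this excerpt; a rule of that shape might look like the following sketch, a calculator-style division rule that reports the position of a zero divisor (the message format is illustrative, and the default location members first_line, first_column, last_line and last_column are assumed):
exp:
  exp '/' exp
    {
      if ($3)
        $$ = $1 / $3;
      else
        {
          $$ = 1;
          fprintf (stderr, "%d.%d-%d.%d: division by zero\n",
                   @3.first_line, @3.first_column,
                   @3.last_line, @3.last_column);
        }
    }
;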
When a rule is matched, a default action is used to compute the semantic value of its left
hand side (see Section 3.4.6 [Actions], page 59). In the same way, another default action
is used for locations. However, the action for locations is general enough for most cases,
meaning there is usually no need to describe for each rule how @$ should be formed. When
building a new location for a given grouping, the default behavior of the output parser is
to take the beginning of the first symbol, and the end of the last symbol.
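For a rule with three components, for instance, the default behaves as if each action began with assignments like the following sketch (again assuming the default location structure):
@$.first_line   = @1.first_line;
@$.first_column = @1.first_column;
@$.last_line    = @3.last_line;
@$.last_column  = @3.last_column;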
parser is called a Bison parser, and this file is called a Bison parser implementation file.
Keep in mind that the Bison utility and the Bison parser are two distinct programs: the
Bison utility is a program whose output is the Bison parser implementation file that becomes
part of your program.
The job of the Bison parser is to group tokens into groupings according to the grammar
rules—for example, to build identifiers and operators into expressions. As it does this, it
runs the actions for the grammar rules it uses.
The tokens come from a function called the lexical analyzer that you must supply in some
fashion (such as by writing it in C). The Bison parser calls the lexical analyzer each time it
wants a new token. It doesn’t know what is “inside” the tokens (though their semantic values
may reflect this). Typically the lexical analyzer makes the tokens by parsing characters of
text, but Bison does not depend on this. See Section 4.3 [The Lexical Analyzer Function
yylex], page 100.
The Bison parser implementation file is C code which defines a function named yyparse
which implements that grammar. This function does not make a complete C program:
you must supply some additional functions. One is the lexical analyzer. Another is an
error-reporting function which the parser calls to report an error. In addition, a complete C
program must start with a function called main; you have to provide this, and arrange for it
to call yyparse or the parser will never run. See Chapter 4 [Parser C-Language Interface],
page 98.
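A minimal sketch of these two extra functions, assuming the default interfaces used elsewhere in this manual, might look like this (such definitions typically go in the epilogue of the grammar file, where yyparse is already declared):
#include <stdio.h>

/* Called by yyparse on a syntax error; just print the message. */
void
yyerror (char const *s)
{
  fprintf (stderr, "%s\n", s);
}

int
main (void)
{
  return yyparse ();
}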
Aside from the token kind names and the symbols in the actions you write, all symbols
defined in the Bison parser implementation file itself begin with ‘yy’ or ‘YY’. This includes
interface functions such as the lexical analyzer function yylex, the error reporting function
yyerror and the parser function yyparse itself. This also includes numerous identifiers
used for internal purposes. Therefore, you should avoid using C identifiers starting with
‘yy’ or ‘YY’ in the Bison grammar file except for the ones defined in this manual. Also, you
should avoid using the C identifiers ‘malloc’ and ‘free’ for anything other than their usual
meanings.
In some cases the Bison parser implementation file includes system headers, and in those
cases your code should respect the identifiers reserved by those headers. On some non-GNU
hosts, <limits.h>, <stddef.h>, <stdint.h> (if available), and <stdlib.h> are included
to declare memory allocators and integer types and constants. <libintl.h> is included
if message translation is in use (see Section 4.6 [Parser Internationalization], page 109).
Other system headers may be included if you define YYDEBUG (see Section 8.5 [Tracing Your
Parser], page 148) or YYSTACK_USE_ALLOCA (see Appendix A [Bison Symbols], page 209) to
a nonzero value.
2. Write a lexical analyzer to process input and pass tokens to the parser. The lexical
analyzer may be written by hand in C (see Section 4.3 [The Lexical Analyzer Function
yylex], page 100). It could also be produced using Lex, but the use of Lex is not
discussed in this manual.
3. Write a controlling function that calls the Bison-produced parser.
4. Write error-reporting routines.
To turn this source code as written into a runnable program, you must follow these steps:
1. Run Bison on the grammar to produce the parser.
2. Compile the code output by Bison, as well as any other source files.
3. Link the object files to produce the finished product.
%{
  Prologue
%}

Bison declarations
%%
Grammar rules
%%
Epilogue
The ‘%%’, ‘%{’ and ‘%}’ are punctuation that appears in every Bison grammar file to separate
the sections.
The prologue may define types and variables used in the actions. You can also use
preprocessor commands to define macros used there, and use #include to include header
files that do any of these things. You need to declare the lexical analyzer yylex and the
error printer yyerror here, along with any other global identifiers used by the actions in
the grammar rules.
The Bison declarations declare the names of the terminal and nonterminal symbols, and
may also describe operator precedence and the data types of semantic values of various
symbols.
The grammar rules define how to construct each nonterminal symbol from its parts.
The epilogue can contain any code you want to use. Often the definitions of functions
declared in the prologue go here. In a simple program, all the rest of the program can go
here.
2 Examples
Now we show and explain several sample programs written using Bison: a Reverse Polish
Notation calculator, an algebraic (infix) notation calculator — later extended to track “lo-
cations” — and a multi-function calculator. All produce usable, though limited, interactive
desk-top calculators.
These examples are simple, but Bison grammars for real programming languages are
written the same way. You can copy these examples into a source file to try them.
%{
#include <stdio.h>
#include <math.h>
int yylex (void);
void yyerror (char const *);
%}

%define api.value.type {double}
%token NUM

The %define api.value.type directive specifies the C data type for semantic values of both tokens and groupings; if you don’t define it, int is the default. Because we specify ‘{double}’, each token and each expression has an associated value, which is a floating-point number. C code can use YYSTYPE to refer to the value of api.value.type.
Each terminal symbol that is not a single-character literal must be declared. (Single-
character literals normally don’t need to be declared.) In this example, all the arithmetic
operators are designated by single-character literals, so the only terminal symbol that needs
to be declared is NUM, the token kind for numeric constants.
line:
  '\n'
| exp '\n'      { printf ("%.10g\n", $1); }
;

exp:
  NUM
| exp exp '+'   { $$ = $1 + $2;      }
| exp exp '-'   { $$ = $1 - $2;      }
| exp exp '*'   { $$ = $1 * $2;      }
| exp exp '/'   { $$ = $1 / $2;      }
| exp exp '^'   { $$ = pow ($1, $2); }  /* Exponentiation */
| exp 'n'       { $$ = -$1;          }  /* Unary minus */
;
%%
The groupings of the rpcalc “language” defined here are the expression (given the name
exp), the line of input (line), and the complete input transcript (input). Each of these
nonterminal symbols has several alternate rules, joined by the vertical bar ‘|’ which is read
as “or”. The following sections explain what these rules mean.
The semantics of the language is determined by the actions taken when a grouping
is recognized. The actions are the C code that appears inside braces. See Section 3.4.6
[Actions], page 59.
You must specify these actions in C, but Bison provides the means for passing semantic
values between the rules. In each action, the pseudo-variable $$ stands for the semantic
value for the grouping that the rule is going to construct. Assigning a value to $$ is the
main job of most actions. The semantic values of the components of the rule are referred
to as $1, $2, and so on.
input:
  %empty
| input line
;
This definition reads as follows: “A complete input is either an empty string, or a
complete input followed by an input line”. Notice that “complete input” is defined in terms
of itself. This definition is said to be left recursive since input appears always as the
leftmost symbol in the sequence. See Section 3.3.3 [Recursive Rules], page 55.
The first alternative is empty because there are no symbols between the colon and the
first ‘|’; this means that input can match an empty string of input (no tokens). We write
the rules this way because it is legitimate to type Ctrl-d right after you start the calculator.
It’s conventional to put an empty alternative first and to use the (optional) %empty directive,
or to write the comment ‘/* empty */’ in it (see Section 3.3.2 [Empty Rules], page 55).
The second alternate rule (input line) handles all nontrivial input. It means, “After
reading any number of lines, read one more line if possible.” The left recursion makes this
rule into a loop. Since the first alternative matches empty input, the loop can be executed
zero or more times.
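For contrast, the same sequence could be written with right recursion, as in the sketch below; it accepts the same input, but a right-recursive rule consumes parser stack space proportional to the number of lines read, which is why left recursion is preferred here:
input:
  %empty
| line input
;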
The parser function yyparse continues to process input until a grammatical error is seen
or the lexical analyzer says there are no more input tokens; we will arrange for the latter
to happen at end-of-input.
identifier, that identifier is defined by Bison as a C enum whose definition is the appropriate
code. In this example, therefore, NUM becomes an enum for yylex to use.
The semantic value of the token (if it has one) is stored into the global variable yylval,
which is where the Bison parser will look for it. (The C data type of yylval is YYSTYPE,
whose value was defined at the beginning of the grammar via ‘%define api.value.type
{double}’; see Section 2.1.1 [Declarations for rpcalc], page 27.)
A token kind code of zero is returned if the end-of-input is encountered. (Bison recognizes
any nonpositive value as indicating end-of-input.)
#include <ctype.h>
#include <stdlib.h>

int
yylex (void)
{
  int c = getchar ();
  /* Skip white space.  */
  while (c == ' ' || c == '\t')
    c = getchar ();
  /* Process numbers.  */
  if (c == '.' || isdigit (c))
    {
      ungetc (c, stdin);
      if (scanf ("%lf", &yylval) != 1)
        abort ();
      return NUM;
    }
  /* Return end-of-input.  */
  else if (c == EOF)
    return YYEOF;
  /* Return a single char.  */
  else
    return c;
}

int
main (void)
{
  return yyparse ();
}
The file rpcalc now contains the executable code. Here is an example session using
rpcalc.
$ rpcalc
4 9 +
⇒ 13
3 7 + 3 4 5 *+-
⇒ -13
3 7 + 3 4 5 * + - n Note the unary minus, ‘n’
⇒ 13
5 6 / 4 n +
⇒ -3.166666667
3 4 ^ Exponentiation
⇒ 81
^D End-of-file indicator
$
%{
#include <math.h>
#include <stdio.h>
int yylex (void);
void yyerror (char const *);
%}
/* Bison declarations. */
%define api.value.type {double}
%token NUM
%left ’-’ ’+’
%left ’*’ ’/’
%precedence NEG /* negation--unary minus */
%right ’^’ /* exponentiation */
line:
’\n’
| exp ’\n’ { printf ("\t%.10g\n", $1); }
;
exp:
NUM
| exp ’+’ exp { $$ = $1 + $3; }
| exp ’-’ exp { $$ = $1 - $3; }
| exp ’*’ exp { $$ = $1 * $3; }
| exp ’/’ exp { $$ = $1 / $3; }
| ’-’ exp %prec NEG { $$ = -$2; }
| exp ’^’ exp { $$ = pow ($1, $3); }
| ’(’ exp ’)’ { $$ = $2; }
;
%%
The functions yylex, yyerror and main can be the same as before.
There are two important new features shown in this code.
In the second section (Bison declarations), %left declares token kinds and says they are
left-associative operators. The declarations %left and %right (right associativity) take the
place of %token which is used to declare a token kind name without associativity/precedence.
(These tokens are single-character literals, which ordinarily don’t need to be declared. We
declare them here to specify the associativity/precedence.)
Operator precedence is determined by the line ordering of the declarations; the higher
the line number of the declaration (lower on the page or screen), the higher the precedence.
Hence, exponentiation has the highest precedence, unary minus (NEG) is next, followed by ‘*’
and ‘/’, and so on. Unary minus is not associative; only its precedence matters, hence %precedence
rather than %left or %right. See Section 5.3 [Operator Precedence], page 114.
The other important new feature is the %prec in the grammar section for the unary
minus operator. The %prec simply instructs Bison that the rule ‘| ’-’ exp’ has the same
precedence as NEG—in this case the next-to-highest. See Section 5.4 [Context-Dependent
Precedence], page 116.
%{
#include <math.h>
int yylex (void);
void yyerror (char const *);
%}
/* Bison declarations. */
%define api.value.type {int}
%token NUM
line:
’\n’
| exp ’\n’ { printf ("%d\n", $1); }
;
exp:
NUM
| exp ’+’ exp { $$ = $1 + $3; }
| exp ’-’ exp { $$ = $1 - $3; }
| exp ’*’ exp { $$ = $1 * $3; }
This code shows how to reach locations inside of semantic actions, by using the pseudo-
variables @n for rule components, and the pseudo-variable @$ for groupings.
We don’t need to assign a value to @$: the output parser does it automatically. By
default, before executing the C code of each action, @$ is set to range from the beginning
of @1 to the end of @n, for a rule with n components. This behavior can be redefined (see
Section 3.5.3 [Default Action for Locations], page 68), and for very specific rules, @$ can be
computed by hand.
To this end, we must take into account every single character of the input text, so that
the computed locations are not fuzzy or wrong:
int
yylex (void)
{
  int c;

  /* Skip white space, counting columns.  */
  while ((c = getchar ()) == ' ' || c == '\t')
    ++yylloc.last_column;

  /* Step: the new token begins where the previous one ended.  */
  yylloc.first_line = yylloc.last_line;
  yylloc.first_column = yylloc.last_column;

  /* Process numbers.  */
  if (isdigit (c))
    {
      yylval = c - '0';
      ++yylloc.last_column;
      while (isdigit (c = getchar ()))
        {
          ++yylloc.last_column;
          yylval = yylval * 10 + c - '0';
        }
      ungetc (c, stdin);
      return NUM;
    }

  /* Return end-of-input.  */
  if (c == EOF)
    return YYEOF;

  /* Return a single char, and update the location.  */
  if (c == '\n')
    {
      ++yylloc.last_line;
      yylloc.last_column = 0;
    }
  else
    ++yylloc.last_column;
  return c;
}
Basically, the lexical analyzer performs the same processing as before: it skips blanks
and tabs, and reads numbers or single-character tokens. In addition, it updates yylloc,
the global variable (of type YYLTYPE) containing the token’s location.
Now, each time this function returns a token, the parser has its kind as well as its
semantic value, and its location in the text. The last needed change is to initialize yylloc,
for example in the controlling function:
int
main (void)
{
yylloc.first_line = yylloc.last_line = 1;
yylloc.first_column = yylloc.last_column = 0;
return yyparse ();
}
Remember that computing locations is not a matter of syntax. Every character must be
associated with a location update, whether it is in valid input, in comments, in literal strings,
and so on.
%precedence ’=’
%left ’-’ ’+’
%left ’*’ ’/’
%precedence NEG /* negation--unary minus */
%right ’^’ /* exponentiation */
The above grammar introduces only two new features of the Bison language. These
features allow semantic values to have various data types (see Section 3.4.2 [More Than
One Value Type], page 57).
The special union value assigned to the %define variable api.value.type specifies that
the symbols are defined with their data types. Bison will generate an appropriate definition
of YYSTYPE to store these values.
Since values can now have various types, it is necessary to associate a type with each
grammar symbol whose semantic value is used. These symbols are NUM, VAR, FUN, and exp.
Their declarations are augmented with their data type (placed between angle brackets). For
instance, values of NUM are stored in double.
The Bison construct %nterm is used for declaring nonterminal symbols, just as %token is
used for declaring token kinds. Previously we did not use %nterm before because nonterminal
symbols are normally declared implicitly by the rules that define them. But exp must be
declared explicitly so we can specify its value type. See Section 3.7.4 [Nonterminal Symbols],
page 72.
line:
’\n’
| exp ’\n’ { printf ("%.10g\n", $1); }
| error ’\n’ { yyerrok; }
;
exp:
NUM
| VAR { $$ = $1->value.var; }
| VAR ’=’ exp { $$ = $3; $1->value.var = $3; }
| FUN ’(’ exp ’)’ { $$ = $1->value.fun ($3); }
| exp ’+’ exp { $$ = $1 + $3; }
| exp ’-’ exp { $$ = $1 - $3; }
| exp ’*’ exp { $$ = $1 * $3; }
| exp ’/’ exp { $$ = $1 / $3; }
| ’-’ exp %prec NEG { $$ = -$2; }
| exp ’^’ exp { $$ = pow ($1, $3); }
| ’(’ exp ’)’ { $$ = $2; }
;
/* End of grammar. */
%%
The new version of main will call init_table to initialize the symbol table:
struct init
{
char const *name;
func_t *fun;
};
symrec *
putsym (char const *name, int sym_type)
{
symrec *res = (symrec *) malloc (sizeof (symrec));
res->name = strdup (name);
res->type = sym_type;
res->value.var = 0; /* Set value to 0 even if fun. */
res->next = sym_table;
sym_table = res;
return res;
}
symrec *
getsym (char const *name)
{
for (symrec *p = sym_table; p; p = p->next)
if (strcmp (p->name, name) == 0)
return p;
return NULL;
}
int
yylex (void)
{
int c = getchar ();
if (c == EOF)
return YYEOF;
Bison generated a definition of YYSTYPE with a member named NUM to store the value of NUM
symbols.
2.6 Exercises
1. Add some new functions from math.h to the initialization list.
2. Add another array that contains constants and their values. Then modify init_table
to add these constants to the symbol table. It will be easiest to give the constants type
VAR.
3. Make the program report an error if the user refers to an uninitialized variable in any
way except to store a value in it.
Bison declarations
%%
Grammar rules
%%
Epilogue
Comments enclosed in ‘/* ... */’ may appear in any of the sections. As a GNU exten-
sion, ‘//’ introduces a comment that continues until end of line.
%{
  #define _GNU_SOURCE
  #include <stdio.h>
  #include "ptypes.h"
%}

%union {
long n;
tree t; /* tree is defined in ptypes.h. */
}
%{
static void print_token (yytoken_kind_t token, YYSTYPE val);
%}
...
When in doubt, it is usually safer to put prologue code before all Bison declarations,
rather than after. For example, any definitions of feature test macros like _GNU_SOURCE or
_POSIX_C_SOURCE should appear before all Bison declarations, as feature test macros can
affect the behavior of Bison-generated #include directives.
%{
  #define _GNU_SOURCE
  #include <stdio.h>
  #include "ptypes.h"
%}

%union {
long n;
tree t; /* tree is defined in ptypes.h. */
}
%{
static void print_token (yytoken_kind_t token, YYSTYPE val);
%}
...
Notice that there are two Prologue sections here, but there’s a subtle distinction between
their functionality. For example, if you decide to override Bison’s default definition for
YYLTYPE, in which Prologue section should you write your new definition? You should
write it in the first since Bison will insert that code into the parser implementation file
before the default YYLTYPE definition. In which Prologue section should you prototype an
internal function, trace_token, that accepts YYLTYPE and yytoken_kind_t as arguments?
You should prototype it in the second since Bison will insert that code after the YYLTYPE
and yytoken_kind_t definitions.
This distinction in functionality between the two Prologue sections is established by the
appearance of the %union between them. This behavior raises a few questions. First, why
should the position of a %union affect definitions related to YYLTYPE and yytoken_kind_t?
Second, what if there is no %union? In that case, the second kind of Prologue section is
not available. This behavior is not intuitive.
To avoid this subtle %union dependency, rewrite the example using a %code top and an
unqualified %code. Let’s go ahead and add the new YYLTYPE definition and the trace_token
prototype at the same time:
%code top {
  #define _GNU_SOURCE
  #include <stdio.h>

  /* WARNING: The following code really belongs
   * in a '%code requires'; see below.  */

  #include "ptypes.h"
  #define YYLTYPE YYLTYPE
  typedef struct YYLTYPE
  {
    int first_line;
    int first_column;
    int last_line;
    int last_column;
    char *filename;
  } YYLTYPE;
}
%union {
long n;
tree t; /* tree is defined in ptypes.h. */
}
%code {
static void print_token (yytoken_kind_t token, YYSTYPE val);
static void trace_token (yytoken_kind_t token, YYLTYPE loc);
}
...
In this way, %code top and the unqualified %code achieve the same functionality as the two
kinds of Prologue sections, but it’s always explicit which kind you intend. Moreover, both
kinds are always available even in the absence of %union.
The %code top block above logically contains two parts. The first two lines before the
warning need to appear near the top of the parser implementation file. The first line after the
warning is required by YYSTYPE and thus also needs to appear in the parser implementation
file. However, if you’ve instructed Bison to generate a parser header file (see Section 3.7.13
[Bison Declaration Summary], page 79), you probably want that line to appear before the
YYSTYPE definition in that header file as well. The YYLTYPE definition should also appear
in the parser header file to override the default YYLTYPE definition there.
In other words, in the %code top block above, all but the first two lines are dependency
code required by the YYSTYPE and YYLTYPE definitions. Thus, they belong in one or more
%code requires:
%code top {
#define _GNU_SOURCE
#include <stdio.h>
}
%code requires {
#include "ptypes.h"
}
%union {
long n;
tree t; /* tree is defined in ptypes.h. */
}
%code requires {
#define YYLTYPE YYLTYPE
typedef struct YYLTYPE
{
int first_line;
int first_column;
int last_line;
int last_column;
char *filename;
} YYLTYPE;
}
%code {
static void print_token (yytoken_kind_t token, YYSTYPE val);
static void trace_token (yytoken_kind_t token, YYLTYPE loc);
}
...
Now Bison will insert #include "ptypes.h" and the new YYLTYPE definition before the
Bison-generated YYSTYPE and YYLTYPE definitions in both the parser implementation file
and the parser header file. (By the same reasoning, %code requires would also be the
appropriate place to write your own definition for YYSTYPE.)
When you are writing dependency code for YYSTYPE and YYLTYPE, you should prefer
%code requires over %code top regardless of whether you instruct Bison to generate a
parser header file. When you are writing code that you need Bison to insert only into the
parser implementation file and that has no special need to appear at the top of that file, you
should prefer the unqualified %code over %code top. These practices will make the purpose
of each block of your code explicit to Bison and to other developers reading your grammar
file. Following these practices, we expect the unqualified %code and %code requires to be
the most important of the four Prologue alternatives.
At some point while developing your parser, you might decide to provide trace_token
to modules that are external to your parser. Thus, you might wish for Bison to insert
the prototype into both the parser header file and the parser implementation file. Since
this function is not a dependency required by YYSTYPE or YYLTYPE, it doesn’t make sense to
move its prototype to a %code requires. More importantly, since it depends upon YYLTYPE
and yytoken_kind_t, %code requires is not sufficient. Instead, move its prototype from
the unqualified %code to a %code provides:
%code top {
#define _GNU_SOURCE
#include <stdio.h>
}
%code requires {
#include "ptypes.h"
}
%union {
long n;
tree t; /* tree is defined in ptypes.h. */
}
%code requires {
#define YYLTYPE YYLTYPE
typedef struct YYLTYPE
{
int first_line;
int first_column;
int last_line;
int last_column;
char *filename;
} YYLTYPE;
}
%code provides {
void trace_token (yytoken_kind_t token, YYLTYPE loc);
}
%code {
static void print_token (yytoken_kind_t token, YYSTYPE val);
}
...
Bison will insert the trace_token prototype into both the parser header file and the parser
implementation file after the definitions for yytoken_kind_t, YYLTYPE, and YYSTYPE.
The above examples are careful to write directives in an order that reflects the layout of
the generated parser implementation and header files: %code top, %code requires, %code
provides, and then %code. While your grammar files may generally be easier to read if
you also follow this order, Bison does not require it. Instead, Bison lets you choose an
organization that makes sense to you.
You may declare any of these directives multiple times in the grammar file. In that case,
Bison concatenates the contained code in declaration order. This is the only way in which
the position of one of these directives within the grammar file affects its functionality.
The result of the previous two properties is greater flexibility in how you may organize
your grammar file. For example, you may organize semantic-type-related directives by
semantic type:
%code requires { #include "type1.h" }
%union { type1 field1; }
%destructor { type1_free ($$); } <field1>
%printer { type1_print (yyo, $$); } <field1>
The epilogue, copied verbatim to the end of the parser implementation file, is the place for code that need not come before the
definition of yyparse. For example, the definitions of yylex and yyerror often go here.
Because C requires functions to be declared before being used, you often need to declare
functions like yylex and yyerror in the Prologue, even if you define them in the Epilogue.
See Chapter 4 [Parser C-Language Interface], page 98.
If the last section is empty, you may omit the ‘%%’ that separates it from the grammar
rules.
The Bison parser itself contains many macros and identifiers whose names start with
‘yy’ or ‘YY’, so it is a good idea to avoid using any such names (except those documented
in this manual) in the epilogue of the grammar file.
• A literal string token is written like a C string constant; for example, "<=" is a literal
string token. A literal string token doesn’t need to be declared unless you need to
specify its semantic value data type (see Section 3.4.1 [Data Types of Semantic Values],
page 56), associativity, or precedence (see Section 5.3 [Operator Precedence], page 114).
You can associate the literal string token with a symbolic name as an alias, using the
%token declaration (see Section 3.7.2 [Token Kind Names], page 70). If you don’t do
that, the lexical analyzer has to retrieve the token code for the literal string token from
the yytname table (see Section 4.3.1 [Calling Convention for yylex], page 100).
Warning: literal string tokens do not work in Yacc.
By convention, a literal string token is used only to represent a token that consists
of that particular string. Thus, you should use the token kind "<=" to represent the
string ‘<=’ as a token. Bison does not enforce this convention, but if you depart from
it, people who read your program will be confused.
All the escape sequences used in string literals in C can be used in Bison as well,
except that you must not use a null character within a string literal. Also, unlike
Standard C, trigraphs have no special meaning in Bison string literals, nor is backslash-
newline allowed. A literal string token must contain two or more characters; for a token
containing just one character, use a character token (see above).
How you choose to write a terminal symbol has no effect on its grammatical meaning.
That depends only on where it appears in rules and on when the parser function returns
that symbol.
The value returned by yylex is always one of the terminal symbols, except that a zero
or negative value signifies end-of-input. Whichever way you write the token kind in the
grammar rules, you write it the same way in the definition of yylex. The numeric code
for a character token kind is simply the positive numeric code of the character, so yylex
can use the identical value to generate the requisite code, though you may need to convert
it to unsigned char to avoid sign-extension on hosts where char is signed. Each named
token kind becomes a C macro in the parser implementation file, so yylex can use the name
to stand for the code. (This is why periods don’t make sense in terminal symbols.) See
Section 4.3.1 [Calling Convention for yylex], page 100.
If yylex is defined in a separate file, you need to arrange for the token-kind definitions
to be available there. Use the -d option when you run Bison, so that it will write these
definitions into a separate header file name.tab.h which you can include in the other source
files that need it. See Chapter 9 [Invoking Bison], page 154.
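For instance, a sketch with hypothetical file names: if the grammar is in calc.y, ‘bison -d calc.y’
produces calc.tab.c and calc.tab.h, and a separately compiled lexer includes the header to obtain
the token kind codes and yylval (assumed to be a double here, i.e., a grammar using
‘%define api.value.type {double}’ and a NUM token as in the rpcalc example):

  /* lexer.c -- compiled separately from the parser.  */
  #include <ctype.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include "calc.tab.h"   /* token kinds (NUM, YYEOF) and yylval */

  int
  yylex (void)
  {
    int c = getchar ();
    if (c == EOF)
      return YYEOF;
    if (c == '.' || isdigit (c))
      {
        ungetc (c, stdin);
        if (scanf ("%lf", &yylval) != 1)
          abort ();
        return NUM;
      }
    return c;
  }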
If you want to write a grammar that is portable to any Standard C host, you must use
only nonnull character tokens taken from the basic execution character set of Standard C.
This set consists of the ten digits, the 52 lower- and upper-case English letters, and the
characters in the following C-language string:
"\a\b\t\n\v\f\r !\"#%&’()*+,-./:;<=>?[\\]^_{|}~"
The yylex function and Bison must use a consistent character set and encoding for char-
acter tokens. For example, if you run Bison in an ASCII environment, but then compile
and run the resulting program in an environment that uses an incompatible character set
like EBCDIC, the resulting program may not work because the tables generated by Bison
will assume ASCII numeric values for character tokens. It is standard practice for software
distributions to contain C source files that were generated by Bison in an ASCII environ-
ment, so installers on platforms that are incompatible with ASCII must rebuild those files
before compiling them.
The symbol error is a terminal symbol reserved for error recovery (see Chapter 6 [Error
Recovery], page 130); you shouldn’t use it for any other purpose. In particular, yylex
should never return this value. The default value of the error token is 256, unless you
explicitly assigned 256 to one of your tokens with a %token declaration.
result:
rule1-components...
| rule2-components...
...
;
They are still considered distinct rules even when joined in this way.
Any kind of sequence can be defined using either left recursion or right recursion, but you
should always use left recursion, because it can parse a sequence of any number of elements
with bounded stack space. Right recursion uses up space on the Bison stack in proportion
to the number of elements in the sequence, because all the elements must be shifted onto the
stack before the rule can be applied even once. See Chapter 5 [The Bison Parser Algorithm],
page 111, for further explanation of this.
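For illustration, two sketches using a hypothetical ITEM token. The first is the recommended
left-recursive form:

  %token ITEM
  %%
  /* Left recursion: each new ITEM can be reduced at once, so the
     parser's stack stays bounded.  */
  sequence:
    ITEM
  | sequence ',' ITEM
  ;

The right-recursive counterpart, which you should avoid for long sequences, shifts every ITEM
before the first reduction can happen:

  sequence:
    ITEM
  | ITEM ',' sequence
  ;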
Indirect or mutual recursion occurs when the result of the rule does not appear directly
on its right hand side, but does appear in rules for other nonterminals which do appear on
its right hand side.
For example:
expr:
primary
| primary ’+’ primary
;
primary:
constant
| ’(’ expr ’)’
;
defines two mutually-recursive nonterminals, since each refers to the other.
Such a YYSTYPE macro definition must go in the prologue of the grammar file (see Section 3.1 [Outline
of a Bison Grammar], page 46). If compatibility with POSIX Yacc matters to you, use
this approach. Note, however, that Bison cannot know YYSTYPE’s value, not even whether it is defined,
so there are services it cannot provide. Besides, this works only for languages that have a
preprocessor.
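A minimal sketch of this approach, assuming a small calculator whose semantic values are all of
type double:

  %{
    #define YYSTYPE double   /* POSIX-Yacc style value type */
    int yylex (void);
    void yyerror (char const *);
  %}
  %token NUM
  %left '+'
  %%
  exp:
    NUM
  | exp '+' exp  { $$ = $1 + $3; }
  ;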
/* For an "identifier". */
yylval.ID = "42";
return ID;
If the %define variable api.token.prefix is defined (see Section 3.7.14 [%define Sum-
mary], page 84), then it is also used to prefix the union member names. For instance, with
‘%define api.token.prefix {TOK_}’:
/* For an "integer". */
yylval.TOK_INT = 42;
return TOK_INT;
This Bison extension cannot work if %yacc (or -y/--yacc) is enabled, as POSIX man-
dates that Yacc generate tokens as macros (e.g., ‘#define INT 258’, or ‘#define TOK_INT
258’).
A similar feature is provided for C++ that in addition overcomes C++ limitations (that
forbid non-trivial objects to be part of a union): ‘%define api.value.type variant’, see
Section 10.1.4.2 [C++ Variants], page 171.
3.4.6 Actions
An action accompanies a syntactic rule and contains C code to be executed each time an
instance of that rule is recognized. The task of most actions is to compute a semantic value
for the grouping built by the rule from the semantic values associated with tokens or smaller
groupings.
An action consists of braced code containing C statements, and can be placed at any
position in the rule; it is executed at that position. Most rules have just one action at the
end of the rule, following all the components. Actions in the middle of a rule are tricky and
used only for special purposes (see Section 3.4.8 [Actions in Midrule], page 61).
The C code in an action can refer to the semantic values of the components matched
by the rule with the construct $n, which stands for the value of the nth component. The
semantic value for the grouping being constructed is $$. In addition, the semantic values
of symbols can be accessed with the named references construct $name or $[name]. Bison
translates both of these constructs into expressions of the appropriate type when it copies
the actions into the parser implementation file. $$ (or $name, when it stands for the current
grouping) is translated to a modifiable lvalue, so it can be assigned to.
Here is a typical example:
exp:
...
| exp ’+’ exp { $$ = $1 + $3; }
Or, in terms of named references:
exp[result]:
...
| exp[left] ’+’ exp[right] { $result = $left + $right; }
This rule constructs an exp from two smaller exp groupings connected by a plus-sign token.
In the action, $1 and $3 ($left and $right) refer to the semantic values of the two
component exp groupings, which are the first and third symbols on the right hand side of
the rule. The sum is stored into $$ ($result) so that it becomes the semantic value of
the addition-expression just recognized by the rule. If there were a useful semantic value
associated with the ‘+’ token, it could be referred to as $2.
See Section 3.6 [Named References], page 69, for more information about using the
named references construct.
Note that the vertical-bar character ‘|’ is really a rule separator, and actions are attached
to a single rule. This is a difference with tools like Flex, for which ‘|’ stands for either “or”,
or “the same action as that of the next rule”. In the following example, the action is
triggered only when ‘b’ is found:
a-or-b: ’a’|’b’ { a_or_b_found = 1; };
If you don’t specify an action for a rule, Bison supplies a default: $$ = $1. Thus, the
value of the first symbol in the rule becomes the value of the whole rule. Of course, the
default action is valid only if the two data types match. There is no meaningful default
action for an empty rule; every empty rule must have an explicit action unless the rule’s
value does not matter.
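For instance, a sketch: the first alternative below relies on the default action, while the empty
alternative must supply a value explicitly:

  opt_exp:
    exp                 /* default action: $$ = $1 */
  | %empty  { $$ = 0; }
  ;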
$n with n zero or negative is allowed for reference to tokens and groupings on the stack
before those that match the current rule. This is a very risky practice, and to use it reliably
you must be certain of the context in which the rule is applied. Here is a case in which you
can use this reliably:
foo:
expr bar ’+’ expr { ... }
| expr bar ’-’ expr { ... }
;
bar:
%empty { previous_expr = $0; }
;
As long as bar is used only in the fashion shown here, $0 always refers to the expr which
precedes bar in the definition of foo.
It is also possible to access the semantic value of the lookahead token, if any, from a
semantic action. This semantic value is stored in yylval. See Section 4.5 [Special Features
for Use in Actions], page 107.
exp:
...
| exp ’+’ exp { $$ = $1 + $3; }
$1 and $3 refer to instances of exp, so they all have the data type declared for the nonter-
minal symbol exp. If $2 were used, it would have the data type declared for the terminal
symbol ’+’, whatever that might be.
Alternatively, you can specify the data type when you refer to the value, by inserting
‘<type>’ after the ‘$’ at the beginning of the reference. For example, if you have defined
types as shown here:
%union {
int itype;
double dtype;
}
then you can write $<itype>1 to refer to the first subunit of the rule as an integer, or
$<dtype>1 to refer to it as a double.
The midrule action itself counts as one of the components of the rule. This makes a
difference when there is another action later in the same rule (and usually there is another
at the end): you have to count the actions along with the symbols when working out which
number n to use in $n.
The midrule action can also have a semantic value. The action can set its value with
an assignment to $$, and actions later in the rule can refer to the value using $n. Since
there is no symbol to name the action, there is no way to declare a data type for the value
in advance, so you must use the ‘$<...>n’ construct to specify a data type each time you
refer to this value.
There is no way to set the value of the entire rule with a midrule action, because assign-
ments to $$ do not have that effect. The only way to set the value for the entire rule is
with an ordinary action at the end of the rule.
Here is an example from a hypothetical compiler, handling a let statement that looks like
‘let (variable) statement’ and serves to create a variable named variable temporarily
for the duration of statement. To parse this construct, we must put variable into the symbol
table while statement is parsed, then remove it afterward. Here is how it is done:
stmt:
"let" ’(’ var ’)’
{
$<context>$ = push_context ();
declare_variable ($3);
}
stmt
{
$$ = $6;
pop_context ($<context>5);
}
As soon as ‘let (variable)’ has been recognized, the first action is run. It saves a copy
of the current semantic context (the list of accessible variables) as its semantic value, using
alternative context in the data-type union. Then it calls declare_variable to add the
new variable to that list. Once the first action is finished, the embedded statement stmt
can be parsed.
Note that the midrule action is component number 5, so the ‘stmt’ is component num-
ber 6. Named references can be used to improve the readability and maintainability (see
Section 3.6 [Named References], page 69):
stmt:
"let" ’(’ var ’)’
{
$<context>let = push_context ();
declare_variable ($3);
}[let]
stmt
{
$$ = $6;
pop_context ($<context>let);
}
After the embedded statement is parsed, its semantic value becomes the value of the
entire let-statement. Then the semantic value from the earlier action is used to restore
the prior list of variables. This removes the temporary let-variable from the list so that it
won’t appear to exist while the rest of the program is parsed.
Because the types of the semantic values of midrule actions are unknown to Bison, type-
based features (e.g., ‘%printer’, ‘%destructor’) do not work with them, which could result in memory
leaks. They also forbid the use of the variant implementation of api.value.type in
C++ (see Section 10.1.4.2 [C++ Variants], page 171).
See Section 3.4.8.2 [Typed Midrule Actions], page 62, for one way to address this issue,
and Section 3.4.8.3 [Midrule Action Translation], page 63, for another: turning midrule
actions into regular actions.
If the parser initiates error recovery (see Chapter 6 [Error Recovery], page 130) while parsing the
embedded statement, it might discard the previous semantic context $<context>5 without restoring it. Thus, $<context>5 needs a
destructor (see Section 3.7.7 [Freeing Discarded Symbols], page 73), and Bison needs the
type of the semantic value (context) to select the right destructor.
As an extension to Yacc’s midrule actions, Bison offers a means to type their semantic
value: specify its type tag (‘<...>’) before the midrule action.
Consider the previous example, with an untyped midrule action:
stmt:
"let" ’(’ var ’)’
{
$<context>$ = push_context (); // ***
declare_variable ($3);
}
stmt
{
$$ = $6;
pop_context ($<context>5); // ***
}
If instead you write:
stmt:
"let" ’(’ var ’)’
<context>{ // ***
$$ = push_context (); // ***
declare_variable ($3);
}
stmt
{
$$ = $6;
pop_context ($5); // ***
}
then %printer and %destructor work properly (no more leaks!), C++ variants can be
used, and redundancy is reduced (<context> is specified once).
A midrule action is expected to generate a value if it uses $$, or if a later (typically the final) action uses
$n where n denotes the midrule action. In that case its nonterminal is named @n rather than $@n:
exp: { a(); } "b" { $$ = c(); } { d(); } "e" { f = $1; };
is translated into
@1: %empty { a(); };
@2: %empty { $$ = c(); };
$@3: %empty { d(); };
exp: @1 "b" @2 $@3 "e" { f = $1; }
There are probably two errors in the above example: the first midrule action does not
generate a value (it does not use $$ although the final action uses it), and the value of the
second one is not used (the final action does not use $3). Bison reports these errors when
the midrule-value warnings are enabled (see Chapter 9 [Invoking Bison], page 154):
$ bison -Wmidrule-value mid.y
mid.y:2.6-13: warning: unset value: $$
2 | exp: { a(); } "b" { $$ = c(); } { d(); } "e" { f = $1; };
| ^~~~~~~~
mid.y:2.19-31: warning: unused value: $3
2 | exp: { a(); } "b" { $$ = c(); } { d(); } "e" { f = $1; };
| ^~~~~~~~~~~~~
It is sometimes useful to turn midrule actions into regular actions, e.g., to factor them,
or to escape from their limitations. For instance, as an alternative to typed midrule actions,
you may bury the midrule action inside a nonterminal symbol and declare a printer and
a destructor for that symbol:
%nterm <context> let
%destructor { pop_context ($$); } let
%printer { print_context (yyo, $$); } let
%%
stmt:
let stmt
{
$$ = $2;
pop_context ($let);
};
let:
"let" ’(’ var ’)’
{
$let = push_context ();
declare_variable ($var);
};
subroutine:
%empty { prepare_for_local_variables (); }
;
compound:
subroutine ’{’ declarations statements ’}’
| subroutine ’{’ statements ’}’
;
Now Bison can execute the action in the rule for subroutine without deciding which rule
for compound it will eventually use.
In addition, the named references construct @name and @[name] may also be used to
address the symbol locations. See Section 3.6 [Named References], page 69, for more infor-
mation about using the named references construct.
Here is a basic example using the default data type for locations:
exp:
...
| exp ’/’ exp
{
@$.first_column = @1.first_column;
@$.first_line = @1.first_line;
@$.last_column = @3.last_column;
@$.last_line = @3.last_line;
if ($3)
$$ = $1 / $3;
else
{
$$ = 1;
fprintf (stderr, "%d.%d-%d.%d: division by zero",
@3.first_line, @3.first_column,
@3.last_line, @3.last_column);
}
}
As for semantic values, there is a default action for locations that is run each time a rule
is matched. It sets the beginning of @$ to the beginning of the first symbol, and the end of
@$ to the end of the last symbol.
With this default action, the location tracking can be fully automatic. The example
above simply rewrites this way:
exp:
...
| exp ’/’ exp
{
if ($3)
$$ = $1 / $3;
else
{
$$ = 1;
fprintf (stderr, "%d.%d-%d.%d: division by zero",
@3.first_line, @3.first_column,
@3.last_line, @3.last_column);
}
}
It is also possible to access the location of the lookahead token, if any, from a semantic
action. This location is stored in yylloc. See Section 4.5 [Special Features for Use in
Actions], page 107.
• Your macro should parenthesize its arguments, if need be, since the actual arguments
may not be surrounded by parentheses. Also, your macro should expand to something
that can be used as a single statement when it is followed by a semicolon. A sketch of such a macro appears below.
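Assuming the macro in question is YYLLOC_DEFAULT (the default action for locations), here is a
sketch that closely follows Bison’s documented default and satisfies both requirements: the
arguments are parenthesized, and the do ... while (0) wrapper makes the body usable as a single
statement.

  /* Cur is the location being computed (@$), Rhs gives access to the
     locations of the components, and N is the number of components;
     YYRHSLOC (Rhs, k) is the location of the k-th component.  */
  # define YYLLOC_DEFAULT(Cur, Rhs, N)                          \
    do                                                          \
      if (N)                                                    \
        {                                                       \
          (Cur).first_line   = YYRHSLOC (Rhs, 1).first_line;    \
          (Cur).first_column = YYRHSLOC (Rhs, 1).first_column;  \
          (Cur).last_line    = YYRHSLOC (Rhs, N).last_line;     \
          (Cur).last_column  = YYRHSLOC (Rhs, N).last_column;   \
        }                                                       \
      else                                                      \
        {                                                       \
          (Cur).first_line   = (Cur).last_line   =              \
            YYRHSLOC (Rhs, 0).last_line;                        \
          (Cur).first_column = (Cur).last_column =              \
            YYRHSLOC (Rhs, 0).last_column;                      \
        }                                                       \
    while (0)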
In order to force Bison to recognize ‘name.suffix’ in its entirety as the name of a semantic
value, the bracketed syntax ‘$[name.suffix]’ must be used.
In the event that the stack type is a union, you must augment the %token or other
token declaration to include the data type alternative delimited by angle-brackets (see
Section 3.4.2 [More Than One Value Type], page 57).
For example:
%union { /* define stack type */
double val;
symrec *tptr;
}
%token <val> NUM /* define token NUM and its type */
You can associate a literal string token with a token kind name by writing the literal
string at the end of a %token declaration which declares the name. For example:
%token ARROW "=>"
For example, a grammar for the C language might specify these names with equivalent
literal string tokens:
%token <operator> OR "||"
%token <operator> LE 134 "<="
%left OR "<="
Once you equate the literal string and the token kind name, you can use them interchange-
ably in further declarations or the grammar rules. The yylex function can use the token
name or the literal string to obtain the token kind code (see Section 4.3.1 [Calling Conven-
tion for yylex], page 100).
String aliases allow for better error messages using the literal strings instead of the to-
ken names, such as ‘syntax error, unexpected ||, expecting number or (’ rather than
‘syntax error, unexpected OR, expecting NUM or LPAREN’.
String aliases may also be marked for internationalization (see Section 4.6.2 [Token
Internationalization], page 109):
%token
OR "||"
LPAREN "("
RPAREN ")"
’\n’ _("end of line")
<double>
NUM _("number")
would produce in French ‘erreur de syntaxe, || inattendu, attendait nombre ou (’
rather than ‘erreur de syntaxe, || inattendu, attendait number ou (’.
or
%left <type> symbols...
And indeed any of these declarations serves the purposes of %token. But in addition,
they specify the associativity and relative precedence for all the symbols:
• The associativity of an operator op determines how repeated uses of the operator nest:
whether ‘x op y op z’ is parsed by grouping x with y first or by grouping y with z
first. %left specifies left-associativity (grouping x with y first) and %right specifies
right-associativity (grouping y with z first). %nonassoc specifies no associativity, which
means that ‘x op y op z’ is considered a syntax error.
%precedence gives only precedence to the symbols, and defines no associativity at all.
Use this to define precedence only, and leave any potential conflict due to associativity
enabled.
• The precedence of an operator determines how it nests with other operators. All
the tokens declared in a single precedence declaration have equal precedence and nest
together according to their associativity. When two tokens declared in different prece-
dence declarations associate, the one declared later has the higher precedence and is
grouped first. (A short example follows this list.)
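For instance, a sketch: with the declarations below, ‘a - b - c’ groups as ‘(a - b) - c’ because ‘-’
is left-associative, and ‘a + b * c’ groups as ‘a + (b * c)’ because ‘*’ is declared later and
therefore has higher precedence.

  %token NUM
  %left '+' '-'
  %left '*' '/'
  %%
  exp:
    NUM
  | exp '+' exp
  | exp '-' exp
  | exp '*' exp
  | exp '/' exp
  ;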
For backward compatibility, there is a confusing difference between the argument lists
of %token and precedence declarations. Only a %token can associate a literal string with a
token kind name. A precedence declaration always interprets a literal string as a reference
to a separate token. For example:
%left OR "<=" // Does not declare an alias.
%left OR 134 "<=" 135 // Declares 134 for OR and 135 for "<=".
where tag denotes a type tag such as ‘<ival>’, id denotes an identifier such as ‘NUM’, number
a decimal or hexadecimal integer such as ‘300’ or ‘0x12d’, char a character literal such as
‘’+’’, and string a string literal such as ‘"number"’. The postfix quantifiers are ‘?’ (zero or
one), ‘*’ (zero or more) and ‘+’ (one or more).
The directives %precedence, %right and %nonassoc behave like %left.
Finally, you may define two default %destructor declarations, one for symbols with a semantic type tag (‘<*>’) and one for tagless symbols (‘<>’); these may appear anywhere in the
grammar file. The parser will invoke the code associated with one of these whenever
it discards any user-defined grammar symbol that has no per-symbol and no per-type
%destructor. The parser uses the code for <*> in the case of such a grammar symbol
for which you have formally declared a semantic type tag (%token, %nterm, and %type
count as such a declaration, but $<tag>$ does not). The parser uses the code for <>
in the case of such a grammar symbol that has no declared semantic type tag.
For example:
%union { char *string; }
%token <string> STRING1 STRING2
%nterm <string> string1 string2
%union { char character; }
%token <character> CHR
%nterm <character> chr
%token TAGLESS
%destructor { } <character>
%destructor { free ($$); } <*>
%destructor { free ($$); printf ("%d", @$.first_line); } STRING1 string1
%destructor { printf ("Discarding tagless symbol.\n"); } <>
guarantees that, when the parser discards any user-defined symbol that has a semantic type
tag other than <character>, it passes its semantic value to free by default. However, when
the parser discards a STRING1 or a string1, it uses the third %destructor, which frees it
and prints its line number to stdout (free is invoked only once). Finally, the parser merely
prints a message whenever it discards any symbol, such as TAGLESS, that has no semantic
type tag.
A Bison-generated parser invokes the default %destructors only for user-defined as
opposed to Bison-defined symbols. For example, the parser will not invoke either kind
of default %destructor for the special Bison-defined symbols $accept, $undefined, or
$end (see Appendix A [Bison Symbols], page 209), none of which you can reference in
your grammar. It also will not invoke either for the error token (see Appendix A [Bison
Symbols], page 209), which is always defined by Bison regardless of whether you reference
it in your grammar. However, it may invoke one of them for the end token (token 0) if you
redefine it from $end to, for example, END:
%token END 0
Finally, Bison will never invoke a %destructor for an unreferenced midrule semantic
value (see Section 3.4.8 [Actions in Midrule], page 61). That is, Bison does not consider
a midrule to have a semantic value if you do not reference $$ in the midrule’s action or
$n (where n is the right-hand side symbol position of the midrule) in any later action in
that rule. However, if you do reference either, the Bison-generated parser will invoke the
<> %destructor whenever it discards the midrule symbol.
• the current lookahead and the entire stack (except the current right-hand side symbols)
when the parser returns immediately, and
• the current lookahead and the entire stack (including the current right-hand side sym-
bols) when the C++ parser (lalr1.cc) catches an exception in parse,
• the start symbol, when the parser succeeds.
The parser can return immediately because of an explicit call to YYABORT or YYACCEPT,
or failed error recovery, or memory exhaustion.
Right-hand side symbols of a rule that explicitly triggers a syntax error via YYERROR are
not discarded automatically. As a rule of thumb, destructors are invoked only when user
actions cannot manage the memory.
For example:
%union { char *string; }
%token <string> STRING1 STRING2
%nterm <string> string1 string2
%union { char character; }
%token <character> CHR
%nterm <character> chr
%token TAGLESS
performs only the second %printer in this case, so it prints only once. Finally, the parser
prints ‘<>’ for any symbol, such as TAGLESS, that has no semantic type tag. See Section 8.5.2
[Enabling Debug Traces for mfcalc], page 150, for a complete example.
empty_dims:
%empty %expect 2
| empty_dims ’[’ ’]’
;
Mid-rule actions generate implicit rules that are also subject to conflicts (see
Section 3.4.8.4 [Conflicts due to Midrule Actions], page 65). To attach an %expect or
%expect-rr annotation to an implicit mid-rule action’s rule, put it before the action. For
example,
%glr-parser
%expect-rr 1
%%
clause:
"condition" %expect-rr 1 { value_mode(); } ’(’ exprs ’)’
| "condition" %expect-rr 1 { class_mode(); } ’(’ types ’)’
;
Here, the appropriate mid-rule action will not be determined until after the ‘(’ token is
shifted. Thus, the two actions will clash with each other, and we should expect one re-
duce/reduce conflict for each.
In general, using %expect involves these steps:
• Compile your grammar without %expect. Use the -v option to get a verbose list of
where the conflicts occur. Bison will also print the number of conflicts.
• Check each of the conflicts to make sure that Bison’s default resolution is what you
really want. If not, rewrite the grammar and go back to the beginning.
• Add an %expect declaration, copying the number n from the number that Bison printed.
With GLR parsers, add an %expect-rr declaration as well.
• Optionally, count up the number of states in which one or more conflicted reductions
for particular rules appear and add these numbers to the affected rules as %expect-rr
or %expect modifiers as appropriate. Rules that are in conflict appear in the output
listing surrounded by square brackets or, in the case of reduce/reduce conflicts, as
reductions having the same lookahead symbol as a square-bracketed reduction in the
same state.
Now Bison will report an error if you introduce an unexpected conflict, but will keep
silent otherwise.
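As a sketch, the classic “dangling else” conflict can be acknowledged this way (the token names are
hypothetical); the grammar has exactly one shift/reduce conflict, which Bison resolves by shifting:

  %token IF ELSE EXPR
  %expect 1
  %%
  stmt:
    EXPR ';'
  | IF '(' EXPR ')' stmt
  | IF '(' EXPR ')' stmt ELSE stmt
  ;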
The result is that the communication variables yylval and yylloc become local variables
in yyparse, and a different calling convention is used for the lexical analyzer function yylex.
See Section 4.3.6 [Calling Conventions for Pure Parsers], page 103, for the details of this.
The variable yynerrs becomes local in yyparse in pull mode but it becomes a member
of yypstate in push mode. (see Section 4.4.1 [The Error Reporting Function yyerror],
page 104). The convention for calling yyparse itself is unchanged.
Whether the parser is pure has nothing to do with the grammar rules. You can generate
either a pure parser or a nonreentrant parser from any valid grammar.
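As a sketch of the pure calling convention (assuming ‘%define api.pure full’ with locations
enabled, a double value type, and a generated header named parse.h), yylex receives pointers
instead of using the globals:

  #include <ctype.h>
  #include <stdio.h>
  #include "parse.h"   /* hypothetical generated header: YYSTYPE, YYLTYPE, NUM, YYEOF */

  int
  yylex (YYSTYPE *lvalp, YYLTYPE *llocp)
  {
    int c = getchar ();
    ++llocp->last_column;     /* keep the location roughly in step */
    if (c == EOF)
      return YYEOF;
    if (isdigit (c))
      {
        *lvalp = c - '0';     /* store the semantic value through the pointer */
        return NUM;
      }
    return c;
  }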
For a nonreentrant push parser, the token and its semantic value are communicated through the
global variables yychar and yylval, so the signature of yypush_parse is
changed to remove the token as a parameter. A nonreentrant push parser example would
thus look like this:
extern int yychar;
int status;
yypstate *ps = yypstate_new ();
do {
yychar = yylex ();
status = yypush_parse (ps);
} while (status == YYPUSH_MORE);
yypstate_delete (ps);
That’s it. Notice the next token is put into the global variable yychar for use by the
next invocation of the yypush_parse function.
Bison also supports the push parser interface and the pull parser interface
in the same generated parser. In order to get this functionality, you should replace the
‘%define api.push-pull push’ declaration with the ‘%define api.push-pull both’ dec-
laration. Doing this will create all of the symbols mentioned earlier along with the two
extra symbols, yyparse and yypull_parse. yyparse can be used exactly as it normally
would be used. However, the user should note that it is implemented in the generated
parser by calling yypull_parse. This makes the yyparse function that is generated with
the ‘%define api.push-pull both’ declaration slower than the normal yyparse function.
If the user calls the yypull_parse function it will parse the rest of the input stream. It is
possible to yypush_parse tokens to select a subgrammar and then yypull_parse the rest
of the input stream. If you would like to switch back and forth between parsing
styles, you would have to write your own yypull_parse function that knows when to quit
looking for input. An example of using the yypull_parse function would look like this:
yypstate *ps = yypstate_new ();
yypull_parse (ps); /* Will call the lexer */
yypstate_delete (ps);
Adding the ‘%define api.pure’ declaration does exactly the same thing to the generated
parser with ‘%define api.push-pull both’ as it did for ‘%define api.push-pull push’.
%union [Directive]
Declare the collection of data types that semantic values may have (see Section 3.4.4
[The Union Declaration], page 58).
%token [Directive]
Declare a terminal symbol (token kind name) with no precedence or associativity
specified (see Section 3.7.2 [Token Kind Names], page 70).
%right [Directive]
Declare a terminal symbol (token kind name) that is right-associative (see
Section 3.7.3 [Operator Precedence], page 71).
%left [Directive]
Declare a terminal symbol (token kind name) that is left-associative (see Section 3.7.3
[Operator Precedence], page 71).
%nonassoc [Directive]
Declare a terminal symbol (token kind name) that is nonassociative (see Section 3.7.3
[Operator Precedence], page 71). Using it in a way that would be associative is a
syntax error.
%nterm [Directive]
Declare the type of semantic values for a nonterminal symbol (see Section 3.7.4 [Non-
terminal Symbols], page 72).
%type [Directive]
Declare the type of semantic values for a symbol (see Section 3.7.4 [Nonterminal
Symbols], page 72).
%start [Directive]
Specify the grammar’s start symbol (see Section 3.7.10 [The Start-Symbol], page 77).
%expect [Directive]
Declare the expected number of shift/reduce conflicts, either overall or for a given
rule (see Section 3.7.9 [Suppressing Conflict Warnings], page 76).
%expect-rr [Directive]
Declare the expected number of reduce/reduce conflicts, either overall or for a given
rule (see Section 3.7.9 [Suppressing Conflict Warnings], page 76).
For C parsers, the parser header file declares YYSTYPE unless YYSTYPE is already
defined as a macro or you have used a <type> tag without using %union. Therefore,
if you are using a %union (see Section 3.4.2 [More Than One Value Type], page 57)
with components that require other definitions, or if you have defined a YYSTYPE macro
or type definition (see Section 3.4.1 [Data Types of Semantic Values], page 56), you
need to arrange for these definitions to be propagated to all modules, e.g., by putting
them in a prerequisite header that is included both by your parser and by any other
module that needs YYSTYPE.
Unless your parser is pure, the parser header file declares yylval as an external
variable. See Section 3.7.11 [A Pure (Reentrant) Parser], page 77.
If you have also used locations, the parser header file declares YYLTYPE and yylloc
using a protocol similar to that of the YYSTYPE macro and yylval. See Section 3.5
[Tracking Locations], page 66.
This parser header file is normally essential if you wish to put the definition of yylex in
a separate source file, because yylex typically needs to be able to refer to the above-
mentioned declarations and to the token kind codes. See Section 4.3.4 [Semantic
Values of Tokens], page 102.
If you have declared %code requires or %code provides, the output header also
contains their code. See Section 3.7.15 [%code Summary], page 94.
The generated header is protected against multiple inclusions with a C preprocessor
guard: ‘YY_PREFIX_FILE_INCLUDED’, where PREFIX and FILE are the prefix (see
Section 3.8 [Multiple Parsers in the Same Program], page 95) and generated file name
turned uppercase, with each series of non alphanumerical characters converted to a
single underscore.
For instance with ‘%define api.prefix {calc}’ and ‘%defines "lib/parse.h"’, the
header will be guarded as follows.
#ifndef YY_CALC_LIB_PARSE_H_INCLUDED
# define YY_CALC_LIB_PARSE_H_INCLUDED
...
#endif /* ! YY_CALC_LIB_PARSE_H_INCLUDED */
%destructor [Directive]
Specify how the parser should reclaim the memory associated to discarded symbols.
See Section 3.7.7 [Freeing Discarded Symbols], page 73.
%locations [Directive]
Generate the code processing the locations (see Section 4.5 [Special Features for Use
in Actions], page 107). This mode is enabled as soon as the grammar uses the special
‘@n’ tokens, but if your grammar does not use it, using ‘%locations’ allows for more
accurate syntax error messages.
%no-lines [Directive]
Don’t generate any #line preprocessor commands in the parser implementation file.
Ordinarily Bison writes these commands in the parser implementation file so that the
C compiler and debuggers will associate errors and object code with your source file
(the grammar file). This directive causes them to associate errors with the parser
implementation file, treating it as an independent source file in its own right.
%pure-parser [Directive]
Deprecated version of ‘%define api.pure’ (see Section 3.7.14 [%define Summary],
page 84), for which Bison is more careful to warn about unreasonable usage.
%token-table [Directive]
This feature is obsolescent; avoid it in new projects.
Generate an array of token names in the parser implementation file. The name of the
array is yytname; yytname[i] is the name of the token whose internal Bison token
code is i. The first three elements of yytname correspond to the predefined tokens
"$end", "error", and "$undefined"; after these come the symbols defined in the
grammar file.
The name in the table includes all the characters needed to represent the token in
Bison. For single-character literals and literal strings, this includes the surrounding
quoting characters and any escape sequences. For example, the Bison single-character
literal ’+’ corresponds to a three-character name, represented in C as "’+’"; and
the Bison two-character literal string "\\/" corresponds to a five-character name,
represented in C as "\"\\\\/\"".
When you specify %token-table, Bison also generates macro definitions for the macros
YYNTOKENS, YYNNTS, YYNRULES, and YYNSTATES:
YYNTOKENS  The number of terminal symbols, i.e., the highest token code, plus one.
YYNNTS     The number of nonterminal symbols.
YYNRULES   The number of grammar rules.
YYNSTATES  The number of parser states (see Section 5.5 [Parser States], page 117).
Here’s code for looking up a multicharacter token in yytname, assuming that the
characters of the token are stored in token_buffer, and assuming that the token
does not contain any characters like ‘"’ that require escaping.
for (int i = 0; i < YYNTOKENS; i++)
  if (yytname[i]
      && yytname[i][0] == '"'
      && ! strncmp (yytname[i] + 1, token_buffer,
                    strlen (token_buffer))
      && yytname[i][strlen (token_buffer) + 1] == '"'
      && yytname[i][strlen (token_buffer) + 2] == 0)
    break;
This method is discouraged: the primary purpose of string aliases is forging good
error messages, not describing the spelling of keywords. In addition, looking for the
token kind at runtime incurs a (small but noticeable) cost.
Finally, %token-table is incompatible with the custom and detailed values of the
parse.error %define variable.
%verbose [Directive]
Write an extra output file containing verbose descriptions of the parser states and
what is done for each type of lookahead token in that state. See Section 8.2 [Under-
standing Your Parser], page 138, for more information.
%yacc [Directive]
Pretend the option --yacc was given, i.e., imitate Yacc, including its naming con-
ventions. Only makes sense with the yacc.c skeleton. See Section 9.1.3 [Tuning the
Parser], page 161, for more.
Of course %yacc is a Bison extension...
The rest of this section summarizes variables and values that %define accepts.
Some variables take Boolean values. In this case, Bison will complain if the variable
definition does not meet one of the following four conditions:
1. value is true
2. value is omitted (or "" is specified). This is equivalent to true.
3. value is false.
4. variable is never defined. In this case, Bison selects a default value.
What variables are accepted, as well as their meanings and default values, depend on the
selected target language and/or the parser skeleton (see Section 3.7.13 [Bison Declaration
Summary], page 79). Unaccepted variables produce an error. Some of the accepted
variables are described below.
• Purpose: Specify how the generated parser should include the generated header.
Historically, when option -D/--defines was used, bison generated a header and
pasted an exact copy of it into the generated parser implementation file. Since
Bison 3.6, it is #included as ‘"basename.h"’, instead of duplicated, unless file
is ‘y.tab’, see below.
The api.header.include variable lets you control how the generated parser
#includes the generated header. For instance:
%define api.header.include {"parse.h"}
or
%define api.header.include {<parser/parse.h>}
Using api.header.include does not change the name of the generated header,
only how it is included.
To work around limitations of Automake’s ylwrap (which runs bison with
--yacc), api.header.include is not predefined when the output file is
y.tab.c. Define it to avoid the duplication.
• Accepted Values: An argument for #include.
• Default Value: ‘"header-basename"’, unless the header file is y.tab.h, where
header-basename is the name of the generated header, without directory part.
For instance with bison -d calc/parse.y, api.header.include defaults to
‘"parse.h"’, not ‘"calc/parse.h"’.
• History: Introduced in Bison 3.4. Defaults to ‘"basename.h"’ since Bison 3.7,
unless the header file is y.tab.h.
%define api.location.file "file" [Directive]
%define api.location.file none [Directive]
• Language(s): C++
• Purpose: Define the name of the file in which Bison’s default location and posi-
tion types are generated. See Section 10.1.5.3 [Exposing the Location Classes],
page 174.
• Accepted Values:
none If locations are enabled, generate the definition of the position and
location classes in the header file if %defines, otherwise in the
parser implementation.
"file" Generate the definition of the position and location classes in file.
This file name can be relative (to where the parser file is output) or
absolute.
• Default Value: Not applicable if locations are not enabled, or if a user loca-
tion type is specified (see api.location.type). Otherwise, Bison’s location
is generated in location.hh (see Section 10.1.5.2 [C++ location],
page 174).
• History: Introduced in Bison 3.2.
%define api.location.include {"file"} [Directive]
%define api.location.include {<file>} [Directive]
• Language(s): C++
• Purpose: Specify how the generated file that defines the position and location
classes is included. This makes sense when the location class is exposed to
the rest of your application/library in another directory. See Section 10.1.5.3
[Exposing the Location Classes], page 174.
• Accepted Values: Argument for #include.
• Default Value: ‘"dir/location.hh"’ where dir is the directory part of the out-
put. For instance src/parse if --output=src/parse/parser.cc was given.
• History: Introduced in Bison 3.2.
• Purpose: Add a prefix to the name of the symbol kinds. For instance
%define api.symbol.prefix {S_}
%token FILE for ERROR
%%
start: FILE for ERROR;
generates this definition in C:
/* Symbol kind. */
enum yysymbol_kind_t
{
S_YYEMPTY = -2, /* No symbol. */
S_YYEOF = 0, /* $end */
S_YYERROR = 1, /* error */
S_YYUNDEF = 2, /* $undefined */
S_FILE = 3, /* FILE */
S_for = 4, /* for */
S_ERROR = 5, /* ERROR */
S_YYACCEPT = 6, /* $accept */
S_start = 7 /* start */
};
• Accepted Values: Any non empty string. Must be a valid identifier in the target
language (typically a non empty sequence of letters, underscores, and —not at
the beginning— digits).
The empty prefix is invalid:
• in C it would create collision with the YYERROR macro, and potentially token
kind definitions and symbol kind definitions would collide;
• unnamed symbols (such as ‘’+’’) have a name which starts with a digit;
• even in languages with scoped enumerations such as Java, an empty prefix
is dangerous: symbol names may collide with the target language keywords,
or with other members of the SymbolKind class.
• Default Value: YYSYMBOL_ in C. S_ in C++, D and Java.
• History: introduced in Bison 3.6.
• Purpose: Add a prefix to the token names when generating their definition in
the target language. For instance
%define api.token.prefix {TOK_}
%token FILE for ERROR
%%
start: FILE for ERROR;
generates the definition of the symbols TOK_FILE, TOK_for, and TOK_ERROR in
the generated source files. In particular, the scanner must use these prefixed
token names, while the grammar itself may still use the short names (as in the
sample rule given above). The generated informational files (*.output, *.xml,
*.gv) are not modified by this prefix.
Bison also prefixes the generated member names of the semantic value union. See
Section 3.4.3 [Generating the Semantic Value Type], page 57, for more details.
See Section 10.1.8.3 [Calc++ Parser], page 183, and Section 10.1.8.4
[Calc++ Scanner], page 185, for a complete example.
• Accepted Values: Any string. Must be a valid identifier prefix in the target
language (typically, a possibly empty sequence of letters, underscores, and —not
at the beginning— digits).
• Default Value: empty
• History: introduced in Bison 3.0.
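For instance (a sketch; the scanning logic is only illustrative), a hand-written scanner then returns the prefixed names:

    int
    yylex (void)
    {
      int c = getchar ();
      if (c == EOF)
        return YYEOF;
      /* ... recognize the keyword "for" ... */
      return TOK_for;   /* the grammar rules themselves still write 'for' */
    }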
For any particular qualifier or for the unqualified form, if there are multiple occurrences
of the %code directive, Bison concatenates the specified code in the order in which it appears
in the grammar file.
Not all qualifiers are accepted for all target languages. Unaccepted qualifiers produce
an error. Some of the accepted qualifiers are:
requires
• Language(s): C, C++
• Purpose: This is the best place to write dependency code required for
YYSTYPE and YYLTYPE. In other words, it’s the best place to define types
referenced in %union directives. If you use #define to override Bison’s de-
fault YYSTYPE and YYLTYPE definitions, then it is also the best place. How-
ever you should rather %define api.value.type and api.location.type.
• Location(s): The parser header file and the parser implementation file
before the Bison-generated YYSTYPE and YYLTYPE definitions.
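For instance (a sketch; the header and type names are illustrative, not from the manual):

    %code requires {
      #include "my-ast.h"        /* declares struct ast_node */
    }
    %union {
      struct ast_node *node;
      double num;
    }

Here the #include is emitted before YYSTYPE in both the header and the implementation file, so the %union members can use struct ast_node.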
provides
• Language(s): C, C++
• Purpose: This is the best place to write additional definitions and decla-
rations that should be provided to other modules.
• Location(s): The parser header file and the parser implementation file after
the Bison-generated YYSTYPE, YYLTYPE, and token definitions.
top
• Language(s): C, C++
The %define variable api.prefix works in two different ways. In the implementation
file, it works by adding macro definitions to the beginning of the parser implementation file,
defining yyparse as prefixparse, and so on:
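For instance, with ‘%define api.prefix {c}’ (the prefix also used in the CDEBUG example below), the implementation file starts with definitions along these lines (a sketch of the idea, not the exact generated list):

    #define yyparse cparse
    #define yylex   clex
    #define yyerror cerror
    #define yylval  clval
    #define yychar  cchar
    #define yydebug cdebug
    #define yynerrs cnerrs
    #define YYSTYPE CSTYPE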
This effectively substitutes one name for the other in the entire parser implementation
file, thus the “original” names (yylex, YYSTYPE, . . . ) are also usable in the parser imple-
mentation file.
However, in the parser header file, the symbols are defined renamed, for instance:
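For instance, along these lines (again a sketch, not the exact generated text):

    extern CSTYPE clval;
    int cparse (void);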
The macro YYDEBUG is commonly used to enable the tracing support in parsers. To
comply with this tradition, when api.prefix is used, YYDEBUG (not renamed) is used as a
default value:
/* Debug traces. */
#ifndef CDEBUG
# if defined YYDEBUG
# if YYDEBUG
# define CDEBUG 1
# else
# define CDEBUG 0
# endif
# else
# define CDEBUG 0
# endif
#endif
#if CDEBUG
extern int cdebug;
#endif
Prior to Bison 2.6, a feature similar to api.prefix was provided by the obsolete directive
%name-prefix (see Appendix A [Bison Symbols], page 209) and the option --name-prefix
(see Section 9.1.4 [Output Files], page 162).
In an action, you can cause immediate return from yyparse by using these macros:
YYACCEPT [Macro]
Return immediately with value 0 (to report success).
YYABORT [Macro]
Return immediately with value 1 (to report failure).
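For instance (a sketch; the token and rules are illustrative), an interactive grammar can stop cleanly on a dedicated token:

    input:
      %empty
    | input line
    | input "quit"   { YYACCEPT; }
    ;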
If you use a reentrant parser, you can optionally pass additional parameter information
to it in a reentrant way. To do so, use the declaration %parse-param:
In the grammar actions, use expressions like this to refer to the data:
exp: ... { ...; *randomness += 1; ... }
Using the following:
%parse-param {int *randomness}
Results in these signatures:
void yyerror (int *randomness, const char *msg);
int yyparse (int *randomness);
Or, if both %define api.pure full (or just %define api.pure) and %locations are used:
void yyerror (YYLTYPE *llocp, int *randomness, const char *msg);
int yyparse (int *randomness);
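A call site could then look like this (a sketch; main and the variable name are illustrative):

    int
    main (void)
    {
      int randomness = 42;             /* extra state handed to the parser */
      return yyparse (&randomness);    /* 0 on success, 1 on failure */
    }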
int
yylex (void)
{
...
if (c == EOF) /* Detect end-of-input. */
return YYEOF;
...
else if (c == ’+’ || c == ’-’)
return c; /* Assume token kind for ’+’ is ’+’. */
...
else
return INT; /* Return the kind of the token. */
...
}
This interface has been designed so that the output from the lex utility can be used without
change as the definition of yylex.
The YYEOF token denotes the end of file, and signals to the parser that there is nothing
left afterwards. See Section 4.3.1 [Calling Convention for yylex], page 100, for an example.
Returning YYUNDEF tells the parser that some lexical error was found. It will emit an error
message about an “invalid token”, and enter error-recovery (see Chapter 6 [Error Recovery],
page 130). Returning an unknown token kind results in the exact same behavior.
Returning YYerror requires the parser to enter error-recovery without emitting an error
message. This way the lexical analyzer can produce accurate error messages about the
invalid input (something the parser cannot do), and yet benefit from the error-recovery
features of the parser.
int
yylex (void)
{
...
switch (c)
{
...
case ’0’: case ’1’: case ’2’: case ’3’: case ’4’:
case ’5’: case ’6’: case ’7’: case ’8’: case ’9’:
...
return TOK_NUM;
...
case EOF:
return YYEOF;
default:
yyerror ("syntax error: invalid character: %c", c);
return YYerror;
}
}
By default, the value of yylloc is a structure and you need only initialize the members
that are going to be used by the actions. The four members are called first_line, first_
column, last_line and last_column. Note that the use of this feature makes the parser
noticeably slower.
The data type of yylloc has the name YYLTYPE.
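In other words, the default YYLTYPE is essentially the following structure, with exactly the four members listed above:

    typedef struct YYLTYPE
    {
      int first_line;
      int first_column;
      int last_line;
      int last_column;
    } YYLTYPE;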
not for the Yacc parser, for historical reasons, and this is why %define api.pure full
should be preferred over %define api.pure.
When both %locations and %define api.pure full are used, yyerror has the following signature:
void yyerror (YYLTYPE *locp, char const *msg);
The prototypes are only indications of how the code produced by Bison uses yyerror.
Bison-generated code always ignores the returned value, so yyerror can return any type,
including void. Also, yyerror can be a variadic function; that is why the message is always
passed last.
Traditionally yyerror returns an int that is always ignored, but this is purely for
historical reasons, and void is preferable since it more accurately describes the return type
for yyerror.
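For instance, a minimal yyerror matching the location-aware signature above could be (a sketch):

    #include <stdio.h>

    void
    yyerror (YYLTYPE *locp, char const *msg)
    {
      /* Report the message prefixed with the starting line and column. */
      fprintf (stderr, "%d.%d: %s\n", locp->first_line, locp->first_column, msg);
    }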
The variable yynerrs contains the number of syntax errors reported so far. Normally this
variable is global; but if you request a pure parser (see Section 3.7.11 [A Pure (Reentrant)
Parser], page 77) then it is a local variable which only the actions can access.
Use the following types and functions to build the error message.
yypcontext_t [Type]
An opaque type that captures the circumstances of the syntax error.
yysymbol_kind_t [Type]
An enum of all the grammar symbols, tokens and nonterminals. Its enumerators are
forged from the symbol names:
enum yysymbol_kind_t
{
YYSYMBOL_YYEMPTY = -2, /* No symbol. */
YYSYMBOL_YYEOF = 0, /* "end of file" */
YYSYMBOL_YYerror = 1, /* error */
YYSYMBOL_YYUNDEF = 2, /* "invalid token" */
YYSYMBOL_PLUS = 3, /* "+" */
YYSYMBOL_MINUS = 4, /* "-" */
[...]
YYSYMBOL_VAR = 14, /* "variable" */
YYSYMBOL_NEG = 15, /* NEG */
[...]
};
$$ [Variable]
Acts like a variable that contains the semantic value for the grouping made by the
current rule. See Section 3.4.6 [Actions], page 59.
$n [Variable]
Acts like a variable that contains the semantic value for the nth component of the
current rule. See Section 3.4.6 [Actions], page 59.
$<typealt>$ [Variable]
Like $$ but specifies alternative typealt in the union specified by the %union decla-
ration. See Section 3.4.7 [Data Types of Values in Actions], page 60.
$<typealt>n [Variable]
Like $n but specifies alternative typealt in the union specified by the %union decla-
ration. See Section 3.4.7 [Data Types of Values in Actions], page 60.
YYABORT ; [Macro]
Return immediately from yyparse, indicating failure. See Section 4.1 [The Parser
Function yyparse], page 98.
YYACCEPT ; [Macro]
Return immediately from yyparse, indicating success. See Section 4.1 [The Parser
Function yyparse], page 98.
YYBACKUP (token, value); [Macro]
Unshift a token.
If the macro is used when it is not valid, such as when there is a lookahead token
already, then it reports a syntax error with a message ‘cannot back up’ and performs
ordinary error recovery.
In either case, the rest of the action is not executed.
YYEMPTY [Value]
Value stored in yychar when there is no lookahead token.
YYEOF [Value]
Value stored in yychar when the lookahead is the end of the input stream.
YYERROR ; [Macro]
Cause an immediate syntax error. This statement initiates error recovery just as if
the parser itself had detected an error; however, it does not call yyerror, and does
not print any message. If you want to print an error message, call yyerror explicitly
before the ‘YYERROR;’ statement. See Chapter 6 [Error Recovery], page 130.
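For instance (a sketch assuming the default, non-reentrant interface), an action can report a semantic problem itself and then start error recovery:

    exp:
      "number"
    | exp '/' exp
        {
          if ($3 == 0)
            {
              yyerror ("division by zero");
              YYERROR;    /* recover, but do not print a second message */
            }
          else
            $$ = $1 / $3;
        }
    ;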
YYRECOVERING [Macro]
The expression YYRECOVERING () yields 1 when the parser is recovering from a syntax
error, and 0 otherwise. See Chapter 6 [Error Recovery], page 130.
yychar [Variable]
Variable containing either the lookahead token, or YYEOF when the lookahead is the
end of the input stream, or YYEMPTY when no lookahead has been performed so the
next token is not yet known. Do not modify yychar in a deferred semantic action (see
Section 1.5.3 [GLR Semantic Actions], page 22). See Section 5.1 [Lookahead Tokens],
page 111.
yyclearin ; [Macro]
Discard the current lookahead token. This is useful primarily in error rules. Do not
invoke yyclearin in a deferred semantic action (see Section 1.5.3 [GLR Semantic
Actions], page 22). See Chapter 6 [Error Recovery], page 130.
yyerrok ; [Macro]
Resume generating error messages immediately for subsequent syntax errors. This is
useful primarily in error rules. See Chapter 6 [Error Recovery], page 130.
yylloc [Variable]
Variable containing the lookahead token location when yychar is not set to YYEMPTY
or YYEOF. Do not modify yylloc in a deferred semantic action (see Section 1.5.3 [GLR
Semantic Actions], page 22). See Section 3.5.2 [Actions and Locations], page 66.
yylval [Variable]
Variable containing the lookahead token semantic value when yychar is not set
to YYEMPTY or YYEOF. Do not modify yylval in a deferred semantic action (see
Section 1.5.3 [GLR Semantic Actions], page 22). See Section 3.4.6 [Actions], page 59.
@$ [Value]
Acts like a structure variable containing information on the textual location of the
grouping made by the current rule. See Section 3.5 [Tracking Locations], page 66.
@n [Value]
Acts like a structure variable containing information on the textual location of the
nth component of the current rule. See Section 3.5 [Tracking Locations], page 66.
%token
    ’\n’   _("end of line")
  <double>
    NUM    _("number")
  <symrec*>
    FUN    _("function")
    VAR    _("variable")
The remainder of the grammar may freely use either the token symbol (FUN) or its alias
("function"), but not with the internationalization marker (_("function")).
If at least one token alias is internationalized, then the generated parser will use both
N_ and _, which must be defined (see Section “The Programmer’s View” in GNU gettext
utilities). They are used only on string aliases marked for translation. In other words, even
if your catalog features a translation for “function”, then with
%token
<symrec*>
FUN "function"
VAR _("variable")
“function” will appear untranslated in debug traces and error messages.
Unless defined by the user, the end-of-file token, YYEOF, is provided “end of file” as an
alias. It is also internationalized if the user internationalized tokens. To map it to another
string, use:
%token END 0 _("end of input")
term:
’(’ expr ’)’
| term ’!’
| "number"
;
Suppose that the tokens ‘1 + 2’ have been read and shifted; what should be done? If
the following token is ‘)’, then the first three tokens must be reduced to form an expr.
This is the only valid course, because shifting the ‘)’ would produce a sequence of symbols
term ’)’, and no rule allows this.
If the following token is ‘!’, then it must be shifted immediately so that ‘2 !’ can be
reduced to make a term. If instead the parser were to reduce before shifting, ‘1 + 2’ would
become an expr. It would then be impossible to shift the ‘!’ because doing so would produce
on the stack the sequence of symbols expr ’!’. No rule allows that sequence.
The lookahead token is stored in the variable yychar. Its semantic value and location,
if any, are stored in the variables yylval and yylloc. See Section 4.5 [Special Features for
Use in Actions], page 107.
The conflict exists because the grammar as written is ambiguous: either parsing of the
simple nested if-statement is legitimate. The established convention is that these ambiguities
are resolved by attaching the else-clause to the innermost if-statement; this is what Bison
accomplishes by choosing to shift rather than reduce. (It would ideally be cleaner to write an
unambiguous grammar, but that is very hard to do in this case.) This particular ambiguity
was first encountered in the specifications of Algol 60 and is called the “dangling else”
ambiguity.
To assist the grammar author in understanding the nature of each conflict, Bison can be
asked to generate “counterexamples”. In the present case it actually even proves that the
grammar is ambiguous by exhibiting a string with two different parses:
Example: "if" expr "then" "if" expr "then" stmt • "else" stmt
Shift derivation
  if_stmt
  → "if" expr "then" stmt
                      → if_stmt
                        → "if" expr "then" stmt • "else" stmt
Example: "if" expr "then" "if" expr "then" stmt • "else" stmt
Reduce derivation
  if_stmt
  → "if" expr "then" stmt                  "else" stmt
                      → if_stmt
                        → "if" expr "then" stmt •
See Section 8.1 [Generation of Counterexamples], page 135, for more details.
To avoid warnings from Bison about predictable, legitimate shift/reduce conflicts, you
can use the %expect n declaration. There will be no warning as long as the number of
shift/reduce conflicts is exactly n, and Bison will report an error if there is a different
number. See Section 3.7.9 [Suppressing Conflict Warnings], page 76. However, we don’t
recommend the use of %expect (except ‘%expect 0’!), as an equal number of conflicts does
not mean that they are the same. When possible, you should rather use precedence direc-
tives to fix the conflicts explicitly (see Section 5.3.6 [Using Precedence For Non Operators],
page 116).
The definition of if_stmt above is solely to blame for the conflict, but the conflict does
not actually appear without additional rules. Here is a complete Bison grammar file that
actually manifests the conflict:
%%
stmt:
expr
| if_stmt
;
if_stmt:
"if" expr "then" stmt
| "if" expr "then" stmt "else" stmt
;
expr:
"identifier"
;
The %prec modifier declares the precedence of a particular rule by specifying a terminal
symbol whose precedence should be used for that rule. It’s not necessary for that symbol
to appear otherwise in the rule. The modifier’s syntax is:
%prec terminal-symbol
and it is written after the components of the rule. Its effect is to assign the rule the
precedence of terminal-symbol, overriding the precedence that would be deduced for it in
the ordinary way. The altered rule precedence then affects how conflicts involving that rule
are resolved (see Section 5.3 [Operator Precedence], page 114).
Here is how %prec solves the problem of unary minus. First, declare a precedence for a
fictitious terminal symbol named UMINUS. There are no tokens of this type, but the symbol
serves to stand for its precedence:
...
%left ’+’ ’-’
%left ’*’
%left UMINUS
Now the precedence of UMINUS can be used in specific rules:
exp:
...
| exp ’-’ exp
...
| ’-’ exp %prec UMINUS
sequence:
%empty { printf ("empty sequence\n"); }
| maybeword
| sequence word { printf ("added word %s\n", $2); }
;
maybeword:
%empty { printf ("empty maybeword\n"); }
| word { printf ("single word %s\n", $1); }
;
The error is an ambiguity: as counterexample generation would demonstrate (see Section 8.1
[Generation of Counterexamples], page 135), there is more than one way to parse a single
word into a sequence. It could be reduced to a maybeword and then into a sequence via
the second rule. Alternatively, nothing-at-all could be reduced into a sequence via the first
rule, and this could be combined with the word using the third rule for sequence.
There is also more than one way to reduce nothing-at-all into a sequence. This can be
done directly via the first rule, or indirectly via maybeword and then the second rule.
You might think that this is a distinction without a difference, because it does not change
whether any particular input is valid or not. But it does affect which actions are run. One
parsing order runs the second rule’s action; the other runs the first rule’s action and the
third rule’s action. In this example, the output of the program changes.
Bison resolves a reduce/reduce conflict by choosing to use the rule that appears first in
the grammar, but it is very risky to rely on this. Every reduce/reduce conflict must be
studied and usually eliminated. Here is the proper way to define sequence:
sequence:
%empty { printf ("empty sequence\n"); }
| sequence word { printf ("added word %s\n", $2); }
;
Here is another common error that yields a reduce/reduce conflict:
sequence:
%empty
| sequence words
| sequence redirects
;
words:
%empty
| words word
;
redirects:
%empty
| redirects redirect
;
The intention here is to define a sequence which can contain either word or redirect
groupings. The individual definitions of sequence, words and redirects are error-free,
but the three together make a subtle ambiguity: even an empty input can be parsed in
infinitely many ways!
Consider: nothing-at-all could be a words. Or it could be two words in a row, or three,
or any number. It could equally well be a redirects, or two, or any number. Or it could
be a words followed by three redirects and another words. And so on.
Here are two ways to correct these rules. First, to make it a single level of sequence:
sequence:
%empty
| sequence word
| sequence redirect
;
Second, to prevent either a words or a redirects from being empty:
sequence:
%empty
| sequence words
| sequence redirects
;
words:
word
| words word
;
redirects:
redirect
| redirects redirect
;
Yet this proposal introduces another kind of ambiguity! The input ‘word word’ can be
parsed as a single words composed of two ‘word’s, or as two one-word words (and likewise for
redirect/redirects). However this ambiguity is now a shift/reduce conflict, and therefore
it can now be addressed with precedence directives.
To simplify the matter, we will proceed with word and redirect being tokens: "word"
and "redirect".
To prefer the longest words, the conflict between the token "word" and the rule
‘sequence: sequence words’ must be resolved as a shift. To this end, we use the same
techniques as exposed above, see Section 5.3.6 [Using Precedence For Non Operators],
page 116. One solution relies on precedences: use %prec to give a lower precedence to the
rule:
%precedence "word"
%precedence "sequence"
%%
sequence:
%empty
| sequence word %prec "sequence"
| sequence redirect %prec "sequence"
;
words:
word
| words "word"
;
Another solution relies on associativity: provide both the token and the rule with the
same precedence, but make them right-associative:
%right "word" "redirect"
%%
sequence:
%empty
| sequence word %prec "word"
| sequence redirect %prec "redirect"
;
param_spec:
type
| name_list ’:’ type
;
return_spec:
type
| name ’:’ type
;
type: "id";
name: "id";
name_list:
name
| name ’,’ name_list
;
It would seem that this grammar can be parsed with only a single token of lookahead:
when a param_spec is being read, an "id" is a name if a comma or colon follows, or a
type if another "id" follows. In other words, this grammar is LR(1). Yet Bison finds one
reduce/reduce conflict, for which counterexample generation (see Section 8.1 [Generation
of Counterexamples], page 135) would find a nonunifying example.
This is because Bison does not handle all LR(1) grammars by default, for historical
reasons. In this grammar, two contexts, that after an "id" at the beginning of a param_
spec and likewise at the beginning of a return_spec, are similar enough that Bison assumes
they are the same. They appear similar because the same set of rules would be active—the
rule for reducing to a name and that for reducing to a type. Bison is unable to determine
at that stage of processing that the rules would require different lookahead tokens in the
two contexts, so it makes a single parser state for them both. Combining the two contexts
causes a conflict later. In parser terminology, this occurrence means that the grammar is
not LALR(1).
For many practical grammars (specifically those that fall into the non-LR(1) class), the
limitations of LALR(1) result in difficulties beyond just mysterious reduce/reduce conflicts.
The best way to fix all these problems is to select a different parser table construction algo-
rithm. Either IELR(1) or canonical LR(1) would suffice, but the former is more efficient and
easier to debug during development. See Section 5.8.1 [LR Table Construction], page 122,
for details.
If you instead wish to work around LALR(1)’s limitations, you can often fix a mysterious
conflict by identifying the two parser states that are being confused, and adding something
to make them look distinct. In the above example, adding one rule to return_spec as
follows makes the problem go away:
...
return_spec:
type
| name ’:’ type
| "id" "bogus" /* This rule is never used. */
;
This corrects the problem because it introduces the possibility of an additional active
rule in the context after the "id" at the beginning of return_spec. This rule is not active
in the corresponding context in a param_spec, so the two contexts receive distinct parser
states. As long as the token "bogus" is never generated by yylex, the added rule cannot
alter the way actual input is parsed.
In this particular example, there is another way to solve the problem: rewrite the rule for
return_spec to use "id" directly instead of via name. This also causes the two confusing
contexts to have different sets of active rules, because the one for return_spec activates
the altered rule for return_spec rather than the one for name.
param_spec:
type
| name_list ’:’ type
;
return_spec:
type
| "id" ’:’ type
;
For a more detailed exposition of LALR(1) parsers and parser generators, see [DeRemer
1982], page 230.
5.8 Tuning LR
The default behavior of Bison’s LR-based parsers is chosen mostly for historical reasons,
but that behavior is often not robust. For example, in the previous section, we discussed
the mysterious conflicts that can be produced by LALR(1), Bison’s default parser table
construction algorithm. Another example is Bison’s %define parse.error verbose direc-
tive, which instructs the generated parser to produce verbose syntax error messages, which
can sometimes contain incorrect information.
In this section, we explore several modern features of Bison that allow you to tune fun-
damental aspects of the generated LR-based parsers. Some of these features easily eliminate
shortcomings like those mentioned above. Others can be helpful purely for understanding
your parser.
For example, to activate IELR, you might add the following directive to your grammar
file:
%define lr.type ielr
For the example in Section 5.7 [Mysterious Conflicts], page 120, the mysterious conflict
is then eliminated, so there is no need to invest time in comprehending the conflict or
restructuring the grammar to fix it. If, during future development, the grammar evolves
such that all mysterious behavior would have disappeared using just LALR, you need not
fear that continuing to use IELR will result in unnecessarily large parser tables. That
is, IELR generates LALR tables when LALR (using a deterministic parsing algorithm) is
sufficient to support the full language-recognition power of LR. Thus, by enabling IELR
at the start of grammar development, you can safely and completely eliminate the need to
consider LALR’s shortcomings.
While IELR is almost always preferable, there are circumstances where LALR or the
canonical LR parser tables described by Knuth (see [Knuth 1965], page 230) can be useful.
Here we summarize the relative advantages of each parser table construction algorithm
within Bison:
• LALR
There are at least two scenarios where LALR can be worthwhile:
• GLR without static conflict resolution.
When employing GLR parsers (see Section 1.5 [Writing GLR Parsers], page 17),
if you do not resolve any conflicts statically (for example, with %left or
%precedence), then the parser explores all potential parses of any given input. In
this case, the choice of parser table construction algorithm is guaranteed not to
alter the language accepted by the parser. LALR parser tables are the smallest
parser tables Bison can currently construct, so they may then be preferable.
Nevertheless, once you begin to resolve conflicts statically, GLR behaves more like
a deterministic parser in the syntactic contexts where those conflicts appear, and
so either IELR or canonical LR can then be helpful to avoid LALR’s mysterious
behavior.
• Malformed grammars.
Occasionally during development, an especially malformed grammar with a major
recurring flaw may severely impede the IELR or canonical LR parser table con-
struction algorithm. LALR can be a quick way to construct parser tables in order
to investigate such problems while ignoring the more subtle differences from IELR
and canonical LR.
• IELR
IELR (Inadequacy Elimination LR) is a minimal LR algorithm. That is, given any
grammar (LR or non-LR), parsers using IELR or canonical LR parser tables always
accept exactly the same set of sentences. However, like LALR, IELR merges parser
states during parser table construction so that the number of parser states is often an
order of magnitude less than for canonical LR. More importantly, because canonical
LR’s extra parser states may contain duplicate conflicts in the case of non-LR gram-
mars, the number of conflicts for IELR is often an order of magnitude less as well. This
effect can significantly reduce the complexity of developing a grammar.
• Canonical LR
While inefficient, canonical LR parser tables can be an interesting means to explore a
grammar because they possess a property that IELR and LALR tables do not. That
is, if %nonassoc is not used and default reductions are left disabled (see Section 5.8.2
[Default Reductions], page 124), then, for every left context of every canonical LR
state, the set of tokens accepted by that state is guaranteed to be the exact set of
tokens that is syntactically acceptable in that left context. It might then seem that an
advantage of canonical LR parsers in production is that, under the above constraints,
they are guaranteed to detect a syntax error as soon as possible without performing
any unnecessary reductions. However, IELR parsers that use LAC are also able to
achieve this behavior without sacrificing %nonassoc or default reductions. For details
and a few caveats of LAC, see Section 5.8.3 [LAC], page 125.
For a more detailed exposition of the mysterious behavior in LALR parsers and the
benefits of IELR, see [Denny 2008], page 230, and [Denny 2010 November], page 230.
defaulted state. However, the default accept action does not delay any yylex invocation or
syntax error detection because the accept action ends the parse.
For LALR and IELR, Bison enables default reductions in nearly all states by default.
There are only two exceptions. First, states that have a shift action on the error token do
not have default reductions because delayed syntax error detection could then prevent the
error token from ever being shifted in that state. However, parser state merging can cause
the same effect anyway, and LAC fixes it in both cases, so future versions of Bison might
drop this exception when LAC is activated. Second, GLR parsers do not record the default
reduction as the action on a lookahead token for which there is a conflict. The correct
action in this case is to split the parse instead.
To adjust which states have default reductions enabled, use the %define
lr.default-reduction directive.
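For instance (‘accepting’ being one of the accepted values of this variable):

    %define lr.default-reduction accepting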
5.8.3 LAC
Canonical LR, IELR, and LALR can suffer from a couple of problems upon encountering
a syntax error. First, the parser might perform additional parser stack reductions before
discovering the syntax error. Such reductions can perform user semantic actions that are
unexpected because they are based on an invalid token, and they cause error recovery to
begin in a different syntactic context than the one in which the invalid token was encoun-
tered. Second, when verbose error messages are enabled (see Section 4.4 [Error Reporting],
page 104), the expected token list in the syntax error message can both contain invalid
tokens and omit valid tokens.
The culprits for the above problems are %nonassoc, default reductions in inconsistent
states (see Section 5.8.2 [Default Reductions], page 124), and parser state merging. Because
IELR and LALR merge parser states, they suffer the most. Canonical LR can suffer only
if %nonassoc is used or if default reductions are enabled for inconsistent states.
LAC (Lookahead Correction) is a new mechanism within the parsing algorithm that
solves these problems for canonical LR, IELR, and LALR without sacrificing %nonassoc,
default reductions, or state merging. You can enable LAC with the %define parse.lac
directive.
This feature is currently only available for deterministic parsers in C and C++.
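To enable it, for instance:

    %define parse.lac full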
While the LAC algorithm shares techniques that have been recognized in the parser
community for years, for the publication that introduces LAC, see [Denny 2010 May],
page 230.
A Bison GLR parser uses the same basic algorithm for parsing as an ordinary Bison parser, but behaves
differently in cases where there is a shift/reduce conflict that has not been resolved by precedence
rules (see Section 5.3 [Operator Precedence], page 114) or a reduce/reduce conflict. When
a GLR parser encounters such a situation, it effectively splits into several parsers, one
for each possible shift or reduction. These parsers then proceed as usual, consuming tokens
in lock-step. Some of the stacks may encounter other conflicts and split further, with the
result that instead of a sequence of states, a Bison GLR parsing stack is what is in effect a
tree of states.
In effect, each stack represents a guess as to what the proper parse is. Additional
input may indicate that a guess was wrong, in which case the appropriate stack silently
disappears. Otherwise, the semantic actions generated in each stack are saved, rather than
being executed immediately. When a stack disappears, its saved semantic actions never get
executed. When a reduction causes two stacks to become equivalent, their sets of semantic
actions are both saved with the state that results from the reduction. We say that two
stacks are equivalent when they both represent the same sequence of states, and each pair
of corresponding states represents a grammar symbol that produces the same segment of
the input token stream.
Whenever the parser makes a transition from having multiple states to having one, it
reverts to the normal deterministic parsing algorithm, after resolving and executing the
saved-up actions. At this transition, some of the states on the stack will have semantic
values that are sets (actually multisets) of possible actions. The parser tries to pick one of
the actions by first finding one whose rule has the highest dynamic precedence, as set by the
‘%dprec’ declaration. Otherwise, if the alternative actions are not ordered by precedence,
but the same merging function is declared for both rules by the ‘%merge’ declaration,
Bison resolves and evaluates both and then calls the merge function on the result. Otherwise,
it reports an ambiguity.
It is possible to use a data structure for the GLR parsing tree that permits the pro-
cessing of any LR(1) grammar in linear time (in the size of the input), any unambiguous
(not necessarily LR(1)) grammar in quadratic worst-case time, and any general (possibly
ambiguous) context-free grammar in cubic worst-case time. However, Bison currently uses
a simpler data structure that requires time proportional to the length of the input times the
maximum number of stacks required for any prefix of the input. Thus, really ambiguous
or nondeterministic grammars can require exponential time and space to process. Such
badly behaving examples, however, are not generally of practical interest. Usually, nonde-
terminism in a grammar is local—the parser is “in doubt” only for a few tokens at a time.
Therefore, the current data structure should generally be adequate. On LR(1) portions of
a grammar, in particular, it is only slightly slower than with the deterministic LR(1) Bison
parser.
For a more detailed exposition of GLR parsers, see [Scott 2000], page 230.
Because Bison parsers have growing stacks, hitting the upper limit usually results from
using a right recursion instead of a left recursion, see Section 3.3.3 [Recursive Rules], page 55.
By defining the macro YYMAXDEPTH, you can control how deep the parser stack can
become before memory is exhausted. Define the macro with a value that is an integer.
This value is the maximum number of tokens that can be shifted (and not reduced) before
overflow.
The stack space allowed is not necessarily allocated. If you specify a large value for
YYMAXDEPTH, the parser normally allocates a small stack at first, and then makes it bigger by
stages as needed. This increasing allocation happens automatically and silently. Therefore,
you do not need to make YYMAXDEPTH painfully small merely to save space for ordinary
inputs that do not need much stack.
However, do not allow YYMAXDEPTH to be a value so large that arithmetic overflow could
occur when calculating the size of the stack space. Also, do not allow YYMAXDEPTH to be
less than YYINITDEPTH.
The default value of YYMAXDEPTH, if you do not define it, is 10000.
You can control how much stack is allocated initially by defining the macro YYINITDEPTH
to a positive integer. For the deterministic parser in C, this value must be a compile-time
constant unless you are assuming C99 or some other target language or compiler that allows
variable-length arrays. The default is 200.
Do not allow YYINITDEPTH to be greater than YYMAXDEPTH.
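For instance (a sketch; the values are only illustrative, not recommendations), both macros can be defined in the prologue of the grammar file:

    %{
    #define YYINITDEPTH 500      /* initial stack of 500 entries */
    #define YYMAXDEPTH  20000    /* give up beyond 20000 pending tokens */
    %}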
You can generate a deterministic parser containing C++ user code from the default (C)
skeleton, as well as from the C++ skeleton (see Section 10.1 [C++ Parsers],
page 166). However, if you do use the default skeleton and want to allow the parsing stack
to grow, be careful not to use semantic types or location types that require non-trivial copy
constructors. The C skeleton bypasses these constructors when copying data to new, larger
stacks.
6 Error Recovery
It is not usually acceptable to have a program terminate on a syntax error. For example, a
compiler should recover sufficiently to parse the rest of the input file and check it for errors;
a calculator should accept another expression.
In a simple interactive command parser where each input is one line, it may be sufficient
to allow yyparse to return 1 on error and have the caller ignore the rest of the input line
when that happens (and then call yyparse again). But this is inadequate for a compiler,
because it forgets all the syntactic context leading up to the error. A syntax error deep
within a function in the compiler input should not cause the compiler to treat the following
line like the beginning of a source file.
You can define how to recover from a syntax error by writing rules to recognize the
special token error. This is a terminal symbol that is always defined (you need not declare
it) and reserved for error handling. The Bison parser generates an error token whenever
a syntax error happens; if you have provided a rule to recognize this token in the current
context, the parse can continue.
For example:
stmts:
%empty
| stmts ’\n’
| stmts exp ’\n’
| stmts error ’\n’
The fourth rule in this example says that an error followed by a newline makes a valid
addition to any stmts.
What happens if a syntax error occurs in the middle of an exp? The error recovery rule,
interpreted strictly, applies to the precise sequence of a stmts, an error and a newline. If
an error occurs in the middle of an exp, there will probably be some additional tokens and
subexpressions on the stack after the last stmts, and there will be tokens to read before
the next newline. So the rule is not applicable in the ordinary way.
But Bison can force the situation to fit the rule, by discarding part of the semantic
context and part of the input. First it discards states and objects from the stack until
it gets back to a state in which the error token is acceptable. (This means that the
subexpressions already parsed are discarded, back to the last complete stmts.) At this
point the error token can be shifted. Then, if the old lookahead token is not acceptable
to be shifted next, the parser reads tokens and discards them until it finds a token which
is acceptable. In this example, Bison reads and discards input until the next newline so
that the fourth rule can apply. Note that discarded symbols are possible sources of memory
leaks, see Section 3.7.7 [Freeing Discarded Symbols], page 73, for a means to reclaim this
memory.
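For instance (a sketch; the type tag is illustrative), a %destructor directive reclaims the values of such discarded symbols:

    %destructor { free ($$); } <sval>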
The choice of error rules in the grammar is a choice of strategies for error recovery. A
simple and useful strategy is simply to skip the rest of the current input line or current
statement if an error is detected:
stmt: error ’;’ /* On error, skip until ’;’ is read. */
initdcl:
declarator maybeasm ’=’ init
| declarator maybeasm
;
notype_initdcl:
notype_declarator maybeasm ’=’ init
| notype_declarator maybeasm
;
Here initdcl can redeclare a typedef name, but notype_initdcl cannot. The distinction
between declarator and notype_declarator is the same sort of thing.
There is some similarity between this technique and a lexical tie-in (described next), in
that information which alters the lexical analysis is changed during parsing by other parts of
the program. The difference is here the information is global, and is used for other purposes
in the program. A true lexical tie-in has a special-purpose flag controlled by the syntactic
context.
constant:
INTEGER
| STRING
;
Here we assume that yylex looks at the value of hexflag; when it is nonzero, all integers
are parsed in hexadecimal, and tokens starting with letters are parsed as integers if possible.
The declaration of hexflag shown in the prologue of the grammar file is needed to make
it accessible to the actions (see Section 3.1.1 [The prologue], page 46). You must also write
the code in yylex to obey the flag.
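For illustration only (this sketch is not from the manual; INTEGER, hexflag and number_value stand in for what the grammar file and parser header would really provide), the scanner side of such a tie-in could look like:

    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>

    enum { INTEGER = 258 };   /* stand-in for the token kind from the parser header */
    int hexflag;              /* the tie-in flag, set and cleared by parser actions */
    long number_value;        /* stand-in for the token's semantic value */

    /* Read the rest of a number whose first character is 'first',
       honoring hexflag to choose the base. */
    static int
    read_number (int first)
    {
      char buf[64];
      int i = 0;
      int c;
      buf[i++] = (char) first;
      while ((c = getchar ()) != EOF && isalnum (c))
        if (i < 63)
          buf[i++] = (char) c;
      if (c != EOF)
        ungetc (c, stdin);
      buf[i] = '\0';
      number_value = strtol (buf, NULL, hexflag ? 16 : 10);
      return INTEGER;
    }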
Developing a parser can be a challenge, especially if you don’t understand the algorithm
(see Chapter 5 [The Bison Parser Algorithm], page 111). This chapter explains how to
understand and debug a parser.
The most frequent issue users face is solving their conflicts. To fix them, the first step is
understanding how they arise in a given grammar. This is made much easier by automated
generation of counterexamples, covered in the first section (see Section 8.1 [Generation of
Counterexamples], page 135).
In most cases though, looking at the structure of the automaton is still needed. The
following sections explain how to generate and read the detailed structural description of
the automaton. There are several formats available:
− as text, see Section 8.2 [Understanding Your Parser], page 138;
− as a graph, see Section 8.3 [Visualizing Your Parser], page 145;
− or as a markup report that can be turned, for instance, into HTML, see Section 8.4
[Visualizing your parser in multiple formats], page 148.
The last section focuses on the dynamic part of the parser: how to enable and understand
the parser run-time traces (see Section 8.5 [Tracing Your Parser], page 148).
Example: "if" expr "then" "if" expr "then" stmt • "else" stmt
Shift derivation
  if_stmt
  → "if" expr "then" stmt
                      → if_stmt
                        → "if" expr "then" stmt • "else" stmt
Example: "if" expr "then" "if" expr "then" stmt • "else" stmt
Reduce derivation
  if_stmt
  → "if" expr "then" stmt                  "else" stmt
                      → if_stmt
                        → "if" expr "then" stmt •
This shows two different derivations for one single expression, which proves that the
grammar is ambiguous.
%%
sequence:
%empty
| maybeword
| sequence "word"
;
maybeword:
%empty
| "word"
;
Each of these three conflicts, again, proves that the grammar is ambiguous. For instance,
the second conflict (the reduce/reduce one) shows that the grammar accepts the empty
input in two different ways.
Sometimes, the search will not find an example that can be derived in two ways. In these
cases, counterexample generation will provide two examples that are the same up until the
dot. Most notably, this will happen when your grammar requires a stronger parser (more
lookahead, LR instead of LALR). The following example isn’t LR(1):
%token ID
%%
s: a ID
a: expr
expr: %empty | expr ID ’,’
bison reports:
$end (0) 0
’*’ (42) 3
’+’ (43) 1
’-’ (45) 2
’/’ (47) 4
error (256)
NUM <ival> (258) 5
STR <sval> (259)
$accept (9)
on left: 0
exp <ival> (10)
on left: 1 2 3 4 5
on right: 0 1 2 3 4
Bison then proceeds onto the automaton itself, describing each state with its set of items,
also known as dotted rules. Each item is a production rule together with a point (‘.’)
marking the location of the input cursor.
State 0

    0 $accept: • exp $end

    NUM  shift, and go to state 1

    exp  go to state 2
This reads as follows: “state 0 corresponds to being at the very beginning of the parsing,
in the initial rule, right before the start symbol (here, exp). When the parser returns to
this state right after having reduced a rule that produced an exp, the control flow jumps to
state 2. If there is no such transition on a nonterminal symbol, and the lookahead is a NUM,
then this token is shifted onto the parse stack, and the control flow jumps to state 1. Any
other lookahead triggers a syntax error.”
Even though the only active rule in state 0 seems to be rule 0, the report lists NUM as
a lookahead token because NUM can be at the beginning of any rule deriving an exp. By
default Bison reports the so-called core or kernel of the item set, but if you want to see
more detail you can invoke bison with --report=itemset to list the derived items as well:
State 0

    0 $accept: • exp $end
    1 exp: • exp ’+’ exp
    2    | • exp ’-’ exp
    3    | • exp ’*’ exp
    4    | • exp ’/’ exp
    5    | • NUM

    NUM  shift, and go to state 1

    exp  go to state 2
In the state 1. . .
State 1

    5 exp: NUM •

    $default  reduce using rule 5 (exp)
In state 3 (item ‘0 $accept: exp $end •’, action ‘$default accept’), the initial rule is
completed (the start symbol and the end-of-input were read), and the parsing exits
successfully.
The interpretation of states 4 to 7 is straightforward, and is left to the reader.
State 4

    1 exp: exp ’+’ • exp

    NUM  shift, and go to state 1

    exp  go to state 8

State 5

    2 exp: exp ’-’ • exp

    NUM  shift, and go to state 1

    exp  go to state 9

State 6

    3 exp: exp ’*’ • exp

    NUM  shift, and go to state 1

    exp  go to state 10

State 7

    4 exp: exp ’/’ • exp

    NUM  shift, and go to state 1

    exp  go to state 11
As was announced at the beginning of the report, ‘State 8 conflicts: 1 shift/reduce’:
State 8

    1 exp: exp • ’+’ exp
    1    | exp ’+’ exp •
    2    | exp • ’-’ exp
    3    | exp • ’*’ exp
    4    | exp • ’/’ exp

    ’*’  shift, and go to state 6
    ’/’  shift, and go to state 7

    ’/’       [reduce using rule 1 (exp)]
    $default  reduce using rule 1 (exp)
Indeed, there are two actions associated to the lookahead ‘/’: either shifting (and going
to state 7), or reducing rule 1. The conflict means that either the grammar is ambiguous, or
the parser lacks information to make the right decision. Indeed the grammar is ambiguous:
since we did not specify the precedence of ‘/’, the sentence ‘NUM + NUM / NUM’ can be
parsed as ‘NUM + (NUM / NUM)’, which corresponds to shifting ‘/’, or as ‘(NUM + NUM) / NUM’,
which corresponds to reducing rule 1.
Because in deterministic parsing a single decision can be made, Bison arbitrarily chose to
disable the reduction, see Section 5.2 [Shift/Reduce Conflicts], page 112. Discarded actions
are reported between square brackets.
Note that all the previous states had a single possible action: either shifting the next
token and going to the corresponding state, or reducing a single rule. In the other cases,
i.e., when shifting and reducing is possible or when several reductions are possible, the
lookahead is required to select the action. State 8 is one such state: if the lookahead is ‘*’
or ‘/’ then the action is shifting, otherwise the action is reducing rule 1. In other words, the
first two items, corresponding to rule 1, are not eligible when the lookahead token is ‘*’, since
we specified that ‘*’ has higher precedence than ‘+’. More generally, some items are eligible
only with some set of possible lookahead tokens. When run with --report=lookahead,
Bison specifies these lookahead tokens:
State 8
Note however that while ‘NUM + NUM / NUM’ is ambiguous (which results in the conflicts on
‘/’), ‘NUM + NUM * NUM’ is not: the conflict was solved thanks to associativity and precedence
directives. If invoked with --report=solved, Bison includes information about the solved
conflicts in the report:
Conflict between rule 1 and token ’+’ resolved as reduce (%left ’+’).
Conflict between rule 1 and token ’-’ resolved as reduce (%left ’-’).
Conflict between rule 1 and token ’*’ resolved as shift (’+’ < ’*’).
This shows two separate derivations in the grammar for the same exp: ‘e1 + e2 / e3’.
The derivations show how your rules would parse the given example. Here, the first deriva-
tion completes a reduction when seeing ‘/’, causing ‘e1 + e2’ to be grouped as an exp. The
second derivation shifts on ‘/’, resulting in ‘e2 / e3’ being grouped as an exp. Therefore,
it is easy to see that adding %precedence directives would fix this conflict.
State 9
State 10
State 11
If the grammar file is foo.y, the output file is called foo.gv. A DOT file may also be produced via an XML file and XSLT
processing (see Section 8.4 [Visualizing your parser in multiple formats], page 148).
%%
exp: a ";" | b ".";
a: "0";
b: "0";
The graphical output (see Figure 8.1) is very similar to the textual one, and as such it is
more easily understood by making direct comparisons between them. See Chapter 8 [Debugging
Your Parser], page 135, for a detailed analysis of the textual report.
[Figure 8.1 (graph of the automaton for the example grammar): state 0 (‘0 $accept: • exp $end’) with transitions on "0", exp, a and b; state 1 holding the items ‘3 a: "0" • [";"]’ and ‘4 b: "0" • ["."]’; states 2, 3 and 4 (‘0 $accept: exp • $end’, ‘1 exp: a • ";"’, ‘2 exp: b • "."’); the accepting node ‘Acc’ and the reductions R1 and R2.]
When invoked with --report=lookaheads, the lookahead tokens, when needed, are
shown next to the relevant rule between square brackets as a comma separated list. This is
the case in the figure for the representation of reductions, below.
The transitions are represented as directed edges between the current and the target
states.
[Figure: the transition on ‘";"’ from state 3 (‘1 exp: a • ";"’) to state 6, where ‘exp: a ";"’ has been fully read, and the rendering of state 1 (‘3 a: "0" • [";"]’, ‘4 b: "0" • ["."]’) with its two reductions R3 and R4, each labeled with its lookahead.]
When unresolved conflicts are present, because in deterministic parsing a single de-
cision can be made, Bison can arbitrarily choose to disable a reduction, see Section 5.2
[Shift/Reduce Conflicts], page 112. Discarded actions are distinguished by a red filling
color on these nodes, just like how they are reported between square brackets in the verbose
file.
The reduction corresponding to rule number 0 is the acceptance state. It is shown
as a blue diamond, labeled “Acc”.
the %define variable ‘parse.trace’
Add the ‘%define parse.trace’ directive (see Section 3.7.14 [%define Summary],
page 84), or pass the -Dparse.trace option (see Section 9.1.3 [Tuning
the Parser], page 161). This is a Bison extension. Unless POSIX and Yacc
portability matter to you, this is the preferred solution.
the option -t (POSIX Yacc compliant)
the option --debug (Bison extension)
Use the -t option when you run Bison (see Chapter 9 [Invoking Bison],
page 154). With ‘%define api.prefix {c}’, it defines CDEBUG to 1, otherwise
it defines YYDEBUG to 1.
the directive ‘%debug’ (deprecated)
Add the %debug directive (see Section 3.7.13 [Bison Declaration Summary],
page 79). This Bison extension is maintained for backward compatibility with
previous versions of Bison; use %define parse.trace instead.
the macro YYDEBUG (C/C++ only)
Define the macro YYDEBUG to a nonzero value when you compile the parser.
This is compliant with POSIX Yacc. You could use -DYYDEBUG=1 as a compiler
option or you could put ‘#define YYDEBUG 1’ in the prologue of the grammar
file (see Section 3.1.1 [The prologue], page 46).
If the %define variable api.prefix is used (see Section 3.8 [Multiple Parsers
in the Same Program], page 95), for instance ‘%define api.prefix {c}’, then
if CDEBUG is defined, its value controls the tracing feature (enabled if and only
if nonzero); otherwise tracing is enabled if and only if YYDEBUG is nonzero.
We suggest that you always enable the trace option so that debugging is always possible.
In C the trace facility outputs messages with macro calls of the form YYFPRINTF
(stderr, format, args) where format and args are the usual printf format and variadic
arguments. If you define YYDEBUG to a nonzero value but do not define YYFPRINTF,
<stdio.h> is automatically included and YYFPRINTF is defined to fprintf.
Once you have compiled the program with trace facilities, the way to request a trace is
to store a nonzero value in the variable yydebug. You can do this by making the C code do
it (in main, perhaps), or you can alter the value with a C debugger.
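For instance (a sketch; the environment variable name is illustrative), main can turn traces on conditionally:

    #include <stdlib.h>

    extern int yydebug;    /* provided by the generated parser */
    int yyparse (void);

    int
    main (void)
    {
    #if YYDEBUG
      yydebug = getenv ("PARSER_TRACE") != NULL;   /* nonzero enables traces */
    #endif
      return yyparse ();
    }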
Each step taken by the parser when yydebug is nonzero produces a line or two of trace
information, written on stderr. The trace messages tell you these things:
• Each time the parser calls yylex, what kind of token was read.
• Each time a token is shifted, the depth and complete contents of the state stack (see
Section 5.5 [Parser States], page 117).
• Each time a rule is reduced, which rule it is, and the complete contents of the state
stack afterward.
To make sense of this information, it helps to refer to the automaton description file
(see Section 8.2 [Understanding Your Parser], page 138). This file shows the meaning of
each state in terms of positions in various rules, and also what each state will do with
each possible input token. As you read the successive trace messages, you can see that the
parser is functioning according to its specification in the listing file. Eventually you will
arrive at the place where something undesirable happens, and you will see which parts of
the grammar are to blame.
The parser implementation file is a C/C++/Java program and you can use debuggers
on it, but it’s not easy to interpret what it is doing. The parser function is a finite-state
machine interpreter, and aside from the actions it executes the same code over and over.
Only the values of variables show where in the grammar it is working.
If you define YYPRINT, it should take three arguments. The parser will pass a standard
I/O stream, the numeric code for the token kind, and the token value (from yylval).
For yacc.c only. Obsoleted by %printer.
Here is an example of YYPRINT suitable for the multi-function calculator (see Section 2.5.1
[Declarations for mfcalc], page 39):
%{
static void print_token_value (FILE *file, int type, YYSTYPE value);
#define YYPRINT(File, Type, Value) \
print_token_value (File, Type, Value)
%}
static void
print_token_value (FILE *file, yytoken_kind_t kind, YYSTYPE value)
{
if (kind == VAR)
fprintf (file, "%s", value.tptr->name);
else if (kind == NUM)
fprintf (file, "%d", value.val);
}
See Section 8.5.2 [Enabling Debug Traces for mfcalc], page 150, for the proper use of
%printer.
9 Invoking Bison
Bison exits with one of the following statuses:
0 (success)
when there were no errors. Warnings, which are diagnostics about dubious
constructs, do not change the exit status, unless they are turned into errors
(see [-Werror], page 160).
1 (failure) when there were errors. No file was generated (except the reports generated by
--verbose, etc.). In particular, the output files that possibly existed were not
changed.
63 (mismatch)
when bison does not meet the version requirements of the grammar file. See
Section 3.7.1 [Require a Version of Bison], page 70. No file was generated or
changed.
-h
--help Print a summary of the command-line options to Bison and exit.
-V
--version
Print the version number of Bison and exit.
--print-localedir
Print the name of the directory containing locale-dependent data.
--print-datadir
Print the name of the directory containing skeletons, CSS and XSLT.
-u
--update Update the grammar file (remove duplicates, update deprecated directives, etc.)
and exit (i.e., do not generate any of the output files). Leaves a backup of the
original file with a ~ appended. For instance:
$ cat foo.y
%error-verbose
%define parse.error verbose
%%
exp:;
$ bison -u foo.y
foo.y:1.1-14: warning: deprecated directive, use ’%define parse.error verbose’ [-Wdeprecated]
1 | %error-verbose
| ^~~~~~~~~~~~~~
foo.y:2.1-27: warning: %define variable ’parse.error’ redefined [-Wother]
2 | %define parse.error verbose
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
foo.y:1.1-14: previous definition
1 | %error-verbose
| ^~~~~~~~~~~~~~
bison: file ’foo.y’ was updated (backup: ’foo.y~’)
$ cat foo.y
%define parse.error verbose
%%
exp:;
See the documentation of --feature=fixit below for more details.
-f [feature]
--feature[=feature]
Activate miscellaneous features. Feature can be one of:
caret
diagnostics-show-caret
Show caret errors, in a manner similar to GCC’s -fdiagnostics-
show-caret, or Clang’s -fcaret-diagnostics. The location pro-
vided with the message is used to quote the corresponding line of
the source file, underlining the important part of it with carets (‘^’).
Here is an example, using the following file in.y:
%nterm <ival> exp
%%
exp: exp ’+’ exp { $exp = $1 + $2; };
When invoked with -fcaret (or nothing), Bison will report:
in.y:3.20-23: error: ambiguous reference: ’$exp’
3 | exp: exp ’+’ exp { $exp = $1 + $2; };
| ^~~~
in.y:3.1-3: refers to: $exp at $$
3 | exp: exp ’+’ exp { $exp = $1 + $2; };
| ^~~
in.y:3.6-8: refers to: $exp at $1
3 | exp: exp ’+’ exp { $exp = $1 + $2; };
| ^~~
in.y:3.14-16: refers to: $exp at $3
3 | exp: exp ’+’ exp { $exp = $1 + $2; };
| ^~~
in.y:3.32-33: error: $2 of ’exp’ has no declared type
3 | exp: exp ’+’ exp { $exp = $1 + $2; };
| ^~
Whereas, when invoked with -fno-caret, Bison will only report:
in.y:3.20-23: error: ambiguous reference: ’$exp’
in.y:3.1-3: refers to: $exp at $$
in.y:3.6-8: refers to: $exp at $1
in.y:3.14-16: refers to: $exp at $3
in.y:3.32-33: error: $2 of ’exp’ has no declared type
This option is activated by default.
fixit
diagnostics-parseable-fixits
Show machine-readable fixes, in a manner similar to GCC’s and
Clang’s -fdiagnostics-parseable-fixits.
Fix-its are generated for duplicate directives:
$ cat foo.y
%define api.prefix {foo}
%define api.prefix {bar}
%%
exp:;
9.1.2 Diagnostics
Options controlling the diagnostics.
-W [category]
--warnings[=category]
Output warnings falling in category. category can be one of:
conflicts-sr
conflicts-rr
S/R and R/R conflicts. These warnings are enabled by default.
However, if the %expect or %expect-rr directive is specified, an unexpected
number of conflicts is an error, and an expected number of conflicts is not
reported, so -W and -w have no effect on the conflict report.
counterexamples
cex Provide counterexamples for conflicts. See Section 8.1 [Genera-
tion of Counterexamples], page 135. Counterexamples take time to
compute. The option -Wcex should be used by the developer when
working on the grammar; it hardly makes sense to use it in a CI.
dangling-alias
Report string literals that are not bound to a token symbol.
String literals, which allow for better error messages, are (too) lib-
erally accepted by Bison, which might result in silent errors. For
instance
%type <exVal> cond "condition"
does not define “condition” as a string alias to cond—nonterminal
symbols do not have string aliases. It is rather equivalent to
%nterm <exVal> cond
%token <exVal> "condition"
i.e., it gives the ‘"condition"’ token the type exVal.
Also, because string aliases do not need to be defined, typos such
as ‘"baz"’ instead of ‘"bar"’ will be not reported.
The option -Wdangling-alias catches these situations. On
%token BAR "bar"
%type <ival> foo "foo"
%%
foo: "baz" {}
bison -Wdangling-alias reports
warning: string literal not attached to a symbol
| %type <ival> foo "foo"
| ^~~~~
warning: string literal not attached to a symbol
| foo: "baz" {}
| ^~~~~
deprecated
Deprecated constructs whose support will be removed in future
versions of Bison.
empty-rule
Empty rules without %empty. See Section 3.3.2 [Empty Rules],
page 55. Disabled by default, but enabled by uses of %empty, unless
-Wno-empty-rule was specified.
midrule-values
Warn about midrule values that are set but not used within any of
the actions of the parent rule. For example, warn about unused $2
in:
exp: ’1’ { $$ = 1; } ’+’ exp { $$ = $1 + $4; };
Also warn about midrule values that are used but not set. For
example, warn about unset $$ in the midrule action in:
exp: ’1’ { $1 = 1; } ’+’ exp { $$ = $2 + $4; };
These warnings are not enabled by default since they sometimes
prove to be false alarms in existing grammars employing the Yacc
constructs $0 or $-n (where n is some positive integer).
precedence
Useless precedence and associativity directives. Disabled by de-
fault.
Consider for instance the following grammar:
%nonassoc "="
%left "+"
%left "*"
%precedence "("
%%
stmt:
exp
| "var" "=" exp
;
exp:
exp "+" exp
| exp "*" "number"
| "(" exp ")"
| "number"
;
Bison reports that the precedence and associativity of ‘"="’ are useless, that
the associativity of ‘"*"’ is useless (only its precedence matters, so
%precedence suffices), and that the precedence of ‘"("’ is useless. The same
parser would be obtained with just:
%left "+"
%precedence "*"
yacc Incompatibilities with POSIX Yacc.
other All warnings not categorized above. These warnings are enabled
by default.
This category is provided merely for the sake of completeness. Fu-
ture releases of Bison may move warnings from this category to
new, more specific categories.
all All the warnings except counterexamples, dangling-alias and
yacc.
none Turn off all the warnings.
error See -Werror, below.
A category can be turned off by prefixing its name with ‘no-’. For instance,
-Wno-yacc will hide the warnings about POSIX Yacc incompatibilities.
-Werror Turn enabled warnings for every category into errors, unless they are explicitly
disabled by -Wno-error=category.
-Werror=category
Enable warnings falling in category, and treat them as errors.
category is the same as for --warnings, with the exception that it may not be
prefixed with ‘no-’ (see above).
Note that the precedence of the ‘=’ and ‘,’ operators is such that the following
commands are not equivalent, as the first will not treat S/R conflicts as errors.
$ bison -Werror=yacc,conflicts-sr input.y
$ bison -Werror=yacc,error=conflicts-sr input.y
-Wno-error
Do not turn enabled warnings for every category into errors, unless they are
explicitly enabled by -Werror=category.
-Wno-error=category
Deactivate the error treatment for this category. However, the warning itself
won’t be disabled, or enabled, by this option.
--color Equivalent to --color=always.
--color=when
Control whether diagnostics are colorized, depending on when:
always
yes Enable colorized diagnostics.
never
no Disable colorized diagnostics.
auto (default)
tty Diagnostics will be colorized if the output device is a tty, i.e. when
the output goes directly to a text screen or terminal emulator win-
dow.
--style=file
Specifies the CSS style file to use when colorizing. It has an effect only when
the --color option is effective. The bison-default.css file provides a good
example from which to define your own style file. See the documentation of
libtextstyle for more details.
-t
--debug In the parser implementation file, define the macro YYDEBUG to 1 if it is not
already defined, so that the debugging facilities are compiled. See Section 8.5
[Tracing Your Parser], page 148.
-D name[=value]
--define=name[=value]
-F name[=value]
--force-define=name[=value]
Each of these is equivalent to ‘%define name value’ (see Section 3.7.14
[%define Summary], page 84). Note that the delimiters are part of
value: -Dapi.value.type=union, -Dapi.value.type={union} and
-Dapi.value.type="union" correspond to ‘%define api.value.type union’,
‘%define api.value.type {union}’ and ‘%define api.value.type "union"’.
Bison processes multiple definitions for the same name as follows:
• Bison quietly ignores all command-line definitions for name except the last.
• If that command-line definition is specified by a -D or --define, Bison
reports an error for any %define definition for name.
• If that command-line definition is specified by a -F or --force-define
instead, Bison quietly ignores all %define definitions for name.
• Otherwise, Bison reports an error if there are multiple %define definitions
for name.
You should avoid using -F and --force-define in your make files unless you
are confident that it is safe to quietly ignore any conflicting %define that may
be added to the grammar file.
-L language
--language=language
Specify the programming language for the generated parser, as if %language was
specified (see Section 3.7.13 [Bison Declaration Summary], page 79). Currently
supported languages include C, C++, and Java. language is case-insensitive.
--locations
Pretend that %locations was specified. See Section 3.7.13 [Bison Declaration
Summary], page 79.
-p prefix
--name-prefix=prefix
Pretend that %name-prefix "prefix" was specified (see Section 3.7.13 [Bison
Declaration Summary], page 79). Obsoleted by -Dapi.prefix=prefix. See
Section 3.8 [Multiple Parsers in the Same Program], page 95.
-l
--no-lines
Don’t put any #line preprocessor commands in the parser implementation file.
Ordinarily Bison puts them in the parser implementation file so that the C
compiler and debuggers will associate errors with your source file, the grammar
file. This option causes them to associate errors with the parser implementation
file, treating it as an independent source file in its own right.
-S file
--skeleton=file
Specify the skeleton to use, similar to %skeleton (see Section 3.7.13 [Bison
Declaration Summary], page 79).
If file does not contain a /, file is the name of a skeleton file in the Bison
installation directory. If it does, file is an absolute file name or a file name
relative to the current working directory. This is similar to how most shells
resolve commands.
-k
--token-table
Pretend that %token-table was specified. See Section 3.7.13 [Bison Declaration
Summary], page 79.
-y
--yacc Act more like the traditional yacc command. This can cause different diag-
nostics to be generated (it implies -Wyacc), and may change behavior in other
minor ways. Most importantly, imitate Yacc’s output file name conventions, so
that the parser implementation file is called y.tab.c, and the other outputs are
called y.output and y.tab.h. Also, generate #define statements in addition
to an enum to associate token codes with token kind names. Thus, the following
shell script can substitute for Yacc, and the Bison distribution contains such a
script for compatibility with POSIX:
#! /bin/sh
bison -y "$@"
The -y/--yacc option is intended for use with traditional Yacc grammars. This
option only makes sense for the default C skeleton, yacc.c. If your grammar
uses Bison extensions, Bison cannot be Yacc-compatible, even if this option is
specified.
--defines[=file]
Pretend that %defines was specified, i.e., write an extra output file containing
definitions for the token kind names defined in the grammar, as well as a few
other declarations. See Section 3.7.13 [Bison Declaration Summary], page 79.
-d This is the same as --defines except -d does not accept a file argument since
POSIX Yacc requires that -d can be bundled with other short options.
-b prefix
--file-prefix=prefix
Pretend that %file-prefix was specified, i.e., specify prefix to use for all Bison
output file names. See Section 3.7.13 [Bison Declaration Summary], page 79.
-r things
--report=things
Write an extra output file containing verbose description of the comma sepa-
rated list of things among:
state Description of the grammar, conflicts (resolved and unresolved),
and parser’s automaton.
itemset Implies state and augments the description of the automaton with
the full set of items for each state, instead of its core only.
lookahead
Implies state and augments the description of the automaton with
each rule’s lookahead set.
solved Implies state. Explain how conflicts were solved thanks to prece-
dence and associativity directives.
counterexamples
cex Look for counterexamples for the conflicts. See Section 8.1 [Gener-
ation of Counterexamples], page 135. Counterexamples take time
to compute. The option -rcex should be used by the developer
when working on the grammar; it hardly makes sense to use it in
a CI.
all Enable all the items.
none Do not generate the report.
--report-file=file
Specify the file for the verbose description.
-v
--verbose
Pretend that %verbose was specified, i.e., write an extra output file containing
verbose descriptions of the grammar and parser. See Section 3.7.13 [Bison
Declaration Summary], page 79.
-o file
--output=file
Specify the file for the parser implementation file.
The names of the other output files are constructed from file as described under
the -v and -d options.
-g [file]
--graph[=file]
Output a graphical representation of the parser’s automaton computed by Bi-
son, in Graphviz (https://www.graphviz.org/) DOT (https://www.
graphviz.org/doc/info/lang.html) format. file is optional. If omitted
and the grammar file is foo.y, the output file will be foo.gv if the version
required by %require is 3.4 or better, and foo.dot otherwise.
-x [file]
--xml[=file]
Output an XML report of the parser’s automaton computed by Bison. file
is optional. If omitted and the grammar file is foo.y, the output file will be
foo.xml.
-M old=new
--file-prefix-map=old=new
Replace prefix old with new when writing file paths in output files.
For reference, each long option below is listed with its short option and the
corresponding directive, if any:
--update -u
--verbose -v %verbose
--version -V
--warnings[=category] -W [category]
--xml[=file] -x [file]
--yacc -y %yacc
o << ’{’;
const char *sep = "";
for (const auto& s: ss)
{
o << sep << s;
sep = ", ";
}
return o << ’}’;
}
}
You may want to move it into the yy namespace to avoid leaking it in your default names-
pace. We recommend that you keep the actions simple, and move details into auxiliary
functions, as we did with operator<<.
Our list of strings will be built from two types of items: numbers and strings:
%nterm <std::string> item;
%token <std::string> TEXT;
%token <int> NUMBER;
item:
TEXT
| NUMBER { $$ = std::to_string ($1); }
;
In the case of TEXT, the implicit default action applies: $$ = $1.
Our scanner deserves some attention. The traditional interface of yylex is not type safe:
since the token kind and the token value are not correlated, you may return a NUMBER with
a string as semantic value. To avoid this, we use token constructors (see Section 10.1.7.2
[Complete Symbols], page 179). This directive:
%define api.token.constructor
requests that Bison generate the functions make_TEXT and make_NUMBER, but also make_
YYEOF, for the end of input.
Everything is in place for our scanner:
%code
{
namespace yy
{
// Return the next token.
auto yylex () -> parser::symbol_type
{
static int count = 0;
switch (int stage = count++)
{
case 0:
return parser::make_TEXT ("I have three numbers for you.");
case 1: case 2: case 3:
return parser::make_NUMBER (stage);
case 4:
return parser::make_TEXT ("And that’s all!");
default:
return parser::make_YYEOF ();
}
}
}
}
In the epilogue, the third part of a Bison grammar file, we leave simple details: the error
reporting function, and the main function.
%%
namespace yy
{
// Report an error to the user.
auto parser::error (const std::string& msg) -> void
{
std::cerr << msg << ’\n’;
}
}
int main ()
{
yy::parser parse;
return parse ();
}
Compile, and run!
$ bison simple.yy -o simple.cc
$ g++ -std=c++14 simple.cc -o simple
$ ./simple
{I have three numbers for you., 1, 2, 3, And that’s all!}
file.hh (Assuming the extension of the grammar file was ‘.yy’.) The declaration of the
C++ parser class and auxiliary types. By default, this file is not generated (see
Section 3.7.13 [Bison Declaration Summary], page 79).
file.cc The implementation of the C++ parser class. The basename and extension of
these two files (file.hh and file.cc) follow the same rules as with regular C
parsers (see Chapter 9 [Invoking Bison], page 154).
location.hh
Generated when both %defines and %locations are enabled, this file contains
the definition of the classes position and location, used for location tracking.
It is not generated if ‘%define api.location.file none’ is specified, or if user
defined locations are used. See Section 10.1.5 [C++ Location Values], page 172.
position.hh
stack.hh Useless legacy files. To get rid of them, use ‘%require "3.2"’ or newer.
All these files are documented using Doxygen; run doxygen for a complete and accurate
documentation.
Warning: We do not use Boost.Variant, for two reasons. First, it appeared unacceptable
to require Boost on the user’s machine (i.e., the machine on which the generated parser will
be compiled, not the machine on which bison was run). Second, for each possible semantic
value, Boost.Variant not only stores the value, but also a tag specifying its type. But the
parser already “knows” the type of the semantic value, so that would be duplicating the
information.
We do not use C++17’s std::variant either: we want to support all the C++ standards,
and of course std::variant also stores a tag to record the current type.
Therefore we developed light-weight variants whose type tag is external (so they are
really like unions for C++ actually). There are a number of limitations in (the current
implementation of) variants:
• Alignment must be enforced: values should be aligned in memory according to the
most demanding type. Computing the smallest alignment possible requires meta-
programming techniques that are not currently implemented in Bison, and therefore,
since, as far as we know, double is the most demanding type on all platforms, align-
ments are enforced for double whatever types are actually used. This may waste space
in some cases.
• There might be portability issues we are not aware of.
As far as we know, these limitations can be alleviated. All it takes is some time and/or
some talented C++ hacker willing to contribute to Bison.
If the %define variable api.location.type is defined, then these classes will not be
generated, and the user defined type will be used.
However, this file is useful if, for instance, your parser builds an abstract syntax
tree decorated with locations: you may use Bison’s location type independently of
Bison’s parser. You may name the file differently, e.g., ‘%define api.location.file
"include/ast/location.hh"’: this name can have directory components, or even be
absolute. The way the location file is included is controlled by api.location.include.
This way it is possible to have several parsers share the same location file.
For instance, in src/foo/parser.yy, generate the include/ast/loc.hh file:
// src/foo/parser.yy
%locations
%define api.namespace {foo}
%define api.location.file "include/ast/loc.hh"
%define api.location.include {<ast/loc.hh>}
and use it in src/bar/parser.yy:
// src/bar/parser.yy
%locations
%define api.namespace {bar}
%code requires {#include <ast/loc.hh>}
%define api.location.type {bar::location}
Absolute file names are supported; it is safe in your Makefile to pass the flag
-Dapi.location.file=’"$(top_srcdir)/include/ast/loc.hh"’ to bison for
src/foo/parser.yy. The generated file will not have references to this absolute
path, thanks to ‘%define api.location.include {<ast/loc.hh>}’. Adding ‘-I
$(top_srcdir)/include’ to your CPPFLAGS will suffice for the compiler to find
ast/loc.hh.
In programs with several C++ parsers, you may also use the %define variable
api.location.type to share a common set of built-in definitions for position and
location. For instance, one parser master/parser.yy might use:
%defines
%locations
%define api.namespace {master}
to generate the master/position.hh and master/location.hh files, reused by other
parsers as follows:
%define api.location.type {master::location}
%code requires { #include <master/location.hh> }
Use the following types and functions to build the error message.
You still must provide a yyerror function, used for instance to report memory exhaus-
tion.
Note that when using variants, the interface for yylex is the same, but yylval is handled
differently.
Regular union-based code in a Lex scanner typically looks like:
[0-9]+ {
yylval->ival = text_to_int (yytext);
return yy::parser::token::INTEGER;
}
[a-z]+ {
yylval->sval = new std::string (yytext);
return yy::parser::token::IDENTIFIER;
}
Using variants, yylval is already constructed, but it is not initialized. So the code would
look like:
[0-9]+ {
yylval->emplace<int> () = text_to_int (yytext);
return yy::parser::token::INTEGER;
}
[a-z]+ {
yylval->emplace<std::string> () = yytext;
return yy::parser::token::IDENTIFIER;
}
or
[0-9]+ {
yylval->emplace (text_to_int (yytext));
return yy::parser::token::INTEGER;
}
[a-z]+ {
yylval->emplace (yytext);
return yy::parser::token::IDENTIFIER;
}
Correct matching between token kinds and value types is checked via assert; for in-
stance, ‘symbol_type (ID, 42)’ would abort. Named constructors are preferable (see be-
low), as they offer better type safety (for instance ‘make_ID (42)’ would not even compile),
but symbol type constructors may help when token kinds are discovered at run-time, e.g.,
[a-z]+ {
if (auto i = lookup_keyword (yytext))
return yy::parser::symbol_type (i, loc);
else
return yy::parser::make_ID (yytext, loc);
}
Note that it is possible to generate and compile type incorrect code (e.g. ‘symbol_type
(’:’, yytext, loc)’). It will fail at run time, provided the assertions are enabled (i.e.,
-DNDEBUG was not passed to the compiler). Bison supports an alternative that guarantees
that type incorrect code will not even compile. Indeed, it generates named constructors as
follows.
int result;
The main routine is of course calling the parser.
// Run the parser on file F. Return 0 on success.
int parse (const std::string& f);
// The name of the file being parsed.
std::string file;
// Whether to generate parser debug traces.
bool trace_parsing;
To encapsulate the coordination with the Flex scanner, it is useful to have member functions
to open and close the scanning phase.
// Handling the scanner.
void scan_begin ();
void scan_end ();
// Whether to generate scanner debug traces.
bool trace_scanning;
// The token’s location used by the scanner.
yy::location location;
};
#endif // ! DRIVER_HH
The implementation of the driver (driver.cc) is straightforward.
#include "driver.hh"
#include "parser.hh"
driver::driver ()
: trace_parsing (false), trace_scanning (false)
{
variables["one"] = 1;
variables["two"] = 2;
}
The parse member function deserves some attention.
int
driver::parse (const std::string &f)
{
file = f;
location.initialize (&file);
scan_begin ();
yy::parser parse (*this);
parse.set_debug_level (trace_parsing);
int res = parse ();
scan_end ();
return res;
}
User friendly names are provided for each symbol. To avoid name clashes in the generated
files (see Section 10.1.8.4 [Calc++ Scanner], page 185), prefix tokens with TOK_
(see Section 3.7.14 [%define Summary], page 84).
%define api.token.prefix {TOK_}
%token
ASSIGN ":="
MINUS "-"
PLUS "+"
STAR "*"
SLASH "/"
LPAREN "("
RPAREN ")"
;
Since we use variant-based semantic values, %union is not used, and %token, %nterm and
%type expect genuine types, not type tags.
%token <std::string> IDENTIFIER "identifier"
%token <int> NUMBER "number"
%nterm <int> exp
No %destructor is needed to enable memory deallocation during error recovery; the mem-
ory, for strings for instance, will be reclaimed by the regular destructors. All the values are
printed using their operator<< (see Section 3.7.8 [Printing Semantic Values], page 75).
%printer { yyo << $$; } <*>;
The grammar itself is straightforward (see Section 2.4 [Location Tracking Calculator:
ltcalc], page 35).
%%
%start unit;
unit: assignments exp { drv.result = $2; };
assignments:
%empty {}
| assignments assignment {};
assignment:
"identifier" ":=" exp { drv.variables[$1] = $3; };
%%
Finally the error member function reports the errors.
void
yy::parser::error (const location_type& l, const std::string& m)
{
std::cerr << l << ": " << m << ’\n’;
}
%%
%{
// A handy shortcut to the location held by the driver.
yy::location& loc = drv.location;
// Code run each time yylex is called.
loc.step ();
%}
{blank}+ loc.step ();
\n+ loc.lines (yyleng); loc.step ();
You should keep your rules simple, both in the parser and in the scanner. Throwing from
the auxiliary functions is then very handy to report errors.
yy::parser::symbol_type
make_NUMBER (const std::string &s, const yy::parser::location_type& loc)
{
errno = 0;
long n = strtol (s.c_str(), NULL, 10);
if (! (INT_MIN <= n && n <= INT_MAX && errno != ERANGE))
throw yy::parser::syntax_error (loc, "integer is out of range: " + s);
return yy::parser::make_NUMBER ((int) n, loc);
}
void
driver::scan_begin ()
{
yy_flex_debug = trace_scanning;
if (file.empty () || file == "-")
yyin = stdin;
else if (!(yyin = fopen (file.c_str (), "r")))
{
std::cerr << "cannot open " << file << ": " << strerror (errno) << ’\n’;
exit (EXIT_FAILURE);
}
}
void
driver::scan_end ()
{
fclose (yyin);
}
int
main (int argc, char *argv[])
{
int res = 0;
driver drv;
for (int i = 1; i < argc; ++i)
if (argv[i] == std::string ("-p"))
drv.trace_parsing = true;
else if (argv[i] == std::string ("-s"))
drv.trace_scanning = true;
else if (!drv.parse (argv[i]))
std::cout << drv.result << ’\n’;
else
res = 1;
return res;
}
When generating a Java parser, ‘bison basename.y’ will create a single Java source
file named basename.java containing the parser implementation. Using a grammar file
without a .y suffix is currently broken. The basename of the parser implementation file
can be changed by the %file-prefix directive or the -b/--file-prefix option. The
entire parser implementation file name can be changed by the %output directive or the
-o/--output option. The parser implementation file contains a single class for the parser.
You can create documentation for generated parsers using Javadoc.
Contrary to C parsers, Java parsers do not use global variables; the state of the parser is
always local to an instance of the parser class. Therefore, all Java parsers are “pure”, and
the %define api.pure directive does nothing when used in Java.
GLR parsers are currently unsupported in Java. Do not use the glr-parser directive.
No header file can be generated for Java parsers. Do not use the %defines directive or
the -d/--defines options.
Currently, support for tracing is always compiled in. Thus the ‘%define parse.trace’
and ‘%token-table’ directives and the -t/--debug and -k/--token-table options have
no effect. This may change in the future to eliminate unused code in the generated parser,
so use ‘%define parse.trace’ explicitly if needed. Also, in the future the %token-table
directive might enable a public interface to access the token names and codes.
Getting a “code too large” error from the Java compiler means the code hit the 64KB
bytecode per method limitation of the Java class file. Try reducing the amount of code in
actions and static initializers; otherwise, report a bug so that the parser skeleton will be
improved.
Never put more than argc elements into argv, and on success return the number of
tokens stored in argv. If there are more expected tokens than argc, fill argv up to
argc and return 0. If there are no expected tokens, also return 0, but set argv[0] to
null.
If argv is null, return the size needed to store all the possible values, which is always
less than YYNTOKENS.
cast to the declared subtype because casts are not allowed on the left-hand side of
Java assignments. Use an explicit Java cast if the correct subtype is needed. See
Section 10.2.2 [Java Semantic Values], page 188.
$<typealt>$ [Variable]
Same as $$ since Java always allows assigning to the base type. Perhaps we should
use this and $<>$ for the value and $$ for setting the value but there is currently no
easy way to distinguish these constructs. See Section 10.2.2 [Java Semantic Values],
page 188.
@n [Variable]
The location information of the nth component of the current rule. This may not be
assigned to. See Section 10.2.3 [Java Location Values], page 189.
@$ [Variable]
The location information of the grouping made by the current rule. See Section 10.2.3
[Java Location Values], page 189.
return YYABORT ; [Statement]
Return immediately from the parser, indicating failure. See Section 10.2.4 [Java
Parser Interface], page 189.
return YYACCEPT ; [Statement]
Return immediately from the parser, indicating success. See Section 10.2.4 [Java
Parser Interface], page 189.
return YYERROR ; [Statement]
Start error recovery (without printing an error message). See Chapter 6 [Error Re-
covery], page 130.
boolean recovering () [Function]
Return whether error recovery is being done. In this state, the parser reads tokens
until it reaches a known state, and then restarts normal operation. See Chapter 6
[Error Recovery], page 130.
void yyerror (String msg) [Function]
void yyerror (Position loc, String msg) [Function]
void yyerror (Location loc, String msg) [Function]
Print an error message using the yyerror method of the scanner instance in use. The
Location and Position parameters are available only if location tracking is active.
The value returned by the push_parse method is one of the following four constants:
YYABORT, YYACCEPT, YYERROR, or YYPUSH_MORE. This new value, YYPUSH_MORE, may be
returned if more input is required to finish parsing the grammar.
If api.push-pull is declared as both, then the generated parser class will also implement
the parse method. This method’s body is a loop that repeatedly invokes the scanner and
then passes the values obtained from the scanner to the push_parse method.
There is one additional complication. Technically, the push parser does not need to
know about the scanner (i.e. an object implementing the YYParser.Lexer interface), but
it does need access to the yyerror method. Currently, the yyerror method is defined in
the YYParser.Lexer interface. Hence, an implementation of that interface is still required
in order to provide an implementation of yyerror. The current approach (and subject
to change) is to require the YYParser constructor to be given an object implementing the
YYParser.Lexer interface. This object need only implement the yyerror method; the other
methods can be stubbed since they will never be invoked. The simplest way to do this is to
add a trivial scanner implementation to your grammar file using whatever implementation
of yyerror is desired. The following code sample shows a simple way to accomplish this.
%code lexer
{
public Object getLVal () {return null;}
public int yylex () {return 0;}
public void yyerror (String s) {System.err.println(s);}
}
• Java lacks a preprocessor, so the YYERROR, YYACCEPT, YYABORT symbols (see Appendix A
[Bison Symbols], page 209) cannot obviously be macros. Instead, they should be pre-
ceded by return when they appear in an action. The actual definition of these symbols
is opaque to the Bison grammar, and it might change in the future. The only meaning-
ful operation that you can do, is to return them. See Section 10.2.7 [Special Features
for Use in Java Actions], page 193.
Note that of these three symbols, only YYACCEPT and YYABORT will cause a return from
the yyparse method.1
• Java lacks unions, so %union has no effect. Instead, semantic values have a common
base type: Object or as specified by ‘%define api.value.type’. Angle brackets on
%token, %type, $n and $$ specify subtypes rather than fields of a union. The type of
$$, even with angle brackets, is the base type since Java casts are not allowed on the
left-hand side of assignments. Also, $n and @n are not allowed on the left-hand side of
assignments. See Section 10.2.2 [Java Semantic Values], page 188, and Section 10.2.7
[Special Features for Use in Java Actions], page 193.
• The prologue declarations have a different meaning than in C/C++ code.
%code imports
blocks are placed at the beginning of the Java source code. They may
include copyright notices. For package declarations, use ‘%define
api.package’ instead.
unqualified %code
blocks are placed inside the parser class.
%code lexer
blocks, if specified, should include the implementation of the scanner. If
there is no such block, the scanner can be any class that implements the ap-
propriate interface (see Section 10.2.6 [Java Scanner Interface], page 192).
Other %code blocks are not supported in Java parsers. In particular, %{ ... %} blocks
should not be used and may give an error in future versions of Bison.
The epilogue has the same meaning as in C/C++ code and it can be used to define
other classes used by the parser outside the parser class.
1 Java parsers include the actions in a method separate from yyparse in order to have an intuitive
syntax that corresponds to these C macros.
%% code . . . [Directive]
Code (after the second %%) appended to the end of the file, outside the parser class.
See Section 10.2.9 [Differences between C/C++ and Java Grammars], page 195.
%{ code . . . %} [Directive]
Not supported. Use %code imports instead. See Section 10.2.9 [Differences between
C/C++ and Java Grammars], page 195.
11.2 yacchack
One of the deficiencies of original Yacc was its inability to produce reentrant parsers. This
was first remedied by a set of drop-in modifications called “yacchack”, published by Eric
S. Raymond on USENET around 1983. This code was quickly forgotten when zoo and
Berkeley Yacc became available a few years later.
11.4 Bison
Robert Corbett actually wrote two (closely related) LALR parsers in 1985, both using the
DeRemer/Penello techniques. One was “zoo”, the other was “Byson”. In 1987 Richard
Stallman began working on Byson; the name changed to Bison and the interface became
Yacc-compatible.
The main visible difference between Yacc and Byson/Bison at the time of Byson’s first
release was that Byson supported the @n construct (giving access to the starting and ending
line number and character number associated with any of the symbols in the current rule).
There was also the command ‘%expect n’ which said not to mention the conflicts if
there are n shift/reduce conflicts and no reduce/reduce conflicts. In more recent versions
of Bison, %expect and its %expect-rr variant for reduce/reduce conflicts can be applied to
individual rules.
Later versions of Bison added many more new features.
Bison error reporting has been improved in various ways. Notably, ancestral Yacc and
Byson did not have carets in error messages.
Compared to Yacc, Bison uses a faster but less space-efficient encoding for the parse tables
(see [Corbett 1984], page 230), and more modern techniques for generating the lookahead
sets (see [DeRemer 1982], page 230). This approach has been the standard one since then.
(It has also been plausibly alleged that the differences in the algorithms stem mainly from the
horrible kludges that Johnson had to perpetrate to make the original Yacc fit in a PDP-11.)
Named references, semantic predicates, %locations, %glr-parser, %printer, %destructor,
%parse-param, %lex-param, dumps to DOT and XSLT, LAC, and IELR(1) generation are
all new in Bison.
Bison also has many features to support C++ that were not present in the ancestral Yacc
or Byson.
By 1995, Bison had obsolesced all previous Yacc variants and C-generating workalikes.
Several questions about Bison come up occasionally. Here some of them are addressed.
This question is already addressed elsewhere; see Section 3.3.3 [Recursive Rules], page 55.
I invoke yyparse several times, and on correct input it works properly; but
when a parse error is found, all the other calls fail too. How can I reset the
error flag of yyparse?
or
My parser includes support for an ‘#include’-like feature, in which case I run
yyparse from yyparse. This fails although I did specify ‘%define api.pure full’.
These problems typically come not from Bison itself, but from Lex-generated scanners.
Because these scanners use large buffers for speed, they might not notice a change of input
file. As a demonstration, consider the following source file, first-line.l:
%{
#include <stdio.h>
#include <stdlib.h>
%}
%%
.*\n ECHO; return 1;
%%
int
yyparse (char const *file)
{
yyin = fopen (file, "r");
if (!yyin)
{
perror ("fopen");
exit (EXIT_FAILURE);
}
  /* One token only.  */
  yylex ();
  if (fclose (yyin) != 0)
    {
      perror ("fclose");
      exit (EXIT_FAILURE);
    }
  return 0;
}
int
main (void)
{
yyparse ("input");
yyparse ("input");
return 0;
}
If the file input contains
input:1: Hello,
input:2: World!
then instead of getting the first line twice, you get:
$ flex -ofirst-line.c first-line.l
$ gcc -ofirst-line first-line.c -ll
$ ./first-line
input:1: Hello,
input:2: World!
Therefore, whenever you change yyin, you must tell the Lex-generated scanner to discard
its current buffer and switch to the new one. This depends upon your implementation of
Lex; see its documentation for more. For Flex, it suffices to call ‘YY_FLUSH_BUFFER’ after
each change to yyin. If your Flex-generated scanner needs to read from several input
streams to handle features like include files, you might consider using Flex functions like
‘yy_switch_to_buffer’ that manipulate multiple input buffers.
If your Flex-generated scanner uses start conditions (see Section “Start conditions” in
The Flex Manual), you might also want to reset the scanner’s state, i.e., go back to the
initial start condition, through a call to ‘BEGIN (0)’.
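For instance, with the first-line.l example above, a minimal sketch of such a reset
(Flex-specific: YY_FLUSH_BUFFER is only usable inside the generated scanner) is to discard
the buffer right after reopening yyin, e.g. at the top of its yyparse:
  yyin = fopen (file, "r");
  if (!yyin)
    {
      perror ("fopen");
      exit (EXIT_FAILURE);
    }
  /* Discard whatever the previous parse left in the scanner's buffer.  */
  YY_FLUSH_BUFFER;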
%{
#include <stdio.h>
char *yylval = NULL;
%}
%%
.* yylval = yytext; return 1;
\n continue;
%%
int
main ()
{
/* Similar to using $1, $2 in a Bison action. */
char *fst = (yylex (), yylval);
char *snd = (yylex (), yylval);
printf ("\"%s\", \"%s\"\n", fst, snd);
return 0;
}
If you compile and run this code, you get:
$ flex -osplit-lines.c split-lines.l
$ gcc -osplit-lines split-lines.c -ll
$ printf ’one\ntwo\n’ | ./split-lines
"one
two", "two"
This is because yytext is a buffer provided for reading in the action, but if you want to keep
it, you have to duplicate it (e.g., using strdup). Note that the output may depend on how
your implementation of Lex handles yytext. For instance, when given the Lex compatibility
option -l (which triggers the option ‘%array’) Flex generates a different behavior:
$ flex -l -osplit-lines.c split-lines.l
$ gcc -osplit-lines split-lines.c -ll
$ printf ’one\ntwo\n’ | ./split-lines
"two", "two"
This topic is way beyond the scope of this manual, and the reader is invited to consult
the dedicated literature.
machine, to his home directory, and have it work correctly (including i18n). So many users
need to go through configure; make; make install with all its dependencies, options,
and hurdles.
Red Hat, Debian, and similar package systems solve the “ease of installation” problem,
but they hardwire path names, usually to /usr or /usr/local. This means that users need
root privileges to install a binary package, and prevents installing two different versions of
the same binary package.
A relocatable program can be moved or copied to a different location on the file system.
It is possible to make symlinks to the installed and moved programs, and invoke them
through the symlink. It is possible to do the same thing with a hard link only if the hard
link file is in the same directory as the real program.
To configure a program to be relocatable, add --enable-relocatable to the configure
command line.
On some OSes the executables remember the location of shared libraries and prefer them
over any other search path. Therefore, such an executable will look for its shared libraries
first in the original installation directory and only then in the current installation directory.
Thus, for reliability, it is best to also give a --prefix option pointing to a directory that
does not exist now and which never will be created, e.g. --prefix=/nonexistent. You may
use DESTDIR=dest-dir on the make command line to avoid installing into that directory.
We do not recommend using a prefix writable by unprivileged users (e.g. /tmp/inst$$)
because such a directory can be recreated by an unprivileged user after the original directory
has been removed. We also do not recommend prefixes that might be behind an automounter
(e.g. $HOME/inst$$) because of the performance impact of directory searching.
Here’s a sample installation run that takes into account all these recommendations:
./configure --enable-relocatable --prefix=/nonexistent
make
make install DESTDIR=/tmp/inst$$
Installation with --enable-relocatable will not work for setuid or setgid executables,
because such executables search only system library paths for security reasons. Also, in-
stallation with --enable-relocatable might not work on OpenBSD, when the package
contains shared libraries and libtool versions 1.5.xx are used.
The runtime penalty and size penalty are negligible on GNU/Linux (just one system call
more when an executable is launched), and small on other systems (the wrapper program
just sets an environment variable and executes the real program).
Except for GLR parsers (which require C99), the C code that Bison generates requires
only C89 or later. However, Bison itself requires common C99 features such as declarations
after statements. Bison’s configure script attempts to enable C99 (or later) support on
compilers that default to pre-C99. If your compiler lacks these C99 features entirely, GCC
may well be a better choice; or you can try upgrading to your compiler’s latest version.
/* . . . */ [Construct]
// . . . [Construct]
Comments, as in C/C++.
: [Delimiter]
Separates a rule’s result from its components. See Section 3.3 [Grammar Rules],
page 54.
; [Delimiter]
Terminates a rule. See Section 3.3 [Grammar Rules], page 54.
| [Delimiter]
Separates alternate rules for the same result nonterminal. See Section 3.3 [Grammar
Rules], page 54.
<*> [Directive]
Used to define a default tagged %destructor or default tagged %printer.
See Section 3.7.7 [Freeing Discarded Symbols], page 73.
<> [Directive]
Used to define a default tagless %destructor or default tagless %printer.
See Section 3.7.7 [Freeing Discarded Symbols], page 73.
$accept [Symbol]
The predefined nonterminal whose only rule is ‘$accept: start $end’, where start is
the start symbol. See Section 3.7.10 [The Start-Symbol], page 77. It cannot be used
in the grammar.
%code {code} [Directive]
%code qualifier {code} [Directive]
Insert code verbatim into the output parser source at the default location or at the
location specified by qualifier. See Section 3.7.15 [%code Summary], page 94.
%debug [Directive]
Equip the parser for debugging. See Section 3.7.13 [Bison Declaration Summary],
page 79.
%define variable [Directive]
%define variable value [Directive]
%define variable {value} [Directive]
%define variable "value" [Directive]
Define a variable to adjust Bison’s behavior. See Section 3.7.14 [%define Summary],
page 84.
%defines [Directive]
Bison declaration to create a parser header file, which is usually meant for the scanner.
See Section 3.7.13 [Bison Declaration Summary], page 79.
%defines defines-file [Directive]
Same as above, but save in the file defines-file. See Section 3.7.13 [Bison Declaration
Summary], page 79.
%destructor [Directive]
Specify how the parser should reclaim the memory associated to discarded symbols.
See Section 3.7.7 [Freeing Discarded Symbols], page 73.
%dprec [Directive]
Bison declaration to assign a precedence to a rule that is used at parse time to resolve
reduce/reduce conflicts. See Section 1.5 [Writing GLR Parsers], page 17.
%empty [Directive]
Bison declaration to make explicit that a rule has an empty right-hand side.
See Section 3.3.2 [Empty Rules], page 55.
$end [Symbol]
The predefined token marking the end of the token stream. It cannot be used in the
grammar.
error [Symbol]
A token name reserved for error recovery. This token may be used in grammar rules
so as to allow the Bison parser to recognize an error in the grammar without halting
the process. In effect, a sentence containing an error may be recognized as valid.
On a syntax error, the token error becomes the current lookahead token. Actions
corresponding to error are then executed, and the lookahead token is reset to the
token that originally caused the violation. See Chapter 6 [Error Recovery], page 130.
%error-verbose [Directive]
An obsolete directive standing for ‘%define parse.error verbose’.
%glr-parser [Directive]
Bison declaration to produce a GLR parser. See Section 1.5 [Writing GLR Parsers],
page 17.
%initial-action [Directive]
Run user code before parsing. See Section 3.7.6 [Performing Actions before Parsing],
page 73.
%language [Directive]
Specify the programming language for the generated parser. See Section 3.7.13 [Bison
Declaration Summary], page 79.
%left [Directive]
Bison declaration to assign precedence and left associativity to token(s). See
Section 3.7.3 [Operator Precedence], page 71.
%merge [Directive]
Bison declaration to assign a merging function to a rule. If there is a reduce/reduce
conflict with a rule having the same merging function, the function is applied to the
two semantic values to get a single result. See Section 1.5 [Writing GLR Parsers],
page 17.
%name-prefix "prefix" [Directive]
Obsoleted by the %define variable api.prefix (see Section 3.8 [Multiple Parsers in
the Same Program], page 95).
Rename the external symbols (variables and functions) used in the parser so that
they start with prefix instead of ‘yy’. Contrary to api.prefix, this does not rename types
and macros.
The precise list of symbols renamed in C parsers is yyparse, yylex, yyerror,
yynerrs, yylval, yychar, yydebug, and (if locations are used) yylloc. If you
use a push parser, yypush_parse, yypull_parse, yypstate, yypstate_new and
yypstate_delete will also be renamed. For example, if you use ‘%name-prefix
"c_"’, the names become c_parse, c_lex, and so on. For C++ parsers, see the
%define api.namespace documentation in this section.
%no-lines [Directive]
Bison declaration to avoid generating #line directives in the parser implementation
file. See Section 3.7.13 [Bison Declaration Summary], page 79.
%nonassoc [Directive]
Bison declaration to assign precedence and nonassociativity to token(s). See
Section 3.7.3 [Operator Precedence], page 71.
%nterm [Directive]
Bison declaration to declare nonterminals. See Section 3.7.4 [Nonterminal Symbols],
page 72.
%output "file" [Directive]
Bison declaration to set the name of the parser implementation file. See Section 3.7.13
[Bison Declaration Summary], page 79.
%param {argument-declaration} . . . [Directive]
Bison declaration to specify additional arguments that both yylex and yyparse
should accept. See Section 4.1 [The Parser Function yyparse], page 98.
%parse-param {argument-declaration} . . . [Directive]
Bison declaration to specify additional arguments that yyparse should accept. See
Section 4.1 [The Parser Function yyparse], page 98.
%prec [Directive]
Bison declaration to assign a precedence to a specific rule. See Section 5.4 [Context-
Dependent Precedence], page 116.
%precedence [Directive]
Bison declaration to assign precedence to token(s), but no associativity. See
Section 3.7.3 [Operator Precedence], page 71.
%pure-parser [Directive]
Deprecated version of ‘%define api.pure’ (see Section 3.7.14 [%define Summary],
page 84), for which Bison is more careful to warn about unreasonable usage.
%require "version" [Directive]
Require version version or higher of Bison. See Section 3.7.1 [Require a Version of
Bison], page 70.
%right [Directive]
Bison declaration to assign precedence and right associativity to token(s). See
Section 3.7.3 [Operator Precedence], page 71.
%skeleton [Directive]
Specify the skeleton to use; usually for development. See Section 3.7.13 [Bison Dec-
laration Summary], page 79.
%start [Directive]
Bison declaration to specify the start symbol. See Section 3.7.10 [The Start-Symbol],
page 77.
%token [Directive]
Bison declaration to declare token(s) without specifying precedence. See Section 3.7.2
[Token Kind Names], page 70.
%token-table [Directive]
Bison declaration to include a token name table in the parser implementation file.
See Section 3.7.13 [Bison Declaration Summary], page 79.
%type [Directive]
Bison declaration to declare symbol value types. See Section 3.7.4 [Nonterminal
Symbols], page 72.
$undefined [Symbol]
The predefined token onto which all undefined values returned by yylex are mapped.
It cannot be used in the grammar; rather, use error.
%union [Directive]
Bison declaration to specify several possible data types for semantic values. See
Section 3.4.4 [The Union Declaration], page 58.
YYABORT [Macro]
Macro to pretend that an unrecoverable syntax error has occurred, by making yyparse
return 1 immediately. The error reporting function yyerror is not called. See
Section 4.1 [The Parser Function yyparse], page 98.
For Java parsers, this functionality is invoked using return YYABORT; instead.
YYACCEPT [Macro]
Macro to pretend that a complete utterance of the language has been read, by mak-
ing yyparse return 0 immediately. See Section 4.1 [The Parser Function yyparse],
page 98.
For Java parsers, this functionality is invoked using return YYACCEPT; instead.
YYBACKUP [Macro]
Macro to discard a value from the parser stack and fake a lookahead token. See
Section 4.5 [Special Features for Use in Actions], page 107.
YYBISON [Macro]
The version of Bison as an integer, for instance 30704 for version 3.7.4. Defined in
yacc.c only. Before version 3.7.4, YYBISON was defined to 1.
yychar [Variable]
External integer variable that contains the integer value of the lookahead token. (In
a pure parser, it is a local variable within yyparse.) Error-recovery rule actions may
examine this variable. See Section 4.5 [Special Features for Use in Actions], page 107.
yyclearin [Variable]
Macro used in error-recovery rule actions. It clears the previous lookahead token. See
Chapter 6 [Error Recovery], page 130.
YYDEBUG [Macro]
Macro to define to equip the parser with tracing code. See Section 8.5 [Tracing Your
Parser], page 148.
yydebug [Variable]
External integer variable set to zero by default. If yydebug is given a nonzero value,
the parser will output information on input symbols and parser action. See Section 8.5
[Tracing Your Parser], page 148.
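For example, a minimal sketch of turning traces on at run time (this assumes the parser
was compiled with YYDEBUG nonzero, e.g. via -t or ‘%define parse.trace’, and that yyparse
and yydebug are declared, typically by including the generated header):
  int
  main (void)
  {
    yydebug = 1;   /* emit shift/reduce traces on stderr */
    return yyparse ();
  }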
YYEMPTY [Value]
The pseudo token kind when there is no lookahead token.
YYEOF [Value]
The token kind denoting the end of the input stream.
yyerrok [Macro]
Macro to cause the parser to recover immediately to its normal mode after a syntax error.
See Chapter 6 [Error Recovery], page 130.
YYERROR [Macro]
Cause an immediate syntax error. This statement initiates error recovery just as if
the parser itself had detected an error; however, it does not call yyerror, and does
not print any message. If you want to print an error message, call yyerror explicitly
before the ‘YYERROR;’ statement. See Chapter 6 [Error Recovery], page 130.
For Java parsers, this functionality is invoked using return YYERROR; instead.
yyerror [Function]
User-supplied function to be called by yyparse on error. See Section 4.4.1 [The Error
Reporting Function yyerror], page 104.
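A minimal sketch of such a function for a plain (non-reentrant) C parser:
  #include <stdio.h>

  void
  yyerror (char const *msg)
  {
    fprintf (stderr, "%s\n", msg);
  }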
YYFPRINTF [Macro]
Macro used to output run-time traces in C. See Section 8.5.1 [Enabling Traces],
page 148.
YYINITDEPTH [Macro]
Macro for specifying the initial size of the parser stack. See Section 5.10 [Memory
Management, and How to Avoid Memory Exhaustion], page 128.
yylex [Function]
User-supplied lexical analyzer function, called with no arguments to get the next
token. See Section 4.3 [The Lexical Analyzer Function yylex], page 100.
yylloc [Variable]
External variable in which yylex should place the line and column numbers associated
with a token. (In a pure parser, it is a local variable within yyparse, and its address
is passed to yylex.) You can ignore this variable if you don’t use the ‘@’ feature
in the grammar actions. See Section 4.3.5 [Textual Locations of Tokens], page 102.
In semantic actions, it stores the location of the lookahead token. See Section 3.5.2
[Actions and Locations], page 66.
YYLTYPE [Type]
Data type of yylloc. By default in C, a structure with four members (start/end
line/column). See Section 3.5.1 [Data Type of Locations], page 66.
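By default the structure is equivalent to the following C definition:
  typedef struct YYLTYPE
  {
    int first_line;
    int first_column;
    int last_line;
    int last_column;
  } YYLTYPE;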
yylval [Variable]
External variable in which yylex should place the semantic value associated with
a token. (In a pure parser, it is a local variable within yyparse, and its address
is passed to yylex.) See Section 4.3.4 [Semantic Values of Tokens], page 102. In
semantic actions, it stores the semantic value of the lookahead token. See Section 3.4.6
[Actions], page 59.
YYMAXDEPTH [Macro]
Macro for specifying the maximum size of the parser stack. See Section 5.10 [Memory
Management, and How to Avoid Memory Exhaustion], page 128.
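For example, a sketch of overriding both stack limits (the values are arbitrary; the
definitions go in the grammar prologue or on the compiler command line):
  /* Start with a larger stack and cap its growth.  */
  #define YYINITDEPTH 1000
  #define YYMAXDEPTH  100000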
yynerrs [Variable]
Global variable which Bison increments each time it reports a syntax error. (In a pure
parser, it is a local variable within yyparse. In a pure push parser, it is a member of
yypstate.) See Section 4.4.1 [The Error Reporting Function yyerror], page 104.
yyparse [Function]
The parser function produced by Bison; call this function to start parsing. See
Section 4.1 [The Parser Function yyparse], page 98.
YYPRINT [Macro]
Macro used to output token semantic values. For yacc.c only. Deprecated, use
%printer instead (see Section 3.7.8 [Printing Semantic Values], page 75). See
Section 8.5.3 [The YYPRINT Macro], page 152.
yypstate_delete [Function]
The function to delete a parser instance, produced by Bison in push mode; call this
function to delete the memory associated with a parser. See [yypstate_delete],
page 99. Does nothing when called with a null pointer.
yypstate_new [Function]
The function to create a parser instance, produced by Bison in push mode; call this
function to create a new parser. See [yypstate_new], page 99.
yypull_parse [Function]
The parser function produced by Bison in push mode; call this function to parse the
rest of the input stream. See [yypull_parse], page 100.
yypush_parse [Function]
The parser function produced by Bison in push mode; call this function to parse a
single token. See [yypush_parse], page 99.
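As a sketch of how these functions fit together for a C push parser (assuming ‘%define
api.push-pull push’, no %locations, and the default impure interface where yylex sets the
global yylval):
  yypstate *ps = yypstate_new ();
  int status;
  do
    {
      status = yypush_parse (ps, yylex (), &yylval);
    }
  while (status == YYPUSH_MORE);
  yypstate_delete (ps);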
YYRECOVERING [Macro]
The expression YYRECOVERING () yields 1 when the parser is recovering from a syntax
error, and 0 otherwise. See Section 4.5 [Special Features for Use in Actions], page 107.
YYSTACK_USE_ALLOCA [Macro]
Macro used to control the use of alloca when the deterministic parser in C needs to
extend its stacks. If defined to 0, the parser will use malloc to extend its stacks and
memory exhaustion occurs if malloc fails (see Section 5.10 [Memory Management,
and How to Avoid Memory Exhaustion], page 128). If defined to 1, the parser will
use alloca. Values other than 0 and 1 are reserved for future Bison extensions. If
not defined, YYSTACK_USE_ALLOCA defaults to 0.
In the all-too-common case where your code may run on a host with a limited stack
and with unreliable stack-overflow checking, you should set YYMAXDEPTH to a value
that cannot possibly result in unchecked stack overflow on any of your target hosts
when alloca is called. You can inspect the code that Bison generates in order to
determine the proper numeric values. This will require some expertise in low-level
implementation details.
YYSTYPE [Type]
Deprecated in favor of the %define variable api.value.type. Data type of semantic
values; int by default. See Section 3.4.1 [Data Types of Semantic Values], page 56.
yysymbol_kind_t [Type]
An enum of all the symbols, tokens and nonterminals, of the grammar. See
Section 4.4.2 [The Syntax Error Reporting Function yyreport_syntax_error],
page 105. The symbol kinds are used internally by the parser, and should not be
confused with the token kinds: the symbol kind of a terminal symbol is not equal to
its token kind! (Unless ‘%define api.token.raw’ was used.)
yytoken_kind_t [Type]
An enum of all the token kinds declared with %token (see Section 3.7.2 [Token Kind
Names], page 70). These are the return values for yylex. They should not be confused
with the symbol kinds, used internally by the parser.
YYUNDEF [Value]
The token kind denoting an unknown token.
Appendix B Glossary
Accepting state
A state whose only action is the accept action. The accepting state is thus a
consistent state. See Section 8.2 [Understanding Your Parser], page 138.
Backus-Naur Form (BNF; also called “Backus Normal Form”)
Formal method of specifying context-free grammars originally proposed by John
Backus, and slightly improved by Peter Naur in his 1960-01-02 committee doc-
ument contributing to what became the Algol 60 report. See Section 1.1 [Lan-
guages and Context-Free Grammars], page 14.
Consistent state
A state containing only one possible action. See Section 5.8.2 [Default Reduc-
tions], page 124.
Context-free grammars
Grammars specified as rules that can be applied regardless of context. Thus, if
there is a rule which says that an integer can be used as an expression, integers
are allowed anywhere an expression is permitted. See Section 1.1 [Languages
and Context-Free Grammars], page 14.
Counterexample
A sequence of tokens and/or nonterminals, with one dot, that demonstrates a
conflict. The dot marks the place where the conflict occurs.
A unifying counterexample is a single string that has two different parses; its
existence proves that the grammar is ambiguous. When a unifying counterex-
ample cannot be found in reasonable time, a nonunifying counterexample is
built: two different strings sharing the prefix up to the dot.
See Section 8.1 [Generation of Counterexamples], page 135.
Default reduction
The reduction that a parser should perform if the current parser state contains
no other action for the lookahead token. In permitted parser states, Bison
declares the reduction with the largest lookahead set to be the default reduc-
tion and removes that lookahead set. See Section 5.8.2 [Default Reductions],
page 124.
Defaulted state
A consistent state with a default reduction. See Section 5.8.2 [Default Reduc-
tions], page 124.
Dynamic allocation
Allocation of memory that occurs during execution, rather than at compile time
or on entry to a function.
Empty string
Analogous to the empty set in set theory, the empty string is a character string
of length zero.
Parser A function that recognizes valid sentences of a language by analyzing the syntax
structure of a set of tokens passed to it from a lexical analyzer.
Postfix operator
An arithmetic operator that is placed after the operands upon which it performs
some operation.
Reduction Replacing a string of nonterminals and/or terminals with a single nonterminal,
according to a grammar rule. See Chapter 5 [The Bison Parser Algorithm],
page 111.
Reentrant A reentrant subprogram is a subprogram which can be invoked any number
of times in parallel, without interference between the various invocations. See
Section 3.7.11 [A Pure (Reentrant) Parser], page 77.
Reverse Polish Notation
A language in which all operators are postfix operators.
Right recursion
A rule whose result symbol is also its last component symbol; for example,
‘expseq1: exp ’,’ expseq1;’. See Section 3.3.3 [Recursive Rules], page 55.
Semantics In computer languages, the semantics are specified by the actions taken for each
instance of the language, i.e., the meaning of each statement. See Section 3.4
[Defining Language Semantics], page 56.
Shift A parser is said to shift when it makes the choice of analyzing further input from
the stream rather than reducing immediately some already-recognized rule. See
Chapter 5 [The Bison Parser Algorithm], page 111.
Single-character literal
A single character that is recognized and interpreted as is. See Section 1.2
[From Formal Rules to Bison Input], page 15.
Start symbol
The nonterminal symbol that stands for a complete valid utterance in the lan-
guage being parsed. The start symbol is usually listed as the first nonterminal
symbol in a language specification. See Section 3.7.10 [The Start-Symbol],
page 77.
Symbol kind
A (finite) enumeration of the grammar symbols, as processed by the parser.
See Section 3.2 [Symbols, Terminal and Nonterminal], page 52.
Symbol table
A data structure where symbol names and associated data are stored during
parsing to allow for recognition and use of existing information in repeated uses
of a symbol. See Section 2.5 [Multi-Function Calculator: mfcalc], page 39.
Syntax error
An error encountered during parsing of an input stream due to invalid syntax.
See Chapter 6 [Error Recovery], page 130.
Terminal symbol
A grammar symbol that has no rules in the grammar and therefore is gram-
matically indivisible. The piece of text it represents is a token. See Section 1.1
[Languages and Context-Free Grammars], page 14.
Token A basic, grammatically indivisible unit of a language. The symbol that describes
a token in the grammar is a terminal symbol. The input of the Bison parser
is a stream of tokens which comes from the lexical analyzer. See Section 3.2
[Symbols, Terminal and Nonterminal], page 52.
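For instance, a hand-written scanner in the style of the simple calculators (a
sketch only: it assumes a NUM token kind and an int semantic value in yylval,
both of which would come from the generated parser) returns one token per call:
     #include <ctype.h>
     #include <stdio.h>

     /* NUM and yylval are assumed to be provided by the parser.  */
     int
     yylex (void)
     {
       int c = getchar ();
       while (c == ' ' || c == '\t')   /* skip blanks */
         c = getchar ();
       if (c == EOF)
         return 0;                     /* end-of-input token */
       if (isdigit (c))
         {
           ungetc (c, stdin);
           if (scanf ("%d", &yylval) != 1)
             return 0;
           return NUM;                 /* a token carrying a semantic value */
         }
       return c;                       /* any other character is a token by itself */
     }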
Token kind
A (finite) enumeration of the grammar terminals, as discriminated by the scan-
ner. See Section 3.2 [Symbols, Terminal and Nonterminal], page 52.
Unreachable state
A parser state to which there does not exist a sequence of transitions from the
parser’s start state. A state can become unreachable during conflict resolution.
See Section 5.8.4 [Unreachable States], page 127.
under this License. If a section does not fit the above definition of Secondary then it is
not allowed to be designated as Invariant. The Document may contain zero Invariant
Sections. If the Document does not identify any Invariant Sections then there are none.
The “Cover Texts” are certain short passages of text that are listed, as Front-Cover
Texts or Back-Cover Texts, in the notice that says that the Document is released under
this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may
be at most 25 words.
A “Transparent” copy of the Document means a machine-readable copy, represented
in a format whose specification is available to the general public, that is suitable for
revising the document straightforwardly with generic text editors or (for images com-
posed of pixels) generic paint programs or (for drawings) some widely available drawing
editor, and that is suitable for input to text formatters or for automatic translation to
a variety of formats suitable for input to text formatters. A copy made in an otherwise
Transparent file format whose markup, or absence of markup, has been arranged to
thwart or discourage subsequent modification by readers is not Transparent. An image
format is not Transparent if used for any substantial amount of text. A copy that is
not “Transparent” is called “Opaque”.
Examples of suitable formats for Transparent copies include plain ASCII without
markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly
available DTD, and standard-conforming simple HTML, PostScript or PDF designed
for human modification. Examples of transparent image formats include PNG, XCF
and JPG. Opaque formats include proprietary formats that can be read and edited
only by proprietary word processors, SGML or XML for which the DTD and/or pro-
cessing tools are not generally available, and the machine-generated HTML, PostScript
or PDF produced by some word processors for output purposes only.
The “Title Page” means, for a printed book, the title page itself, plus such following
pages as are needed to hold, legibly, the material this License requires to appear in the
title page. For works in formats which do not have any title page as such, “Title Page”
means the text near the most prominent appearance of the work’s title, preceding the
beginning of the body of the text.
The “publisher” means any person or entity that distributes copies of the Document
to the public.
A section “Entitled XYZ” means a named subunit of the Document whose title either
is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in
another language. (Here XYZ stands for a specific section name mentioned below, such
as “Acknowledgements”, “Dedications”, “Endorsements”, or “History”.) To “Preserve
the Title” of such a section when you modify the Document means that it remains a
section “Entitled XYZ” according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that
this License applies to the Document. These Warranty Disclaimers are considered to
be included by reference in this License, but only as regards disclaiming warranties:
any other implication that these Warranty Disclaimers may have is void and has no
effect on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either commercially or
noncommercially, provided that this License, the copyright notices, and the license
notice saying this License applies to the Document are reproduced in all copies, and
that you add no other conditions whatsoever to those of this License. You may not use
technical measures to obstruct or control the reading or further copying of the copies
you make or distribute. However, you may accept compensation in exchange for copies.
If you distribute a large enough number of copies you must also follow the conditions
in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly
display copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have printed covers) of
the Document, numbering more than 100, and the Document’s license notice requires
Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all
these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on
the back cover. Both covers must also clearly and legibly identify you as the publisher
of these copies. The front cover must present the full title with all words of the title
equally prominent and visible. You may add other material on the covers in addition.
Copying with changes limited to the covers, as long as they preserve the title of the
Document and satisfy these conditions, can be treated as verbatim copying in other
respects.
If the required texts for either cover are too voluminous to fit legibly, you should put
the first ones listed (as many as fit reasonably) on the actual cover, and continue the
rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100,
you must either include a machine-readable Transparent copy along with each Opaque
copy, or state in or with each Opaque copy a computer-network location from which
the general network-using public has access to download using public-standard network
protocols a complete Transparent copy of the Document, free of added material. If
you use the latter option, you must take reasonably prudent steps, when you begin
distribution of Opaque copies in quantity, to ensure that this Transparent copy will
remain thus accessible at the stated location until at least one year after the last time
you distribute an Opaque copy (directly or through your agents or retailers) of that
edition to the public.
It is requested, but not required, that you contact the authors of the Document well
before redistributing any large number of copies, to give them a chance to provide you
with an updated version of the Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under the conditions
of sections 2 and 3 above, provided that you release the Modified Version under precisely
this License, with the Modified Version filling the role of the Document, thus licensing
distribution and modification of the Modified Version to whoever possesses a copy of
it. In addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of the
Document, and from those of previous versions (which should, if there were any,
be listed in the History section of the Document). You may use the same title as
a previous version if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for
authorship of the modifications in the Modified Version, together with at least five
of the principal authors of the Document (all of its principal authors, if it has fewer
than five), unless they release you from this requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the
publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications adjacent to the other
copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public
permission to use the Modified Version under the terms of this License, in the form
shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover
Texts given in the Document’s license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled “History”, Preserve its Title, and add to it an item
stating at least the title, year, new authors, and publisher of the Modified Version
as given on the Title Page. If there is no section Entitled “History” in the Docu-
ment, create one stating the title, year, authors, and publisher of the Document
as given on its Title Page, then add an item describing the Modified Version as
stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to
a Transparent copy of the Document, and likewise the network locations given in
the Document for previous versions it was based on. These may be placed in the
“History” section. You may omit a network location for a work that was published
at least four years before the Document itself, or if the original publisher of the
version it refers to gives permission.
K. For any section Entitled “Acknowledgements” or “Dedications”, Preserve the Title
of the section, and preserve in the section all the substance and tone of each of the
contributor acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and
in their titles. Section numbers or the equivalent are not considered part of the
section titles.
M. Delete any section Entitled “Endorsements”. Such a section may not be included
in the Modified Version.
N. Do not retitle any existing section to be Entitled “Endorsements” or to conflict in
title with any Invariant Section.
O. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify
as Secondary Sections and contain no material copied from the Document, you may at
your option designate some or all of these sections as invariant. To do this, add their
titles to the list of Invariant Sections in the Modified Version’s license notice. These
titles must be distinct from any other section titles.
You may add a section Entitled “Endorsements”, provided it contains nothing but
endorsements of your Modified Version by various parties—for example, statements of
peer review or that the text has been approved by an organization as the authoritative
definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up
to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified
Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be
added by (or through arrangements made by) any one entity. If the Document already
includes a cover text for the same cover, previously added by you or by arrangement
made by the same entity you are acting on behalf of, you may not add another; but
you may replace the old one, on explicit permission from the previous publisher that
added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission
to use their names for publicity for or to assert or imply endorsement of any Modified
Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this License,
under the terms defined in section 4 above for modified versions, provided that you
include in the combination all of the Invariant Sections of all of the original documents,
unmodified, and list them all as Invariant Sections of your combined work in its license
notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical
Invariant Sections may be replaced with a single copy. If there are multiple Invariant
Sections with the same name but different contents, make the title of each such section
unique by adding at the end of it, in parentheses, the name of the original author or
publisher of that section if known, or else a unique number. Make the same adjustment
to the section titles in the list of Invariant Sections in the license notice of the combined
work.
In the combination, you must combine any sections Entitled “History” in the vari-
ous original documents, forming one section Entitled “History”; likewise combine any
sections Entitled “Acknowledgements”, and any sections Entitled “Dedications”. You
must delete all sections Entitled “Endorsements.”
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other documents released
under this License, and replace the individual copies of this License in the various
documents with a single copy that is included in the collection, provided that you
follow the rules of this License for verbatim copying of each of the documents in all
other respects.
You may extract a single document from such a collection, and distribute it individu-
ally under this License, provided you insert a copy of this License into the extracted
document, and follow this License in all other respects regarding verbatim copying of
that document.
Bibliography
[Corbett 1984]
Robert Paul Corbett, Static Semantics in Compiler Error Recovery, Ph.D. Dis-
sertation, Report No. UCB/CSD 85/251, Department of Electrical Engineering
and Computer Science, Computer Science Division, University of California,
Berkeley, California (June 1985). https://digicoll.lib.berkeley.edu/record/135875
[Denny 2008]
Joel E. Denny and Brian A. Malloy, IELR(1): Practical LR(1) Parser Tables
for Non-LR(1) Grammars with Conflict Resolution, in Proceedings of the 2008
ACM Symposium on Applied Computing (SAC’08), ACM, New York, NY,
USA, pp. 240–245. https://dx.doi.org/10.1145/1363686.1363747
[Denny 2010 May]
Joel E. Denny, PSLR(1): Pseudo-Scannerless Minimal LR(1) for the Determin-
istic Parsing of Composite Languages, Ph.D. Dissertation, Clemson University,
Clemson, SC, USA (May 2010). https://tigerprints.clemson.edu/all_dissertations/519/
[Denny 2010 November]
Joel E. Denny and Brian A. Malloy, The IELR(1) Algorithm for Generating
Minimal LR(1) Parser Tables for Non-LR(1) Grammars with Conflict Resolu-
tion, in Science of Computer Programming, Vol. 75, Issue 11 (November 2010),
pp. 943–979. https://dx.doi.org/10.1016/j.scico.2009.08.001
[DeRemer 1982]
Frank DeRemer and Thomas Pennello, Efficient Computation of LALR(1)
Look-Ahead Sets, in ACM Transactions on Programming Languages and
Systems, Vol. 4, No. 4 (October 1982), pp. 615–649. https://dx.doi.org/10.1145/69622.357187
[Isradisaikul 2015]
Chinawat Isradisaikul, Andrew Myers, Finding Counterexamples from Parsing
Conflicts, in Proceedings of the 36th ACM SIGPLAN Conference on Program-
ming Language Design and Implementation (PLDI ’15), ACM, pp. 555–564.
https://www.cs.cornell.edu/andru/papers/cupex/cupex.pdf
[Johnson 1978]
Steven C. Johnson, A portable compiler: theory and practice, in Proceedings
of the 5th ACM SIGACT-SIGPLAN symposium on Principles of programming
languages (POPL ’78), pp. 97–104. https://dx.doi.org/10.1145/512760.512771
[Knuth 1965]
Donald E. Knuth, On the Translation of Languages from Left to Right, in Infor-
mation and Control, Vol. 8, Issue 6 (December 1965), pp. 607–639. https://dx.doi.org/10.1016/S0019-9958(65)90426-2
[Scott 2000]
Elizabeth Scott, Adrian Johnstone, and Shamsa Sadaf Hussain, Tomita-Style
Generalised LR Parsers, Royal Holloway, University of London, Department of
Computer Science, TR-00-12 (December 2000). https://www.cs.rhul.ac.uk/research/languages/publications/tomita_style_1.ps
Index of Terms
/
/* . . . 210
/* ... */ . . . 46
// . . . 210
// ... . . . 46

:
: . . . 210

;
; . . . 210

<
<*> . . . 73, 75, 210
<> . . . 73, 75, 210

@
@$ . . . 66, 108, 194, 209
@[name] . . . 66, 209
@n . . . 63, 66, 109, 194, 209
@name . . . 66, 209

|
| . . . 54, 210

A
abstract syntax tree . . . 204
accepting state . . . 141
action . . . 59
action data types . . . 60
action features summary . . . 107
actions in midrule . . . 61, 74
actions, location . . . 66
actions, semantic . . . 17
additional C code section . . . 51
algorithm of parser . . . 111
ambiguous grammars . . . 14, 127
associativity . . . 114
AST . . . 204

B
Backus-Naur form . . . 14
begin of Location . . . 189
begin of location . . . 174
Bison declaration summary . . . 79
Bison declarations . . . 70
Bison declarations (introduction) . . . 51
Bison grammar . . . 15
Bison invocation . . . 154
Bison parser . . . 24
Bison parser algorithm . . . 111
Bison symbols, table of . . . 209
Bison utility . . . 24
bison-i18n.m4 . . . 109
bison-po . . . 109
BISON_I18N . . . 109
BISON_LOCALEDIR . . . 109
bisonSkeleton of YYParser . . . 190
bisonVersion of YYParser . . . 190
BNF . . . 14
braced code . . . 54
byacc . . . 199

C
C code, section for additional . . . 51
C-language interface . . . 98
calc . . . 33
calculator, infix notation . . . 33
calculator, location tracking . . . 35
calculator, multi-function . . . 39
calculator, simple . . . 27
canonical LR . . . 121, 122
cex . . . 135
character token . . . 52
column of position . . . 173
columns on location . . . 174
columns on position . . . 173
comment . . . 46
compatibility . . . 201
compiling the parser . . . 32
conflict counterexamples . . . 135
conflicts . . . 17, 18, 20, 112
conflicts, reduce/reduce . . . 117
conflicts, suppressing warnings of . . . 76
consistent states . . . 124
context of parser . . . 176
context-dependent precedence . . . 116
context-free grammar . . . 14
controlling function . . . 31
core, item set . . . 140
counter_type of position . . . 173
counterexample, nonunifying . . . 217
counterexample, unifying . . . 217
counterexamples . . . 135
D
dangling else . . . 112
data type of locations . . . 66
data types in actions . . . 60
data types of semantic values . . . 56
debug_level on parser . . . 170
debug_stream on parser . . . 170
debugging . . . 148
declaration summary . . . 79
declarations . . . 46
declarations section . . . 46
declarations, Bison . . . 70
declarations, Bison (introduction) . . . 51
declaring literal string tokens . . . 70
declaring operator precedence . . . 71
declaring the start symbol . . . 77
declaring token kind names . . . 70
declaring value types . . . 57, 58, 59
declaring value types, nonterminals . . . 72
default action . . . 60
default data type . . . 56
default location type . . . 66
default reductions . . . 124
default stack limit . . . 129
default start symbol . . . 77
defaulted states . . . 124
deferred semantic actions . . . 23
defining language semantics . . . 56
delayed syntax error detection . . . 123, 124
delayed yylex invocations . . . 124
discarded symbols . . . 74
discarded symbols, midrule actions . . . 62
dot . . . 145
dotted rule . . . 140

E
else, dangling . . . 112
emplace<T, U> on semantic_type . . . 172
emplace<T> on semantic_type . . . 172
empty rule . . . 55
end of location . . . 174
end of Location . . . 189
epilogue . . . 51
error . . . 130, 211
error on parser . . . 171
error recovery . . . 130
error recovery, midrule actions . . . 62
error recovery, simple . . . 35
error reporting function . . . 104
error reporting routine . . . 32
examples, simple . . . 27
exceptions . . . 170
exercises . . . 45
expected_tokens on context . . . 177

F
file format . . . 26
file of position . . . 173
filename_type of position . . . 173
finite-state machine . . . 117
formal grammar . . . 15
format of grammar file . . . 26
freeing discarded symbols . . . 73
frequently asked questions . . . 202

G
generalized LR (GLR) parsing . . . 14, 17, 127
generalized LR (GLR) parsing, ambiguous grammars . . . 20
generalized LR (GLR) parsing, unambiguous grammars . . . 18
getDebugLevel on YYParser . . . 190
getDebugStream on YYParser . . . 190
getEndPos on Lexer . . . 192
getErrorVerbose on YYParser . . . 190
getExpectedTokens on YYParser.Context . . . 191
getLocation on YYParser.Context . . . 191
getLVal on Lexer . . . 192
getName on YYParser.SymbolKind . . . 191
getStartPos on Lexer . . . 192
gettext . . . 109
getToken on YYParser.Context . . . 191
glossary . . . 217
GLR parsers and yychar . . . 23
GLR parsers and yyclearin . . . 23
GLR parsers and YYERROR . . . 23
GLR parsers and yylloc . . . 23
GLR parsers and YYLLOC_DEFAULT . . . 68
GLR parsers and yylval . . . 23
GLR parsing . . . 14, 17, 127
GLR parsing, ambiguous grammars . . . 20
GLR parsing, unambiguous grammars . . . 18
GLR with LALR . . . 123
grammar file . . . 26
grammar rule syntax . . . 54
grammar rules section . . . 51
grammar, Bison . . . 15
grammar, context-free . . . 14
grouping, syntactic . . . 14

H
Header guard . . . 81
history . . . 199
I
i18n . . . 109
i18n of YYParser . . . 191
IELR . . . 121, 122
IELR grammars . . . 14
infix notation calculator . . . 33
initialize on location . . . 174
initialize on position . . . 173
interface . . . 98
internationalization . . . 109
introduction . . . 1
invoking Bison . . . 154
item . . . 140
item set core . . . 140

K
kernel, item set . . . 140
kind on symbol_type . . . 179

L
LAC . . . 123, 124, 125
LALR . . . 121, 122
LALR grammars . . . 14
language semantics, defining . . . 56
layout of Bison grammar . . . 26
left recursion . . . 55
lexical analyzer . . . 100
lexical analyzer, purpose . . . 24
lexical analyzer, writing . . . 30
lexical tie-in . . . 133
line of position . . . 173
lines on location . . . 174
lines on position . . . 173
literal string token . . . 53
literal token . . . 52
location . . . 24, 66
location actions . . . 66
location on context . . . 177
Location on Location . . . 189
location on location . . . 174
location tracking calculator . . . 35
location, textual . . . 24, 66
location_type of parser . . . 170
lookahead correction . . . 125
lookahead on context . . . 176
lookahead token . . . 111
LR . . . 121
LR grammars . . . 14
ltcalc . . . 35

M
main function in simple example . . . 31
make_token on parser . . . 180
memory exhaustion . . . 128
memory management . . . 128
mfcalc . . . 39
midrule actions . . . 61, 74
multi-function calculator . . . 39
multicharacter literal . . . 53
mutual recursion . . . 56
Mysterious Conflict . . . 122
Mysterious Conflicts . . . 120

N
name on symbol_type . . . 179
named references . . . 69
NLS . . . 109
nondeterministic parsing . . . 14, 127
nonterminal symbol . . . 52
nonterminal, useless . . . 139
nonunifying counterexample . . . 217

O
operator precedence . . . 114
operator precedence, declaring . . . 71
operator!= on location . . . 174
operator!= on position . . . 173
operator() on parser . . . 170
operator+ on location . . . 174
operator+ on position . . . 173
operator+= on location . . . 174
operator+= on position . . . 173
operator- on location . . . 174
operator- on position . . . 173
operator-= on location . . . 174
operator-= on position . . . 173
operator<< . . . 173, 174
operator== on location . . . 174
operator== on position . . . 173
options for invoking Bison . . . 154
overflow of parser stack . . . 128

P
parse error . . . 104
parse on parser . . . 170
parse on YYParser . . . 190
parser . . . 24
parser on parser . . . 170
parser stack . . . 111
parser stack overflow . . . 128
parser state . . . 117
position on position . . . 173
precedence declarations . . . 71
precedence of operators . . . 114