Parser Algorithm

As geyacc reads tokens, it pushes them onto a stack along with their semantic values. The stack is called the parser stack. Pushing a token is traditionally called shifting. For example, suppose the infix calculator has read 1 + 5 *, with a 3 to come. The stack will have four elements, one for each token that was shifted.

But the stack does not always have an element for each token read. When the last N tokens and groupings shifted match the components of a grammar rule, they can be combined according to that rule. This is called reduction. Those tokens and groupings are replaced on the stack by a single grouping whose symbol is the result (left hand side) of that rule. Running the rule's action is part of the process of reduction, because this is what computes the semantic value of the resulting grouping. For example, if the infix calculator's parser stack contains 1 + 5 * 3 and the next input token is a newline character, then the last three elements can be reduced to 15 via the rule:

expr: expr '*' expr ;

Then the stack contains just these three elements: 1 + 15. At this point, another reduction can be made, resulting in the single value 16. Then the newline token can be shifted.
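
Informally, the whole process for this example can be pictured as follows (a sketch: stack elements are written by their semantic values, as above, the newline token is written '%N', and the rule expr: expr '+' expr is assumed for the second reduction):

Stack        Next token   Action
1 + 5 * 3    '%N'         reduce by expr: expr '*' expr
1 + 15       '%N'         reduce by expr: expr '+' expr
16           '%N'         shift '%N'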

The parser tries, by shifts and reductions, to reduce the entire input down to a single grouping whose symbol is the grammar's start symbol. This kind of parser is known in the literature as a bottom-up parser.

Look-Ahead Tokens

The geyacc parser does not always reduce immediately as soon as the last N tokens and groupings match a rule. This is because such a simple strategy is inadequate to handle most languages. Instead, when a reduction is possible, the parser sometimes looks ahead at the next token in order to decide what to do.

When a token is read, it is not immediately shifted; first it becomes the look-ahead token, which is not on the stack. Now the parser can perform one or more reductions of tokens and groupings on the stack, while the look-ahead token remains off to the side. When no more reductions should take place, the look-ahead token is shifted onto the stack. This does not mean that all possible reductions have been done; depending on the token type of the look-ahead token, some rules may choose to delay their application.

Here is a simple case where look-ahead is needed. These rules define expressions which contain binary addition operators and postfix unary factorial operators '!', and allow parentheses for grouping.

expr: term '+' expr
    | term
    ;

term: '(' expr ')'
    | term '!'
    | NUMBER
    ;

Suppose that the tokens 1 + 2 have been read and shifted; what should be done? If the following token is ), then the first three tokens must be reduced to form an expr. This is the only valid course, because shifting the ) would produce a sequence of symbols term ), and no rule allows this. If the following token is !, then it must be shifted immediately so that 2 ! can be reduced to make a term. If instead the parser were to reduce before shifting, 1 + 2 would become an expr. It would then be impossible to shift the ! because doing so would produce on the stack the sequence of symbols expr !. No rule allows that sequence.
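
In tabular form (by the time 1 + 2 has been read, each NUMBER has already been reduced to a term, so the stack actually holds term '+' term):

Stack           Look-ahead   Action
term '+' term   ')'          reduce by expr: term, then by
                             expr: term '+' expr, then shift ')'
term '+' term   '!'          shift '!', then reduce by term: term '!'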

The current look-ahead token is stored in the variable last_token.

Shift/Reduce Conflicts

Suppose we are parsing a language which has if-then and if-then-else statements, with a pair of rules like this:

if_stmt: IF expr THEN stmt
    | IF expr THEN stmt ELSE stmt
    ;

Here we assume that IF, THEN and ELSE are terminal symbols for specific keyword tokens. When the ELSE token is read and becomes the look-ahead token, the contents of the stack (assuming the input is valid) are just right for reduction by the first rule. But it is also legitimate to shift the ELSE, because that would lead to eventual reduction by the second rule.

This situation, where either a shift or a reduction would be valid, is called a shift/reduce conflict. Geyacc is designed to resolve these conflicts by choosing to shift, unless otherwise directed by operator precedence declarations. To see the reason for this, let's contrast it with the other alternative. Since the parser prefers to shift the ELSE, the result is to attach the else-clause to the innermost if-statement, making these two inputs equivalent:

if x then if y then win (); else lose;

if x then
    do 
        if y then win (); else lose;
    end;

But if the parser chose to reduce when possible rather than shift, the result would be to attach the else-clause to the outermost if-statement, making these two inputs equivalent:

if x then if y then win (); else lose;

if x then
    do
        if y then win ();
    end;
else
    lose;

The conflict exists because the grammar as written is ambiguous: either parsing of the simple nested if-statement is legitimate. The established convention is that these ambiguities are resolved by attaching the else-clause to the innermost if-statement; this is what geyacc accomplishes by choosing to shift rather than reduce. (It would ideally be cleaner to write an unambiguous grammar, but that is very hard to do in this case.) This particular ambiguity was first encountered in the specifications of Algol 60 and is called the dangling 'else' ambiguity.

To avoid warnings from geyacc about predictable, legitimate shift/reduce conflicts, use the %expect N declaration. There will be no warning as long as the number of shift/reduce conflicts is exactly N.
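
For example, the complete grammar shown below produces exactly one shift/reduce conflict, so its declarations section could begin with:

%expect 1
%token IF THEN ELSE VARIABLE

With this declaration geyacc stays silent; if a later change to the grammar introduced a second conflict, the count would no longer match and the warning would reappear.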

The definition of if_stmt above is solely to blame for the conflict, but the conflict does not actually appear without additional rules. Here is a complete geyacc input file that actually manifests the conflict:

%token IF THEN ELSE VARIABLE
%%
stmt: expr
    | if_stmt
    ;

if_stmt: IF expr THEN stmt
    | IF expr THEN stmt ELSE stmt
    ;

expr: VARIABLE
    ;

Another situation where shift/reduce conflicts appear is in arithmetic expressions. Here shifting is not always the preferred resolution; the geyacc declarations for operator precedence allow you to specify when to shift and when to reduce.
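
For example (a sketch using the %left precedence declarations, which are described in their own section), the following grammar has four shift/reduce conflicts without the two %left lines; with them, every conflict is resolved: '*' binds tighter than '+' because it is declared later, and both operators group to the left:

%token NUMBER
%left '+'
%left '*'
%%
expr: expr '+' expr
    | expr '*' expr
    | NUMBER
    ;

With these declarations, 1 + 2 * 3 parses as 1 + (2 * 3), and 1 + 2 + 3 as (1 + 2) + 3.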

Parser States

The routine parse from class YY_PARSER_SKELETON is implemented using a finite-state machine. The values pushed on the parser stack are not simply token type codes; they are parser states, which represent the entire sequence of terminal and nonterminal symbols at or near the top of the stack. The current state collects all the information about previous input which is relevant to deciding what to do next.

Each time a look-ahead token is read, the current parser state together with the type of look-ahead token are looked up in a table. This table entry can say, "Shift the look-ahead token". In this case, it also specifies the new parser state, which is pushed onto the top of the parser stack. Or it can say, "Reduce using rule number N". This means that a certain number of tokens or groupings are taken off the top of the stack, and replaced by one grouping. In other words, that number of states are popped from the stack, and one new state is pushed.

There is one other alternative: the table can say that the look-ahead token is erroneous in the current state. This causes error processing to begin.
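
As a rough sketch of this loop (written in Eiffel with hypothetical feature names invented for illustration; this is not the code that geyacc actually generates):

from
    read_look_ahead_token
until
    accepted or error_found
loop
    if table_says_shift (current_state, last_token) then
            -- Shift: push the new state and fetch the next token.
        push_state (shift_target (current_state, last_token))
        read_look_ahead_token
    elseif table_says_reduce (current_state, last_token) then
            -- Reduce: pop one state per symbol in the rule's
            -- right-hand side, run the rule's action, then push
            -- the state reached from the newly uncovered state.
        pop_states (rule_length)
        execute_rule_action
        push_state (goto_target (top_state, rule_result))
    else
            -- Neither shift nor reduce applies: the look-ahead
            -- token is erroneous in the current state.
        report_syntax_error
    end
end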

Reduce/Reduce Conflicts

A reduce/reduce conflict occurs if there are two or more rules that apply to the same sequence of input. This usually indicates a serious error in the grammar. For example, here is an erroneous attempt to define a sequence of zero or more word groupings.

sequence: -- Empty
        { print ("empty sequence%N") }
    | maybeword
    | sequence word
        {
            print ("added word ")
            print ($2)
            print ('%N')
        }
    ;

maybeword: -- Empty
        { print ("empty maybeword%N") }
    | word
        {
            print ("single word ")
            print ($1)
            print ('%N')
        }
    ;

The error is an ambiguity: there is more than one way to parse a single word into a sequence. It could be reduced to a maybeword and then into a sequence via the second rule. Alternatively, nothing-at-all could be reduced into a sequence via the first rule, and this could be combined with the word using the third rule for sequence.

There is also more than one way to reduce nothing-at-all into a sequence. This can be done directly via the first rule, or indirectly via maybeword and then the second rule. You might think that this is a distinction without a difference, because it does not change whether any particular input is valid or not. But it does affect which actions are run. One parsing order runs the second rule's action; the other runs the first rule's action and the third rule's action. In this example, the output of the program changes.
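
To make this concrete, consider an input consisting of a single word. The two parse orders print different things:

word  =>  maybeword  =>  sequence
    prints: single word <word>

(empty) word  =>  sequence word  =>  sequence
    prints: empty sequence
            added word <word>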

Geyacc resolves a reduce/reduce conflict by choosing to use the rule that appears first in the grammar, but it is very risky to rely on this. Every reduce/reduce conflict must be studied and usually eliminated. Here is the proper way to define sequence:

sequence: -- Empty
        { print ("empty sequence%N") }
    | sequence word
        {
            print ("added word ")
            print ($2)
            print ('%N')
        }
    ;

Here is another common error that yields a reduce/reduce conflict:

sequence: -- Empty
    | sequence words
    | sequence redirects
    ;

words: -- Empty
    | words word
    ;

redirects: -- Empty
    | redirects redirect
    ;

The intention here is to define a sequence which can contain word and/or redirect groupings. The individual definitions of sequence, words and redirects are error-free, but the three together make a subtle ambiguity: even an empty input can be parsed in infinitely many ways! Consider: nothing-at-all could be a words. Or it could be two words in a row, or three, or any number. It could equally well be a redirects, or two, or any number. Or it could be a words followed by three redirects and another words. And so on.

Here are two ways to correct these rules. First, to make it a single level of sequence:

sequence: -- Empty
    | sequence word
    | sequence redirect
    ;

Second, to prevent either a words or a redirects from being empty:

sequence: -- Empty
    | sequence words
    | sequence redirects
    ;

words: word
    | words word
    ;

redirects: redirect
    | redirects redirect
    ;

Mysterious Reduce/Reduce Conflicts

Sometimes reduce/reduce conflicts can occur that don't look warranted. Here is an example:

%token ID

%%
def: param_spec return_spec ','
    ;
param_spec: type
    | name_list ':' type
    ;
return_spec: type
    | name ':' type
    ;
type: ID
    ;
name: ID
    ;
name_list: name
    | name ',' name_list
    ;

It would seem that this grammar can be parsed with only a single token of look-ahead: when a param_spec is being read, an ID is a name if a comma or colon follows, or a type if another ID follows. In other words, this grammar is LR(1).

However, geyacc, like most parser generators, cannot actually handle all LR(1) grammars. In this grammar, two contexts, the one after an ID at the beginning of a param_spec and the one after an ID at the beginning of a return_spec, are similar enough that geyacc assumes they are the same. They appear similar because the same set of rules would be active: the rule for reducing to a name and the rule for reducing to a type. Geyacc is unable to determine at that stage of processing that the rules would require different look-ahead tokens in the two contexts, so it makes a single parser state for them both. Combining the two contexts causes a conflict later. In parser terminology, this occurrence means that the grammar is not LALR(1).
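
Concretely, the look-ahead tokens that call for each reduction can be read off the rules above:

Context                   ID is a name if       ID is a type if
start of a param_spec     ',' or ':' follows    another ID follows
start of a return_spec    ':' follows           ',' follows

When the two contexts are merged into a single parser state, the look-ahead ',' calls for reducing the ID to a name in one context and to a type in the other; this is the reduce/reduce conflict that geyacc reports.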

In general, it is better to fix deficiencies than to document them. But this particular deficiency is intrinsically hard to fix; parser generators that can handle LR(1) grammars are hard to write and tend to produce parsers that are very large. In practice, geyacc is more useful as it is now.

When the problem arises, you can often fix it by identifying the two parser states that are being confused, and adding something to make them look distinct. In the above example, adding one rule to return_spec as follows makes the problem go away:

%token BOGUS
...
%%
...
return_spec: type
    | name ':' type
        -- This rule is never used.
    | ID BOGUS
    ;

This corrects the problem because it introduces the possibility of an additional active rule in the context after the ID at the beginning of return_spec. This rule is not active in the corresponding context in a param_spec, so the two contexts receive distinct parser states. As long as the token BOGUS is never generated by read_token, the added rule cannot alter the way actual input is parsed.

In this particular example, there is another way to solve the problem: rewrite the rule for return_spec to use ID directly instead of via name. This also causes the two confusing contexts to have different sets of active rules, because the one for return_spec activates the altered rule for return_spec rather than the one for name.

param_spec: type
    | name_list ':' type
    ;
return_spec: type
    | ID ':' type
    ;

Copyright © 1998, Eric Bezault
mailto:ericb@gobosoft.com
http://www.gobosoft.com
Last Updated: 5 August 1998
