.TH yecc 3erl "parsetools 2.2" "Ericsson AB" "Erlang Module Definition"
.SH NAME
yecc \- LALR-1 Parser Generator
.SH DESCRIPTION
.LP
An LALR-1 parser generator for Erlang, similar to \fIyacc\fR\&\&. Takes a BNF grammar definition as input, and produces Erlang code for a parser\&.
.LP
To understand this text, you also need to be familiar with the \fIyacc\fR\& documentation in the UNIX(TM) manual, which explains the idea of a parser generator and the principles and problems of LALR parsing with finite look-ahead\&.
.SH EXPORTS
.LP
.B
file(Grammarfile [, Options]) -> YeccRet
.br
.RS
.LP
Types:
.RS 3
Grammarfile = filename()
.br
Options = Option | [Option]
.br
Option = - see below -
.br
YeccRet = {ok, Parserfile} | {ok, Parserfile, Warnings} | error | {error, Errors, Warnings}
.br
Parserfile = filename()
.br
Warnings = Errors = [{filename(), [ErrorInfo]}]
.br
ErrorInfo = {ErrorLine, module(), Reason}
.br
ErrorLine = integer()
.br
Reason = - formattable by format_error/1 -
.br
.RE
.RE
.RS
.LP
\fIGrammarfile\fR\& is the file of declarations and grammar rules\&. Returns \fIok\fR\& upon success, or \fIerror\fR\& if there are errors\&. An Erlang file containing the parser is created if there are no errors\&. The options are:
.RS 2
.TP 2
.B
\fI{parserfile, Parserfile}\fR\&\&.:
\fIParserfile\fR\& is the name of the file that will contain the Erlang parser code that is generated\&. The default (\fI""\fR\&) is to add the extension \fI\&.erl\fR\& to \fIGrammarfile\fR\& stripped of the \fI\&.yrl\fR\& extension\&.
.TP 2
.B
\fI{includefile, Includefile}\fR\&\&.:
Indicates a customized prologue file which the user may want to use instead of the default file \fIlib/parsetools/include/yeccpre\&.hrl\fR\&, which is otherwise included at the beginning of the resulting parser file\&. \fIN\&.B\&.\fR\& The \fIIncludefile\fR\& is included \&'as is\&' in the parser file, so it must not have a module declaration of its own, and it should not be compiled\&. It must, however, contain the necessary export declarations\&. The default is indicated by \fI""\fR\&\&.
.TP 2
.B
\fI{report_errors, bool()}\fR\&\&.:
Causes errors to be printed as they occur\&. Default is \fItrue\fR\&\&.
.TP 2
.B
\fI{report_warnings, bool()}\fR\&\&.:
Causes warnings to be printed as they occur\&. Default is \fItrue\fR\&\&.
.TP 2
.B
\fI{report, bool()}\fR\&\&.:
This is a short form for both \fIreport_errors\fR\& and \fIreport_warnings\fR\&\&.
.TP 2
.B
\fIwarnings_as_errors\fR\&:
Causes warnings to be treated as errors\&.
.TP 2
.B
\fI{return_errors, bool()}\fR\&\&.:
If this flag is set, \fI{error, Errors, Warnings}\fR\& is returned when there are errors\&. Default is \fIfalse\fR\&\&.
.TP 2
.B
\fI{return_warnings, bool()}\fR\&\&.:
If this flag is set, an extra field containing \fIWarnings\fR\& is added to the tuple returned upon success\&. Default is \fIfalse\fR\&\&.
.TP 2
.B
\fI{return, bool()}\fR\&\&.:
This is a short form for both \fIreturn_errors\fR\& and \fIreturn_warnings\fR\&\&.
.TP 2
.B
\fI{verbose, bool()}\fR\&\&.:
Determines whether the parser generator should give full information about resolved and unresolved parse action conflicts (\fItrue\fR\&), or only about those conflicts that prevent a parser from being generated from the input grammar (\fIfalse\fR\&, the default)\&.
.RE
.LP
Any of the Boolean options can be set to \fItrue\fR\& by stating the name of the option\&. For example, \fIverbose\fR\& is equivalent to \fI{verbose, true}\fR\&\&.
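.LP
For illustration, the following call (for example, in the Erlang shell) combines several of these options\&. The grammar file name \fIcalc\&.yrl\fR\& and the parser file name \fIcalc_parser\&.erl\fR\& are example names only:
.LP
.nf
%% A minimal sketch; the file names are assumptions made for this example.
case yecc:file("calc.yrl", [{parserfile, "calc_parser.erl"},
                            {report_warnings, false},
                            verbose,
                            return]) of
    {ok, Parserfile, Warnings} ->
        io:format("generated ~ts~nwarnings: ~p~n", [Parserfile, Warnings]);
    {error, Errors, Warnings} ->
        io:format("errors: ~p~nwarnings: ~p~n", [Errors, Warnings])
end.
.fi
.LP
With the \fIreturn\fR\& option, warnings and errors are also returned as data, so the caller can inspect them\&.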
.LP
The value of the \fIParserfile\fR\& option stripped of the \fI\&.erl\fR\& extension is used by Yecc as the module name of the generated parser file\&.
.LP
Yecc will add the extension \fI\&.yrl\fR\& to the \fIGrammarfile\fR\& name, the extension \fI\&.hrl\fR\& to the \fIIncludefile\fR\& name, and the extension \fI\&.erl\fR\& to the \fIParserfile\fR\& name, unless the extension is already there\&.
.RE
.LP
.B
format_error(Reason) -> Chars
.br
.RS
.LP
Types:
.RS 3
Reason = - as returned by yecc:file/1,2 -
.br
Chars = [char() | Chars]
.br
.RE
.RE
.RS
.LP
Returns a descriptive string in English of an error tuple returned by \fIyecc:file/1,2\fR\&\&. This function is mainly used by the compiler invoking Yecc\&.
.RE
.SH "PRE-PROCESSING"
.LP
A \fIscanner\fR\& to pre-process the text (program, etc\&.) to be parsed is not provided in the \fIyecc\fR\& module\&. The scanner serves as a kind of lexicon look-up routine\&. It is possible to write a grammar that uses only character tokens as terminal symbols, thereby eliminating the need for a scanner, but this would make the parser larger and slower\&.
.LP
The user should implement a scanner that segments the input text and turns it into one or more lists of tokens\&. Each token should be a tuple containing information about syntactic category, position in the text (e\&.g\&. line number), and the actual terminal symbol found in the text: \fI{Category, LineNumber, Symbol}\fR\&\&.
.LP
If a terminal symbol is the only member of a category, and the symbol name is identical to the category name, the token format may be \fI{Symbol, LineNumber}\fR\&\&.
.LP
A list of tokens produced by the scanner should end with a special \fIend_of_input\fR\& tuple which the parser is looking for\&. The format of this tuple should be \fI{Endsymbol, LastLineNumber}\fR\&, where \fIEndsymbol\fR\& is an identifier that is distinguished from all the terminal and non-terminal categories of the syntax rules\&. The \fIEndsymbol\fR\& may be declared in the grammar file (see below)\&.
.LP
The simplest case is to segment the input string into a list of identifiers (atoms) and use those atoms both as categories and values of the tokens\&. For example, the input string \fIaaa bbb 777, X\fR\& may be scanned (tokenized) as:
.LP
.nf
[{aaa, 1}, {bbb, 1}, {777, 1}, {',', 1}, {'X', 1}, {'$end', 1}].
.fi
.LP
This assumes that this is the first line of the input text, and that \fI\&'$end\&'\fR\& is the distinguished \fIend_of_input\fR\& symbol\&.
.LP
The Erlang scanner in the \fIio\fR\& module can be used as a starting point when writing a new scanner\&. Study \fIyeccscan\&.erl\fR\& in order to see how a filter can be added on top of \fIio:scan_erl_form/3\fR\& to provide a scanner for Yecc that tokenizes grammar files before parsing them with the Yecc parser\&. A more general approach to scanner implementation is to use a scanner generator\&. The scanner generator \fIleex\fR\& is included in the \fIparsetools\fR\& application\&.
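.LP
For illustration only (none of this code is part of Yecc), below is a minimal sketch of a scanner for the list expressions used in the EXAMPLES section later in this manual page\&. The module name \fIlist_scanner\fR\& is an example name; the scanner handles only lower-case words, parentheses and whitespace:
.LP
.nf
-module(list_scanner).   %% example name, not part of Yecc
-export([scan/1]).

%% Turn a string into tokens of the form {atom, Line, Name},
%% {'(', Line} and {')', Line}, ending with a {'$end', Line} tuple.
scan(String) ->
    scan(String, [], 1).

scan([], Tokens, Line) ->
    lists:reverse([{'$end', Line} | Tokens]);
scan([10 | Rest], Tokens, Line) ->              %% newline: count lines
    scan(Rest, Tokens, Line + 1);
scan([C | Rest], Tokens, Line) when C =< 32 ->  %% skip blanks and tabs
    scan(Rest, Tokens, Line);
scan([$( | Rest], Tokens, Line) ->
    scan(Rest, [{'(', Line} | Tokens], Line);
scan([$) | Rest], Tokens, Line) ->
    scan(Rest, [{')', Line} | Tokens], Line);
scan([C | _] = String, Tokens, Line) when C >= $a, C =< $z ->
    {Word, Rest} = lists:splitwith(fun(Ch) -> Ch >= $a andalso Ch =< $z end,
                                   String),
    scan(Rest, [{atom, Line, list_to_atom(Word)} | Tokens], Line).
.fi
.LP
With this sketch, \fIlist_scanner:scan("(peter charles)")\fR\& returns exactly the token list shown in the EXAMPLES section below\&.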
.SH "GRAMMAR DEFINITION FORMAT"
.LP
Erlang style \fIcomments\fR\&, starting with a \fI\&'%\&'\fR\&, are allowed in grammar files\&.
.LP
Each \fIdeclaration\fR\& or \fIrule\fR\& ends with a dot (the character \fI\&'\&.\&'\fR\&)\&.
.LP
The grammar starts with an optional \fIheader\fR\& section\&. The header is put first in the generated file, before the module declaration\&. The purpose of the header is to provide a means to make the documentation generated by EDoc look nicer\&. Each header line should be enclosed in double quotes, and newlines will be inserted between the lines\&. For example:
.LP
.nf
Header "%% Copyright (C)" "%% @private" "%% @author John".
.fi
.LP
Next comes a declaration of the \fInonterminal categories\fR\& to be used in the rules\&. For example:
.LP
.nf
Nonterminals sentence nounphrase verbphrase.
.fi
.LP
A non-terminal category can be used at the left hand side (= \fIlhs\fR\&, or \fIhead\fR\&) of a grammar rule\&. It can also appear at the right hand side of rules\&.
.LP
Next comes a declaration of the \fIterminal categories\fR\&, which are the categories of tokens produced by the scanner\&. For example:
.LP
.nf
Terminals article adjective noun verb.
.fi
.LP
Terminal categories may only appear in the right hand sides (= \fIrhs\fR\&) of grammar rules\&.
.LP
Next comes a declaration of the \fIrootsymbol\fR\&, or start category of the grammar\&. For example:
.LP
.nf
Rootsymbol sentence.
.fi
.LP
This symbol should appear in the lhs of at least one grammar rule\&. This is the most general syntactic category which the parser ultimately will parse every input string into\&.
.LP
After the rootsymbol declaration comes an optional declaration of the \fIend_of_input\fR\& symbol that your scanner is expected to use\&. For example:
.LP
.nf
Endsymbol '$end'.
.fi
.LP
Next comes one or more declarations of \fIoperator precedences\fR\&, if needed\&. These are used to resolve shift/reduce conflicts (see the \fIyacc\fR\& documentation)\&.
.LP
Examples of operator declarations:
.LP
.nf
Right 100 '='.
Nonassoc 200 '==' '=/='.
Left 300 '+'.
Left 400 '*'.
Unary 500 '-'.
.fi
.LP
These declarations mean that \fI\&'=\&'\fR\& is defined as a \fIright associative binary\fR\& operator with precedence 100, \fI\&'==\&'\fR\& and \fI\&'=/=\&'\fR\& are operators with \fIno associativity\fR\&, \fI\&'+\&'\fR\& and \fI\&'*\&'\fR\& are \fIleft associative binary\fR\& operators, where \fI\&'*\&'\fR\& takes precedence over \fI\&'+\&'\fR\& (the normal case), and \fI\&'-\&'\fR\& is a \fIunary\fR\& operator of higher precedence than \fI\&'*\&'\fR\&\&. The fact that \&'==\&' has no associativity means that an expression like \fIa == b == c\fR\& is considered a syntax error\&.
.LP
Certain rules are assigned precedence: each rule gets its precedence from the last terminal symbol mentioned in the right hand side of the rule\&. It is also possible to declare precedence for non-terminals, "one level up"\&. This is practical when an operator is overloaded (see also example 3 below)\&.
.LP
Next come the \fIgrammar rules\fR\&\&. Each rule has the general form
.LP
.nf
Left_hand_side -> Right_hand_side : Associated_code.
.fi
.LP
The left hand side is a non-terminal category\&. The right hand side is a sequence of one or more non-terminal or terminal symbols with spaces between them\&. The associated code is a sequence of zero or more Erlang expressions (with commas \fI\&',\&'\fR\& as separators)\&. If the associated code is empty, the separating colon \fI\&':\&'\fR\& is also omitted\&. A final dot marks the end of the rule\&.
.LP
Symbols such as \fI\&'{\&'\fR\&, \fI\&'\&.\&'\fR\&, etc\&., have to be enclosed in single quotes when used as terminal or non-terminal symbols in grammar rules\&. The use of the symbols \fI\&'$empty\&'\fR\&, \fI\&'$end\&'\fR\&, and \fI\&'$undefined\&'\fR\& should be avoided\&.
.LP
The last part of the grammar file is an optional section with Erlang code (= function definitions) which is included \&'as is\&' in the resulting parser file\&. This section must start with the pseudo declaration, or key words
.LP
.nf
Erlang code.
.fi
.LP
No syntax rule definitions or other declarations may follow this section\&. To avoid conflicts with internal variables, do not use variable names beginning with two underscore characters (\&'__\&') in the Erlang code in this section, or in the code associated with the individual syntax rules\&.
.LP
The optional \fIexpect\fR\& declaration can be placed anywhere before the last optional section with Erlang code\&. It is used for suppressing the warning about conflicts that is ordinarily given if the grammar is ambiguous\&. An example:
.LP
.nf
Expect 2.
.fi
.LP
The warning is given if the number of shift/reduce conflicts differs from 2, or if there are reduce/reduce conflicts\&.
.SH "EXAMPLES"
.LP
A grammar to parse list expressions (with empty associated code):
.LP
.nf
Nonterminals list elements element.
Terminals atom '(' ')'.
Rootsymbol list.
list -> '(' ')'.
list -> '(' elements ')'.
elements -> element.
elements -> element elements.
element -> atom.
element -> list.
.fi
.LP
This grammar can be used to generate a parser which parses list expressions, such as \fI(), (a), (peter charles), (a (b c) d (())), \&.\&.\&.\fR\& provided that your scanner tokenizes, for example, the input \fI(peter charles)\fR\& as follows:
.LP
.nf
[{'(', 1}, {atom, 1, peter}, {atom, 1, charles}, {')', 1}, {'$end', 1}]
.fi
.LP
When a grammar rule is used by the parser to parse (part of) the input string as a grammatical phrase, the associated code is evaluated, and the value of the last expression becomes the value of the parsed phrase\&. This value may be used by the parser later to build structures that are values of higher phrases of which the current phrase is a part\&. The values initially associated with terminal category phrases, i\&.e\&. input tokens, are the token tuples themselves\&.
.LP
Below is an example of the grammar above with structure building code added:
.LP
.nf
list -> '(' ')' : nil.
list -> '(' elements ')' : '$2'.
elements -> element : {cons, '$1', nil}.
elements -> element elements : {cons, '$1', '$2'}.
element -> atom : '$1'.
element -> list : '$1'.
.fi
.LP
With this code added to the grammar rules, the parser produces the following value (structure) when parsing the input string \fI(a b c)\fR\&\&. This still assumes that this was the first input line that the scanner tokenized:
.LP
.nf
{cons, {atom, 1, a},
       {cons, {atom, 1, b},
              {cons, {atom, 1, c}, nil}}}
.fi
.LP
The associated code contains \fIpseudo variables\fR\& \fI\&'$1\&'\fR\&, \fI\&'$2\&'\fR\&, \fI\&'$3\&'\fR\&, etc\&., which refer to (are bound to) the values associated previously by the parser with the symbols of the right hand side of the rule\&. When these symbols are terminal categories, the values are token tuples of the input string (see above)\&.
.LP
The associated code may not only be used to build structures associated with phrases, but may also be used for syntactic and semantic tests, printout actions (for example for tracing), etc\&. during the parsing process\&. Since tokens contain positional (line number) information, it is possible to produce error messages which contain line numbers\&. If there is no associated code after the right hand side of the rule, the value \fI\&'$undefined\&'\fR\& is associated with the phrase\&.
.LP
The right hand side of a grammar rule may be empty\&. This is indicated by using the special symbol \fI\&'$empty\&'\fR\& as rhs\&. Then the list grammar above may be simplified to:
.LP
.nf
list -> '(' elements ')' : '$2'.
elements -> element elements : {cons, '$1', '$2'}.
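%% The '$empty' alternative below replaces both the "list -> '(' ')'"
%% and the "elements -> element" rules of the grammar above.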
elements -> '$empty' : nil.
element -> atom : '$1'.
element -> list : '$1'.
.fi
.SH "GENERATING A PARSER"
.LP
To call the parser generator, use the following command:
.LP
.nf
yecc:file(Grammarfile).
.fi
.LP
An error message from Yecc will be shown if the grammar is not of the LALR type (for example, if it is too ambiguous)\&. Shift/reduce conflicts are resolved in favor of shifting if there are no operator precedence declarations\&. Refer to the \fIyacc\fR\& documentation on the use of operator precedence\&.
.LP
The output file contains Erlang source code for a parser module with module name equal to the \fIParserfile\fR\& parameter\&. After compilation, the parser can be called as follows (the module name is assumed to be \fImyparser\fR\&):
.LP
.nf
myparser:parse(myscanner:scan(Inport))
.fi
.LP
The call format may be different if a customized prologue file has been included when generating the parser instead of the default file \fIlib/parsetools/include/yeccpre\&.hrl\fR\&\&.
.LP
With the standard prologue, this call will return either \fI{ok, Result}\fR\&, where \fIResult\fR\& is a structure that the Erlang code of the grammar file has built, or \fI{error, {Line_number, Module, Message}}\fR\& if there was a syntax error in the input\&.
.LP
\fIMessage\fR\& is something which may be converted into a string by calling \fIModule:format_error(Message)\fR\& and printed with \fIio:format/3\fR\&\&.
.LP
.RS -4
.B
Note:
.RE
By default, the parser that was generated will not print out error messages to the screen\&. The user will have to do this either by printing the returned error messages, or by inserting tests and print instructions in the Erlang code associated with the syntax rules of the grammar file\&.
.LP
It is also possible to make the parser ask for more input tokens when needed if the following call format is used:
.LP
.nf
myparser:parse_and_scan({Function, Args})
myparser:parse_and_scan({Mod, Tokenizer, Args})
.fi
.LP
The tokenizer \fIFunction\fR\& is either a fun or a tuple \fI{Mod, Tokenizer}\fR\&\&. The call \fIapply(Function, Args)\fR\& or \fIapply({Mod, Tokenizer}, Args)\fR\& is executed whenever a new token is needed\&. This, for example, makes it possible to parse from a file, token by token\&.
.LP
The tokenizer used above has to be implemented so as to return one of the following:
.LP
.nf
{ok, Tokens, Endline}
{eof, Endline}
{error, Error_description, Endline}
.fi
.LP
This conforms to the format used by the scanner in the Erlang \fIio\fR\& library module\&.
.LP
If \fI{eof, Endline}\fR\& is returned immediately, the call to \fIparse_and_scan/1\fR\& returns \fI{ok, eof}\fR\&\&. If \fI{eof, Endline}\fR\& is returned before the parser expects end of input, \fIparse_and_scan/1\fR\& will, of course, return an error message (see above)\&. Otherwise \fI{ok, Result}\fR\& is returned\&.
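.LP
For illustration, such a tokenizer can be built directly on top of \fIio:scan_erl_form/3\fR\&, which already returns tuples in the format above\&. The sketch below is not part of Yecc: the module name \fItoken_feeder\fR\& is an example name, \fImyparser\fR\& is assumed to be a generated parser whose terminal categories match the Erlang token categories, and error handling is kept minimal:
.LP
.nf
-module(token_feeder).   %% example name, not part of Yecc
-export([parse_file/1]).

%% Open a file and let the generated parser pull tokens from it,
%% one form at a time, via parse_and_scan/1.
parse_file(FileName) ->
    {ok, IoDevice} = file:open(FileName, [read]),
    Tokenizer = fun() -> io:scan_erl_form(IoDevice, '', 1) end,
    try
        myparser:parse_and_scan({Tokenizer, []})
    after
        file:close(IoDevice)
    end.
.fi
.LP
Each call of the fun returns either \fI{ok, Tokens, Endline}\fR\& for the next form, or \fI{eof, Endline}\fR\& at end of file, which is what \fIparse_and_scan/1\fR\& expects (for simplicity, line numbering restarts at 1 for every form)\&.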
.SH "MORE EXAMPLES"
.LP
1\&. A grammar for parsing infix arithmetic expressions into prefix notation, without operator precedence:
.LP
.nf
Nonterminals E T F.
Terminals '+' '*' '(' ')' number.
Rootsymbol E.
E -> E '+' T : {'$2', '$1', '$3'}.
E -> T : '$1'.
T -> T '*' F : {'$2', '$1', '$3'}.
T -> F : '$1'.
F -> '(' E ')' : '$2'.
F -> number : '$1'.
.fi
.LP
2\&. The same with operator precedence becomes simpler:
.LP
.nf
Nonterminals E.
Terminals '+' '*' '(' ')' number.
Rootsymbol E.
Left 100 '+'.
Left 200 '*'.
E -> E '+' E : {'$2', '$1', '$3'}.
E -> E '*' E : {'$2', '$1', '$3'}.
E -> '(' E ')' : '$2'.
E -> number : '$1'.
.fi
.LP
3\&. An overloaded minus operator:
.LP
.nf
Nonterminals E uminus.
Terminals '*' '-' number.
Rootsymbol E.
Left 100 '-'.
Left 200 '*'.
Unary 300 uminus.
E -> E '-' E.
E -> E '*' E.
E -> uminus.
E -> number.
uminus -> '-' E.
.fi
.LP
4\&. The Yecc grammar that is used for parsing grammar files, including itself:
.LP
.nf
Nonterminals
grammar declaration rule head symbol symbols attached_code token tokens.
Terminals
atom float integer reserved_symbol reserved_word string char var
'->' ':' dot.
Rootsymbol grammar.
Endsymbol '$end'.
grammar -> declaration : '$1'.
grammar -> rule : '$1'.
declaration -> symbol symbols dot : {'$1', '$2'}.
rule -> head '->' symbols attached_code dot : {rule, ['$1' | '$3'], '$4'}.
head -> symbol : '$1'.
symbols -> symbol : ['$1'].
symbols -> symbol symbols : ['$1' | '$2'].
attached_code -> ':' tokens : {erlang_code, '$2'}.
attached_code -> '$empty' : {erlang_code, [{atom, 0, '$undefined'}]}.
tokens -> token : ['$1'].
tokens -> token tokens : ['$1' | '$2'].
symbol -> var : value_of('$1').
symbol -> atom : value_of('$1').
symbol -> integer : value_of('$1').
symbol -> reserved_word : value_of('$1').
token -> var : '$1'.
token -> atom : '$1'.
token -> float : '$1'.
token -> integer : '$1'.
token -> string : '$1'.
token -> char : '$1'.
token -> reserved_symbol : {value_of('$1'), line_of('$1')}.
token -> reserved_word : {value_of('$1'), line_of('$1')}.
token -> '->' : {'->', line_of('$1')}.
token -> ':' : {':', line_of('$1')}.
Erlang code.
value_of(Token) ->
    element(3, Token).
line_of(Token) ->
    element(2, Token).
.fi
.LP
.RS -4
.B
Note:
.RE
The symbols \fI\&'->\&'\fR\& and \fI\&':\&'\fR\& have to be treated in a special way, as they are meta symbols of the grammar notation, as well as terminal symbols of the Yecc grammar\&.
.LP
5\&. The file \fIerl_parse\&.yrl\fR\& in the \fIlib/stdlib/src\fR\& directory contains the grammar for Erlang\&.
.LP
.RS -4
.B
Note:
.RE
Syntactic tests are used in the code associated with some rules, and an error is thrown (and caught by the generated parser to produce an error message) when a test fails\&. The same effect can be achieved with a call to \fIreturn_error(Error_line, Message_string)\fR\&, which is defined in the \fIyeccpre\&.hrl\fR\& default header file\&.
.SH "FILES"
.LP
.nf
lib/parsetools/include/yeccpre.hrl
.fi
.SH "SEE ALSO"
.LP
Aho & Johnson: \&'LR Parsing\&', ACM Computing Surveys, vol\&. 6:2, 1974\&.