.\" Man page generated from reStructuredText. . . .nr rst2man-indent-level 0 . .de1 rstReportMargin \\$1 \\n[an-margin] level \\n[rst2man-indent-level] level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] - \\n[rst2man-indent0] \\n[rst2man-indent1] \\n[rst2man-indent2] .. .de1 INDENT .\" .rstReportMargin pre: . RS \\$1 . nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin] . nr rst2man-indent-level +1 .\" .rstReportMargin post: .. .de UNINDENT . RE .\" indent \\n[an-margin] .\" old: \\n[rst2man-indent\\n[rst2man-indent-level]] .nr rst2man-indent-level -1 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .in \\n[rst2man-indent\\n[rst2man-indent-level]]u .. .TH "LARK" "7" "Jan 12, 2024" "" "Lark" .SH NAME lark \- Lark Documentation .SH PHILOSOPHY .sp Parsers are innately complicated and confusing. They\(aqre difficult to understand, difficult to write, and difficult to use. Even experts on the subject can become baffled by the nuances of these complicated state\-machines. .sp Lark\(aqs mission is to make the process of writing them as simple and abstract as possible, by following these design principles: .SS Design Principles .INDENT 0.0 .IP \(bu 2 Readability matters .IP \(bu 2 Keep the grammar clean and simple .IP \(bu 2 Don\(aqt force the user to decide on things that the parser can figure out on its own .IP \(bu 2 Usability is more important than performance .IP \(bu 2 Performance is still very important .IP \(bu 2 Follow the Zen of Python, whenever possible and applicable .UNINDENT .sp In accordance with these principles, I arrived at the following design choices: .sp .ce ---- .ce 0 .sp .SS Design Choices .SS 1. Separation of code and grammar .sp Grammars are the de\-facto reference for your language, and for the structure of your parse\-tree. For any non\-trivial language, the conflation of code and grammar always turns out convoluted and difficult to read. .sp The grammars in Lark are EBNF\-inspired, so they are especially easy to read & work with. .SS 2. Always build a parse\-tree (unless told not to) .sp Trees are always simpler to work with than state\-machines. .INDENT 0.0 .IP \(bu 2 Trees allow you to see the \(dqstate\-machine\(dq visually .IP \(bu 2 Trees allow your computation to be aware of previous and future states .IP \(bu 2 Trees allow you to process the parse in steps, instead of forcing you to do it all at once. .UNINDENT .sp And anyway, every parse\-tree can be replayed as a state\-machine, so there is no loss of information. .sp See this answer in more detail \fI\%here\fP\&. .sp To improve performance, you can skip building the tree for LALR(1), by providing Lark with a transformer (see the \fI\%JSON example\fP). .SS 3. Earley is the default .sp The Earley algorithm can accept \fIany\fP context\-free grammar you throw at it (i.e. any grammar you can write in EBNF, it can parse). That makes it extremely friendly to beginners, who are not aware of the strange and arbitrary restrictions that LALR(1) places on its grammars. .sp As the users grow to understand the structure of their grammar, the scope of their target language, and their performance requirements, they may choose to switch over to LALR(1) to gain a huge performance boost, possibly at the cost of some language features. .sp Both Earley and LALR(1) can use the same grammar, as long as all constraints are satisfied. 
.sp
In short, \(dqPremature optimization is the root of all evil.\(dq
.SS Other design features
.INDENT 0.0
.IP \(bu 2
Automatically resolve terminal collisions whenever possible
.IP \(bu 2
Automatically keep track of line & column numbers
.UNINDENT
.SH FEATURES
.SS Main Features
.INDENT 0.0
.IP \(bu 2
Earley parser, capable of parsing any context\-free grammar
.INDENT 2.0
.IP \(bu 2
Implements SPPF, for efficient parsing and storing of ambiguous grammars.
.UNINDENT
.IP \(bu 2
LALR(1) parser, limited in power of expression, but very efficient in space and performance (O(n)).
.INDENT 2.0
.IP \(bu 2
Implements a parse\-aware lexer that provides a better power of expression than traditional LALR implementations (such as ply).
.UNINDENT
.IP \(bu 2
EBNF\-inspired grammar, with extra features (See: \fI\%Grammar Reference\fP)
.IP \(bu 2
Builds a parse\-tree (AST) automagically based on the grammar
.IP \(bu 2
Stand\-alone parser generator \- create a small independent parser to embed in your project. (\fI\%read more\fP)
.IP \(bu 2
Flexible error handling by using an interactive parser interface (LALR only)
.IP \(bu 2
Automatic line & column tracking (for both tokens and matched rules)
.IP \(bu 2
Automatic terminal collision resolution
.IP \(bu 2
Warns on regex collisions using the optional \fBinteregular\fP library. (\fI\%read more\fP)
.IP \(bu 2
Grammar composition \- Import terminals and rules from other grammars (see \fI\%example\fP).
.IP \(bu 2
Standard library of terminals (strings, numbers, names, etc.)
.IP \(bu 2
Unicode fully supported
.IP \(bu 2
Extensive test suite
.IP \(bu 2
Type annotations (MyPy support)
.IP \(bu 2
Pure\-Python implementation
.UNINDENT
.sp
\fI\%Read more about the parsers\fP
.SS Extra features
.INDENT 0.0
.IP \(bu 2
Support for external regex module (\fI\%see here\fP)
.IP \(bu 2
Import grammars from Nearley.js (\fI\%read more\fP)
.IP \(bu 2
CYK parser
.IP \(bu 2
Visualize your parse trees as dot or png files (\fI\%see example\fP)
.IP \(bu 2
Automatic reconstruction of input from parse\-tree (see \fI\%example\fP and \fI\%another example\fP)
.IP \(bu 2
Use Lark grammars in \fI\%Julia\fP and \fI\%Javascript\fP\&.
.UNINDENT
.SH PARSERS
.sp
Lark implements the following parsing algorithms: Earley, LALR(1), and CYK.
.SS Earley
.sp
An \fI\%Earley Parser\fP is a chart parser capable of parsing any context\-free grammar at O(n^3), and O(n^2) when the grammar is unambiguous. It can parse most LR grammars at O(n). Most programming languages are LR, and can be parsed in linear time.
.sp
Lark\(aqs Earley implementation runs on top of a skipping chart parser, which allows it to use regular expressions, instead of matching characters one\-by\-one. This is a huge improvement to Earley that is unique to Lark. This feature is used by default, but can also be requested explicitly using \fBlexer=\(aqdynamic\(aq\fP\&.
.sp
It\(aqs possible to bypass the dynamic lexing, and use the regular Earley parser with a basic lexer, that tokenizes as an independent first step. Doing so will provide a speed benefit, but will tokenize without using Earley\(aqs ambiguity\-resolution ability. So choose this only if you know why! Activate with \fBlexer=\(aqbasic\(aq\fP\&.
.sp
\fBSPPF & Ambiguity resolution\fP
.sp
Lark implements the Shared Packed Parse Forest data\-structure for the Earley parser, in order to reduce the space and computation required to handle ambiguous grammars.
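.sp
For instance, here is a minimal sketch (using a made\-up toy grammar) of how Earley exposes an ambiguity when asked to:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Lark

# \(dqab\(dq can be derived as a single token, or as two \- an ambiguity
parser = Lark(r\(dq\(dq\(dq
start: ab | a b
ab: \(dqab\(dq
a: \(dqa\(dq
b: \(dqb\(dq
\(dq\(dq\(dq, ambiguity=\(aqexplicit\(aq)

# The result contains an _ambig node holding both derivations
print(parser.parse(\(dqab\(dq).pretty())
.EE
.UNINDENT
.UNINDENT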
.sp
You can read more about SPPF \fI\%here\fP
.sp
As a result, Lark can efficiently parse and store every ambiguity in the grammar, when using Earley.
.sp
Lark provides the following options to combat ambiguity:
.INDENT 0.0
.IP \(bu 2
Lark will choose the best derivation for you (default). Users can choose between different disambiguation strategies, and can prioritize (or demote) individual rules over others, using the rule\-priority syntax.
.IP \(bu 2
Users may choose to receive the set of all possible parse\-trees (using ambiguity=\(aqexplicit\(aq), and choose the best derivation themselves. While simple and flexible, it comes at the cost of space and performance, and so it isn\(aqt recommended for highly ambiguous grammars, or very long inputs.
.IP \(bu 2
As an advanced feature, users may use specialized visitors to iterate the SPPF themselves.
.UNINDENT
.sp
\fBlexer=\(dqdynamic_complete\(dq\fP
.sp
Earley\(aqs \(dqdynamic\(dq lexer uses regular expressions in order to tokenize the text. It tries every possible combination of terminals, but it matches each terminal exactly once, returning the longest possible match.
.sp
That means, for example, that when \fBlexer=\(dqdynamic\(dq\fP (which is the default), the terminal \fB/a+/\fP, when given the text \fB\(dqaa\(dq\fP, will return one result, \fBaa\fP, even though \fBa\fP would also be correct.
.sp
This behavior was chosen because it is much faster, and it is usually what you would expect.
.sp
Setting \fBlexer=\(dqdynamic_complete\(dq\fP instructs the lexer to consider every possible regexp match. This ensures that the parser will consider and resolve every ambiguity, even inside the terminals themselves. This lexer provides the same capabilities as scannerless Earley, but with different performance tradeoffs.
.sp
Warning: This lexer can be much slower, especially for open\-ended terminals such as \fB/.*/\fP
.SS LALR(1)
.sp
\fI\%LALR(1)\fP is a very efficient, tried\-and\-tested parsing algorithm. It\(aqs incredibly fast and requires very little memory. It can parse most programming languages (for example: Python and Java).
.sp
LALR(1) stands for:
.INDENT 0.0
.IP \(bu 2
Left\-to\-right parsing order
.IP \(bu 2
Rightmost derivation, bottom\-up
.IP \(bu 2
Lookahead of 1 token
.UNINDENT
.sp
Lark comes with an efficient implementation that outperforms every other parsing library for Python (including PLY).
.sp
Lark extends the traditional YACC\-based architecture with a \fIcontextual lexer\fP, which processes feedback from the parser, making the LALR(1) algorithm stronger than ever.
.sp
The contextual lexer communicates with the parser, and uses the parser\(aqs lookahead prediction to narrow its choice of terminals. So at each point, the lexer only matches the subgroup of terminals that are legal at that parser state, instead of all of the terminals. It\(aqs surprisingly effective at resolving common terminal collisions, and allows one to parse languages that LALR(1) was previously incapable of parsing.
.sp
(If you\(aqre familiar with YACC, you can think of it as automatic lexer\-states.)
.sp
This is an improvement to LALR(1) that is unique to Lark.
.SS Grammar constraints in LALR(1)
.sp
Because it has a lookahead of only one token, LALR is limited in its ability to choose between rules when they both match the input.
.sp
Tips for writing a conforming grammar:
.INDENT 0.0
.IP \(bu 2
Try to avoid writing different rules that can match the same sequence of characters.
.IP \(bu 2
For the best performance, prefer left\-recursion over right\-recursion.
.IP \(bu 2
Consider setting terminal priority only as a last resort.
.UNINDENT
.sp
For a better understanding of these constraints, it\(aqs recommended to learn how an SLR parser works. SLR is very similar to LALR, but much simpler.
.SS CYK Parser
.sp
A \fI\%CYK parser\fP can parse any context\-free grammar at O(n^3*|G|).
.sp
It\(aqs too slow to be practical for simple grammars, but it offers good performance for highly ambiguous grammars.
.SH JSON PARSER - TUTORIAL
.sp
Lark is a parser \- a program that accepts a grammar and text, and produces a structured tree that represents that text. In this tutorial we will write a JSON parser in Lark, and explore Lark\(aqs various features in the process.
.sp
It has 5 parts.
.INDENT 0.0
.IP \(bu 2
Writing the grammar
.IP \(bu 2
Creating the parser
.IP \(bu 2
Shaping the tree
.IP \(bu 2
Evaluating the tree
.IP \(bu 2
Optimizing
.UNINDENT
.sp
Knowledge assumed:
.INDENT 0.0
.IP \(bu 2
Using Python
.IP \(bu 2
A basic understanding of how to use regular expressions
.UNINDENT
.SS Part 1 \- The Grammar
.sp
Lark accepts its grammars in a format called \fI\%EBNF\fP\&. It basically looks like this:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
rule_name : list of rules and TERMINALS to match
          | another possible list of items
          | etc.

TERMINAL: \(dqsome text to match\(dq
.EE
.UNINDENT
.UNINDENT
.sp
(\fIa terminal is a string or a regular expression\fP)
.sp
The parser will try to match each rule (left\-part) by matching its items (right\-part) sequentially, trying each alternative (in practice, the parser is predictive, so we don\(aqt have to try every alternative).
.sp
How to structure those rules is beyond the scope of this tutorial, but often it\(aqs enough to follow one\(aqs intuition.
.sp
In the case of JSON, the structure is simple: A JSON document is either a list, or a dictionary, or a string/number/etc.
.sp
The dictionaries and lists are recursive, and contain other JSON documents (or \(dqvalues\(dq).
.sp
Let\(aqs write this structure in EBNF form:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
value: dict
     | list
     | STRING
     | NUMBER
     | \(dqtrue\(dq | \(dqfalse\(dq | \(dqnull\(dq

list : \(dq[\(dq [value (\(dq,\(dq value)*] \(dq]\(dq
dict : \(dq{\(dq [pair (\(dq,\(dq pair)*] \(dq}\(dq
pair : STRING \(dq:\(dq value
.EE
.UNINDENT
.UNINDENT
.sp
A quick explanation of the syntax:
.INDENT 0.0
.IP \(bu 2
Parentheses let us group rules together.
.IP \(bu 2
rule* means \fIany amount\fP\&. That means, zero or more instances of that rule.
.IP \(bu 2
[rule] means \fIoptional\fP\&. That means zero or one instance of that rule.
.UNINDENT
.sp
Lark also supports the rule+ operator, meaning one or more instances. It also supports the rule? operator, which is another way to say \fIoptional\fP\&.
.sp
Of course, we still haven\(aqt defined \(dqSTRING\(dq and \(dqNUMBER\(dq\&. Luckily, both these terminals are already defined in Lark\(aqs common library:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
%import common.ESCAPED_STRING \-> STRING
%import common.SIGNED_NUMBER \-> NUMBER
.EE
.UNINDENT
.UNINDENT
.sp
The arrow (\->) renames the terminals. But that only adds obscurity in this case, so going forward we\(aqll just use their original names.
.sp
We\(aqll also take care of the white\-space, which is part of the text, by simply matching and then throwing it away.
.INDENT 0.0
.INDENT 3.5
.sp
.EX
%import common.WS
%ignore WS
.EE
.UNINDENT
.UNINDENT
.sp
We tell our parser to ignore whitespace. Otherwise, we\(aqd have to fill our grammar with WS terminals.
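.sp
To see what %ignore buys us, here is a tiny, self\-contained sketch (the grammar and input are made up for illustration; they are not part of the tutorial\(aqs JSON grammar):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Lark

# With %ignore, the spaces between tokens are simply skipped
parser = Lark(r\(dq\(dq\(dq
start: WORD+
WORD: /[a\-z]+/
%import common.WS
%ignore WS
\(dq\(dq\(dq)

print(parser.parse(\(dqhello   brave  new world\(dq).pretty())
# Removing the %ignore line makes the same input fail with
# UnexpectedCharacters on the first space
.EE
.UNINDENT
.UNINDENT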
.sp
By the way, if you\(aqre curious what these terminals signify, they are roughly equivalent to this:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
NUMBER : /\-?\ed+(\e.\ed+)?([eE][+\-]?\ed+)?/
STRING : /\(dq.*?(?<!\e\e)\(dq/
.EE
.UNINDENT
.UNINDENT
.SS Part 2 \- Creating the Parser
.sp
Once we have our grammar, creating the parser is simple: we instantiate Lark, and tell it to accept a \(dqvalue\(dq:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Lark

json_parser = Lark(r\(dq\(dq\(dq
    value: dict
         | list
         | ESCAPED_STRING
         | SIGNED_NUMBER
         | \(dqtrue\(dq | \(dqfalse\(dq | \(dqnull\(dq

    list : \(dq[\(dq [value (\(dq,\(dq value)*] \(dq]\(dq
    dict : \(dq{\(dq [pair (\(dq,\(dq pair)*] \(dq}\(dq
    pair : ESCAPED_STRING \(dq:\(dq value

    %import common.ESCAPED_STRING
    %import common.SIGNED_NUMBER
    %import common.WS
    %ignore WS
\(dq\(dq\(dq, start=\(aqvalue\(aq)
.EE
.UNINDENT
.UNINDENT
.sp
Let\(aqs test it out:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
>>> text = \(aq{\(dqkey\(dq: [\(dqitem0\(dq, \(dqitem1\(dq, 3.14]}\(aq
>>> json_parser.parse(text)
Tree(value, [Tree(dict, [Tree(pair, [Token(ESCAPED_STRING, \(dqkey\(dq), Tree(value, [Tree(list, [Tree(value, [Token(ESCAPED_STRING, \(dqitem0\(dq)]), Tree(value, [Token(ESCAPED_STRING, \(dqitem1\(dq)]), Tree(value, [Token(SIGNED_NUMBER, 3.14)])])])])])])
>>> print( _.pretty() )
value
  dict
    pair
      \(dqkey\(dq
      value
        list
          value   \(dqitem0\(dq
          value   \(dqitem1\(dq
          value   3.14
.EE
.UNINDENT
.UNINDENT
.sp
As promised, Lark automagically creates a tree that represents the parsed text.
.sp
But something is suspiciously missing from the tree. Where are the curly braces, the commas and all the other punctuation literals?
.sp
Lark automatically filters out literals from the tree, based on the following criteria:
.INDENT 0.0
.IP \(bu 2
Filter out string literals without a name, or with a name that starts with an underscore.
.IP \(bu 2
Keep regexps, even unnamed ones, unless their name starts with an underscore.
.UNINDENT
.sp
Unfortunately, this means that it will also filter out literals like \(dqtrue\(dq and \(dqfalse\(dq, and we will lose that information. The next section, \(dqShaping the tree\(dq, deals with this issue, and others.
.SS Part 3 \- Shaping the Tree
.sp
We now have a parser that can create a parse tree (or: AST), but the tree has some issues:
.INDENT 0.0
.IP \(bu 2
\(dqtrue\(dq, \(dqfalse\(dq and \(dqnull\(dq are filtered out (test it out yourself!)
.IP \(bu 2
It has useless branches, like \fIvalue\fP, that clutter up our view.
.UNINDENT
.sp
I\(aqll present the solution, and then explain it:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
?value: dict
      | list
      | string
      | SIGNED_NUMBER \-> number
      | \(dqtrue\(dq \-> true
      | \(dqfalse\(dq \-> false
      | \(dqnull\(dq \-> null

\&...

string : ESCAPED_STRING
.EE
.UNINDENT
.UNINDENT
.INDENT 0.0
.IP \(bu 2
Those little arrows signify \fIaliases\fP\&. An alias is a name for a specific part of the rule. In this case, we will name the \fItrue/false/null\fP matches, and this way we won\(aqt lose the information. We also alias \fISIGNED_NUMBER\fP to mark it for later processing.
.IP \(bu 2
The question\-mark prefixing \fIvalue\fP (\(dq?value\(dq) tells the tree\-builder to inline this branch if it has only one member. In this case, \fIvalue\fP will always have only one member, and will always be inlined.
.IP \(bu 2
We turned the \fIESCAPED_STRING\fP terminal into a rule. This way it will appear in the tree as a branch. This is equivalent to aliasing (like we did for the number), but now \fIstring\fP can also be used elsewhere in the grammar (namely, in the \fIpair\fP rule).
.UNINDENT
.sp
Here is the new grammar:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Lark

json_parser = Lark(r\(dq\(dq\(dq
    ?value: dict
          | list
          | string
          | SIGNED_NUMBER \-> number
          | \(dqtrue\(dq \-> true
          | \(dqfalse\(dq \-> false
          | \(dqnull\(dq \-> null

    list : \(dq[\(dq [value (\(dq,\(dq value)*] \(dq]\(dq
    dict : \(dq{\(dq [pair (\(dq,\(dq pair)*] \(dq}\(dq
    pair : string \(dq:\(dq value

    string : ESCAPED_STRING

    %import common.ESCAPED_STRING
    %import common.SIGNED_NUMBER
    %import common.WS
    %ignore WS
\(dq\(dq\(dq, start=\(aqvalue\(aq)
.EE
.UNINDENT
.UNINDENT
.sp
And let\(aqs test it out:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
>>> text = \(aq{\(dqkey\(dq: [\(dqitem0\(dq, \(dqitem1\(dq, 3.14, true]}\(aq
>>> print( json_parser.parse(text).pretty() )
dict
  pair
    string   \(dqkey\(dq
    list
      string   \(dqitem0\(dq
      string   \(dqitem1\(dq
      number   3.14
      true
.EE
.UNINDENT
.UNINDENT
.sp
Ah! That is much, much nicer.
.SS Part 4 \- Evaluating the tree
.sp
It\(aqs nice to have a tree, but what we really want is a JSON object.
.sp
The way to do it is to evaluate the tree, using a Transformer.
.sp
A transformer is a class with methods corresponding to branch names. For each branch, the appropriate method will be called with the children of the branch as its argument, and its return value will replace the branch in the tree.
.sp
So let\(aqs write a partial transformer that handles lists and dictionaries:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Transformer

class MyTransformer(Transformer):
    def list(self, items):
        return list(items)
    def pair(self, key_value):
        k, v = key_value
        return k, v
    def dict(self, items):
        return dict(items)
.EE
.UNINDENT
.UNINDENT
.sp
And when we run it, we get this:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
>>> tree = json_parser.parse(text)
>>> MyTransformer().transform(tree)
{Tree(string, [Token(ESCAPED_STRING, \(dqkey\(dq)]): [Tree(string, [Token(ESCAPED_STRING, \(dqitem0\(dq)]), Tree(string, [Token(ESCAPED_STRING, \(dqitem1\(dq)]), Tree(number, [Token(SIGNED_NUMBER, 3.14)]), Tree(true, [])]}
.EE
.UNINDENT
.UNINDENT
.sp
This is pretty close. Let\(aqs write a full transformer that can handle the terminals too.
.sp
Also, our definitions of list and dict are a bit verbose. We can do better:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Transformer

class TreeToJson(Transformer):
    def string(self, s):
        (s,) = s
        return s[1:\-1]
    def number(self, n):
        (n,) = n
        return float(n)

    list = list
    pair = tuple
    dict = dict

    null = lambda self, _: None
    true = lambda self, _: True
    false = lambda self, _: False
.EE
.UNINDENT
.UNINDENT
.sp
And when we run it:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
>>> tree = json_parser.parse(text)
>>> TreeToJson().transform(tree)
{\(aqkey\(aq: [\(aqitem0\(aq, \(aqitem1\(aq, 3.14, True]}
.EE
.UNINDENT
.UNINDENT
.sp
Magic!
.SS Part 5 \- Optimizing
.SS Step 1 \- Benchmark
.sp
By now, we have a fully working JSON parser that can accept a string of JSON and return its Pythonic representation.
.sp
But how fast is it?
.sp
Now, of course there are JSON libraries for Python written in C, and we can never compete with them. But since this is applicable to any parser you would write in Lark, let\(aqs see how far we can take this.
.sp
The first step for optimizing is to have a benchmark. For this benchmark I\(aqm going to take data from \fI\%json\-generator.com/\fP\&. I took their default suggestion and changed it to 5000 objects. The result is a 6.6MB sparse JSON file.
.sp
Our first program is going to be just a concatenation of everything we\(aqve done so far:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
import sys

from lark import Lark, Transformer

json_grammar = r\(dq\(dq\(dq
    ?value: dict
          | list
          | string
          | SIGNED_NUMBER \-> number
          | \(dqtrue\(dq \-> true
          | \(dqfalse\(dq \-> false
          | \(dqnull\(dq \-> null

    list : \(dq[\(dq [value (\(dq,\(dq value)*] \(dq]\(dq
    dict : \(dq{\(dq [pair (\(dq,\(dq pair)*] \(dq}\(dq
    pair : string \(dq:\(dq value

    string : ESCAPED_STRING

    %import common.ESCAPED_STRING
    %import common.SIGNED_NUMBER
    %import common.WS
    %ignore WS
\(dq\(dq\(dq

class TreeToJson(Transformer):
    def string(self, s):
        (s,) = s
        return s[1:\-1]
    def number(self, n):
        (n,) = n
        return float(n)

    list = list
    pair = tuple
    dict = dict

    null = lambda self, _: None
    true = lambda self, _: True
    false = lambda self, _: False

json_parser = Lark(json_grammar, start=\(aqvalue\(aq, lexer=\(aqbasic\(aq)

if __name__ == \(aq__main__\(aq:
    with open(sys.argv[1]) as f:
        tree = json_parser.parse(f.read())
        print(TreeToJson().transform(tree))
.EE
.UNINDENT
.UNINDENT
.sp
We run it and get this:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ time python tutorial_json.py json_data > /dev/null

real    0m36.257s
user    0m34.735s
sys     0m1.361s
.EE
.UNINDENT
.UNINDENT
.sp
That\(aqs an unsatisfactory time for a 6MB file. It might be acceptable if we were parsing a configuration file or a small DSL, but we\(aqre trying to handle a large amount of data here.
.sp
Well, turns out there\(aqs quite a bit we can do about it!
.SS Step 2 \- LALR(1)
.sp
So far we\(aqve been using the Earley algorithm, which is the default in Lark. Earley is powerful but slow. But it just so happens that our grammar is LR\-compatible, and specifically LALR(1) compatible.
.sp
So let\(aqs switch to LALR(1) and see what happens:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
json_parser = Lark(json_grammar, start=\(aqvalue\(aq, parser=\(aqlalr\(aq)
.EE
.UNINDENT
.UNINDENT
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ time python tutorial_json.py json_data > /dev/null

real    0m7.554s
user    0m7.352s
sys     0m0.148s
.EE
.UNINDENT
.UNINDENT
.sp
Ah, that\(aqs much better. The resulting JSON is of course exactly the same. You can run it for yourself and see.
.sp
It\(aqs important to note that not all grammars are LR\-compatible, and so you can\(aqt always switch to LALR(1). But there\(aqs no harm in trying! If Lark lets you build the grammar, it means you\(aqre good to go.
.SS Step 3 \- Tree\-less LALR(1)
.sp
So far, we\(aqve built a full parse tree for our JSON, and then transformed it. It\(aqs a convenient method, but it\(aqs not the most efficient in terms of speed and memory. Luckily, Lark lets us avoid building the tree when parsing with LALR(1).
.sp
Here\(aqs the way to do it:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
json_parser = Lark(json_grammar, start=\(aqvalue\(aq, parser=\(aqlalr\(aq, transformer=TreeToJson())

if __name__ == \(aq__main__\(aq:
    with open(sys.argv[1]) as f:
        print( json_parser.parse(f.read()) )
.EE
.UNINDENT
.UNINDENT
.sp
We\(aqve used the transformer we\(aqve already written, but this time we plug it straight into the parser. Now it can avoid building the parse tree, and just send the data straight into our transformer. The \fIparse()\fP method now returns the transformed JSON, instead of a tree.
.sp
Let\(aqs benchmark it:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
real    0m4.866s
user    0m4.722s
sys     0m0.121s
.EE
.UNINDENT
.UNINDENT
.sp
That\(aqs a measurable improvement! Also, this way is more memory efficient. Check out the benchmark table at the end to see just how much.
.sp
As a general practice, it\(aqs recommended to work with parse trees, and only skip the tree\-builder when your transformer is already working.
.SS Step 4 \- PyPy
.sp
PyPy is a JIT engine for running Python, and it\(aqs designed to be a drop\-in replacement.
.sp
Lark is written purely in Python, which makes it very suitable for PyPy.
.sp
Let\(aqs get some free performance:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ time pypy tutorial_json.py json_data > /dev/null

real    0m1.397s
user    0m1.296s
sys     0m0.083s
.EE
.UNINDENT
.UNINDENT
.sp
PyPy is awesome!
.SS Conclusion
.sp
We\(aqve brought the run\-time down from 36 seconds to 1.4 seconds, in a series of small and simple steps.
.sp
Now let\(aqs compare the benchmarks in a nicely organized table.
.sp
I measured memory consumption using a little script called \fI\%memusg\fP\&.
.sp
I added a few other parsers for comparison. PyParsing and funcparserlib fare pretty well in their memory usage (they don\(aqt build a tree), but they can\(aqt compete with the run\-time speed of LALR(1).
.sp
These benchmarks are for Lark\(aqs alpha version. I already have several optimizations planned that will significantly improve run\-time speed.
.sp
Once again, shout\-out to PyPy for being so effective.
.SS Afterword
.sp
This is the end of the tutorial. I hope you liked it and learned a little about Lark.
.sp
To see what else you can do with Lark, check out the \fI\%examples\fP\&.
.sp
Read the documentation here: https://lark\-parser.readthedocs.io/en/latest/
.SH HOW TO USE LARK - GUIDE
.SS Work process
.sp
This is the recommended process for working with Lark:
.INDENT 0.0
.IP \(bu 2
Collect or create input samples that demonstrate key features or behaviors in the language you\(aqre trying to parse.
.IP \(bu 2
Write a grammar. Aim for a structure that is intuitive, and that imitates how you would explain your language to a fellow human.
.IP \(bu 2
Try your grammar in Lark against each input sample. Make sure the resulting parse\-trees make sense.
.IP \(bu 2
Use Lark\(aqs grammar features to \fI\%shape the tree\fP: Get rid of superfluous rules by inlining them, and use aliases when specific cases need clarification.
.sp
You can perform steps 1\-4 repeatedly, gradually growing your grammar to include more sentences.
.IP \(bu 2
Create a transformer to evaluate the parse\-tree into a structure you\(aqll be comfortable working with. This may include evaluating literals, merging branches, or even converting the entire tree into your own set of AST classes.
.UNINDENT
.sp
Of course, some specific use\-cases may deviate from this process. Feel free to suggest these cases, and I\(aqll add them to this page.
.SS Getting started
.sp
Browse the \fI\%Examples\fP to find a template that suits your purposes.
.sp
Read the tutorials to get a better understanding of how everything works. (links in the \fI\%main page\fP)
.sp
Use the \fI\%Cheatsheet (PDF)\fP for quick reference.
.sp
Use the reference pages for more in\-depth explanations. (links in the \fI\%main page\fP)
.SS Debug
.sp
Grammars may contain non\-obvious bugs, usually caused by rules or terminals interfering with each other in subtle ways.
.sp
When trying to debug a misbehaving grammar, the following methodology is recommended:
.INDENT 0.0
.IP \(bu 2
Create a copy of the grammar, so you can change the parser/grammar without any worries
.IP \(bu 2
Find the minimal input that creates the error
.IP \(bu 2
Slowly remove rules from the grammar, while making sure the error still occurs.
.UNINDENT
.sp
Usually, by the time you get to a minimal grammar, the problem becomes clear.
.sp
But if it doesn\(aqt, feel free to ask us on gitter, or even open an issue. Post the reproducing code, with the minimal grammar and input, and we\(aqll do our best to help.
.SS Regex collisions
.sp
A likely source of bugs occurs when two regexes in a grammar can match the same input. If both terminals have the same priority, most lexers would arbitrarily choose the first one that matches, which isn\(aqt always the desired one. (A notable exception is the \fBdynamic_complete\fP lexer, which always tries all variations. But its users pay for that with performance.)
.sp
These collisions can be hard to notice, and their effects can be difficult to debug, as they are subtle and sometimes hard to reproduce.
.sp
To help with these situations, Lark can utilize an external library called \fBinteregular\fP\&. If it is installed, Lark uses it to check for collisions, and warns about any conflicts that it can find:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
import logging
from lark import Lark, logger

logger.setLevel(logging.WARN)

collision_grammar = \(aq\(aq\(aq
start: A | B
A: /a+/
B: /[ab]+/
\(aq\(aq\(aq
p = Lark(collision_grammar, parser=\(aqlalr\(aq)

# Output:
# Collision between Terminals B and A. The lexer will choose between them arbitrarily
# Example Collision: a
.EE
.UNINDENT
.UNINDENT
.sp
You can install interegular for Lark using \fBpip install \(aqlark[interegular]\(aq\fP\&.
.sp
Note 1: Interegular currently only runs when the lexer is \fBbasic\fP or \fBcontextual\fP\&.
.sp
Note 2: Some advanced regex features, such as lookahead and lookbehind, may prevent interegular from detecting existing collisions.
.SS Shift/Reduce collisions
.sp
By default, Lark automatically resolves Shift/Reduce conflicts as Shift, and reports them as debug messages. When users pass \fBdebug=True\fP, those notifications are written as warnings.
.sp
Either way, to get the messages printed you have to configure the \fBlogger\fP beforehand. For example:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
import logging
from lark import Lark, logger

logger.setLevel(logging.DEBUG)

collision_grammar = \(aq\(aq\(aq
start: as as
as: a*
a: \(dqa\(dq
\(aq\(aq\(aq
p = Lark(collision_grammar, parser=\(aqlalr\(aq, debug=True)

# Shift/Reduce conflict for terminal A: (resolving as shift)
#  *
# Shift/Reduce conflict for terminal A: (resolving as shift)
#  *
.EE
.UNINDENT
.UNINDENT
.SS Strict\-Mode
.sp
Lark, by default, accepts grammars with unresolved Shift/Reduce collisions (which it always resolves to shift), and regex collisions.
.sp
Strict\-mode allows users to validate that their grammars don\(aqt contain these collisions.
.sp
When Lark is initialized with \fBstrict=True\fP, it raises an exception on any Shift/Reduce or regex collision. If \fBinteregular\fP isn\(aqt installed, an exception is raised.
.sp
When using strict\-mode, users will be expected to resolve their collisions manually:
.INDENT 0.0
.IP \(bu 2
To resolve Shift/Reduce collisions, adjust the priority weights of the rules involved, until there are no more collisions.
.IP \(bu 2
To resolve regex collisions, change the involved regexes so that they can no longer both match the same input (Lark provides an example).
.UNINDENT
.sp
Strict\-mode only applies to LALR for now.
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Lark

collision_grammar = \(aq\(aq\(aq
start: as as
as: a*
a: \(dqa\(dq
\(aq\(aq\(aq
p = Lark(collision_grammar, parser=\(aqlalr\(aq, strict=True)

# Traceback (most recent call last):
#   ...
# lark.exceptions.GrammarError: Shift/Reduce conflict for terminal A. [strict\-mode]
.EE
.UNINDENT
.UNINDENT
.SS Tools
.SS Stand\-alone parser
.sp
Lark can generate a stand\-alone LALR(1) parser from a grammar.
.sp
The resulting module provides the same interface as Lark, but with a fixed grammar, and reduced functionality.
.sp
Run using:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
python \-m lark.tools.standalone
.EE
.UNINDENT
.UNINDENT
.sp
For a play\-by\-play, read the \fI\%tutorial\fP\&.
.SS Import Nearley.js grammars
.sp
It is possible to import Nearley grammars into Lark. The Javascript code is translated using Js2Py.
.sp
See the \fI\%tools page\fP for more information.
.SH HOW TO DEVELOP LARK - GUIDE
.sp
There are many ways you can help the project:
.INDENT 0.0
.IP \(bu 2
Help solve issues
.IP \(bu 2
Improve the documentation
.IP \(bu 2
Write new grammars for Lark\(aqs library
.IP \(bu 2
Write a blog post introducing Lark to your audience
.IP \(bu 2
Port Lark to another language
.IP \(bu 2
Help with code development
.UNINDENT
.sp
If you\(aqre interested in taking one of these on, contact us on \fI\%Gitter\fP or \fI\%Github Discussion\fP, and we will provide more details and assist you in the process.
.SS Code Style
.sp
Lark does not follow a predefined code style. We accept any code style that makes sense, as long as it\(aqs Pythonic and easy to read.
.SS Unit Tests
.sp
Lark comes with an extensive set of tests. Many of the tests will run several times, once for each parser configuration.
.sp
To run the tests, just go to the lark project root, and run the command:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
python \-m tests
.EE
.UNINDENT
.UNINDENT
.sp
or
.INDENT 0.0
.INDENT 3.5
.sp
.EX
pypy \-m tests
.EE
.UNINDENT
.UNINDENT
.sp
For a list of supported interpreters, you can consult the \fBtox.ini\fP file.
.sp
You can also run a single unittest using its class and method name, for example:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
## test_package test_class_name.test_function_name
python \-m tests TestLalrBasic.test_keep_all_tokens
.EE
.UNINDENT
.UNINDENT
.SS tox
.sp
To run all Unit Tests with tox, install tox, along with every Python interpreter you want to test (from 2.7 up to the latest supported version; consult the \fBtox.ini\fP file). Then, run the command \fBtox\fP in the root of this project (where the main setup.py file is).
.sp
And, for example, if you would like to only run the Unit Tests for Python version 2.7, you can run the command \fBtox \-e py27\fP\&.
.SS pytest
.sp
You can also run the tests using pytest:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
pytest tests
.EE
.UNINDENT
.UNINDENT
.SS Using setup.py
.sp
Another way to run the tests is using setup.py:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
python setup.py test
.EE
.UNINDENT
.UNINDENT
.SH RECIPES
.sp
A collection of recipes to use Lark and its various features
.SS Use a transformer to parse integer tokens
.sp
Transformers are the common interface for processing matched rules and tokens.
.sp
They can be used during parsing for better performance.
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Lark, Transformer

class T(Transformer):
    def INT(self, tok):
        \(dqConvert the value of \(gatok\(ga from string to int, while maintaining line number & column.\(dq
        return tok.update(value=int(tok))

parser = Lark(\(dq\(dq\(dq
start: INT*
%import common.INT
%ignore \(dq \(dq
\(dq\(dq\(dq, parser=\(dqlalr\(dq, transformer=T())

print(parser.parse(\(aq3 14 159\(aq))
.EE
.UNINDENT
.UNINDENT
.sp
Prints out:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
Tree(start, [Token(INT, 3), Token(INT, 14), Token(INT, 159)])
.EE
.UNINDENT
.UNINDENT
.SS Collect all comments with lexer_callbacks
.sp
\fBlexer_callbacks\fP can be used to interface with the lexer as it generates tokens.
.sp
It accepts a dictionary of the form
.INDENT 0.0
.INDENT 3.5
.sp
.EX
{TOKEN_TYPE: callback}
.EE
.UNINDENT
.UNINDENT
.sp
Where callback is of type \fBf(Token) \-> Token\fP
.sp
It only works with the basic and contextual lexers.
.sp
This has the same effect as using a transformer, but can also process ignored tokens.
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Lark

comments = []

parser = Lark(\(dq\(dq\(dq
    start: INT*

    COMMENT: /#.*/

    %import common (INT, WS)
    %ignore COMMENT
    %ignore WS
\(dq\(dq\(dq, parser=\(dqlalr\(dq, lexer_callbacks={\(aqCOMMENT\(aq: comments.append})

parser.parse(\(dq\(dq\(dq
1 2 3  # hello
# world
4 5 6
\(dq\(dq\(dq)

print(comments)
.EE
.UNINDENT
.UNINDENT
.sp
Prints out:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
[Token(COMMENT, \(aq# hello\(aq), Token(COMMENT, \(aq# world\(aq)]
.EE
.UNINDENT
.UNINDENT
.sp
\fINote: We don\(aqt have to return a token, because comments are ignored\fP
.SS CollapseAmbiguities
.sp
Parsing ambiguous texts with earley and \fBambiguity=\(aqexplicit\(aq\fP produces a single tree with \fB_ambig\fP nodes to mark where the ambiguity occurred.
.sp
However, it\(aqs sometimes more convenient instead to work with a list of all possible unambiguous trees.
.sp
Lark provides a utility transformer for that purpose:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Lark, Tree, Transformer
from lark.visitors import CollapseAmbiguities

grammar = \(dq\(dq\(dq
    !start: x y

    !x: \(dqa\(dq \(dqb\(dq
      | \(dqab\(dq
      | \(dqabc\(dq

    !y: \(dqc\(dq \(dqd\(dq
      | \(dqcd\(dq
      | \(dqd\(dq
\(dq\(dq\(dq
parser = Lark(grammar, ambiguity=\(aqexplicit\(aq)

t = parser.parse(\(aqabcd\(aq)
for x in CollapseAmbiguities().transform(t):
    print(x.pretty())
.EE
.UNINDENT
.UNINDENT
.sp
This prints out:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
start
  x
    a
    b
  y
    c
    d

start
  x   ab
  y   cd

start
  x   abc
  y   d
.EE
.UNINDENT
.UNINDENT
.sp
While convenient, this should be used carefully, as highly ambiguous trees will soon create an exponential explosion of such unambiguous derivations.
.SS Keeping track of parents when visiting
.sp
The following visitor assigns a \fBparent\fP attribute for every node in the tree.
.sp
If your tree nodes aren\(aqt unique (if there is a shared Tree instance), the assert will fail.
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from weakref import proxy

from lark import Tree, Visitor

class Parent(Visitor):
    def __default__(self, tree):
        for subtree in tree.children:
            if isinstance(subtree, Tree):
                assert not hasattr(subtree, \(aqparent\(aq)
                subtree.parent = proxy(tree)
.EE
.UNINDENT
.UNINDENT
.SS Unwinding VisitError after a transformer/visitor exception
.sp
Errors that happen inside visitors and transformers get wrapped inside a \fBVisitError\fP exception.
.sp
This can often be inconvenient, if you wish the actual error to propagate upwards, or if you want to catch it.
.sp
But it\(aqs easy to unwrap it at the point of calling the transformer, by catching it and raising the \fBVisitError.orig_exc\fP attribute.
.sp
For example:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Lark, Transformer
from lark.visitors import VisitError

tree = Lark(\(aqstart: \(dqa\(dq\(aq).parse(\(aqa\(aq)

class T(Transformer):
    def start(self, x):
        raise KeyError(\(dqOriginal Exception\(dq)

t = T()
try:
    print( t.transform(tree))
except VisitError as e:
    raise e.orig_exc
.EE
.UNINDENT
.UNINDENT
.SS Adding a Progress Bar to Parsing with tqdm
.sp
Parsing large files can take a long time, even with the \fBparser=\(aqlalr\(aq\fP option. To make this process more user\-friendly, it\(aqs useful to add a progress bar. One way to achieve this is to use the \fBInteractiveParser\fP to display each token as it is processed. In this example, we use \fI\%tqdm\fP, but a similar approach should work with GUIs.
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from tqdm import tqdm

from lark import Lark

def parse_with_progress(parser: Lark, text: str, start=None):
    last = 0
    progress = tqdm(total=len(text))
    pi = parser.parse_interactive(text, start=start)
    for token in pi.iter_parse():
        if token.end_pos is not None:
            progress.update(token.end_pos \- last)
            last = token.end_pos
    return pi.result
.EE
.UNINDENT
.UNINDENT
.sp
Note that we don\(aqt simply wrap the iterable because tqdm would not be able to determine the total. Additionally, keep in mind that this implementation relies on the \fBInteractiveParser\fP and, therefore, only works with the \fBLALR(1)\fP parser, not \fBearley\fP\&.
.SH EXAMPLES FOR LARK
.sp
\fBHow to run the examples\fP:
.sp
After cloning the repo, open the terminal in the root directory of the project, and run the following:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
[lark]$ python \-m examples.<name_of_example>
.EE
.UNINDENT
.UNINDENT
.sp
For example, the following will parse all the Python files in the standard library of your local installation:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
[lark]$ python \-m examples.advanced.python_parser
.EE
.UNINDENT
.UNINDENT
.SS Beginner Examples
.SS Parsing Indentation
.sp
A demonstration of parsing indentation (a \(dqwhitespace significant\(dq language) and the usage of the Indenter class.
.sp
Since indentation is context\-sensitive, a postlex stage is introduced to manufacture INDENT/DEDENT tokens.
.sp
It is crucial for the indenter that the NL_type matches the spaces (and tabs) after the newline.
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Lark
from lark.indenter import Indenter

tree_grammar = r\(dq\(dq\(dq
    ?start: _NL* tree

    tree: NAME _NL [_INDENT tree+ _DEDENT]

    %import common.CNAME \-> NAME
    %import common.WS_INLINE
    %declare _INDENT _DEDENT
    %ignore WS_INLINE

    _NL: /(\er?\en[\et ]*)+/
\(dq\(dq\(dq

class TreeIndenter(Indenter):
    NL_type = \(aq_NL\(aq
    OPEN_PAREN_types = []
    CLOSE_PAREN_types = []
    INDENT_type = \(aq_INDENT\(aq
    DEDENT_type = \(aq_DEDENT\(aq
    tab_len = 8

parser = Lark(tree_grammar, parser=\(aqlalr\(aq, postlex=TreeIndenter())

test_tree = \(dq\(dq\(dq
a
    b
    c
        d
        e
f
    g
\(dq\(dq\(dq

def test():
    print(parser.parse(test_tree).pretty())

if __name__ == \(aq__main__\(aq:
    test()
.EE
.UNINDENT
.UNINDENT
.sp
\fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds)
.SS Lark Grammar
.sp
A reference implementation of the Lark grammar (using LALR(1))
.INDENT 0.0
.INDENT 3.5
.sp
.EX
import lark
from pathlib import Path

examples_path = Path(__file__).parent
lark_path = Path(lark.__file__).parent

parser = lark.Lark.open(lark_path / \(aqgrammars/lark.lark\(aq, rel_to=__file__, parser=\(dqlalr\(dq)

grammar_files = [
    examples_path / \(aqadvanced/python2.lark\(aq,
    examples_path / \(aqrelative\-imports/multiples.lark\(aq,
    examples_path / \(aqrelative\-imports/multiple2.lark\(aq,
    examples_path / \(aqrelative\-imports/multiple3.lark\(aq,
    examples_path / \(aqtests/no_newline_at_end.lark\(aq,
    examples_path / \(aqtests/negative_priority.lark\(aq,
    examples_path / \(aqstandalone/json.lark\(aq,
    lark_path / \(aqgrammars/common.lark\(aq,
    lark_path / \(aqgrammars/lark.lark\(aq,
    lark_path / \(aqgrammars/unicode.lark\(aq,
    lark_path / \(aqgrammars/python.lark\(aq,
]

def test():
    for grammar_file in grammar_files:
        tree = parser.parse(open(grammar_file).read())
    print(\(dqAll grammars parsed successfully\(dq)

if __name__ == \(aq__main__\(aq:
    test()
.EE
.UNINDENT
.UNINDENT
.sp
\fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds)
.SS Handling Ambiguity
.sp
A demonstration of ambiguity
.sp
This example shows how to get explicit ambiguity from Lark\(aqs Earley parser.
.INDENT 0.0
.INDENT 3.5
.sp
.EX
import sys
from lark import Lark, tree

grammar = \(dq\(dq\(dq
    sentence: noun verb noun        \-> simple
            | noun verb \(dqlike\(dq noun \-> comparative

    noun: adj? NOUN
    verb: VERB
    adj: ADJ

    NOUN: \(dqflies\(dq | \(dqbananas\(dq | \(dqfruit\(dq
    VERB: \(dqlike\(dq | \(dqflies\(dq
    ADJ: \(dqfruit\(dq

    %import common.WS
    %ignore WS
\(dq\(dq\(dq

parser = Lark(grammar, start=\(aqsentence\(aq, ambiguity=\(aqexplicit\(aq)

sentence = \(aqfruit flies like bananas\(aq

def make_png(filename):
    tree.pydot__tree_to_png(parser.parse(sentence), filename)

def make_dot(filename):
    tree.pydot__tree_to_dot(parser.parse(sentence), filename)

if __name__ == \(aq__main__\(aq:
    print(parser.parse(sentence).pretty())
    # make_png(sys.argv[1])
    # make_dot(sys.argv[1])

# Output:
#
# _ambig
#   comparative
#     noun      fruit
#     verb      flies
#     noun      bananas
#   simple
#     noun
#       fruit
#       flies
#     verb      like
#     noun      bananas
#
# (or view a nicer version at \(dq./fruitflies.png\(dq)
.EE
.UNINDENT
.UNINDENT
.sp
\fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds)
.SS Basic calculator
.sp
A simple example of a REPL calculator
.sp
This example shows how to write a basic calculator with variables.
.INDENT 0.0 .INDENT 3.5 .sp .EX from lark import Lark, Transformer, v_args try: input = raw_input # For Python2 compatibility except NameError: pass calc_grammar = \(dq\(dq\(dq ?start: sum | NAME \(dq=\(dq sum \-> assign_var ?sum: product | sum \(dq+\(dq product \-> add | sum \(dq\-\(dq product \-> sub ?product: atom | product \(dq*\(dq atom \-> mul | product \(dq/\(dq atom \-> div ?atom: NUMBER \-> number | \(dq\-\(dq atom \-> neg | NAME \-> var | \(dq(\(dq sum \(dq)\(dq %import common.CNAME \-> NAME %import common.NUMBER %import common.WS_INLINE %ignore WS_INLINE \(dq\(dq\(dq @v_args(inline=True) # Affects the signatures of the methods class CalculateTree(Transformer): from operator import add, sub, mul, truediv as div, neg number = float def __init__(self): self.vars = {} def assign_var(self, name, value): self.vars[name] = value return value def var(self, name): try: return self.vars[name] except KeyError: raise Exception(\(dqVariable not found: %s\(dq % name) calc_parser = Lark(calc_grammar, parser=\(aqlalr\(aq, transformer=CalculateTree()) calc = calc_parser.parse def main(): while True: try: s = input(\(aq> \(aq) except EOFError: break print(calc(s)) def test(): print(calc(\(dqa = 1+2\(dq)) print(calc(\(dq1+a*\-3\(dq)) if __name__ == \(aq__main__\(aq: # test() main() .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds) .SS Turtle DSL .sp Implements a LOGO\-like toy language for Python’s turtle, with interpreter. .INDENT 0.0 .INDENT 3.5 .sp .EX try: input = raw_input # For Python2 compatibility except NameError: pass import turtle from lark import Lark turtle_grammar = \(dq\(dq\(dq start: instruction+ instruction: MOVEMENT NUMBER \-> movement | \(dqc\(dq COLOR [COLOR] \-> change_color | \(dqfill\(dq code_block \-> fill | \(dqrepeat\(dq NUMBER code_block \-> repeat code_block: \(dq{\(dq instruction+ \(dq}\(dq MOVEMENT: \(dqf\(dq|\(dqb\(dq|\(dql\(dq|\(dqr\(dq COLOR: LETTER+ %import common.LETTER %import common.INT \-> NUMBER %import common.WS %ignore WS \(dq\(dq\(dq parser = Lark(turtle_grammar) def run_instruction(t): if t.data == \(aqchange_color\(aq: turtle.color(*t.children) # We just pass the color names as\-is elif t.data == \(aqmovement\(aq: name, number = t.children { \(aqf\(aq: turtle.fd, \(aqb\(aq: turtle.bk, \(aql\(aq: turtle.lt, \(aqr\(aq: turtle.rt, }[name](int(number)) elif t.data == \(aqrepeat\(aq: count, block = t.children for i in range(int(count)): run_instruction(block) elif t.data == \(aqfill\(aq: turtle.begin_fill() run_instruction(t.children[0]) turtle.end_fill() elif t.data == \(aqcode_block\(aq: for cmd in t.children: run_instruction(cmd) else: raise SyntaxError(\(aqUnknown instruction: %s\(aq % t.data) def run_turtle(program): parse_tree = parser.parse(program) for inst in parse_tree.children: run_instruction(inst) def main(): while True: code = input(\(aq> \(aq) try: run_turtle(code) except Exception as e: print(e) def test(): text = \(dq\(dq\(dq c red yellow fill { repeat 36 { f200 l170 }} \(dq\(dq\(dq run_turtle(text) if __name__ == \(aq__main__\(aq: # test() main() .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds) .SS Simple JSON Parser .sp The code is short and clear, and outperforms every other parser (that\(aqs written in Python). 
For an explanation, check out the JSON parser tutorial at /docs/json_tutorial.md .INDENT 0.0 .INDENT 3.5 .sp .EX import sys from lark import Lark, Transformer, v_args json_grammar = r\(dq\(dq\(dq ?start: value ?value: object | array | string | SIGNED_NUMBER \-> number | \(dqtrue\(dq \-> true | \(dqfalse\(dq \-> false | \(dqnull\(dq \-> null array : \(dq[\(dq [value (\(dq,\(dq value)*] \(dq]\(dq object : \(dq{\(dq [pair (\(dq,\(dq pair)*] \(dq}\(dq pair : string \(dq:\(dq value string : ESCAPED_STRING %import common.ESCAPED_STRING %import common.SIGNED_NUMBER %import common.WS %ignore WS \(dq\(dq\(dq class TreeToJson(Transformer): @v_args(inline=True) def string(self, s): return s[1:\-1].replace(\(aq\e\e\(dq\(aq, \(aq\(dq\(aq) array = list pair = tuple object = dict number = v_args(inline=True)(float) null = lambda self, _: None true = lambda self, _: True false = lambda self, _: False ### Create the JSON parser with Lark, using the Earley algorithm # json_parser = Lark(json_grammar, parser=\(aqearley\(aq, lexer=\(aqbasic\(aq) # def parse(x): # return TreeToJson().transform(json_parser.parse(x)) ### Create the JSON parser with Lark, using the LALR algorithm json_parser = Lark(json_grammar, parser=\(aqlalr\(aq, # Using the basic lexer isn\(aqt required, and isn\(aqt usually recommended. # But, it\(aqs good enough for JSON, and it\(aqs slightly faster. lexer=\(aqbasic\(aq, # Disabling propagate_positions and placeholders slightly improves speed propagate_positions=False, maybe_placeholders=False, # Using an internal transformer is faster and more memory efficient transformer=TreeToJson()) parse = json_parser.parse def test(): test_json = \(aq\(aq\(aq { \(dqempty_object\(dq : {}, \(dqempty_array\(dq : [], \(dqbooleans\(dq : { \(dqYES\(dq : true, \(dqNO\(dq : false }, \(dqnumbers\(dq : [ 0, 1, \-2, 3.3, 4.4e5, 6.6e\-7 ], \(dqstrings\(dq : [ \(dqThis\(dq, [ \(dqAnd\(dq , \(dqThat\(dq, \(dqAnd a \e\e\(dqb\(dq ] ], \(dqnothing\(dq : null } \(aq\(aq\(aq j = parse(test_json) print(j) import json assert j == json.loads(test_json) if __name__ == \(aq__main__\(aq: # test() with open(sys.argv[1]) as f: print(parse(f.read())) .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds) .SS Advanced Examples .SS LALR’s contextual lexer .sp This example demonstrates the power of LALR\(aqs contextual lexer, by parsing a toy configuration language. .sp The terminals \fINAME\fP and \fIVALUE\fP overlap. They can match the same input. A basic lexer would arbitrarily choose one over the other, based on priority, which would lead to a (confusing) parse error. However, due to the unambiguous structure of the grammar, Lark\(aqs LALR(1) algorithm knows which one of them to expect at each point during the parse. The lexer then only matches the tokens that the parser expects. The result is a correct parse, something that is impossible with a regular lexer. .sp Another approach is to use the Earley algorithm. It will handle more cases than the contextual lexer, but at the cost of performance. See examples/conf_earley.py for an example of that approach. .INDENT 0.0 .INDENT 3.5 .sp .EX from lark import Lark parser = Lark(r\(dq\(dq\(dq start: _NL? section+ section: \(dq[\(dq NAME \(dq]\(dq _NL item+ item: NAME \(dq=\(dq VALUE? 
_NL NAME: /\ew/+ VALUE: /./+ %import common.NEWLINE \-> _NL %import common.WS_INLINE %ignore WS_INLINE \(dq\(dq\(dq, parser=\(dqlalr\(dq)

sample_conf = \(dq\(dq\(dq
[bla]
a=Hello
this=\(dqthat\(dq,4
empty=
\(dq\(dq\(dq

print(parser.parse(sample_conf).pretty())
.EE
.UNINDENT
.UNINDENT
.sp
\fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds)
.SS Templates
.sp
This example shows how to use Lark\(aqs templates to achieve cleaner grammars
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Lark

grammar = r\(dq\(dq\(dq
start: list | dict

list: \(dq[\(dq _separated{atom, \(dq,\(dq} \(dq]\(dq
dict: \(dq{\(dq _separated{key_value, \(dq,\(dq} \(dq}\(dq

key_value: atom \(dq:\(dq atom

_separated{x, sep}: x (sep x)*  // Define a sequence of \(aqx sep x sep x ...\(aq

atom: NUMBER | ESCAPED_STRING

%import common (NUMBER, ESCAPED_STRING, WS)
%ignore WS
\(dq\(dq\(dq

parser = Lark(grammar)

print(parser.parse(\(aq[1, \(dqa\(dq, 2]\(aq))
print(parser.parse(\(aq{\(dqa\(dq: 2, \(dqb\(dq: 6}\(aq))
.EE
.UNINDENT
.UNINDENT
.sp
\fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds)
.SS Earley\(aqs dynamic lexer
.sp
Demonstrates the power of Earley\(aqs dynamic lexer on a toy configuration language
.sp
Using a lexer for configuration files is tricky, because values don\(aqt have to be surrounded by delimiters. Using a basic lexer for this just won\(aqt work.
.sp
In this example we use a dynamic lexer and let the Earley parser resolve the ambiguity.
.sp
Another approach is to use the contextual lexer with LALR. It is less powerful than Earley, but it can handle some ambiguity when lexing and it\(aqs much faster. See examples/conf_lalr.py for an example of that approach.
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Lark

parser = Lark(r\(dq\(dq\(dq
        start: _NL? section+
        section: \(dq[\(dq NAME \(dq]\(dq _NL item+
        item: NAME \(dq=\(dq VALUE? _NL

        NAME: /\ew/+
        VALUE: /./+

        %import common.NEWLINE \-> _NL
        %import common.WS_INLINE
        %ignore WS_INLINE
    \(dq\(dq\(dq, parser=\(dqearley\(dq)

def test():
    sample_conf = \(dq\(dq\(dq
[bla]
a=Hello
this=\(dqthat\(dq,4
empty=
\(dq\(dq\(dq

    r = parser.parse(sample_conf)
    print (r.pretty())

if __name__ == \(aq__main__\(aq:
    test()
.EE
.UNINDENT
.UNINDENT
.sp
\fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds)
.SS Error handling using an interactive parser
.sp
This example demonstrates error handling using an interactive parser in LALR
.sp
When the parser encounters an UnexpectedToken exception, it creates an interactive parser with the current parse\-state, and lets you control how to proceed step\-by\-step. When you\(aqve achieved the correct parse\-state, you can resume the run by returning True.
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Token

from _json_parser import json_parser

def ignore_errors(e):
    if e.token.type == \(aqCOMMA\(aq:
        # Skip comma
        return True
    elif e.token.type == \(aqSIGNED_NUMBER\(aq:
        # Try to feed a comma and retry the number
        e.interactive_parser.feed_token(Token(\(aqCOMMA\(aq, \(aq,\(aq))
        e.interactive_parser.feed_token(e.token)
        return True

    # Unhandled error.
Will stop parse and raise exception return False def main(): s = \(dq[0 1, 2,, 3,,, 4, 5 6 ]\(dq res = json_parser.parse(s, on_error=ignore_errors) print(res) # prints [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0] main() .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds) .SS Reconstruct a JSON .sp Demonstrates the experimental text\-reconstruction feature .sp The Reconstructor takes a parse tree (already filtered from punctuation, of course), and reconstructs it into correct text, that can be parsed correctly. It can be useful for creating \(dqhooks\(dq to alter data before handing it to other parsers. You can also use it to generate samples from scratch. .INDENT 0.0 .INDENT 3.5 .sp .EX import json from lark import Lark from lark.reconstruct import Reconstructor from _json_parser import json_grammar test_json = \(aq\(aq\(aq { \(dqempty_object\(dq : {}, \(dqempty_array\(dq : [], \(dqbooleans\(dq : { \(dqYES\(dq : true, \(dqNO\(dq : false }, \(dqnumbers\(dq : [ 0, 1, \-2, 3.3, 4.4e5, 6.6e\-7 ], \(dqstrings\(dq : [ \(dqThis\(dq, [ \(dqAnd\(dq , \(dqThat\(dq, \(dqAnd a \e\e\(dqb\(dq ] ], \(dqnothing\(dq : null } \(aq\(aq\(aq def test_earley(): json_parser = Lark(json_grammar, maybe_placeholders=False) tree = json_parser.parse(test_json) new_json = Reconstructor(json_parser).reconstruct(tree) print (new_json) print (json.loads(new_json) == json.loads(test_json)) def test_lalr(): json_parser = Lark(json_grammar, parser=\(aqlalr\(aq, maybe_placeholders=False) tree = json_parser.parse(test_json) new_json = Reconstructor(json_parser).reconstruct(tree) print (new_json) print (json.loads(new_json) == json.loads(test_json)) test_earley() test_lalr() .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds) .SS Custom lexer .sp Demonstrates using a custom lexer to parse a non\-textual stream of data .sp You can use a custom lexer to tokenize text when the lexers offered by Lark are too slow, or not flexible enough. .sp You can also use it (as shown in this example) to tokenize streams of objects. .INDENT 0.0 .INDENT 3.5 .sp .EX from lark import Lark, Transformer, v_args from lark.lexer import Lexer, Token class TypeLexer(Lexer): def __init__(self, lexer_conf): pass def lex(self, data): for obj in data: if isinstance(obj, int): yield Token(\(aqINT\(aq, obj) elif isinstance(obj, (type(\(aq\(aq), type(u\(aq\(aq))): yield Token(\(aqSTR\(aq, obj) else: raise TypeError(obj) parser = Lark(\(dq\(dq\(dq start: data_item+ data_item: STR INT* %declare STR INT \(dq\(dq\(dq, parser=\(aqlalr\(aq, lexer=TypeLexer) class ParseToDict(Transformer): @v_args(inline=True) def data_item(self, name, *numbers): return name.value, [n.value for n in numbers] start = dict def test(): data = [\(aqalice\(aq, 1, 27, 3, \(aqbob\(aq, 4, \(aqcarrie\(aq, \(aqdan\(aq, 8, 6] print(data) tree = parser.parse(data) res = ParseToDict().transform(tree) print(\(aq\-\->\(aq) print(res) # prints {\(aqalice\(aq: [1, 27, 3], \(aqbob\(aq: [4], \(aqcarrie\(aq: [], \(aqdan\(aq: [8, 6]} if __name__ == \(aq__main__\(aq: test() .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds) .SS Transform a Forest .sp This example demonstrates how to subclass \fBTreeForestTransformer\fP to directly transform a SPPF. 
.INDENT 0.0 .INDENT 3.5 .sp .EX from lark import Lark from lark.parsers.earley_forest import TreeForestTransformer, handles_ambiguity, Discard class CustomTransformer(TreeForestTransformer): @handles_ambiguity def sentence(self, trees): return next(tree for tree in trees if tree.data == \(aqsimple\(aq) def simple(self, children): children.append(\(aq.\(aq) return self.tree_class(\(aqsimple\(aq, children) def adj(self, children): return Discard def __default_token__(self, token): return token.capitalize() grammar = \(dq\(dq\(dq sentence: noun verb noun \-> simple | noun verb \(dqlike\(dq noun \-> comparative noun: adj? NOUN verb: VERB adj: ADJ NOUN: \(dqflies\(dq | \(dqbananas\(dq | \(dqfruit\(dq VERB: \(dqlike\(dq | \(dqflies\(dq ADJ: \(dqfruit\(dq %import common.WS %ignore WS \(dq\(dq\(dq parser = Lark(grammar, start=\(aqsentence\(aq, ambiguity=\(aqforest\(aq) sentence = \(aqfruit flies like bananas\(aq forest = parser.parse(sentence) tree = CustomTransformer(resolve_ambiguity=False).transform(forest) print(tree.pretty()) # Output: # # simple # noun Flies # verb Like # noun Bananas # . # .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds) .SS Simple JSON Parser .sp The code is short and clear, and outperforms every other parser (that\(aqs written in Python). For an explanation, check out the JSON parser tutorial at /docs/json_tutorial.md .sp (this is here for use by the other examples) .INDENT 0.0 .INDENT 3.5 .sp .EX from lark import Lark, Transformer, v_args json_grammar = r\(dq\(dq\(dq ?start: value ?value: object | array | string | SIGNED_NUMBER \-> number | \(dqtrue\(dq \-> true | \(dqfalse\(dq \-> false | \(dqnull\(dq \-> null array : \(dq[\(dq [value (\(dq,\(dq value)*] \(dq]\(dq object : \(dq{\(dq [pair (\(dq,\(dq pair)*] \(dq}\(dq pair : string \(dq:\(dq value string : ESCAPED_STRING %import common.ESCAPED_STRING %import common.SIGNED_NUMBER %import common.WS %ignore WS \(dq\(dq\(dq class TreeToJson(Transformer): @v_args(inline=True) def string(self, s): return s[1:\-1].replace(\(aq\e\e\(dq\(aq, \(aq\(dq\(aq) array = list pair = tuple object = dict number = v_args(inline=True)(float) null = lambda self, _: None true = lambda self, _: True false = lambda self, _: False ### Create the JSON parser with Lark, using the LALR algorithm json_parser = Lark(json_grammar, parser=\(aqlalr\(aq, # Using the basic lexer isn\(aqt required, and isn\(aqt usually recommended. # But, it\(aqs good enough for JSON, and it\(aqs slightly faster. lexer=\(aqbasic\(aq, # Disabling propagate_positions and placeholders slightly improves speed propagate_positions=False, maybe_placeholders=False, # Using an internal transformer is faster and more memory efficient transformer=TreeToJson()) .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds) .SS Custom SPPF Prioritizer .sp This example demonstrates how to subclass \fBForestVisitor\fP to make a custom SPPF node prioritizer to be used in conjunction with \fBTreeForestTransformer\fP\&. .sp Our prioritizer will count the number of descendants of a node that are tokens. By negating this count, our prioritizer will prefer nodes with fewer token descendants. Thus, we choose the more specific parse. 
.INDENT 0.0 .INDENT 3.5 .sp .EX from lark import Lark from lark.parsers.earley_forest import ForestVisitor, TreeForestTransformer class TokenPrioritizer(ForestVisitor): def visit_symbol_node_in(self, node): # visit the entire forest by returning node.children return node.children def visit_packed_node_in(self, node): return node.children def visit_symbol_node_out(self, node): priority = 0 for child in node.children: # Tokens do not have a priority attribute # count them as \-1 priority += getattr(child, \(aqpriority\(aq, \-1) node.priority = priority def visit_packed_node_out(self, node): priority = 0 for child in node.children: priority += getattr(child, \(aqpriority\(aq, \-1) node.priority = priority def on_cycle(self, node, path): raise Exception(\(dqOops, we encountered a cycle.\(dq) grammar = \(dq\(dq\(dq start: hello \(dq \(dq world | hello_world hello: \(dqHello\(dq world: \(dqWorld\(dq hello_world: \(dqHello World\(dq \(dq\(dq\(dq parser = Lark(grammar, parser=\(aqearley\(aq, ambiguity=\(aqforest\(aq) forest = parser.parse(\(dqHello World\(dq) print(\(dqDefault prioritizer:\(dq) tree = TreeForestTransformer(resolve_ambiguity=True).transform(forest) print(tree.pretty()) forest = parser.parse(\(dqHello World\(dq) print(\(dqCustom prioritizer:\(dq) tree = TreeForestTransformer(resolve_ambiguity=True, prioritizer=TokenPrioritizer()).transform(forest) print(tree.pretty()) # Output: # # Default prioritizer: # start # hello Hello # # world World # # Custom prioritizer: # start # hello_world Hello World .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds) .SS Python 3 to Python 2 converter (tree templates) .sp This example demonstrates how to translate between two trees using tree templates. It parses Python 3, translates it to a Python 2 AST, and then outputs the result as Python 2 code. .sp Uses reconstruct_python.py for generating the final Python 2 code. .INDENT 0.0 .INDENT 3.5 .sp .EX from lark import Lark from lark.tree_templates import TemplateConf, TemplateTranslator from lark.indenter import PythonIndenter from reconstruct_python import PythonReconstructor # # 1. Define a Python parser that also accepts template vars in the code (in the form of $var) # TEMPLATED_PYTHON = r\(dq\(dq\(dq %import python (single_input, file_input, eval_input, atom, var, stmt, expr, testlist_star_expr, _NEWLINE, _INDENT, _DEDENT, COMMENT, NAME) %extend atom: TEMPLATE_NAME \-> var TEMPLATE_NAME: \(dq$\(dq NAME ?template_start: (stmt | testlist_star_expr _NEWLINE) %ignore /[\et \ef]+/ // WS %ignore /\e\e[\et \ef]*\er?\en/ // LINE_CONT %ignore COMMENT \(dq\(dq\(dq parser = Lark(TEMPLATED_PYTHON, parser=\(aqlalr\(aq, start=[\(aqsingle_input\(aq, \(aqfile_input\(aq, \(aqeval_input\(aq, \(aqtemplate_start\(aq], postlex=PythonIndenter(), maybe_placeholders=False) def parse_template(s): return parser.parse(s + \(aq\en\(aq, start=\(aqtemplate_start\(aq) def parse_code(s): return parser.parse(s + \(aq\en\(aq, start=\(aqfile_input\(aq) # # 2. Define translations using templates (each template code is parsed to a template tree) # pytemplate = TemplateConf(parse=parse_template) translations_3to2 = { \(aqyield from $a\(aq: \(aqfor _tmp in $a: yield _tmp\(aq, \(aqraise $e from $x\(aq: \(aqraise $e\(aq, \(aq$a / $b\(aq: \(aqfloat($a) / $b\(aq, } translations_3to2 = {pytemplate(k): pytemplate(v) for k, v in translations_3to2.items()} # # 3. 
Translate and reconstruct Python 3 code into valid Python 2 code # python_reconstruct = PythonReconstructor(parser) def translate_py3to2(code): tree = parse_code(code) tree = TemplateTranslator(translations_3to2).translate(tree) return python_reconstruct.reconstruct(tree) # # Test Code # _TEST_CODE = \(aq\(aq\(aq if a / 2 > 1: yield from [1,2,3] else: raise ValueError(a) from e \(aq\(aq\(aq def test(): print(_TEST_CODE) print(\(aq \-\-\-\-\-> \(aq) print(translate_py3to2(_TEST_CODE)) if __name__ == \(aq__main__\(aq: test() .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds) .SS Grammar\-complete Python Parser .sp A fully\-working Python 2 & 3 parser (but not production ready yet!) .sp This example demonstrates usage of the included Python grammars .INDENT 0.0 .INDENT 3.5 .sp .EX import sys import os, os.path from io import open import glob, time from lark import Lark from lark.indenter import PythonIndenter kwargs = dict(postlex=PythonIndenter(), start=\(aqfile_input\(aq) # Official Python grammar by Lark python_parser3 = Lark.open_from_package(\(aqlark\(aq, \(aqpython.lark\(aq, [\(aqgrammars\(aq], parser=\(aqlalr\(aq, **kwargs) # Local Python2 grammar python_parser2 = Lark.open(\(aqpython2.lark\(aq, rel_to=__file__, parser=\(aqlalr\(aq, **kwargs) python_parser2_earley = Lark.open(\(aqpython2.lark\(aq, rel_to=__file__, parser=\(aqearley\(aq, lexer=\(aqbasic\(aq, **kwargs) try: xrange except NameError: chosen_parser = python_parser3 else: chosen_parser = python_parser2 def _read(fn, *args): kwargs = {\(aqencoding\(aq: \(aqiso\-8859\-1\(aq} with open(fn, *args, **kwargs) as f: return f.read() def _get_lib_path(): if os.name == \(aqnt\(aq: if \(aqPyPy\(aq in sys.version: return os.path.join(sys.base_prefix, \(aqlib\-python\(aq, sys.winver) else: return os.path.join(sys.base_prefix, \(aqLib\(aq) else: return [x for x in sys.path if x.endswith(\(aq%s.%s\(aq % sys.version_info[:2])][0] def test_python_lib(): path = _get_lib_path() start = time.time() files = glob.glob(path+\(aq/*.py\(aq) total_kb = 0 for f in files: r = _read(os.path.join(path, f)) kb = len(r) / 1024 print( \(aq%s \-\et%.1f kb\(aq % (f, kb)) chosen_parser.parse(r + \(aq\en\(aq) total_kb += kb end = time.time() print( \(dqtest_python_lib (%d files, %.1f kb), time: %.2f secs\(dq%(len(files), total_kb, end\-start) ) def test_earley_equals_lalr(): path = _get_lib_path() files = glob.glob(path+\(aq/*.py\(aq) for f in files: print( f ) tree1 = python_parser2.parse(_read(os.path.join(path, f)) + \(aq\en\(aq) tree2 = python_parser2_earley.parse(_read(os.path.join(path, f)) + \(aq\en\(aq) assert tree1 == tree2 if __name__ == \(aq__main__\(aq: test_python_lib() # test_earley_equals_lalr() # python_parser3.parse(_read(sys.argv[1]) + \(aq\en\(aq) .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds) .SS Creating an AST from the parse tree .INDENT 0.0 .INDENT 3.5 This example demonstrates how to transform a parse\-tree into an AST using \fIlark.ast_utils\fP\&. .sp create_transformer() collects every subclass of \fIAst\fP subclass from the module, and creates a Lark transformer that builds the AST with no extra code. .sp This example only works with Python 3. 
.UNINDENT .UNINDENT .INDENT 0.0 .INDENT 3.5 .sp .EX import sys from typing import List from dataclasses import dataclass from lark import Lark, ast_utils, Transformer, v_args from lark.tree import Meta this_module = sys.modules[__name__] # # Define AST # class _Ast(ast_utils.Ast): # This will be skipped by create_transformer(), because it starts with an underscore pass class _Statement(_Ast): # This will be skipped by create_transformer(), because it starts with an underscore pass @dataclass class Value(_Ast, ast_utils.WithMeta): \(dqUses WithMeta to include line\-number metadata in the meta attribute\(dq meta: Meta value: object @dataclass class Name(_Ast): name: str @dataclass class CodeBlock(_Ast, ast_utils.AsList): # Corresponds to code_block in the grammar statements: List[_Statement] @dataclass class If(_Statement): cond: Value then: CodeBlock @dataclass class SetVar(_Statement): # Corresponds to set_var in the grammar name: str value: Value @dataclass class Print(_Statement): value: Value class ToAst(Transformer): # Define extra transformation functions, for rules that don\(aqt correspond to an AST class. def STRING(self, s): # Remove quotation marks return s[1:\-1] def DEC_NUMBER(self, n): return int(n) @v_args(inline=True) def start(self, x): return x # # Define Parser # parser = Lark(\(dq\(dq\(dq start: code_block code_block: statement+ ?statement: if | set_var | print if: \(dqif\(dq value \(dq{\(dq code_block \(dq}\(dq set_var: NAME \(dq=\(dq value \(dq;\(dq print: \(dqprint\(dq value \(dq;\(dq value: name | STRING | DEC_NUMBER name: NAME %import python (NAME, STRING, DEC_NUMBER) %import common.WS %ignore WS \(dq\(dq\(dq, parser=\(dqlalr\(dq, ) transformer = ast_utils.create_transformer(this_module, ToAst()) def parse(text): tree = parser.parse(text) return transformer.transform(tree) # # Test # if __name__ == \(aq__main__\(aq: print(parse(\(dq\(dq\(dq a = 1; if a { print \(dqa is 1\(dq; a = 2; } \(dq\(dq\(dq)) .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds) .SS Example\-Driven Error Reporting .sp A demonstration of example\-driven error reporting with the Earley parser (See also: error_reporting_lalr.py) .INDENT 0.0 .INDENT 3.5 .sp .EX from lark import Lark, UnexpectedInput from _json_parser import json_grammar # Using the grammar from the json_parser example json_parser = Lark(json_grammar) class JsonSyntaxError(SyntaxError): def __str__(self): context, line, column = self.args return \(aq%s at line %s, column %s.\en\en%s\(aq % (self.label, line, column, context) class JsonMissingValue(JsonSyntaxError): label = \(aqMissing Value\(aq class JsonMissingOpening(JsonSyntaxError): label = \(aqMissing Opening\(aq class JsonMissingClosing(JsonSyntaxError): label = \(aqMissing Closing\(aq class JsonMissingComma(JsonSyntaxError): label = \(aqMissing Comma\(aq class JsonTrailingComma(JsonSyntaxError): label = \(aqTrailing Comma\(aq def parse(json_text): try: j = json_parser.parse(json_text) except UnexpectedInput as u: exc_class = u.match_examples(json_parser.parse, { JsonMissingOpening: [\(aq{\(dqfoo\(dq: ]}\(aq, \(aq{\(dqfoor\(dq: }}\(aq, \(aq{\(dqfoo\(dq: }\(aq], JsonMissingClosing: [\(aq{\(dqfoo\(dq: [}\(aq, \(aq{\(aq, \(aq{\(dqa\(dq: 1\(aq, \(aq[1\(aq], JsonMissingComma: [\(aq[1 2]\(aq, \(aq[false 1]\(aq, \(aq[\(dqb\(dq 1]\(aq, \(aq{\(dqa\(dq:true 1:4}\(aq, \(aq{\(dqa\(dq:1 1:4}\(aq, \(aq{\(dqa\(dq:\(dqb\(dq 1:4}\(aq], JsonTrailingComma: [\(aq[,]\(aq, \(aq[1,]\(aq, \(aq[1,2,]\(aq, \(aq{\(dqfoo\(dq:1,}\(aq, 
\(aq{\(dqfoo\(dq:false,\(dqbar\(dq:true,}\(aq] }, use_accepts=True) if not exc_class: raise raise exc_class(u.get_context(json_text), u.line, u.column) def test(): try: parse(\(aq{\(dqexample1\(dq: \(dqvalue\(dq\(aq) except JsonMissingClosing as e: print(e) try: parse(\(aq{\(dqexample2\(dq: ] \(aq) except JsonMissingOpening as e: print(e) if __name__ == \(aq__main__\(aq: test() .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds) .SS Example\-Driven Error Reporting .sp A demonstration of example\-driven error reporting with the LALR parser (See also: error_reporting_earley.py) .INDENT 0.0 .INDENT 3.5 .sp .EX from lark import Lark, UnexpectedInput from _json_parser import json_grammar # Using the grammar from the json_parser example json_parser = Lark(json_grammar, parser=\(aqlalr\(aq) class JsonSyntaxError(SyntaxError): def __str__(self): context, line, column = self.args return \(aq%s at line %s, column %s.\en\en%s\(aq % (self.label, line, column, context) class JsonMissingValue(JsonSyntaxError): label = \(aqMissing Value\(aq class JsonMissingOpening(JsonSyntaxError): label = \(aqMissing Opening\(aq class JsonMissingClosing(JsonSyntaxError): label = \(aqMissing Closing\(aq class JsonMissingComma(JsonSyntaxError): label = \(aqMissing Comma\(aq class JsonTrailingComma(JsonSyntaxError): label = \(aqTrailing Comma\(aq def parse(json_text): try: j = json_parser.parse(json_text) except UnexpectedInput as u: exc_class = u.match_examples(json_parser.parse, { JsonMissingOpening: [\(aq{\(dqfoo\(dq: ]}\(aq, \(aq{\(dqfoor\(dq: }}\(aq, \(aq{\(dqfoo\(dq: }\(aq], JsonMissingClosing: [\(aq{\(dqfoo\(dq: [}\(aq, \(aq{\(aq, \(aq{\(dqa\(dq: 1\(aq, \(aq[1\(aq], JsonMissingComma: [\(aq[1 2]\(aq, \(aq[false 1]\(aq, \(aq[\(dqb\(dq 1]\(aq, \(aq{\(dqa\(dq:true 1:4}\(aq, \(aq{\(dqa\(dq:1 1:4}\(aq, \(aq{\(dqa\(dq:\(dqb\(dq 1:4}\(aq], JsonTrailingComma: [\(aq[,]\(aq, \(aq[1,]\(aq, \(aq[1,2,]\(aq, \(aq{\(dqfoo\(dq:1,}\(aq, \(aq{\(dqfoo\(dq:false,\(dqbar\(dq:true,}\(aq] }, use_accepts=True) if not exc_class: raise raise exc_class(u.get_context(json_text), u.line, u.column) def test(): try: parse(\(aq{\(dqexample1\(dq: \(dqvalue\(dq\(aq) except JsonMissingClosing as e: print(e) try: parse(\(aq{\(dqexample2\(dq: ] \(aq) except JsonMissingOpening as e: print(e) if __name__ == \(aq__main__\(aq: test() .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds) .SS Reconstruct Python .sp Demonstrates how Lark\(aqs experimental text\-reconstruction feature can recreate functional Python code from its parse\-tree, using just the correct grammar and a small formatter. 
.INDENT 0.0 .INDENT 3.5 .sp .EX from lark import Token, Lark from lark.reconstruct import Reconstructor from lark.indenter import PythonIndenter # Official Python grammar by Lark python_parser3 = Lark.open_from_package(\(aqlark\(aq, \(aqpython.lark\(aq, [\(aqgrammars\(aq], parser=\(aqlalr\(aq, postlex=PythonIndenter(), start=\(aqfile_input\(aq, maybe_placeholders=False # Necessary for reconstructor ) SPACE_AFTER = set(\(aq,+\-*/~@<>=\(dq|:\(aq) SPACE_BEFORE = (SPACE_AFTER \- set(\(aq,:\(aq)) | set(\(aq\e\(aq\(aq) def special(sym): return Token(\(aqSPECIAL\(aq, sym.name) def postproc(items): stack = [\(aq\en\(aq] actions = [] last_was_whitespace = True for item in items: if isinstance(item, Token) and item.type == \(aqSPECIAL\(aq: actions.append(item.value) else: if actions: assert actions[0] == \(aq_NEWLINE\(aq and \(aq_NEWLINE\(aq not in actions[1:], actions for a in actions[1:]: if a == \(aq_INDENT\(aq: stack.append(stack[\-1] + \(aq \(aq * 4) else: assert a == \(aq_DEDENT\(aq stack.pop() actions.clear() yield stack[\-1] last_was_whitespace = True if not last_was_whitespace: if item[0] in SPACE_BEFORE: yield \(aq \(aq yield item last_was_whitespace = item[\-1].isspace() if not last_was_whitespace: if item[\-1] in SPACE_AFTER: yield \(aq \(aq last_was_whitespace = True yield \(dq\en\(dq class PythonReconstructor: def __init__(self, parser): self._recons = Reconstructor(parser, {\(aq_NEWLINE\(aq: special, \(aq_DEDENT\(aq: special, \(aq_INDENT\(aq: special}) def reconstruct(self, tree): return self._recons.reconstruct(tree, postproc) def test(): python_reconstructor = PythonReconstructor(python_parser3) self_contents = open(__file__).read() tree = python_parser3.parse(self_contents+\(aq\en\(aq) output = python_reconstructor.reconstruct(tree) tree_new = python_parser3.parse(output) print(tree.pretty()) print(tree_new.pretty()) # assert tree.pretty() == tree_new.pretty() assert tree == tree_new print(output) if __name__ == \(aq__main__\(aq: test() .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds)
.SS Using lexer dynamic_complete
.sp Demonstrates how to use \fBlexer=\(aqdynamic_complete\(aq\fP and \fBambiguity=\(aqexplicit\(aq\fP
.sp Sometimes you have data that is highly ambiguous or \(aqbroken\(aq in some sense. When using \fBparser=\(aqearley\(aq\fP and \fBlexer=\(aqdynamic_complete\(aq\fP, Lark will be able to parse just about anything, as long as there is a valid way to generate it from the grammar, including looking \(aqinto\(aq the regexes.
.sp This example shows how to parse a JSON input where the quotes have been replaced by underscores: \fB{_foo_:{}, _bar_: [], _baz_: __}\fP\&. Notice that underscores might still appear inside strings, so a potentially valid reading of the above is: \fB{\(dqfoo_:{}, _bar\(dq: [], \(dqbaz\(dq: \(dq\(dq}\fP .INDENT 0.0 .INDENT 3.5 .sp .EX from pprint import pprint from lark import Lark, Tree, Transformer, v_args from lark.visitors import Transformer_InPlace GRAMMAR = r\(dq\(dq\(dq %import common.SIGNED_NUMBER %import common.WS_INLINE %import common.NEWLINE %ignore WS_INLINE ?start: value ?value: object | array | string | SIGNED_NUMBER \-> number | \(dqtrue\(dq \-> true | \(dqfalse\(dq \-> false | \(dqnull\(dq \-> null array : \(dq[\(dq (value (\(dq,\(dq value)*)? \(dq]\(dq object : \(dq{\(dq (pair (\(dq,\(dq pair)*)?
\(dq}\(dq pair : string \(dq:\(dq value string: STRING STRING : ESCAPED_STRING ESCAPED_STRING: QUOTE_CHAR _STRING_ESC_INNER QUOTE_CHAR QUOTE_CHAR: \(dq_\(dq _STRING_INNER: /.*/ _STRING_ESC_INNER: _STRING_INNER /(? STRING %import common.SIGNED_NUMBER \-> NUMBER %import common.WS %ignore WS \(aq\(aq\(aq self.lark = Lark(grammar, parser=None, lexer=\(aqbasic\(aq) # All tokens: print([t.name for t in self.lark.parser.lexer.tokens]) def defaultPaper(self, style): return QColor(39, 40, 34) def language(self): return \(dqJson\(dq def description(self, style): return {v: k for k, v in self.token_styles.items()}.get(style, \(dq\(dq) def styleText(self, start, end): self.startStyling(start) text = self.parent().text()[start:end] last_pos = 0 try: for token in self.lark.lex(text): ws_len = token.start_pos \- last_pos if ws_len: self.setStyling(ws_len, 0) # whitespace token_len = len(bytearray(token, \(dqutf\-8\(dq)) self.setStyling( token_len, self.token_styles.get(token.type, 0)) last_pos = token.start_pos + token_len except Exception as e: print(e) class EditorAll(QsciScintilla): def __init__(self, parent=None): super().__init__(parent) # Set font defaults font = QFont() font.setFamily(\(aqConsolas\(aq) font.setFixedPitch(True) font.setPointSize(8) font.setBold(True) self.setFont(font) # Set margin defaults fontmetrics = QFontMetrics(font) self.setMarginsFont(font) self.setMarginWidth(0, fontmetrics.width(\(dq000\(dq) + 6) self.setMarginLineNumbers(0, True) self.setMarginsForegroundColor(QColor(128, 128, 128)) self.setMarginsBackgroundColor(QColor(39, 40, 34)) self.setMarginType(1, self.SymbolMargin) self.setMarginWidth(1, 12) # Set indentation defaults self.setIndentationsUseTabs(False) self.setIndentationWidth(4) self.setBackspaceUnindents(True) self.setIndentationGuides(True) # self.setFolding(QsciScintilla.CircledFoldStyle) # Set caret defaults self.setCaretForegroundColor(QColor(247, 247, 241)) self.setCaretWidth(2) # Set selection color defaults self.setSelectionBackgroundColor(QColor(61, 61, 52)) self.resetSelectionForegroundColor() # Set multiselection defaults self.SendScintilla(QsciScintilla.SCI_SETMULTIPLESELECTION, True) self.SendScintilla(QsciScintilla.SCI_SETMULTIPASTE, 1) self.SendScintilla( QsciScintilla.SCI_SETADDITIONALSELECTIONTYPING, True) lexer = LexerJson(self) self.setLexer(lexer) EXAMPLE_TEXT = textwrap.dedent(\(dq\(dq\(dq\e { \(dq_id\(dq: \(dq5b05ffcbcf8e597939b3f5ca\(dq, \(dqabout\(dq: \(dqExcepteur consequat commodo esse voluptate aute aliquip ad sint deserunt commodo eiusmod irure. Sint aliquip sit magna duis eu est culpa aliqua excepteur ut tempor nulla. Aliqua ex pariatur id labore sit. Quis sit ex aliqua veniam exercitation laboris anim adipisicing. 
Lorem nisi reprehenderit ullamco labore qui sit ut aliqua tempor consequat pariatur proident.\(dq, \(dqaddress\(dq: \(dq665 Malbone Street, Thornport, Louisiana, 243\(dq, \(dqage\(dq: 23, \(dqbalance\(dq: \(dq$3,216.91\(dq, \(dqcompany\(dq: \(dqBULLJUICE\(dq, \(dqemail\(dq: \(dqelisekelley@bulljuice.com\(dq, \(dqeyeColor\(dq: \(dqbrown\(dq, \(dqgender\(dq: \(dqfemale\(dq, \(dqguid\(dq: \(dqd3a6d865\-0f64\-4042\-8a78\-4f53de9b0707\(dq, \(dqindex\(dq: 0, \(dqisActive\(dq: false, \(dqisActive2\(dq: true, \(dqlatitude\(dq: \-18.660714, \(dqlongitude\(dq: \-85.378048, \(dqname\(dq: \(dqElise Kelley\(dq, \(dqphone\(dq: \(dq+1 (808) 543\-3966\(dq, \(dqpicture\(dq: \(dqhttp://placehold.it/32x32\(dq, \(dqregistered\(dq: \(dq2017\-09\-30T03:47:40 \-02:00\(dq, \(dqtags\(dq: [ \(dqet\(dq, \(dqnostrud\(dq, \(dqin\(dq, \(dqfugiat\(dq, \(dqincididunt\(dq, \(dqlabore\(dq, \(dqnostrud\(dq ] }\e \(dq\(dq\(dq) def main(): app = QApplication(sys.argv) ex = EditorAll() ex.setWindowTitle(__file__) ex.setText(EXAMPLE_TEXT) ex.resize(800, 600) ex.show() sys.exit(app.exec_()) if __name__ == \(dq__main__\(dq: main() .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds)
.SH GRAMMAR COMPOSITION
.sp This example shows how to do grammar composition in Lark, by creating a new file format that allows both CSV and JSON to co\-exist.
.sp We show how, by using namespaces, Lark grammars and their transformers can be fully reused \- they don\(aqt need to care whether their grammar is used directly or imported, or who is doing the importing.
.sp See \fI\%main.py\fP for more details. Transformer for evaluating json.lark .INDENT 0.0 .INDENT 3.5 .sp .EX from lark import Transformer, v_args class JsonTreeToJson(Transformer): @v_args(inline=True) def string(self, s): return s[1:\-1].replace(\(aq\e\e\(dq\(aq, \(aq\(dq\(aq) array = list pair = tuple object = dict number = v_args(inline=True)(float) null = lambda self, _: None true = lambda self, _: True false = lambda self, _: False .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds) Transformer for evaluating csv.lark .INDENT 0.0 .INDENT 3.5 .sp .EX from lark import Transformer class CsvTreeToPandasDict(Transformer): INT = int FLOAT = float SIGNED_FLOAT = float WORD = str NON_SEPARATOR_STRING = str def row(self, children): return children def start(self, children): data = {} header = children[0].children for heading in header: data[heading] = [] for row in children[1:]: for i, element in enumerate(row): data[header[i]].append(element) return data .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds)
.SS Grammar Composition
.sp This example shows how to do grammar composition in Lark, by creating a new file format that allows both CSV and JSON to co\-exist. .INDENT 0.0 .IP 1. 3 We define \fBstorage.lark\fP, which imports both \fBcsv.lark\fP and \fBjson.lark\fP, .UNINDENT .INDENT 0.0 .INDENT 3.5 and allows them to be used one after the other. .sp In the generated tree, each imported rule/terminal is automatically prefixed (with \fBjson__\fP or \fBcsv__\fP), which creates an implicit namespace and allows them to coexist without collisions. .UNINDENT .UNINDENT .INDENT 0.0 .IP 2. 3 We merge their respective transformers (unaware of each other) into a new base transformer. The resulting transformer can evaluate both JSON and CSV in the parse tree.
.UNINDENT .INDENT 0.0 .INDENT 3.5 The methods of each transformer are renamed into their appropriate namespace, using the given prefix. This approach allows full re\-use: the transformers don\(aqt need to care whether their grammar is used directly or imported, or who is doing the importing. .UNINDENT .UNINDENT .INDENT 0.0 .INDENT 3.5 .sp .EX from pathlib import Path from lark import Lark from json import dumps from lark.visitors import Transformer, merge_transformers from eval_csv import CsvTreeToPandasDict from eval_json import JsonTreeToJson __dir__ = Path(__file__).parent class Storage(Transformer): def start(self, children): return children storage_transformer = merge_transformers(Storage(), csv=CsvTreeToPandasDict(), json=JsonTreeToJson()) parser = Lark.open(\(dqstorage.lark\(dq, rel_to=__file__) def main(): json_tree = parser.parse(dumps({\(dqtest\(dq: \(dqa\(dq, \(dqdict\(dq: { \(dqlist\(dq: [1, 1.2] }})) res = storage_transformer.transform(json_tree) print(\(dqJust JSON: \(dq, res) csv_json_tree = parser.parse(open(__dir__ / \(aqcombined_csv_and_json.txt\(aq).read()) res = storage_transformer.transform(csv_json_tree) print(\(dqJSON + CSV: \(dq, dumps(res, indent=2)) if __name__ == \(dq__main__\(dq: main() .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds)
.SH EXAMPLE GRAMMARS
.sp This directory is a collection of lark grammars, taken from real\-world projects. .INDENT 0.0 .IP \(bu 2 \fI\%Verilog\fP \- Taken from \fI\%https://github.com/circuitgraph/circuitgraph/blob/main/circuitgraph/parsing/verilog.lark\fP .UNINDENT
.SH STANDALONE EXAMPLE
.sp To initialize, cd to this folder, and run: .INDENT 0.0 .INDENT 3.5 .sp .EX \&./create_standalone.sh .EE .UNINDENT .UNINDENT .sp Or: .INDENT 0.0 .INDENT 3.5 .sp .EX python \-m lark.tools.standalone json.lark > json_parser.py .EE .UNINDENT .UNINDENT .sp Then run using: .INDENT 0.0 .INDENT 3.5 .sp .EX python json_parser_main.py .EE .UNINDENT .UNINDENT
.SS Standalone Parser
.INDENT 0.0 .INDENT 3.5 This example demonstrates how to generate and use the standalone parser, using the JSON example. .sp See README.md for more details. .UNINDENT .UNINDENT .INDENT 0.0 .INDENT 3.5 .sp .EX import sys from json_parser import Lark_StandAlone, Transformer, v_args inline_args = v_args(inline=True) class TreeToJson(Transformer): @inline_args def string(self, s): return s[1:\-1].replace(\(aq\e\e\(dq\(aq, \(aq\(dq\(aq) array = list pair = tuple object = dict number = inline_args(float) null = lambda self, _: None true = lambda self, _: True false = lambda self, _: False parser = Lark_StandAlone(transformer=TreeToJson()) if __name__ == \(aq__main__\(aq: with open(sys.argv[1]) as f: print(parser.parse(f.read())) .EE .UNINDENT .UNINDENT .sp \fBTotal running time of the script:\fP ( 0 minutes 0.000 seconds)
.SH GRAMMAR REFERENCE
.SS Definitions
.sp A \fBgrammar\fP is a list of rules and terminals that together define a language.
.sp Terminals define the alphabet of the language, while rules define its structure.
.sp In Lark, a terminal may be a string, a regular expression, or a concatenation of these and other terminals.
.sp Each rule is a list of terminals and rules, whose location and nesting define the structure of the resulting parse\-tree.
.sp A \fBparsing algorithm\fP is an algorithm that takes a grammar definition and a sequence of symbols (members of the alphabet), and matches the entirety of the sequence by searching for a structure that is allowed by the grammar.
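.sp
For instance, in the following minimal sketch (the grammar and names here are illustrative, not part of the reference), the terminal \fBWORD\fP defines part of the alphabet, while the rule \fBgreeting\fP defines the structure of the resulting parse\-tree:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Lark

# WORD is a terminal (it matches text);
# greeting is a rule (it defines structure)
parser = Lark(r\(dq\(dq\(dq
    greeting: WORD \(dq,\(dq WORD \(dq!\(dq
    WORD: /\ew+/
    %ignore \(dq \(dq
\(dq\(dq\(dq, start=\(aqgreeting\(aq)

# The anonymous literals \(dq,\(dq and \(dq!\(dq are filtered out of the tree
print(parser.parse(\(dqHello, World!\(dq).pretty())
.EE
.UNINDENT
.UNINDENT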
.SS General Syntax and notes
.sp Grammars in Lark are based on \fI\%EBNF\fP syntax, with several enhancements.
.sp EBNF is basically a short\-hand for common BNF patterns.
.sp Optionals are expanded: .INDENT 0.0 .INDENT 3.5 .sp .EX a b? c \-> (a c | a b c) .EE .UNINDENT .UNINDENT
.sp Repetition is extracted into a recursion: .INDENT 0.0 .INDENT 3.5 .sp .EX a: b* \-> a: _b_tag _b_tag: (_b_tag b)? .EE .UNINDENT .UNINDENT
.sp And so on.
.sp Lark grammars are composed of a list of definitions and directives, each on its own line. A definition is either a named rule, or a named terminal, with the following syntax, respectively: .INDENT 0.0 .INDENT 3.5 .sp .EX rule: <EBNF EXPRESSION> | etc. TERM: <EBNF EXPRESSION> // Rules aren\(aqt allowed .EE .UNINDENT .UNINDENT
.sp \fBComments\fP start with either \fB//\fP (C++ style) or \fB#\fP (Python style, since version 1.1.6) and last to the end of the line.
.sp Lark begins the parse with the rule \(aqstart\(aq, unless specified otherwise in the options.
.sp Names of rules are always in lowercase, while names of terminals are always in uppercase. This distinction has practical effects on the shape of the generated parse\-tree, and on the automatic construction of the lexer (aka tokenizer, or scanner).
.SS Terminals
.sp Terminals are used to match text into symbols. They can be defined as a combination of literals and other terminals.
.sp \fBSyntax:\fP .INDENT 0.0 .INDENT 3.5 .sp .EX <NAME> [. <priority>] : <literals\-and\-or\-terminals> .EE .UNINDENT .UNINDENT
.sp Terminal names must be uppercase.
.sp Literals can be one of: .INDENT 0.0 .IP \(bu 2 \fB\(dqstring\(dq\fP .IP \(bu 2 \fB/regular expression+/\fP .IP \(bu 2 \fB\(dqcase\-insensitive string\(dqi\fP .IP \(bu 2 \fB/re with flags/imulx\fP .IP \(bu 2 Literal range: \fB\(dqa\(dq..\(dqz\(dq\fP, \fB\(dq1\(dq..\(dq9\(dq\fP, etc. .UNINDENT
.sp Terminals also support grammar operators, such as \fB|\fP, \fB+\fP, \fB*\fP and \fB?\fP\&.
.sp Terminals are a linear construct, and therefore may not contain themselves (recursion isn\(aqt allowed).
.SS Templates
.sp Templates are expanded when preprocessing the grammar.
.sp Definition syntax: .INDENT 0.0 .INDENT 3.5 .sp .EX my_template{param1, param2, ...}: <EBNF EXPRESSION> .EE .UNINDENT .UNINDENT
.sp Use syntax: .INDENT 0.0 .INDENT 3.5 .sp .EX some_rule: my_template{arg1, arg2, ...} .EE .UNINDENT .UNINDENT
.sp Example: .INDENT 0.0 .INDENT 3.5 .sp .EX _separated{x, sep}: x (sep x)* // Define a sequence of \(aqx sep x sep x ...\(aq num_list: \(dq[\(dq _separated{NUMBER, \(dq,\(dq} \(dq]\(dq // Will match \(dq[1, 2, 3]\(dq etc. .EE .UNINDENT .UNINDENT
.SS Priority
.sp Terminals can be assigned a priority to influence lexing. Terminal priorities are signed integers with a default value of 0.
.sp When using a lexer, the highest priority terminals are always matched first.
.sp When using Earley\(aqs dynamic lexing, terminal priorities are used to prefer certain lexings and resolve ambiguity.
.SS Regexp Flags
.sp You can use flags on regexps and strings. For example: .INDENT 0.0 .INDENT 3.5 .sp .EX SELECT: \(dqselect\(dqi //# Will ignore case, and match SELECT or Select, etc. MULTILINE_TEXT: /.+/s SIGNED_INTEGER: / [+\-]? # the sign (0|[1\-9][0\-9]*) # the digits /x .EE .UNINDENT .UNINDENT
.sp Supported flags are one of: \fBimslux\fP\&. See Python\(aqs regex documentation for more details on each one.
.sp Regexps/strings of different flags can only be concatenated in Python 3.6+.
.SS Notes for when using a lexer:
.sp When using a lexer (basic or contextual), it is the grammar\-author\(aqs responsibility to make sure the literals don\(aqt collide, or that if they do, they are matched in the desired order. Literals are matched according to the following precedence: .INDENT 0.0 .IP \(bu 2 Highest priority first (priority is specified as: TERM.number: ...) .IP \(bu 2 Length of match (for regexps, the longest theoretical match is used) .IP \(bu 2 Length of literal / pattern definition .IP \(bu 2 Name .UNINDENT
.sp \fBExamples:\fP .INDENT 0.0 .INDENT 3.5 .sp .EX IF: \(dqif\(dq INTEGER : /[0\-9]+/ INTEGER2 : (\(dq0\(dq..\(dq9\(dq)+ //# Same as INTEGER DECIMAL.2: INTEGER? \(dq.\(dq INTEGER //# Will be matched before INTEGER WHITESPACE: (\(dq \(dq | /\et/ )+ SQL_SELECT: \(dqselect\(dqi .EE .UNINDENT .UNINDENT
.SS Regular expressions & Ambiguity
.sp Each terminal is eventually compiled to a regular expression. All the operators and references inside it are mapped to their respective expressions.
.sp For example, in the following grammar, \fBA1\fP and \fBA2\fP are equivalent: .INDENT 0.0 .INDENT 3.5 .sp .EX A1: \(dqa\(dq | \(dqb\(dq A2: /a|b/ .EE .UNINDENT .UNINDENT
.sp This means that inside terminals, Lark cannot detect or resolve ambiguity, even when using Earley.
.sp For example, for this grammar: .INDENT 0.0 .INDENT 3.5 .sp .EX start : (A | B)+ A : \(dqa\(dq | \(dqab\(dq B : \(dqb\(dq .EE .UNINDENT .UNINDENT
.sp We get only one possible derivation, instead of two: .INDENT 0.0 .INDENT 3.5 .sp .EX >>> p = Lark(g, ambiguity=\(dqexplicit\(dq) >>> p.parse(\(dqab\(dq) Tree(\(aqstart\(aq, [Token(\(aqA\(aq, \(aqab\(aq)]) .EE .UNINDENT .UNINDENT
.sp This happens because Python\(aqs regex engine always returns the best matching option. There is no way to access the alternatives.
.sp If you find yourself in this situation, the recommended solution is to use rules instead.
.sp Example: .INDENT 0.0 .INDENT 3.5 .sp .EX >>> p = Lark(\(dq\(dq\(dqstart: (a | b)+ \&... !a: \(dqa\(dq | \(dqab\(dq \&... !b: \(dqb\(dq \&... \(dq\(dq\(dq, ambiguity=\(dqexplicit\(dq) >>> print(p.parse(\(dqab\(dq).pretty()) _ambig start a ab start a a b b .EE .UNINDENT .UNINDENT
.SS Rules
.sp \fBSyntax:\fP .INDENT 0.0 .INDENT 3.5 .sp .EX <name> : <items\-to\-match> [\-> <alias>] | ... .EE .UNINDENT .UNINDENT
.sp Names of rules and aliases are always in lowercase.
.sp Rule definitions can be extended to the next line by using the OR operator (signified by a pipe: \fB|\fP ).
.sp An alias is a name for the specific rule alternative. It affects tree construction.
.sp Each item is one of: .INDENT 0.0 .IP \(bu 2 \fBrule\fP .IP \(bu 2 \fBTERMINAL\fP .IP \(bu 2 \fB\(dqstring literal\(dq\fP or \fB/regexp literal/\fP .IP \(bu 2 \fB(item item ..)\fP \- Group items .IP \(bu 2 \fB[item item ..]\fP \- Maybe. Same as \fB(item item ..)?\fP, but when \fBmaybe_placeholders=True\fP, generates \fBNone\fP if there is no match. .IP \(bu 2 \fBitem?\fP \- Zero or one instances of item (\(dqmaybe\(dq) .IP \(bu 2 \fBitem*\fP \- Zero or more instances of item .IP \(bu 2 \fBitem+\fP \- One or more instances of item .IP \(bu 2 \fBitem ~ n\fP \- Exactly \fIn\fP instances of item .IP \(bu 2 \fBitem ~ n..m\fP \- Between \fIn\fP and \fIm\fP instances of item (not recommended for wide ranges, due to performance issues) .UNINDENT
.sp \fBExamples:\fP .INDENT 0.0 .INDENT 3.5 .sp .EX hello_world: \(dqhello\(dq \(dqworld\(dq mul: (mul \(dq*\(dq)? number //# Left\-recursion is allowed and encouraged!
expr: expr operator expr | value //# Multi\-line, belongs to expr four_words: word ~ 4 .EE .UNINDENT .UNINDENT
.SS Priority
.sp Like terminals, rules can be assigned a priority. Rule priorities are signed integers with a default value of 0.
.sp When using LALR, the highest priority rules are used to resolve collision errors.
.sp When using Earley, rule priorities are used to resolve ambiguity.
.SS Directives
.SS %ignore
.sp All occurrences of the terminal will be ignored, and won\(aqt be part of the parse.
.sp Using the \fB%ignore\fP directive results in a cleaner grammar.
.sp It\(aqs especially important for the LALR(1) algorithm, because adding whitespace (or comments, or other extraneous elements) explicitly in the grammar, harms its predictive abilities, which are based on a lookahead of 1.
.sp \fBSyntax:\fP .INDENT 0.0 .INDENT 3.5 .sp .EX %ignore <TERMINAL> .EE .UNINDENT .UNINDENT
.sp \fBExamples:\fP .INDENT 0.0 .INDENT 3.5 .sp .EX %ignore \(dq \(dq COMMENT: \(dq#\(dq /[^\en]/* %ignore COMMENT .EE .UNINDENT .UNINDENT
.SS %import
.sp Allows one to import terminals and rules from lark grammars.
.sp When importing rules, all their dependencies will be imported into a namespace, to avoid collisions. It\(aqs not possible to override their dependencies (e.g. like you would when inheriting a class).
.sp \fBSyntax:\fP .INDENT 0.0 .INDENT 3.5 .sp .EX %import <module>.<TERMINAL> %import <module>.<rule> %import <module>.<TERMINAL> \-> <NEWTERMINAL> %import <module>.<rule> \-> <newrule> %import <module> (<TERM1>, <TERM2>, <rule1>, <rule2>) .EE .UNINDENT .UNINDENT
.sp If the module path is absolute, Lark will attempt to load it from the built\-in directory (which currently contains \fBcommon.lark\fP, \fBpython.lark\fP, and \fBunicode.lark\fP).
.sp If the module path is relative, such as \fB\&.path.to.file\fP, Lark will attempt to load it from the current working directory. Grammars must have the \fB\&.lark\fP extension.
.sp The rule or terminal can be imported under another name with the \fB\->\fP syntax.
.sp \fBExample:\fP .INDENT 0.0 .INDENT 3.5 .sp .EX %import common.NUMBER %import .terminals_file (A, B, C) %import .rules_file.rulea \-> ruleb .EE .UNINDENT .UNINDENT
.sp Note that \fB%ignore\fP directives cannot be imported. Imported rules will abide by the \fB%ignore\fP directives declared in the main grammar.
.SS %declare
.sp Declare a terminal without defining it. Useful for plugins.
.SS %override
.sp Override a rule or terminal, affecting all references to it, even in imported grammars.
.sp Useful for implementing an inheritance pattern when importing grammars.
.sp \fBExample:\fP .INDENT 0.0 .INDENT 3.5 .sp .EX %import my_grammar (start, number, NUMBER) // Add hex support to my_grammar %override number: NUMBER | /0x\ew+/ .EE .UNINDENT .UNINDENT
.SS %extend
.sp Extend the definition of a rule or terminal, e.g. by adding a new option to what it can match, as if the options were separated with \fB|\fP\&.
.sp Useful for splitting up a definition of a complex rule with many different options over multiple files.
.sp Can also be used to implement a plugin system where a core grammar is extended by others.
.sp \fBExample:\fP .INDENT 0.0 .INDENT 3.5 .sp .EX %import my_grammar (start, NUMBER) // Add hex support to my_grammar %extend NUMBER: /0x\ew+/ .EE .UNINDENT .UNINDENT
.sp For both \fB%extend\fP and \fB%override\fP, there is no requirement for a rule/terminal to come from another file, but that is probably the most common use case.
.SH TREE CONSTRUCTION REFERENCE
.sp Lark builds a tree automatically based on the structure of the grammar, where each rule that is matched becomes a branch (node) in the tree, and its children are its matches, in the order of matching.
.sp For example, the rule \fBnode: child1 child2\fP will create a tree node with two children. If it is matched as part of another rule (i.e. if it isn\(aqt the root), the new rule\(aqs tree node will become its parent.
.sp Using \fBitem+\fP or \fBitem*\fP will result in a list of items, equivalent to writing \fBitem item item ..\fP\&.
.sp Using \fBitem?\fP will return the item if it matched, or nothing.
.sp If \fBmaybe_placeholders=True\fP (the default), then using \fB[item]\fP will return the item if it matched, or the value \fBNone\fP, if it didn\(aqt.
.sp If \fBmaybe_placeholders=False\fP, then \fB[]\fP behaves like \fB()?\fP\&.
.SS Terminals
.sp Terminals are always values in the tree, never branches.
.sp Lark filters out certain types of terminals by default, considering them punctuation: .INDENT 0.0 .IP \(bu 2 Terminals that won\(aqt appear in the tree are: .INDENT 2.0 .IP \(bu 2 Unnamed literals (like \fB\(dqkeyword\(dq\fP or \fB\(dq+\(dq\fP) .IP \(bu 2 Terminals whose name starts with an underscore (like \fB_DIGIT\fP) .UNINDENT .IP \(bu 2 Terminals that \fIwill\fP appear in the tree are: .INDENT 2.0 .IP \(bu 2 Unnamed regular expressions (like \fB/[0\-9]/\fP) .IP \(bu 2 Named terminals whose name starts with a letter (like \fBDIGIT\fP) .UNINDENT .UNINDENT
.sp Note: Terminals composed of literals and other terminals always include the entire match without filtering any part.
.sp \fBExample:\fP .INDENT 0.0 .INDENT 3.5 .sp .EX start: PNAME pname PNAME: \(dq(\(dq NAME \(dq)\(dq pname: \(dq(\(dq NAME \(dq)\(dq NAME: /\ew+/ %ignore /\es+/ .EE .UNINDENT .UNINDENT
.sp Lark will parse \(dq(Hello) (World)\(dq as: .INDENT 0.0 .INDENT 3.5 .sp .EX start (Hello) pname World .EE .UNINDENT .UNINDENT
.sp Rules prefixed with \fB!\fP will retain all their literals regardless.
.sp \fBExample:\fP .INDENT 0.0 .INDENT 3.5 .sp .EX expr: \(dq(\(dq expr \(dq)\(dq | NAME+ NAME: /\ew+/ %ignore \(dq \(dq .EE .UNINDENT .UNINDENT
.sp Lark will parse \(dq((hello world))\(dq as: .INDENT 0.0 .INDENT 3.5 .sp .EX expr expr expr \(dqhello\(dq \(dqworld\(dq .EE .UNINDENT .UNINDENT
.sp The brackets do not appear in the tree by design. The words appear because they are matched by a named terminal.
.SS Shaping the tree
.sp Users can alter the automatic construction of the tree using a collection of grammar features.
.INDENT 0.0 .IP \(bu 2 Rules whose name begins with an underscore will be inlined into their containing rule. .UNINDENT
.sp \fBExample:\fP .INDENT 0.0 .INDENT 3.5 .sp .EX start: \(dq(\(dq _greet \(dq)\(dq _greet: /\ew+/ /\ew+/ .EE .UNINDENT .UNINDENT
.sp Lark will parse \(dq(hello world)\(dq as: .INDENT 0.0 .INDENT 3.5 .sp .EX start \(dqhello\(dq \(dqworld\(dq .EE .UNINDENT .UNINDENT
.INDENT 0.0 .IP \(bu 2 Rules that receive a question mark (?) at the beginning of their definition will be inlined if they have a single child, after filtering.
.UNINDENT .sp \fBExample:\fP .INDENT 0.0 .INDENT 3.5 .sp .EX start: greet greet ?greet: \(dq(\(dq /\ew+/ \(dq)\(dq | /\ew+/ /\ew+/ .EE .UNINDENT .UNINDENT .sp Lark will parse \(dqhello world (planet)\(dq as: .INDENT 0.0 .INDENT 3.5 .sp .EX start greet \(dqhello\(dq \(dqworld\(dq \(dqplanet\(dq .EE .UNINDENT .UNINDENT .INDENT 0.0 .IP \(bu 2 Rules that begin with an exclamation mark will keep all their terminals (they won\(aqt get filtered). .UNINDENT .INDENT 0.0 .INDENT 3.5 .sp .EX !expr: \(dq(\(dq expr \(dq)\(dq | NAME+ NAME: /\ew+/ %ignore \(dq \(dq .EE .UNINDENT .UNINDENT .sp Will parse \(dq((hello world))\(dq as: .INDENT 0.0 .INDENT 3.5 .sp .EX expr ( expr ( expr hello world ) ) .EE .UNINDENT .UNINDENT .sp Using the \fB!\fP prefix is usually a \(dqcode smell\(dq, and may point to a flaw in your grammar design. .INDENT 0.0 .IP \(bu 2 Aliases \- options in a rule can receive an alias. It will be then used as the branch name for the option, instead of the rule name. .UNINDENT .sp \fBExample:\fP .INDENT 0.0 .INDENT 3.5 .sp .EX start: greet greet greet: \(dqhello\(dq | \(dqworld\(dq \-> planet .EE .UNINDENT .UNINDENT .sp Lark will parse \(dqhello world\(dq as: .INDENT 0.0 .INDENT 3.5 .sp .EX start greet planet .EE .UNINDENT .UNINDENT .SH API REFERENCE .SS Lark .INDENT 0.0 .TP .B class lark.Lark(grammar: Grammar | str | IO[str], **options) Main interface for the library. .sp It\(aqs mostly a thin wrapper for the many different parsers, and for the tree constructor. .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBgrammar\fP \-\- a string or file\-object containing the grammar spec (using Lark\(aqs ebnf syntax) .IP \(bu 2 \fBoptions\fP \-\- a dictionary controlling various aspects of Lark. .UNINDENT .UNINDENT .sp Example .sp .EX >>> Lark(r\(aq\(aq\(aqstart: \(dqfoo\(dq \(aq\(aq\(aq) Lark(...) .EE .sp \fB=== General Options ===\fP .INDENT 7.0 .TP .B start The start symbol. Either a string, or a list of strings for multiple possible starts (Default: \(dqstart\(dq) .TP .B debug Display debug information and extra warnings. Use only when debugging (Default: \fBFalse\fP) When used with Earley, it generates a forest graph as \(dqsppf.png\(dq, if \(aqdot\(aq is installed. .TP .B strict Throw an exception on any potential ambiguity, including shift/reduce conflicts, and regex collisions. .TP .B transformer Applies the transformer to every parse tree (equivalent to applying it after the parse, but faster) .TP .B propagate_positions Propagates positional attributes into the \(aqmeta\(aq attribute of all tree branches. Sets attributes: (line, column, end_line, end_column, start_pos, end_pos, .INDENT 7.0 .INDENT 3.5 container_line, container_column, container_end_line, container_end_column) .UNINDENT .UNINDENT .sp Accepts \fBFalse\fP, \fBTrue\fP, or a callable, which will filter which nodes to ignore when propagating. .TP .B maybe_placeholders When \fBTrue\fP, the \fB[]\fP operator returns \fBNone\fP when not matched. When \fBFalse\fP, \fB[]\fP behaves like the \fB?\fP operator, and returns no value at all. (default= \fBTrue\fP) .TP .B cache Cache the results of the Lark grammar analysis, for x2 to x3 faster loading. LALR only for now. .INDENT 7.0 .IP \(bu 2 When \fBFalse\fP, does nothing (default) .IP \(bu 2 When \fBTrue\fP, caches to a temporary file in the local directory .IP \(bu 2 When given a string, caches to the path pointed by the string .UNINDENT .TP .B regex When True, uses the \fBregex\fP module instead of the stdlib \fBre\fP\&. 
.TP .B g_regex_flags Flags that are applied to all terminals (both regex and strings) .TP .B keep_all_tokens Prevent the tree builder from automagically removing \(dqpunctuation\(dq tokens (Default: \fBFalse\fP) .TP .B tree_class Lark will produce trees comprised of instances of this class instead of the default \fBlark.Tree\fP\&. .UNINDENT .sp \fB=== Algorithm Options ===\fP .INDENT 7.0 .TP .B parser Decides which parser engine to use. Accepts \(dqearley\(dq or \(dqlalr\(dq. (Default: \(dqearley\(dq). (there is also a \(dqcyk\(dq option for legacy) .TP .B lexer Decides whether or not to use a lexer stage .INDENT 7.0 .IP \(bu 2 \(dqauto\(dq (default): Choose for me based on the parser .IP \(bu 2 \(dqbasic\(dq: Use a basic lexer .IP \(bu 2 \(dqcontextual\(dq: Stronger lexer (only works with parser=\(dqlalr\(dq) .IP \(bu 2 \(dqdynamic\(dq: Flexible and powerful (only with parser=\(dqearley\(dq) .IP \(bu 2 \(dqdynamic_complete\(dq: Same as dynamic, but tries \fIevery\fP variation of tokenizing possible. .UNINDENT .TP .B ambiguity Decides how to handle ambiguity in the parse. Only relevant if parser=\(dqearley\(dq .INDENT 7.0 .IP \(bu 2 \(dqresolve\(dq: The parser will automatically choose the simplest derivation (it chooses consistently: greedy for tokens, non\-greedy for rules) .IP \(bu 2 \(dqexplicit\(dq: The parser will return all derivations wrapped in \(dq_ambig\(dq tree nodes (i.e. a forest). .IP \(bu 2 \(dqforest\(dq: The parser will return the root of the shared packed parse forest. .UNINDENT .UNINDENT .sp \fB=== Misc. / Domain Specific Options ===\fP .INDENT 7.0 .TP .B postlex Lexer post\-processing (Default: \fBNone\fP) Only works with the basic and contextual lexers. .TP .B priority How priorities should be evaluated \- \(dqauto\(dq, \fBNone\fP, \(dqnormal\(dq, \(dqinvert\(dq (Default: \(dqauto\(dq) .TP .B lexer_callbacks Dictionary of callbacks for the lexer. May alter tokens during lexing. Use with caution. .TP .B use_bytes Accept an input of type \fBbytes\fP instead of \fBstr\fP\&. .TP .B ordered_sets Should Earley use ordered\-sets to achieve stable output (~10% slower than regular sets. Default: True) .TP .B edit_terminals A callback for editing the terminals before parse. .TP .B import_paths A List of either paths or loader functions to specify from where grammars are imported .TP .B source_path Override the source of from where the grammar was loaded. Useful for relative imports and unconventional grammar loading .UNINDENT .sp \fB=== End of Options ===\fP .INDENT 7.0 .TP .B save(f, exclude_options: Collection[str] = ()) -> None Saves the instance into the given file object .sp Useful for caching and multiprocessing. .UNINDENT .INDENT 7.0 .TP .B classmethod load(f) -> _T Loads an instance from the given file object .sp Useful for caching and multiprocessing. .UNINDENT .INDENT 7.0 .TP .B classmethod open(grammar_filename: str, rel_to: str | None = None, **options) -> _T Create an instance of Lark with the grammar given by its filename .sp If \fBrel_to\fP is provided, the function will find the grammar filename in relation to it. .sp Example .sp .EX >>> Lark.open(\(dqgrammar_file.lark\(dq, rel_to=__file__, parser=\(dqlalr\(dq) Lark(...) .EE .UNINDENT .INDENT 7.0 .TP .B classmethod open_from_package(package: str, grammar_path: str, search_paths: Sequence[str] = [\(aq\(aq], **options) -> _T Create an instance of Lark with the grammar loaded from within the package \fIpackage\fP\&. This allows grammar loading from zipapps. 
.sp Imports in the grammar will use the \fIpackage\fP and \fIsearch_paths\fP provided, through \fIFromPackageLoader\fP .sp Example .sp Lark.open_from_package(__name__, \(dqexample.lark\(dq, (\(dqgrammars\(dq,), parser=...) .UNINDENT .INDENT 7.0 .TP .B lex(text: str, dont_ignore: bool = False) -> Iterator[\fI\%Token\fP] Only lex (and postlex) the text, without parsing it. Only relevant when lexer=\(aqbasic\(aq .sp When dont_ignore=True, the lexer will return all tokens, even those marked for %ignore. .INDENT 7.0 .TP .B Raises \fI\%UnexpectedCharacters\fP \-\- In case the lexer cannot find a suitable match. .UNINDENT .UNINDENT .INDENT 7.0 .TP .B get_terminal(name: str) -> TerminalDef Get information about a terminal .UNINDENT .INDENT 7.0 .TP .B parse_interactive(text: str | None = None, start: str | None = None) -> \fI\%InteractiveParser\fP Start an interactive parsing session. .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBtext\fP (\fIstr\fP\fI, \fP\fIoptional\fP) \-\- Text to be parsed. Required for \fBresume_parse()\fP\&. .IP \(bu 2 \fBstart\fP (\fIstr\fP\fI, \fP\fIoptional\fP) \-\- Start symbol .UNINDENT .TP .B Returns A new InteractiveParser instance. .UNINDENT .sp See Also: \fBLark.parse()\fP .UNINDENT .INDENT 7.0 .TP .B parse(text: str, start: str | None = None, on_error: Callable[[\fI\%UnexpectedInput\fP], bool] | None = None) -> ParseTree Parse the given text, according to the options provided. .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBtext\fP (\fIstr\fP) \-\- Text to be parsed. .IP \(bu 2 \fBstart\fP (\fIstr\fP\fI, \fP\fIoptional\fP) \-\- Required if Lark was given multiple possible start symbols (using the start option). .IP \(bu 2 \fBon_error\fP (\fIfunction\fP\fI, \fP\fIoptional\fP) \-\- if provided, will be called on UnexpectedToken error. Return true to resume parsing. LALR only. See examples/advanced/error_handling.py for an example of how to use on_error. .UNINDENT .TP .B Returns If a transformer is supplied to \fB__init__\fP, returns whatever is the result of the transformation. Otherwise, returns a Tree instance. .TP .B Raises \fI\%UnexpectedInput\fP \-\- On a parse error, one of these sub\-exceptions will rise: \fBUnexpectedCharacters\fP, \fBUnexpectedToken\fP, or \fBUnexpectedEOF\fP\&. For convenience, these sub\-exceptions also inherit from \fBParserError\fP and \fBLexerError\fP\&. .UNINDENT .UNINDENT .UNINDENT .SS Using Unicode character classes with \fBregex\fP .sp Python\(aqs builtin \fBre\fP module has a few persistent known bugs and also won\(aqt parse advanced regex features such as character classes. With \fBpip install lark[regex]\fP, the \fBregex\fP module will be installed alongside lark and can act as a drop\-in replacement to \fBre\fP\&. .sp Any instance of Lark instantiated with \fBregex=True\fP will use the \fBregex\fP module instead of \fBre\fP\&. .sp For example, we can use character classes to match PEP\-3131 compliant Python identifiers: .INDENT 0.0 .INDENT 3.5 .sp .EX from lark import Lark >>> g = Lark(r\(dq\(dq\(dq ?start: NAME NAME: ID_START ID_CONTINUE* ID_START: /[\ep{Lu}\ep{Ll}\ep{Lt}\ep{Lm}\ep{Lo}\ep{Nl}_]+/ ID_CONTINUE: ID_START | /[\ep{Mn}\ep{Mc}\ep{Nd}\ep{Pc}·]+/ \(dq\(dq\(dq, regex=True) >>> g.parse(\(aqவணக்கம்\(aq) \(aqவணக்கம்\(aq .EE .UNINDENT .UNINDENT .SS Tree .INDENT 0.0 .TP .B class lark.Tree(data: str, children: List[_Leaf_T | \fI\%Tree\fP[_Leaf_T]], meta: Meta | None = None) The main tree class. .sp Creates a new tree, and stores \(dqdata\(dq and \(dqchildren\(dq in attributes of the same name. 
Trees can be hashed and compared. .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBdata\fP \-\- The name of the rule or alias .IP \(bu 2 \fBchildren\fP \-\- List of matched sub\-rules and terminals .IP \(bu 2 \fBmeta\fP \-\- .sp Line & Column numbers (if \fBpropagate_positions\fP is enabled). meta attributes: (line, column, end_line, end_column, start_pos, end_pos, .INDENT 2.0 .INDENT 3.5 container_line, container_column, container_end_line, container_end_column) .UNINDENT .UNINDENT .sp container_* attributes consider all symbols, including those that have been inlined in the tree. For example, in the rule \(aqa: _A B _C\(aq, the regular attributes will mark the start and end of B, but the container_* attributes will also include _A and _C in the range. However, rules that contain \(aqa\(aq will consider it in full, including _A and _C for all attributes. .UNINDENT .UNINDENT .INDENT 7.0 .TP .B pretty(indent_str: str = \(aq \(aq) -> str Returns an indented string representation of the tree. .sp Great for debugging. .UNINDENT .INDENT 7.0 .TP .B __rich__(parent: rich.tree.Tree | None = None) -> rich.tree.Tree Returns a tree widget for the \(aqrich\(aq library. .sp Example .INDENT 7.0 .TP .B :: from rich import print from lark import Tree .sp tree = Tree(\(aqroot\(aq, [\(aqnode1\(aq, \(aqnode2\(aq]) print(tree) .UNINDENT .UNINDENT .INDENT 7.0 .TP .B iter_subtrees() -> Iterator[\fI\%Tree\fP[_Leaf_T]] Depth\-first iteration. .sp Iterates over all the subtrees, never returning to the same node twice (Lark\(aqs parse\-tree is actually a DAG). .UNINDENT .INDENT 7.0 .TP .B iter_subtrees_topdown() Breadth\-first iteration. .sp Iterates over all the subtrees, return nodes in order like pretty() does. .UNINDENT .INDENT 7.0 .TP .B find_pred(pred: Callable[[\fI\%Tree\fP[_Leaf_T]], bool]) -> Iterator[\fI\%Tree\fP[_Leaf_T]] Returns all nodes of the tree that evaluate pred(node) as true. .UNINDENT .INDENT 7.0 .TP .B find_data(data: str) -> Iterator[\fI\%Tree\fP[_Leaf_T]] Returns all nodes of the tree whose data equals the given data. .UNINDENT .INDENT 7.0 .TP .B scan_values(pred: Callable[[_Leaf_T | \fI\%Tree\fP[_Leaf_T]], bool]) -> Iterator[_Leaf_T] Return all values in the tree that evaluate pred(value) as true. .sp This can be used to find all the tokens in the tree. .sp Example .sp .EX >>> all_tokens = tree.scan_values(lambda v: isinstance(v, Token)) .EE .UNINDENT .UNINDENT .SS Token .INDENT 0.0 .TP .B class lark.Token(type: str, value: Any, start_pos: int | None = None, line: int | None = None, column: int | None = None, end_line: int | None = None, end_column: int | None = None, end_pos: int | None = None) .TP .B class lark.Token(type_: str, value: Any, start_pos: int | None = None, line: int | None = None, column: int | None = None, end_line: int | None = None, end_column: int | None = None, end_pos: int | None = None) A string with meta\-information, that is produced by the lexer. .sp When parsing text, the resulting chunks of the input that haven\(aqt been discarded, will end up in the tree as Token instances. The Token class inherits from Python\(aqs \fBstr\fP, so normal string comparisons and operations will work as expected. 
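.sp
Example (an illustrative sketch of this behavior; the token name and value here are arbitrary):
.sp
.EX
>>> from lark import Token
>>> tok = Token(\(aqNUMBER\(aq, \(aq42\(aq)
>>> tok == \(aq42\(aq  # compares like a plain str
True
>>> tok.type
\(aqNUMBER\(aq
>>> isinstance(tok, str)
True
.EE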
.INDENT 7.0 .TP .B type Name of the token (as specified in grammar) .INDENT 7.0 .TP .B Type str .UNINDENT .UNINDENT
.INDENT 7.0 .TP .B value Value of the token (redundant, as \fBtoken.value == token\fP will always be true) .INDENT 7.0 .TP .B Type Any .UNINDENT .UNINDENT
.INDENT 7.0 .TP .B start_pos The index of the token in the text .INDENT 7.0 .TP .B Type int | None .UNINDENT .UNINDENT
.INDENT 7.0 .TP .B line The line of the token in the text (starting with 1) .INDENT 7.0 .TP .B Type int | None .UNINDENT .UNINDENT
.INDENT 7.0 .TP .B column The column of the token in the text (starting with 1) .INDENT 7.0 .TP .B Type int | None .UNINDENT .UNINDENT
.INDENT 7.0 .TP .B end_line The line where the token ends .INDENT 7.0 .TP .B Type int | None .UNINDENT .UNINDENT
.INDENT 7.0 .TP .B end_column The next column after the end of the token. For example, if the token is a single character with a column value of 4, end_column will be 5. .INDENT 7.0 .TP .B Type int | None .UNINDENT .UNINDENT
.INDENT 7.0 .TP .B end_pos The index where the token ends (basically \fBstart_pos + len(token)\fP) .INDENT 7.0 .TP .B Type int | None .UNINDENT .UNINDENT .UNINDENT
.SS Transformer, Visitor & Interpreter
.sp See \fI\%Transformers & Visitors\fP\&.
.SS ForestVisitor, ForestTransformer, & TreeForestTransformer
.sp See \fI\%Working with the SPPF\fP\&.
.SS UnexpectedInput
.INDENT 0.0 .TP .B class lark.exceptions.UnexpectedInput UnexpectedInput Error.
.sp Used as a base class for the following exceptions: .INDENT 7.0 .IP \(bu 2 \fBUnexpectedCharacters\fP: The lexer encountered an unexpected string .IP \(bu 2 \fBUnexpectedToken\fP: The parser received an unexpected token .IP \(bu 2 \fBUnexpectedEOF\fP: The parser expected a token, but the input ended .UNINDENT
.sp After catching one of these exceptions, you may call the following helper methods to create a nicer error message.
.INDENT 7.0 .TP .B get_context(text: str, span: int = 40) -> str Returns a pretty string pinpointing the error in the text, with span amount of context characters around it.
.sp \fBNOTE:\fP .INDENT 7.0 .INDENT 3.5 The parser doesn\(aqt hold a copy of the text it has to parse, so you have to provide it again .UNINDENT .UNINDENT .UNINDENT
.INDENT 7.0 .TP .B match_examples(parse_fn: Callable[[str], \fI\%Tree\fP], examples: Mapping[T, Iterable[str]] | Iterable[Tuple[T, Iterable[str]]], token_type_match_fallback: bool = False, use_accepts: bool = True) -> T | None Allows you to detect what\(aqs wrong in the input text by matching against example errors.
.sp Given a parser instance and a dictionary mapping labels to malformed syntax examples, it\(aqll return the label for the example that best matches the current error. The function will iterate the dictionary until it finds a matching error, and return the corresponding value.
.sp For an example usage, see \fIexamples/error_reporting_lalr.py\fP
.INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBparse_fn\fP \-\- parse function (usually \fBlark_instance.parse\fP) .IP \(bu 2 \fBexamples\fP \-\- dictionary of \fB{\(aqexample_string\(aq: value}\fP\&. .IP \(bu 2 \fBuse_accepts\fP \-\- Recommended to keep this as \fBuse_accepts=True\fP\&. .UNINDENT .UNINDENT .UNINDENT .UNINDENT
.INDENT 0.0 .TP .B class lark.exceptions.UnexpectedToken(token, expected, considered_rules=None, state=None, interactive_parser=None, terminals_by_name=None, token_history=None) An exception that is raised by the parser, when the token it received doesn\(aqt match any valid step forward.
.INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBtoken\fP \-\- The mismatched token .IP \(bu 2 \fBexpected\fP \-\- The set of expected tokens .IP \(bu 2 \fBconsidered_rules\fP \-\- Which rules were considered, to deduce the expected tokens .IP \(bu 2 \fBstate\fP \-\- A value representing the parser state. Do not rely on its value or type. .IP \(bu 2 \fBinteractive_parser\fP \-\- An instance of \fBInteractiveParser\fP, that is initialized to the point of failure, and can be used for debugging and error handling. .UNINDENT .UNINDENT .sp Note: These parameters are available as attributes of the instance. .UNINDENT .INDENT 0.0 .TP .B class lark.exceptions.UnexpectedCharacters(seq, lex_pos, line, column, allowed=None, considered_tokens=None, state=None, token_history=None, terminals_by_name=None, considered_rules=None) An exception that is raised by the lexer, when it cannot match the next string of characters to any of its terminals. .UNINDENT .INDENT 0.0 .TP .B class lark.exceptions.UnexpectedEOF(expected, state=None, terminals_by_name=None) An exception that is raised by the parser, when the input ends while it still expects a token. .UNINDENT .SS InteractiveParser .INDENT 0.0 .TP .B class lark.parsers.lalr_interactive_parser.InteractiveParser(parser, parser_state, lexer_thread: LexerThread) InteractiveParser gives you advanced control over parsing and error handling when parsing with LALR. .sp For a simpler interface, see the \fBon_error\fP argument to \fBLark.parse()\fP\&. .INDENT 7.0 .TP .B feed_token(token: \fI\%Token\fP) Feed the parser with a token, and advance it to the next state, as if it received it from the lexer. .sp Note that \fBtoken\fP has to be an instance of \fBToken\fP\&. .UNINDENT .INDENT 7.0 .TP .B exhaust_lexer() -> List[\fI\%Token\fP] Try to feed the rest of the lexer state into the interactive parser. .sp Note that this modifies the instance in place and does not feed an \(aq$END\(aq Token .UNINDENT .INDENT 7.0 .TP .B as_immutable() Convert to an \fBImmutableInteractiveParser\fP\&. .UNINDENT .INDENT 7.0 .TP .B pretty() Print the output of \fBchoices()\fP in a way that\(aqs easier to read. .UNINDENT .INDENT 7.0 .TP .B choices() Returns a dictionary of token types, matched to their action in the parser. .sp Only returns token types that are accepted by the current state. .sp Updated by \fBfeed_token()\fP\&. .UNINDENT .INDENT 7.0 .TP .B accepts() Returns the set of possible tokens that will advance the parser into a new valid state. .UNINDENT .INDENT 7.0 .TP .B resume_parse() Resume automated parsing from the current state. .UNINDENT .UNINDENT .INDENT 0.0 .TP .B class lark.parsers.lalr_interactive_parser.ImmutableInteractiveParser(parser, parser_state, lexer_thread: LexerThread) Same as \fBInteractiveParser\fP, but operations create a new instance instead of changing it in\-place. .INDENT 7.0 .TP .B feed_token(token) Feed the parser with a token, and advance it to the next state, as if it received it from the lexer. .sp Note that \fBtoken\fP has to be an instance of \fBToken\fP\&. .UNINDENT .INDENT 7.0 .TP .B exhaust_lexer() Try to feed the rest of the lexer state into the parser. .sp Note that this returns a new ImmutableInteractiveParser and does not feed an \(aq$END\(aq Token .UNINDENT .INDENT 7.0 .TP .B as_mutable() Convert to an \fBInteractiveParser\fP\&. .UNINDENT .INDENT 7.0 .TP .B choices() Returns a dictionary of token types, matched to their action in the parser. .sp Only returns token types that are accepted by the current state. 
.INDENT 0.0
.TP
.B class lark.parsers.lalr_interactive_parser.ImmutableInteractiveParser(parser, parser_state, lexer_thread: LexerThread)
Same as \fBInteractiveParser\fP, but operations create a new instance instead of changing it in\-place.
.INDENT 7.0
.TP
.B feed_token(token)
Feed the parser with a token, and advance it to the next state, as if it received it from the lexer.
.sp
Note that \fBtoken\fP has to be an instance of \fBToken\fP\&.
.UNINDENT
.INDENT 7.0
.TP
.B exhaust_lexer()
Try to feed the rest of the lexer state into the parser.
.sp
Note that this returns a new ImmutableInteractiveParser and does not feed an \(aq$END\(aq Token
.UNINDENT
.INDENT 7.0
.TP
.B as_mutable()
Convert to an \fBInteractiveParser\fP\&.
.UNINDENT
.INDENT 7.0
.TP
.B choices()
Returns a dictionary of token types, matched to their action in the parser.
.sp
Only returns token types that are accepted by the current state.
.sp
Updated by \fBfeed_token()\fP\&.
.UNINDENT
.INDENT 7.0
.TP
.B pretty()
Print the output of \fBchoices()\fP in a way that\(aqs easier to read.
.UNINDENT
.INDENT 7.0
.TP
.B resume_parse()
Resume automated parsing from the current state.
.UNINDENT
.INDENT 7.0
.TP
.B accepts()
Returns the set of possible tokens that will advance the parser into a new valid state.
.UNINDENT
.UNINDENT
.SS ast_utils
.sp
For an example of using \fBast_utils\fP, see \fI\%/examples/advanced/create_ast.py\fP
.INDENT 0.0
.TP
.B class lark.ast_utils.Ast
Abstract class
.sp
Subclasses will be collected by \fIcreate_transformer()\fP
.UNINDENT
.INDENT 0.0
.TP
.B class lark.ast_utils.AsList
Abstract class
.sp
Subclasses will be instantiated with the parse results as a single list, instead of as arguments.
.UNINDENT
.INDENT 0.0
.TP
.B lark.ast_utils.create_transformer(ast_module: module, transformer: Transformer | None = None, decorator_factory: Callable = v_args) -> \fI\%Transformer\fP
Collects \fIAst\fP subclasses from the given module, and creates a Lark transformer that builds the AST.
.sp
For each class, we create a corresponding rule in the transformer, with a matching name. CamelCase names will be converted into snake_case. Example: \(dqCodeBlock\(dq \-> \(dqcode_block\(dq.
.sp
Classes starting with an underscore (\fI_\fP) will be skipped.
.INDENT 7.0
.TP
.B Parameters
.INDENT 7.0
.IP \(bu 2
\fBast_module\fP \-\- A Python module containing all the subclasses of \fBast_utils.Ast\fP
.IP \(bu 2
\fBtransformer\fP (\fIOptional\fP\fI[\fP\fI\%Transformer\fP\fI]\fP) \-\- An initial transformer. Its attributes may be overwritten.
.IP \(bu 2
\fBdecorator_factory\fP (\fICallable\fP) \-\- An optional callable accepting two booleans, inline and meta, and returning a decorator for the methods of \fBtransformer\fP\&. (default: \fBv_args\fP).
.UNINDENT
.UNINDENT
.UNINDENT
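.sp
A minimal sketch, loosely modeled on that example (the rule name \fBcode_block\fP is an assumption about the grammar):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
import sys

from lark import Transformer
from lark import ast_utils

this_module = sys.modules[__name__]

class _Ast(ast_utils.Ast):
    # Underscore prefix: skipped by create_transformer()
    pass

class CodeBlock(_Ast, ast_utils.AsList):
    # Handles the rule \(aqcode_block\(aq; AsList passes the
    # children as one list instead of separate arguments
    def __init__(self, statements):
        self.statements = statements

to_ast = ast_utils.create_transformer(this_module, Transformer())
# tree = ... parse with a matching grammar ...
# ast = to_ast.transform(tree)
.EE
.UNINDENT
.UNINDENT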
.SH TRANSFORMERS & VISITORS
.sp
Transformers & Visitors provide a convenient interface to process the parse\-trees that Lark returns.
.sp
They are used by inheriting from the correct class (visitor or transformer), and implementing methods corresponding to the rule you wish to process. Each method accepts the children as an argument. That can be modified using the \fBv_args\fP decorator, which allows one to inline the arguments (akin to \fB*args\fP), or add the tree \fBmeta\fP property as an argument.
.sp
See: \fI\%visitors.py\fP
.SS Visitor
.sp
Visitors visit each node of the tree, and run the appropriate method on it according to the node\(aqs data.
.sp
They work bottom\-up, starting with the leaves and ending at the root of the tree.
.sp
There are two classes that implement the visitor interface:
.INDENT 0.0
.IP \(bu 2
\fBVisitor\fP: Visit every node (without recursion)
.IP \(bu 2
\fBVisitor_Recursive\fP: Visit every node using recursion. Slightly faster.
.UNINDENT
.INDENT 0.0
.TP
.B Example:
.INDENT 7.0
.INDENT 3.5
.sp
.EX
class IncreaseAllNumbers(Visitor):
    def number(self, tree):
        assert tree.data == \(dqnumber\(dq
        tree.children[0] += 1

IncreaseAllNumbers().visit(parse_tree)
.EE
.UNINDENT
.UNINDENT
.UNINDENT
.INDENT 0.0
.TP
.B class lark.visitors.Visitor
Tree visitor, non\-recursive (can handle huge trees).
.sp
Visiting a node calls its methods (provided by the user via inheritance) according to \fBtree.data\fP
.INDENT 7.0
.TP
.B visit(tree: \fI\%Tree\fP[_Leaf_T]) -> \fI\%Tree\fP[_Leaf_T]
Visits the tree, starting with the leaves and finally the root (bottom\-up)
.UNINDENT
.INDENT 7.0
.TP
.B visit_topdown(tree: \fI\%Tree\fP[_Leaf_T]) -> \fI\%Tree\fP[_Leaf_T]
Visit the tree, starting at the root, and ending at the leaves (top\-down)
.UNINDENT
.INDENT 7.0
.TP
.B __default__(tree)
Default function that is called if there is no attribute matching \fBtree.data\fP
.sp
Can be overridden. Defaults to doing nothing.
.UNINDENT
.UNINDENT
.INDENT 0.0
.TP
.B class lark.visitors.Visitor_Recursive
Bottom\-up visitor, recursive.
.sp
Visiting a node calls its methods (provided by the user via inheritance) according to \fBtree.data\fP
.sp
Slightly faster than the non\-recursive version.
.INDENT 7.0
.TP
.B visit(tree: \fI\%Tree\fP[_Leaf_T]) -> \fI\%Tree\fP[_Leaf_T]
Visits the tree, starting with the leaves and finally the root (bottom\-up)
.UNINDENT
.INDENT 7.0
.TP
.B visit_topdown(tree: \fI\%Tree\fP[_Leaf_T]) -> \fI\%Tree\fP[_Leaf_T]
Visit the tree, starting at the root, and ending at the leaves (top\-down)
.UNINDENT
.INDENT 7.0
.TP
.B __default__(tree)
Default function that is called if there is no attribute matching \fBtree.data\fP
.sp
Can be overridden. Defaults to doing nothing.
.UNINDENT
.UNINDENT
.SS Interpreter
.INDENT 0.0
.TP
.B class lark.visitors.Interpreter
Interpreter walks the tree starting at the root.
.sp
Visits the tree, starting with the root and finally the leaves (top\-down)
.sp
For each tree node, it calls its methods (provided by user via inheritance) according to \fBtree.data\fP\&.
.sp
Unlike \fBTransformer\fP and \fBVisitor\fP, the Interpreter doesn\(aqt automatically visit its sub\-branches. The user has to explicitly call \fBvisit\fP, \fBvisit_children\fP, or use the \fB@visit_children_decor\fP decorator. This allows the user to implement branching and loops.
.UNINDENT
.INDENT 0.0
.TP
.B Example:
.INDENT 7.0
.INDENT 3.5
.sp
.EX
class IncreaseSomeOfTheNumbers(Interpreter):
    def number(self, tree):
        tree.children[0] += 1

    def skip(self, tree):
        # skip this subtree. don\(aqt change any number node inside it.
        pass

IncreaseSomeOfTheNumbers().visit(parse_tree)
.EE
.UNINDENT
.UNINDENT
.UNINDENT
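.sp
The \fB@visit_children_decor\fP decorator visits the node\(aqs children first and passes their results to the method. A minimal sketch (the rule names \fBadd\fP and \fBnumber\fP are assumptions about the grammar):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark.visitors import Interpreter, visit_children_decor

class Calc(Interpreter):
    @visit_children_decor
    def add(self, values):
        # the children were already visited;
        # \(aqvalues\(aq holds their results
        return values[0] + values[1]

    def number(self, tree):
        return int(tree.children[0])

# result = Calc().visit(parse_tree)
.EE
.UNINDENT
.UNINDENT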
.SS Transformer
.INDENT 0.0
.TP
.B class lark.visitors.Transformer(visit_tokens: bool = True)
Transformers work bottom\-up (or depth\-first), starting with visiting the leaves and working their way up until ending at the root of the tree.
.sp
For each node visited, the transformer will call the appropriate method (callbacks), according to the node\(aqs \fBdata\fP, and use the returned value to replace the node, thereby creating a new tree structure.
.sp
Transformers can be used to implement map & reduce patterns. Because nodes are reduced from leaf to root, at any point the callbacks may assume the children have already been transformed (if applicable).
.sp
If the transformer cannot find a method with the right name, it will instead call \fB__default__\fP, which by default creates a copy of the node.
.sp
To discard a node, return Discard (\fBlark.visitors.Discard\fP).
.sp
\fBTransformer\fP can do anything \fBVisitor\fP can do, but because it reconstructs the tree, it is slightly less efficient.
.sp
A transformer without methods essentially performs a non\-memoized partial deepcopy.
.sp
All these classes implement the transformer interface:
.INDENT 7.0
.IP \(bu 2
\fBTransformer\fP \- Recursively transforms the tree. This is the one you probably want.
.IP \(bu 2
\fBTransformer_InPlace\fP \- Non\-recursive. Changes the tree in\-place instead of returning new instances
.IP \(bu 2
\fBTransformer_InPlaceRecursive\fP \- Recursive. Changes the tree in\-place instead of returning new instances
.UNINDENT
.INDENT 7.0
.TP
.B Parameters
\fBvisit_tokens\fP (\fIbool\fP\fI, \fP\fIoptional\fP) \-\- Should the transformer visit tokens in addition to rules. Setting this to \fBFalse\fP is slightly faster. Defaults to \fBTrue\fP\&. (For processing ignored tokens, use the \fBlexer_callbacks\fP options)
.UNINDENT
.INDENT 7.0
.TP
.B transform(tree: \fI\%Tree\fP[_Leaf_T]) -> _Return_T
Transform the given tree, and return the final result
.UNINDENT
.INDENT 7.0
.TP
.B __mul__(other: \fI\%Transformer\fP | TransformerChain[_Leaf_U, _Return_V]) -> TransformerChain[_Leaf_T, _Return_V]
Chain two transformers together, returning a new transformer.
.UNINDENT
.INDENT 7.0
.TP
.B __default__(data, children, meta)
Default function that is called if there is no attribute matching \fBdata\fP
.sp
Can be overridden. Defaults to creating a new copy of the tree node (i.e. \fBreturn Tree(data, children, meta)\fP)
.UNINDENT
.INDENT 7.0
.TP
.B __default_token__(token)
Default function that is called if there is no attribute matching \fBtoken.type\fP
.sp
Can be overridden. Defaults to returning the token as\-is.
.UNINDENT
.UNINDENT
.INDENT 0.0
.TP
.B Example:
.INDENT 7.0
.INDENT 3.5
.sp
.EX
from lark import Tree, Transformer

class EvalExpressions(Transformer):
    def expr(self, args):
        return eval(args[0])

t = Tree(\(aqa\(aq, [Tree(\(aqexpr\(aq, [\(aq1+2\(aq])])
print(EvalExpressions().transform(t))
# Prints: Tree(a, [3])
.EE
.UNINDENT
.UNINDENT
.TP
.B Example:
.INDENT 7.0
.INDENT 3.5
.sp
.EX
class T(Transformer):
    INT = int
    NUMBER = float

    def NAME(self, name):
        return lookup_dict.get(name, name)

T(visit_tokens=True).transform(tree)
.EE
.UNINDENT
.UNINDENT
.UNINDENT
.INDENT 0.0
.TP
.B class lark.visitors.Transformer_NonRecursive(visit_tokens: bool = True)
Same as Transformer but non\-recursive.
.sp
Like Transformer, it doesn\(aqt change the original tree.
.sp
Useful for huge trees.
.UNINDENT
.INDENT 0.0
.TP
.B class lark.visitors.Transformer_InPlace(visit_tokens: bool = True)
Same as Transformer, but non\-recursive, and changes the tree in\-place instead of returning new instances
.sp
Useful for huge trees. Conservative in memory.
.UNINDENT
.INDENT 0.0
.TP
.B class lark.visitors.Transformer_InPlaceRecursive(visit_tokens: bool = True)
Same as Transformer, recursive, but changes the tree in\-place instead of returning new instances
.UNINDENT
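.sp
Since \fB__mul__\fP chains transformers, separate passes can be composed with \fB*\fP\&. A small sketch (the terminal \fBNUMBER\fP and rule \fBstart\fP assume a matching grammar):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Transformer

class ToInts(Transformer):
    def NUMBER(self, token):
        return int(token)

class SumStart(Transformer):
    def start(self, children):
        return sum(children)

pipeline = ToInts() * SumStart()   # a TransformerChain
# result = pipeline.transform(parse_tree)
.EE
.UNINDENT
.UNINDENT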
.SS v_args
.INDENT 0.0
.TP
.B lark.visitors.v_args(inline: bool = False, meta: bool = False, tree: bool = False, wrapper: Callable | None = None) -> Callable[[Callable[[\&...], _Return_T] | type], Callable[[\&...], _Return_T] | type]
A convenience decorator factory for modifying the behavior of user\-supplied visitor methods.
.sp
By default, callback methods of transformers/visitors accept one argument \- a list of the node\(aqs children.
.sp
\fBv_args\fP can modify this behavior. When used on a transformer/visitor class definition, it applies to all the callback methods inside it.
.sp
\fBv_args\fP can be applied to a single method, or to an entire class. When applied to both, the options given to the method take precedence.
.INDENT 7.0
.TP
.B Parameters
.INDENT 7.0
.IP \(bu 2
\fBinline\fP (\fIbool\fP\fI, \fP\fIoptional\fP) \-\- Children are provided as \fB*args\fP instead of a list argument (not recommended for very long lists).
.IP \(bu 2
\fBmeta\fP (\fIbool\fP\fI, \fP\fIoptional\fP) \-\- Provides two arguments: \fBmeta\fP and \fBchildren\fP (instead of just the latter)
.IP \(bu 2
\fBtree\fP (\fIbool\fP\fI, \fP\fIoptional\fP) \-\- Provides the entire tree as the argument, instead of the children.
.IP \(bu 2
\fBwrapper\fP (\fIfunction\fP\fI, \fP\fIoptional\fP) \-\- Provide a function to decorate all methods.
.UNINDENT
.UNINDENT
.sp
Example
.INDENT 7.0
.INDENT 3.5
.sp
.EX
@v_args(inline=True)
class SolveArith(Transformer):
    def add(self, left, right):
        return left + right

    @v_args(meta=True)
    def mul(self, meta, children):
        logger.info(f\(aqmul at line {meta.line}\(aq)
        left, right = children
        return left * right


class ReverseNotation(Transformer_InPlace):
    @v_args(tree=True)
    def tree_node(self, tree):
        tree.children = tree.children[::\-1]
.EE
.UNINDENT
.UNINDENT
.UNINDENT
.SS merge_transformers
.INDENT 0.0
.TP
.B lark.visitors.merge_transformers(base_transformer=None, **transformers_to_merge)
Merge a collection of transformers into the base_transformer, each into its own \(aqnamespace\(aq.
.sp
When called, it will collect the methods from each transformer, and assign them to base_transformer, with their name prefixed with the given keyword, as \fBprefix__methodname\fP\&.
.sp
This function is especially useful for processing grammars that import other grammars, thereby creating some of their rules in a \(aqnamespace\(aq (i.e. with a consistent name prefix). In this case, the key for the transformer should match the name of the imported grammar.
.INDENT 7.0
.TP
.B Parameters
.INDENT 7.0
.IP \(bu 2
\fBbase_transformer\fP (\fI\%Transformer\fP\fI, \fP\fIoptional\fP) \-\- The transformer that all other transformers will be added to.
.IP \(bu 2
\fB**transformers_to_merge\fP \-\- Keyword arguments, in the form of \fBname_prefix = transformer\fP\&.
.UNINDENT
.TP
.B Raises
\fBAttributeError\fP \-\- In case of a name collision in the merged methods
.UNINDENT
.sp
Example
.INDENT 7.0
.INDENT 3.5
.sp
.EX
class TBase(Transformer):
    def start(self, children):
        return children[0] + \(aqbar\(aq

class TImportedGrammar(Transformer):
    def foo(self, children):
        return \(dqfoo\(dq

composed_transformer = merge_transformers(
    TBase(), imported=TImportedGrammar())

t = Tree(\(aqstart\(aq, [Tree(\(aqimported__foo\(aq, [])])

assert composed_transformer.transform(t) == \(aqfoobar\(aq
.EE
.UNINDENT
.UNINDENT
.UNINDENT
.SS Discard
.sp
\fBDiscard\fP is the singleton instance of \fB_DiscardType\fP\&.
.INDENT 0.0
.TP
.B class lark.visitors._DiscardType
When the Discard value is returned from a transformer callback, that node is discarded and won\(aqt appear in the parent.
.sp
\fBNOTE:\fP
.INDENT 7.0
.INDENT 3.5
This feature is disabled when the transformer is provided to Lark using the \fBtransformer\fP keyword (aka Tree\-less LALR mode).
.UNINDENT
.UNINDENT
.sp
Example
.INDENT 7.0
.INDENT 3.5
.sp
.EX
class T(Transformer):
    def ignore_tree(self, children):
        return Discard

    def IGNORE_TOKEN(self, token):
        return Discard
.EE
.UNINDENT
.UNINDENT
.UNINDENT
.SS VisitError
.INDENT 0.0
.TP
.B class lark.exceptions.VisitError(rule, obj, orig_exc)
VisitError is raised when visitors are interrupted by an exception
.sp
It provides the following attributes for inspection:
.INDENT 7.0
.TP
.B Parameters
.INDENT 7.0
.IP \(bu 2
\fBrule\fP \-\- the name of the visit rule that failed
.IP \(bu 2
\fBobj\fP \-\- the tree\-node or token that was being processed
.IP \(bu 2
\fBorig_exc\fP \-\- the exception that caused it to fail
.UNINDENT
.UNINDENT
.sp
Note: These parameters are available as attributes
.UNINDENT
.SH WORKING WITH THE SPPF
.sp
When parsing with Earley, Lark provides the \fBambiguity=\(aqforest\(aq\fP option to obtain the shared packed parse forest (SPPF) produced by the parser, as an alternative to having it automatically converted to a tree.
.sp
Lark provides a few tools to facilitate working with the SPPF. Here are some things to consider when deciding whether or not to use the SPPF, followed by a short usage sketch.
.sp
\fBPros\fP
.INDENT 0.0
.IP \(bu 2
Efficient storage of highly ambiguous parses
.IP \(bu 2
Precise handling of ambiguities
.IP \(bu 2
Custom rule prioritizers
.IP \(bu 2
Ability to handle infinite ambiguities
.IP \(bu 2
Directly transform forest \-> object instead of forest \-> tree \-> object
.UNINDENT
.sp
\fBCons\fP
.INDENT 0.0
.IP \(bu 2
More complex than working with a tree
.IP \(bu 2
SPPF may contain nodes corresponding to rules generated internally
.IP \(bu 2
Loss of Lark grammar features:
.INDENT 2.0
.IP \(bu 2
Rules starting with \(aq_\(aq are not inlined in the SPPF
.IP \(bu 2
Rules starting with \(aq?\(aq are never inlined in the SPPF
.IP \(bu 2
All tokens will appear in the SPPF
.UNINDENT
.UNINDENT
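.sp
A minimal sketch of requesting the forest (the ambiguous grammar is illustrative):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark import Lark

grammar = \(aq\(aq\(aq
start: ab
ab: \(dqab\(dq | a \(dqb\(dq
a: \(dqa\(dq
\(aq\(aq\(aq

# Earley is the default parser
parser = Lark(grammar, ambiguity=\(aqforest\(aq)

# Returns the root SymbolNode of the SPPF, not a Tree
forest = parser.parse(\(aqab\(aq)
.EE
.UNINDENT
.UNINDENT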
.SS SymbolNode
.INDENT 0.0
.TP
.B class lark.parsers.earley_forest.SymbolNode(s, start, end)
A Symbol Node represents a symbol (or an intermediate LR(0) item).
.sp
Symbol nodes are keyed by the symbol (s). For intermediate nodes, s will be an LR(0) item, stored as a tuple of (rule, ptr). For completed symbol nodes, s will be a string representing the non\-terminal origin (i.e. the left hand side of the rule).
.sp
The children of a Symbol or Intermediate Node will always be Packed Nodes; with each Packed Node child representing a single derivation of a production.
.sp
Hence a Symbol Node with a single child is unambiguous.
.INDENT 7.0
.TP
.B Parameters
.INDENT 7.0
.IP \(bu 2
\fBs\fP \-\- A Symbol, or a tuple of (rule, ptr) for an intermediate node.
.IP \(bu 2
\fBstart\fP \-\- The index of the start of the substring matched by this symbol (inclusive).
.IP \(bu 2
\fBend\fP \-\- The index of the end of the substring matched by this symbol (exclusive).
.UNINDENT
.UNINDENT
.INDENT 7.0
.TP
.B Properties:
.INDENT 7.0
.IP \(bu 2
is_intermediate: True if this node is an intermediate node.
.IP \(bu 2
priority: The priority of the node\(aqs symbol.
.UNINDENT
.UNINDENT
.INDENT 7.0
.TP
.B property is_ambiguous
Returns True if this node is ambiguous.
.UNINDENT
.INDENT 7.0
.TP
.B property children
Returns a list of this node\(aqs children sorted from greatest to least priority.
.UNINDENT
.UNINDENT
.SS PackedNode
.INDENT 0.0
.TP
.B class lark.parsers.earley_forest.PackedNode(parent, s, rule, start, left, right)
A Packed Node represents a single derivation in a symbol node.
.INDENT 7.0
.TP
.B Parameters
.INDENT 7.0
.IP \(bu 2
\fBrule\fP \-\- The rule associated with this node.
.IP \(bu 2
\fBparent\fP \-\- The parent of this node.
.IP \(bu 2
\fBleft\fP \-\- The left child of this node. \fBNone\fP if one does not exist.
.IP \(bu 2
\fBright\fP \-\- The right child of this node. \fBNone\fP if one does not exist.
.IP \(bu 2
\fBpriority\fP \-\- The priority of this node.
.UNINDENT
.UNINDENT
.INDENT 7.0
.TP
.B property children
Returns a list of this node\(aqs children.
.UNINDENT
.UNINDENT
.SS ForestVisitor
.INDENT 0.0
.TP
.B class lark.parsers.earley_forest.ForestVisitor(single_visit=False)
An abstract base class for building forest visitors.
.sp
This class performs a controllable depth\-first walk of an SPPF. The visitor will not enter cycles and will backtrack if one is encountered. Subclasses are notified of cycles through the \fBon_cycle\fP method.
.sp
Behavior for visit events is defined by overriding the \fBvisit*node*\fP functions.
.sp
The walk is controlled by the return values of the \fBvisit*node_in\fP methods. Returning a node or a list of nodes schedules them to be visited. The visitor begins to backtrack if no nodes are returned.
.INDENT 7.0
.TP
.B Parameters
\fBsingle_visit\fP \-\- If \fBTrue\fP, non\-Token nodes will only be visited once.
.UNINDENT
.INDENT 7.0
.TP
.B visit_token_node(node)
Called when a \fBToken\fP is visited. \fBToken\fP nodes are always leaves.
.UNINDENT
.INDENT 7.0
.TP
.B visit_symbol_node_in(node)
Called when a symbol node is visited. Nodes that are returned will be scheduled to be visited. If \fBvisit_intermediate_node_in\fP is not implemented, this function will be called for intermediate nodes as well.
.UNINDENT
.INDENT 7.0
.TP
.B visit_symbol_node_out(node)
Called after all nodes returned from a corresponding \fBvisit_symbol_node_in\fP call have been visited. If \fBvisit_intermediate_node_out\fP is not implemented, this function will be called for intermediate nodes as well.
.UNINDENT
.INDENT 7.0
.TP
.B visit_packed_node_in(node)
Called when a packed node is visited. Nodes that are returned will be scheduled to be visited.
.UNINDENT
.INDENT 7.0
.TP
.B visit_packed_node_out(node)
Called after all nodes returned from a corresponding \fBvisit_packed_node_in\fP call have been visited.
.UNINDENT
.INDENT 7.0
.TP
.B on_cycle(node, path)
Called when a cycle is encountered.
.INDENT 7.0
.TP
.B Parameters
.INDENT 7.0
.IP \(bu 2
\fBnode\fP \-\- The node that causes a cycle.
.IP \(bu 2
\fBpath\fP \-\- The list of nodes being visited: nodes that have been entered but not exited. The first element is the root in a forest visit, and the last element is the node visited most recently. \fBpath\fP should be treated as read\-only.
.UNINDENT
.UNINDENT
.UNINDENT
.INDENT 7.0
.TP
.B get_cycle_in_path(node, path)
A utility function for use in \fBon_cycle\fP to obtain a slice of \fBpath\fP that only contains the nodes that make up the cycle.
.UNINDENT
.UNINDENT
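.sp
For instance, a sketch of a visitor that counts the ambiguous symbol nodes in a forest (\fBforest\fP as obtained above; \fBvisit(root)\fP starts the walk):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark.parsers.earley_forest import ForestVisitor

class CountAmbiguities(ForestVisitor):
    def __init__(self):
        super().__init__(single_visit=True)
        self.count = 0

    def visit_symbol_node_in(self, node):
        if node.is_ambiguous:
            self.count += 1
        return node.children   # schedule the children

    def visit_packed_node_in(self, node):
        return node.children   # keep walking downwards

counter = CountAmbiguities()
counter.visit(forest)
print(counter.count)
.EE
.UNINDENT
.UNINDENT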
.SS ForestTransformer
.INDENT 0.0
.TP
.B class lark.parsers.earley_forest.ForestTransformer
The base class for a bottom\-up forest transformation. Most users will want to use \fBTreeForestTransformer\fP instead, as it has a friendlier interface and covers most use cases.
.sp
Transformations are applied via inheritance and overriding of the \fBtransform*node\fP methods.
.sp
\fBtransform_token_node\fP receives a \fBToken\fP as an argument. All other methods receive the node that is being transformed and a list of the results of the transformations of that node\(aqs children. The return value of these methods is the resulting transformation.
.sp
If \fBDiscard\fP is raised in a node\(aqs transformation, no data from that node will be passed to its parent\(aqs transformation.
.INDENT 7.0
.TP
.B transform(root)
Perform a transformation on an SPPF.
.UNINDENT
.INDENT 7.0
.TP
.B transform_symbol_node(node, data)
Transform a symbol node.
.UNINDENT
.INDENT 7.0
.TP
.B transform_intermediate_node(node, data)
Transform an intermediate node.
.UNINDENT
.INDENT 7.0
.TP
.B transform_packed_node(node, data)
Transform a packed node.
.UNINDENT
.INDENT 7.0
.TP
.B transform_token_node(node)
Transform a \fBToken\fP\&.
.UNINDENT
.UNINDENT
.SS TreeForestTransformer
.INDENT 0.0
.TP
.B class lark.parsers.earley_forest.TreeForestTransformer(tree_class=Tree, prioritizer=ForestSumVisitor(), resolve_ambiguity=True, use_cache=False)
A \fBForestTransformer\fP with a tree \fBTransformer\fP\-like interface. By default, it will construct a tree.
.sp
Methods provided via inheritance are called based on the rule/symbol names of nodes in the forest.
.sp
Methods that act on rules will receive a list of the results of the transformations of the rule\(aqs children. By default, these are trees and tokens.
.sp
Methods that act on tokens will receive a token.
.sp
Alternatively, methods that act on rules may be annotated with \fBhandles_ambiguity\fP\&. In this case, the function will receive a list of all the transformations of all the derivations of the rule. By default, this is a list of trees where each tree.data is equal to the rule name or one of its aliases.
.sp
Non\-tree transformations are made possible by overriding \fB__default__\fP, \fB__default_token__\fP, and \fB__default_ambig__\fP\&.
.sp
\fBNOTE:\fP
.INDENT 7.0
.INDENT 3.5
Tree shaping features such as inlined rules and token filtering are not built into the transformation. Positions are also not propagated.
.UNINDENT
.UNINDENT
.INDENT 7.0
.TP
.B Parameters
.INDENT 7.0
.IP \(bu 2
\fBtree_class\fP \-\- The tree class to use for construction
.IP \(bu 2
\fBprioritizer\fP \-\- A \fBForestVisitor\fP that manipulates the priorities of nodes in the SPPF.
.IP \(bu 2
\fBresolve_ambiguity\fP \-\- If True, ambiguities will be resolved based on priorities.
.IP \(bu 2
\fBuse_cache\fP (\fIbool\fP) \-\- If True, caches the results of some transformations, potentially improving performance when \fBresolve_ambiguity==False\fP\&. Only use if you know what you are doing: i.e. all transformation functions are pure and referentially transparent.
.UNINDENT
.UNINDENT
.INDENT 7.0
.TP
.B __default__(name, data)
Default operation on tree (for override).
.sp
Returns a tree with the given name, with \fBdata\fP as its children.
.UNINDENT
.INDENT 7.0
.TP
.B __default_ambig__(name, data)
Default operation on ambiguous rule (for override).
.sp
Wraps data in an \(aq_ambig_\(aq node if it contains more than one element.
.UNINDENT
.INDENT 7.0
.TP
.B __default_token__(node)
Default operation on \fBToken\fP (for override).
.sp
Returns \fBnode\fP\&.
.UNINDENT
.UNINDENT
.SS handles_ambiguity
.INDENT 0.0
.TP
.B lark.parsers.earley_forest.handles_ambiguity(func)
Decorator for methods of subclasses of \fBTreeForestTransformer\fP\&. Denotes that the method should receive a list of transformed derivations.
.UNINDENT
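.sp
A short sketch of a custom transformation (the rule name \fBab\fP refers to the ambiguous grammar above):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
from lark.parsers.earley_forest import (
    TreeForestTransformer, handles_ambiguity)

class PickFirst(TreeForestTransformer):
    @handles_ambiguity
    def ab(self, derivations):
        # one transformed result per derivation of \(aqab\(aq
        return derivations[0]

tree = PickFirst(resolve_ambiguity=False).transform(forest)
.EE
.UNINDENT
.UNINDENT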
.SH TOOLS (STAND-ALONE, NEARLEY)
.SS Stand\-alone parser
.sp
Lark can generate a stand\-alone LALR(1) parser from a grammar.
.sp
The resulting module provides the same interface as Lark, but with a fixed grammar, and reduced functionality.
.sp
Run using:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
python \-m lark.tools.standalone
.EE
.UNINDENT
.UNINDENT
.sp
For a play\-by\-play, read the \fI\%tutorial\fP\&.
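.sp
For example, assuming a grammar file \fBmy_grammar.lark\fP and the generated entry point \fBLark_StandAlone\fP (the file names here are illustrative):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Generate the parser module:
#   python \-m lark.tools.standalone my_grammar.lark > my_parser.py

# Then use it like a regular (fixed\-grammar) Lark instance:
from my_parser import Lark_StandAlone

parser = Lark_StandAlone()
tree = parser.parse(\(aqsome input\(aq)
print(tree.pretty())
.EE
.UNINDENT
.UNINDENT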
.SS Importing grammars from Nearley.js
.sp
Lark comes with a tool to convert grammars from \fI\%Nearley\fP, a popular Earley library for Javascript. It uses \fI\%Js2Py\fP to convert and run the Javascript postprocessing code segments.
.SS Requirements
.INDENT 0.0
.IP \(bu 2
Install Lark with the \fBnearley\fP component:
.UNINDENT
.INDENT 0.0
.INDENT 3.5
.sp
.EX
pip install lark[nearley]
.EE
.UNINDENT
.UNINDENT
.INDENT 0.0
.IP \(bu 2
Acquire a copy of the Nearley codebase. This can be done using:
.UNINDENT
.INDENT 0.0
.INDENT 3.5
.sp
.EX
git clone https://github.com/Hardmath123/nearley
.EE
.UNINDENT
.UNINDENT
.SS Usage
.sp
The tool can be run using:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
python \-m lark.tools.nearley
.EE
.UNINDENT
.UNINDENT
.sp
Here\(aqs an example of how to import Nearley\(aqs calculator example into Lark:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
git clone https://github.com/Hardmath123/nearley
python \-m lark.tools.nearley nearley/examples/calculator/arithmetic.ne main ./nearley > ncalc.py
.EE
.UNINDENT
.UNINDENT
.sp
You can use the output as a regular Python module:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
>>> import ncalc
>>> ncalc.parse(\(aqsin(pi/4) ^ e\(aq)
0.38981434460254655
.EE
.UNINDENT
.UNINDENT
.sp
The tool also has experimental support for converting newer JavaScript (ES6+), activated with the \fB\-\-es6\fP flag:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
git clone https://github.com/Hardmath123/nearley
python \-m lark.tools.nearley nearley/examples/calculator/arithmetic.ne main nearley \-\-es6 > ncalc.py
.EE
.UNINDENT
.UNINDENT
.SS Notes
.INDENT 0.0
.IP \(bu 2
Lark currently cannot import templates from Nearley
.IP \(bu 2
Lark currently cannot export grammars to Nearley
.UNINDENT
.sp
These might get added in the future, if enough users ask for them.
.sp
Lark is a modern parsing library for Python. Lark can parse any context\-free grammar.
.sp
Lark provides:
.INDENT 0.0
.IP \(bu 2
Advanced grammar language, based on EBNF
.IP \(bu 2
Three parsing algorithms to choose from: Earley, LALR(1) and CYK
.IP \(bu 2
Automatic tree construction, inferred from your grammar
.IP \(bu 2
Fast unicode lexer with regexp support, and automatic line\-counting
.UNINDENT
.SH INSTALL LARK
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ pip install lark
.EE
.UNINDENT
.UNINDENT
.SH SYNTAX HIGHLIGHTING
.INDENT 0.0
.IP \(bu 2
\fI\%Sublime Text & TextMate\fP
.IP \(bu 2
\fI\%Visual Studio Code\fP (or install through the vscode plugin system)
.IP \(bu 2
\fI\%Intellij & PyCharm\fP
.IP \(bu 2
\fI\%Vim\fP
.IP \(bu 2
\fI\%Atom\fP
.UNINDENT
.SH RESOURCES
.INDENT 0.0
.IP \(bu 2
\fI\%Philosophy\fP
.IP \(bu 2
\fI\%Features\fP
.IP \(bu 2
\fI\%Examples\fP
.IP \(bu 2
\fI\%Third\-party examples\fP
.IP \(bu 2
\fI\%Online IDE\fP
.IP \(bu 2
Tutorials
.INDENT 2.0
.IP \(bu 2
\fI\%How to write a DSL\fP \- Implements a toy LOGO\-like language with an interpreter
.IP \(bu 2
\fI\%JSON parser \- Tutorial\fP \- Teaches you how to use Lark
.IP \(bu 2
Unofficial
.INDENT 2.0
.IP \(bu 2
\fI\%Program Synthesis is Possible\fP \- Creates a DSL for Z3
.IP \(bu 2
\fI\%Using Lark to Parse Text \- Robin Reynolds\-Haertle (PyCascades 2023)\fP (video presentation)
.UNINDENT
.UNINDENT
.IP \(bu 2
Guides
.INDENT 2.0
.IP \(bu 2
\fI\%How To Use Lark \- Guide\fP
.IP \(bu 2
\fI\%How to develop Lark \- Guide\fP
.UNINDENT
.IP \(bu 2
Reference
.INDENT 2.0
.IP \(bu 2
\fI\%Grammar Reference\fP
.IP \(bu 2
\fI\%Tree Construction Reference\fP
.IP \(bu 2
\fI\%Transformers & Visitors\fP
.IP \(bu 2
\fI\%Working with the SPPF\fP
.IP \(bu 2
\fI\%API Reference\fP
.IP \(bu 2
\fI\%Tools (Stand\-alone, Nearley)\fP
.IP \(bu 2
\fI\%Cheatsheet (PDF)\fP
.UNINDENT
.IP \(bu 2
Discussion
.INDENT 2.0
.IP \(bu 2
\fI\%Gitter\fP
.IP \(bu 2
\fI\%Forum (Google Groups)\fP
.UNINDENT
.UNINDENT
.SH AUTHOR
Erez Shinan
.SH COPYRIGHT
2024, Erez Shinan
.\" Generated by docutils manpage writer.
.