by R. Mark Volkmann, OCI Partner & Software Engineer

JUNE 2008


ANTLR is a big topic, so this is a big article. Topics are introduced in the order in which understanding them is essential to the example code that follows. Your questions and feedback are welcome.

Table of Contents

  • Part I - Overview
    • Introduction To ANTLR
    • ANTLR Overview
    • Use Cases
    • Other DSL Approaches
    • Definitions
    • General Steps
  • Part II - Jumping In
    • Example Description
    • Important Classes
    • Grammar Syntax
    • Grammar Options
    • Grammar Actions
  • Part III - Lexers
    • Lexer Rules
    • Whitespace and Comments
    • Our Lexer Grammar
  • Part IV - Parsers
    • Token Specifications
    • Rule Syntax
    • Creating ASTs
    • Rule Arguments and Return Values
    • Our Parser Grammar
  • Part V - Tree Parsers
    • Rule Actions
    • Attribute Scopes
    • Our Tree Grammar
  • Part VI - ANTLRWorks


Part I - Overview

Introduction to ANTLR

ANTLR is a free, open source parser generator tool that is used to implement "real" programming languages and domain-specific languages (DSLs). The name stands for ANother Tool for Language Recognition. Terence Parr, a professor at the University of San Francisco, implemented it (in Java) and maintains it. It can be downloaded from the ANTLR website, which also contains documentation, articles, examples, a wiki and information about mailing lists.

Many people feel that ANTLR is easier to use than other, similar tools. One reason for this is the syntax it uses to express grammars. Another is the existence of a graphical grammar editor and debugger called ANTLRWorks. Jean Bovet, a former master's student at the University of San Francisco who worked with Terence, implemented it (using Java Swing) and maintains it.

A brief word about conventions in this article... ANTLR grammar syntax makes frequent use of the characters [ ] and { }. When describing a placeholder we will use italics rather than surrounding it with { }. When describing something that's optional, we'll follow it with a question mark rather than surrounding it with [ ].

ANTLR Overview

ANTLR uses Extended Backus-Naur Form (EBNF) grammars, which can directly express optional and repeated elements. BNF grammars require a more verbose syntax to express these. EBNF grammars also support "subrules," which are parenthesized groups of elements.

ANTLR supports infinite lookahead for selecting the rule alternative that matches the portion of the input stream being evaluated. The technical way of stating this is that ANTLR supports LL(*). An LL(k) parser is a top-down parser that parses from left to right, constructs a leftmost derivation of the input, and looks ahead k tokens when selecting between rule alternatives. The * means any number of lookahead tokens. Another type of parser, LR(k), is a bottom-up parser that parses from left to right and constructs a rightmost derivation of the input. LL parsers can't handle left-recursive rules, so those must be avoided when writing ANTLR grammars. Most people find LL grammars easier to understand than LR grammars. See Wikipedia for more detailed descriptions of LL and LR parsers.
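Left recursion is easiest to see with a small sketch. The commented-out rule below refers to itself as its own leftmost element, which an LL parser cannot handle; the second form uses EBNF repetition to express the same language in a way ANTLR accepts (the rule names here are illustrative, not part of the Math grammar developed later):

```antlr
// Left-recursive: an LL parser would recurse forever trying to match expr.
// expr: expr '+' term | term;

// Equivalent iterative form using EBNF repetition, which ANTLR handles.
expr: term ('+' term)*;
```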

ANTLR supports three kinds of predicates (syntactic, validating semantic and gated semantic) that aid in resolving ambiguities. These allow rules that are not based strictly on input syntax.

While ANTLR itself is implemented in Java, it generates code in many target languages including Java, Ruby, Python, C, C++, C# and Objective-C.

There are IDE plug-ins available for working with ANTLR inside IDEA and Eclipse, but not yet for NetBeans or other IDEs.

Use Cases

There are three primary use cases for ANTLR.

The first is implementing "validators." These generate code that validates that input obeys grammar rules.

The second is implementing "processors." These generate code that validates and processes input. They can perform calculations, update databases, read configuration files into runtime data structures, etc. The Math example developed later in this article is a processor.

The third is implementing "translators." These generate code that validates and translates input into another format such as a programming language or bytecode.

Later we'll discuss "actions" and "rewrite rules." It's useful to point out where these are used in the three use cases above. Grammars for validators don't use actions or rewrite rules. Grammars for processors use actions, but not rewrite rules. Grammars for translators use actions (containing printlns) and/or rewrite rules.

Other DSL Approaches

Dynamic languages like Ruby and Groovy can be used to implement many DSLs. However, when they are used, the DSLs have to live within the syntax rules of the language. For example, such DSLs often require dots between object references and method names, parameters separated by commas, and blocks of code surrounded by curly braces or do/end keywords. Using a tool like ANTLR to implement a DSL provides maximum control over the syntax of the DSL.


Definitions

Lexer:
converts a stream of characters to a stream of tokens (ANTLR token objects know their start/stop character stream index, line number, index within the line, and more)
Parser:
processes a stream of tokens, possibly creating an AST
Abstract Syntax Tree (AST):
an intermediate tree representation of the parsed input that is simpler to process than the stream of tokens and can be efficiently processed multiple times
Tree Parser:
processes an AST
StringTemplate:
a library that supports using templates with placeholders for outputting text (ex. Java source code)

An input character stream is fed into the lexer. The lexer converts this to a stream of tokens that is fed to the parser. The parser often constructs an AST which is fed to the tree parser. The tree parser processes the AST and optionally produces text output, possibly using StringTemplate.
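The pipeline described above can be sketched in Java with the ANTLR 3 runtime. This is a hedged sketch, not runnable on its own: it assumes the ANTLR jar is on the classpath, the generated classes MathLexer, MathParser and MathTree described later in this article, and a driver class name (Processor) and input file name of our own choosing.

```java
import org.antlr.runtime.*;
import org.antlr.runtime.tree.*;

public class Processor {
    public static void main(String[] args) throws Exception {
        // characters -> tokens
        CharStream chars = new ANTLRFileStream("input.txt");
        MathLexer lexer = new MathLexer(chars);
        CommonTokenStream tokens = new CommonTokenStream(lexer);

        // tokens -> AST
        MathParser parser = new MathParser(tokens);
        MathParser.script_return result = parser.script();
        CommonTree ast = (CommonTree) result.getTree();

        // AST -> evaluation and output
        CommonTreeNodeStream nodes = new CommonTreeNodeStream(ast);
        MathTree walker = new MathTree(nodes);
        walker.script();
    }
}
```

Note how each stage's output type is the next stage's input type, matching the diagram of important classes later in the article.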


General Steps

The general steps involved in using ANTLR include the following.

  1. Write the grammar using one or more files.
    A common approach is to use three grammar files, each focusing on a specific aspect of the processing. The first is the lexer grammar, which creates tokens from text input. The second is the parser grammar, which creates an AST from tokens. The third is the tree parser grammar, which processes an AST. This results in three relatively simple grammar files as opposed to one complex grammar file.
  2. Optionally write StringTemplate templates for producing output.
  3. Debug the grammar using ANTLRWorks.
  4. Generate classes from the grammar. These validate that text input conforms to the grammar and execute target language "actions" specified in the grammar.
  5. Write an application that uses the generated classes.
  6. Feed the application text that conforms to the grammar.
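Concretely, steps 4 through 6 might look like the following from a command line. This is a sketch under assumptions: the jar name, grammar file names and application class name are illustrative, and org.antlr.Tool is the ANTLR 3 code-generation entry point.

```shell
# Step 4: generate MathLexer.java, MathParser.java and MathTree.java.
java -cp antlr-3.1.jar org.antlr.Tool MathLexer.g MathParser.g MathTree.g

# Compile the generated classes along with the ones we wrote.
javac -cp antlr-3.1.jar *.java

# Step 6: feed the application a file of input that conforms to the grammar.
java -cp .:antlr-3.1.jar com.ociweb.math.Processor input.txt
```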


Part II - Jumping In

Example Description

Enough background information, let's create a language!

Here's a list of features we want our language to have, all of which appear in the example input below: defining variables, defining functions from polynomials, evaluating functions, printing the derivative of a function, combining two functions with addition or subtraction, printing values and literal strings, and listing the variables and functions that are currently defined.

Here's some example input.

a = 3.14
f(x) = 3x^2 - 4x + 2
print "The value of f for " a " is " f(a)
print "The derivative of " f() " is " f'()
list variables
list functions
g(y) = 2y^3 + 6y - 5
h = f + g
print h()

Here's the output that would be produced.

The value of f for 3.14 is 19.0188
The derivative of f(x) = 3x^2 - 4x + 2 is f'(x) = 6x - 4
# of variables defined: 1
a = 3.14
# of functions defined: 1
f(x) = 3x^2 - 4x + 2
h(x) = 2x^3 + 3x^2 + 2x - 3

Here's the AST we'd like to produce for the input above, drawn by ANTLRWorks. It's split into three parts because the image is really wide. The "nil" root node is automatically supplied by ANTLR. Note the horizontal line under the "nil" root node that connects the three graphics. Nodes with uppercase names are "imaginary nodes" added for the purpose of grouping other nodes. We'll discuss those in more detail later.

[Three ANTLRWorks images: AST part 1, AST part 2, AST part 3]

Important Classes

The diagram below shows the relationships between the most important classes used in this example.

[Image: diagram of the important classes]

Note the key in the upper-left corner of the diagram that distinguishes between classes provided by ANTLR, classes generated from our grammar by ANTLR, and classes we wrote manually.

Grammar Syntax

The syntax of an ANTLR grammar is described below.

  1. grammar-type? grammar grammar-name;
  2. grammar-options?
  3. token-spec?
  4. attribute-scopes?
  5. grammar-actions?
  6. rule+

Comments in an ANTLR grammar use the same syntax as Java. There are three types of grammars: lexer, parser and tree. If a grammar-type isn't specified, it defaults to a combined lexer and parser. The name of the file containing the grammar must match the grammar-name and have a ".g" extension. The classes generated by ANTLR will contain a method for each rule in the grammar. Each of the elements of grammar syntax above will be discussed in the order they are needed to implement our Math language.

Grammar Options

Grammar options include the following:

  • AST node type: ASTLabelType = CommonTree
  • infinite lookahead: backtrack = true
  • limited lookahead: k = integer
  • output type: output = AST | template
  • token vocabulary: tokenVocab = grammar-name

Grammar options are specified using the following syntax. Note that quotes aren't needed around single word values.

options {
  name = 'value';
  . . .
}

Grammar Actions

Grammar actions add code to the generated code. There are three places where code can be added by a grammar action.

  1. Before the generated class definition:
    This is commonly used to specify a Java package name and import classes in other packages. The syntax for adding code here is @header { ... }. In a combined lexer/parser grammar, this only affects the generated parser class. To affect the generated lexer class, use @lexer::header { ... }.
  2. Inside the generated class definition:
    This is commonly used to define constants, attributes and methods accessible to all rule methods in the generated classes. It can also be used to override methods in the superclasses of the generated classes.
    The syntax for adding code here is @members { ... }. In a combined lexer/parser grammar, this only affects the generated parser class. To affect the generated lexer class, use @lexer::members { ... }.
  3. Inside generated methods:
    The catch blocks for the try block in the methods generated for each rule can be customized. One use for this is to stop processing after the first error is encountered rather than attempting to recover by skipping unrecognized tokens.
    The syntax for adding catch blocks is @rulecatch { catch-blocks }
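As a sketch of that "stop at the first error" use case: the generated rule methods already declare that they throw RecognitionException, so a rethrowing catch block compiles. The body here is illustrative, not the only possible handling.

```antlr
// Replace the default error-recovery catch blocks in every rule method
// with one that rethrows, so processing stops at the first error.
@rulecatch {
  catch (RecognitionException e) {
    throw e; // rethrow rather than recovering
  }
}
```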


Part III - Lexers

Lexer Rules

A lexer rule or token specification is needed for every kind of token to be processed by the parser grammar. The names of lexer rules must start with an uppercase letter and are typically all uppercase. A lexer rule can be defined in terms of literal characters and strings, character ranges, alternatives of these, and references to other lexer rules.

A lexer rule cannot be defined with a regular expression.

When the lexer chooses the next lexer rule to apply, it chooses the one that matches the most characters. If there is a tie then the one listed first is used, so order matters.
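The longest-match rule and its "first listed wins" tiebreak can be illustrated outside of ANTLR with a few lines of plain Java. This is a hypothetical simulation, not ANTLR's actual matching engine; the rule table is modeled loosely on the HELP and NAME rules from the lexer grammar ahead, plus two made-up operator rules.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MaximalMunch {
    // Rules in declaration order: { name, pattern matched at the input start }.
    static final String[][] RULES = {
        {"HELP", "help"},
        {"NAME", "[a-zA-Z][a-zA-Z0-9_]*"},
        {"LE",   "<="},
        {"LT",   "<"}
    };

    // Returns the name of the rule that wins for the start of the input:
    // the rule matching the most characters, earliest-listed on a tie.
    static String match(String input) {
        String best = null;
        int bestLen = -1;
        for (String[] rule : RULES) {
            Matcher m = Pattern.compile(rule[1]).matcher(input);
            // Strictly longer matches win; ties keep the earlier rule.
            if (m.lookingAt() && m.end() > bestLen) {
                best = rule[0];
                bestLen = m.end();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(match("<= 5"));   // LE: matches more characters than LT
        System.out.println(match("help"));   // HELP: ties with NAME, but is listed first
        System.out.println(match("helper")); // NAME: matches 6 characters vs HELP's 4
    }
}
```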

A lexer rule can refer to other lexer rules. Often they reference "fragment" lexer rules. These do not result in creation of tokens and are only present to simplify the definition of other lexer rules. In the example ahead, LETTER and DIGIT are fragment lexer rules.

Whitespace and Comments

Whitespace and comments in the input are handled in lexer rules. There are two common options for handling these: either throw them away or write them to a different "channel" that is not automatically inspected by the parser. To throw them away, use "skip();". To write them to the special "hidden" channel, use "$channel = HIDDEN;".

Here are examples of lexer rules that handle whitespace and comments.

// Send runs of space and tab characters to the hidden channel.
WHITESPACE: (' ' | '\t')+ { $channel = HIDDEN; };

// Treat runs of newline characters as a single NEWLINE token.
// On some platforms, newlines are represented by a \n character.
// On others they are represented by a \r and a \n character.
NEWLINE: ('\r'? '\n')+;

// Single-line comments begin with //, are followed by any characters
// other than those in a newline, and are terminated by newline characters.
SINGLE_COMMENT: '//' ~('\r' | '\n')* NEWLINE { skip(); };

// Multi-line comments are delimited by /* and */
// and are optionally followed by newline characters.
MULTI_COMMENT options { greedy = false; }
  : '/*' .* '*/' NEWLINE? { skip(); };

When the greedy option is set to true, the lexer matches as much input as possible. When false, it stops when input matches the next element in the lexer rule. The greedy option defaults to true except when the patterns ".*" and ".+" are used. For this reason, it didn't need to be specified in the example above.

If newline characters are to be used as statement terminators then they shouldn't be skipped or hidden since the parser needs to see them.

Our Lexer Grammar

lexer grammar MathLexer;

// We want the generated lexer class to be in this package.
@header { package com.ociweb.math; }

APOSTROPHE: '\''; // for derivative
ASSIGN: '=';
CARET: '^'; // for exponentiation
FUNCTIONS: 'functions'; // for list command
HELP: '?' | 'help';
LEFT_PAREN: '(';
LIST: 'list';
PRINT: 'print';
RIGHT_PAREN: ')';
SIGN: '+' | '-';
VARIABLES: 'variables'; // for list command

NUMBER: FLOAT | INTEGER;
fragment FLOAT: INTEGER '.' '0'..'9'+;
fragment INTEGER: '0' | SIGN? '1'..'9' '0'..'9'*;

NAME: LETTER (LETTER | DIGIT | '_')*;

STRING_LITERAL: '"' (SPACE | LETTER | DIGIT | SYMBOL)* '"';

fragment LETTER: LOWER | UPPER;
fragment LOWER: 'a'..'z';
fragment UPPER: 'A'..'Z';
fragment DIGIT: '0'..'9';
fragment SPACE: ' ' | '\t';

// Note that SYMBOL does not include the double-quote character.
fragment SYMBOL: '!' | '#'..'/' | ':'..'@' | '['..'`' | '{'..'~';

// Windows uses \r\n. UNIX and Mac OS X use \n.
// To use newlines as a terminator,
// they can't be written to the hidden channel!
NEWLINE: ('\r'? '\n')+;
WHITESPACE: SPACE+ { $channel = HIDDEN; };

We'll be looking at the parser grammar soon. When parser rule alternatives contain literal strings, they are converted into references to automatically generated lexer rules. For example, we could eliminate the ASSIGN lexer rule above and change ASSIGN to '=' in the parser grammar.


Part IV - Parsers

Token Specifications

The lexer creates tokens for all input character sequences that match lexer rules. It can be useful to create other tokens that either don't exist in the input (imaginary) or have a better name than what is found in the input. Imaginary tokens are often used to group other tokens. In the parser grammar ahead, the tokens that play this role are DEFINE, POLYNOMIAL, TERM, FUNCTION, DERIVATIVE and COMBINE.

The syntax for specifying these kinds of tokens in a parser grammar is:

tokens {
  imaginary-name;
  better-name = 'input-name';
}

Rule Syntax

The syntax for defining rules is

fragment? rule-name arguments?
  (returns return-values)?
  throws-spec?
  rule-options?
  rule-attribute-scopes?
  rule-actions?
  : token-sequence-1
  | token-sequence-2
  ...
  ;
  exceptions-spec?

The fragment keyword only appears at the beginning of lexer rules that are used as fragments (described earlier).

Rule options include backtrack and k which customize those options for a specific rule instead of using the grammar-wide values specified as grammar options. They are specified using the syntax options { ... }.

The token sequences are alternatives that can be selected by the rule. Each element in the sequences can be followed by an action which is target language code (such as Java) in curly braces. The code is executed immediately after a preceding element is matched by input.

The optional exceptions-spec customizes exception handling for this rule.

Elements in a token sequence can be assigned to variables so they can be accessed in actions. To obtain the text value of a token that is referred to by a variable, use $variable.text. There are several examples of this in the parser grammar that follows.

Creating ASTs

Parser grammars often create ASTs. To do this, the grammar option output must be set to AST.

There are two approaches for creating ASTs. The first is to use "rewrite rules". These appear after a rule alternative. This is the recommended approach in most cases. The syntax of a rewrite rule is

-> ^(parent child-1 child-2 ... child-n)

The second approach for creating ASTs is to use AST operators. These appear in a rule alternative, immediately after tokens. They work best for sequences like mathematical expressions. There are two AST operators. When a ^ is used, a new root node is created for all child nodes at the same level. When a ! is used, no node is created. This is often used for bits of syntax that aren't needed in the AST such as parentheses, commas and semicolons. When a token isn't followed by one of them, a new child node is created for that token using the current root node as its parent.
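A sketch of both AST operators on hypothetical rules (these are not part of the Math grammar; PLUS, LEFT_PAREN, RIGHT_PAREN and NUMBER are assumed to be defined in a lexer grammar):

```antlr
// ^ makes each PLUS token the root of the nodes around it,
// so "1 + 2 + 3" becomes a tree of nested PLUS nodes.
expr: term (PLUS^ term)*;

// ! drops the parentheses, which carry no meaning in the AST.
atom: NUMBER
    | LEFT_PAREN! expr RIGHT_PAREN!
    ;
```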

A rule can use both of these approaches, but each rule alternative can only use one approach.

Rule Arguments and Return Values

The following syntax is used to declare rule arguments and return types.

rule-name[type1 name1, type2 name2, ...]
  returns [type1 name1, type2 name2, ...] :
  ...
;

The names after the rule name are arguments and the names after the returns keyword are return values.

Note that rules can return more than one value. ANTLR generates a class to use as the return type of the generated method for the rule. Instances of this class hold all the return values. The generated method name matches the rule name. The name of the generated return type class is the rule name with "_return" appended.
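As a hedged sketch (this rule is hypothetical, not part of the Math grammar), a rule with one argument and two return values might look like:

```antlr
// Takes a scale factor; returns both the scaled numeric value
// and its formatted text.
scaled[double factor] returns [double value, String text]
  : n=NUMBER {
      $value = Double.parseDouble($n.text) * factor;
      $text = Double.toString($value);
    }
  ;
```

Following the convention described above, ANTLR would generate a scaled_return class with value and text fields, and a scaled(double factor) method that returns an instance of it.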

Our Parser Grammar

parser grammar MathParser;

options {
  // We're going to output an AST.
  output = AST;

  // We're going to use the tokens defined in our MathLexer grammar.
  tokenVocab = MathLexer;
}

// These are imaginary tokens that will serve as parent nodes
// for grouping other tokens in our AST.
tokens {
  COMBINE;
  DEFINE;
  DERIVATIVE;
  FUNCTION;
  POLYNOMIAL;
  TERM;
}

// We want the generated parser class to be in this package.
@header { package com.ociweb.math; }

// This is the "start rule".
// EOF is a predefined token that represents the end of input.
// The "start rule" should end with this.
// Note the use of the ! AST operator
// to avoid adding the EOF token to the AST.
script: statement* EOF!;

statement: assign | define | interactiveStatement | combine | print;

// These kinds of statements only need to be supported
// when reading input from the keyboard.
interactiveStatement: help | list;

// Examples of input that match this rule include
// "a = 19", "a = b", "a = f(2)" and "a = f(b)".
assign: NAME ASSIGN value terminator -> ^(ASSIGN NAME value);

value: NUMBER | NAME | functionEval;

// A parenthesized group in a rule alternative is called a "subrule".
// Examples of input that match this rule include "f(2)" and "f(b)".
functionEval
  : fn=NAME LEFT_PAREN v=(NUMBER | NAME) RIGHT_PAREN -> ^(FUNCTION $fn $v);

// EOF cannot be used in lexer rules, so we made this a parser rule.
// EOF is needed here for interactive mode where each line entered ends in EOF
// and for file mode where the last line ends in EOF.
terminator: NEWLINE | EOF;

// Examples of input that match this rule include
// "f(x) = 3x^2 - 4" and "g(y) = y^2 - 2y + 1".
// Note that two parameters are passed to the polynomial rule.
define
  : fn=NAME LEFT_PAREN fv=NAME RIGHT_PAREN ASSIGN
    polynomial[$fn.text, $fv.text] terminator
  -> ^(DEFINE $fn $fv polynomial);

// Examples of input that match this rule include
// "3x^2 - 4" and "y^2 - 2y + 1".
// fnt = function name text; fvt = function variable text
// Note that two parameters are passed in each invocation of the term rule.
polynomial[String fnt, String fvt]
  : term[$fnt, $fvt] (SIGN term[$fnt, $fvt])*
  -> ^(POLYNOMIAL term (SIGN term)*);

// Examples of input that match this rule include
// "4", "4x", "x^2" and "4x^2".
// fnt = function name text; fvt = function variable text
term[String fnt, String fvt]
  // tv = term variable
  : c=coefficient? (tv=NAME e=exponent?)?
    // What follows is a validating semantic predicate.
    // If it evaluates to false, a FailedPredicateException will be thrown.
    // It is testing whether the term variable matches the function variable.
    { tv == null ? true : ($tv.text).equals($fvt) }?
  -> ^(TERM $c? $tv? $e?)
  ;
  // This catches bad function definitions such as
  // f(x) = 2y
  catch [FailedPredicateException fpe] {
    String tvt = $tv.text;
    String msg = "In function \"" + fnt +
      "\" the term variable \"" + tvt +
      "\" doesn't match function variable \"" + fvt + "\".";
    throw new RuntimeException(msg);
  }

coefficient: NUMBER;

// An example of input that matches this rule is "^2".
exponent: CARET NUMBER -> NUMBER;

// Inputs that match this rule are "?" and "help".
help: HELP terminator -> HELP;

// Inputs that match this rule include
// "list functions" and "list variables".
list
  : LIST listOption terminator -> ^(LIST listOption);

// Inputs that match this rule are "functions" and "variables".
listOption: FUNCTIONS | VARIABLES;

// Examples of input that match this rule include
// "h = f + g" and "h = f - g".
combine
  : fn1=NAME ASSIGN fn2=NAME op=SIGN fn3=NAME terminator
  -> ^(COMBINE $fn1 $op $fn2 $fn3);

// An example of input that matches this rule is
// print "f(" a ") = " f(a)
print
  : PRINT printTarget* terminator -> ^(PRINT printTarget*);

// Examples of input that match this rule include
// 19, 3.14, "my text", a, f(), f(2), f(a) and f'().
printTarget
  : NUMBER -> NUMBER
  | sl=STRING_LITERAL -> $sl
  | NAME -> NAME
  // This is a function reference to print a string representation.
  | fn=NAME LEFT_PAREN RIGHT_PAREN -> ^(FUNCTION $fn)
  | functionEval
  | derivative
  ;

// An example of input that matches this rule is "f'()".
derivative
  : fn=NAME APOSTROPHE LEFT_PAREN RIGHT_PAREN -> ^(DERIVATIVE $fn);

Part V - Tree Parsers

Rule Actions

Rule actions add code before and/or after the generated code in the method generated for a rule. They can be used for AOP-like wrapping of methods. The syntax @init { ...code... } inserts the contained code before the generated code. The syntax @after { ...code... } inserts the contained code after the generated code. The tree grammar rules polynomial and term ahead demonstrate using @init.

Attribute Scopes

Data is shared between rules in two ways: by passing parameters and/or returning values, or by using attributes. These are the same as the options for sharing data between Java methods in the same class. Attributes can be accessible to a single rule (using @init to declare them), a rule and all rules invoked by it (rule scope), or by all rules that request the named global scope of the attributes.

Attribute scopes define collections of attributes that can be accessed by multiple rules. There are two kinds, global and rule scopes.

Global scopes are named scopes that are defined outside any rule. To request access to a global scope within a rule, add scope name; to the rule. To access multiple global scopes, list their names separated by spaces. The following syntax is used to define a global scope.

scope name {
  type variable;
  ...
}

Rule scopes are unnamed scopes that are defined inside a rule. Rule actions in the defining rule and rules invoked by it access attributes in the scope with $rule-name::variable. The following syntax is used to define a rule scope.

scope {
  type variable;
  ...
}

To initialize an attribute, use an @init rule action.

Our Tree Grammar

  1. tree grammar MathTree;
  3. options {
  4. // We're going to process an AST whose nodes are of type CommonTree.
  5. ASTLabelType = CommonTree;
  7. // We're going to use the tokens defined in
  8. // both our MathLexer and MathParser grammars.
  9. // The MathParser grammar already includes
  10. // the tokens defined in the MathLexer grammar.
  11. tokenVocab = MathParser;
  12. }
  14. @header {
  15. // We want the generated parser class to be in this package.
  16. package com.ociweb.math;
  18. import java.util.Map;
  19. import java.util.TreeMap;
  20. }
  22. // We want to add some fields and methods to the generated class.
  23. @members {
  24. // We're using TreeMaps so the entries are sorted on their keys
  25. // which is desired when listing them.
  26. private Map<String, Function> functionMap = new TreeMap<String, Function>();
  27. private Map<String, Double> variableMap = new TreeMap<String, Double>();
  29. // This adds a Function to our function Map.
  30. private void define(Function function) {
  31. functionMap.put(function.getName(), function);
  32. }
  34. // This retrieves a Function from our function Map
  35. // whose name matches the text of a given AST tree node.
  36. private Function getFunction(CommonTree nameNode) {
  37. String name = nameNode.getText();
  38. Function function = functionMap.get(name);
  39. if (function == null) {
  40. String msg = "The function \"" + name + "\" is not defined.";
  41. throw new RuntimeException(msg);
  42. }
  43. return function;
  44. }
  46. // This evaluates a function whose name matches the text
  47. // of a given AST tree node for a given value.
  48. private double evalFunction(CommonTree nameNode, double value) {
  49. return getFunction(nameNode).getValue(value);
  50. }
  52. // This retrieves the value of a variable from our variable Map
  53. // whose name matches the text of a given AST tree node.
  54. private double getVariable(CommonTree nameNode) {
  55. String name = nameNode.getText();
  56. Double value = variableMap.get(name);
  57. if (value == null) {
  58. String msg = "The variable \"" + name + "\" is not set.";
  59. throw new RuntimeException(msg);
  60. }
  61. return value;
  62. }
  64. // This just shortens the code for print calls.
  65. private static void out(Object obj) {
  66. System.out.print(obj);
  67. }
  69. // This just shortens the code for println calls.
  70. private static void outln(Object obj) {
  71. System.out.println(obj);
  72. }
  74. // This converts the text of a given AST node to a double.
  75. private double toDouble(CommonTree node) {
  76. double value = 0.0;
  77. String text = node.getText();
  78. try {
  79. value = Double.parseDouble(text);
  80. } catch (NumberFormatException e) {
  81. throw new RuntimeException("Cannot convert \"" + text + "\" to a double.");
  82. }
  83. return value;
  84. }
  86. // This replaces all escaped newline characters in a String
  87. // with unescaped newline characters.
  88. // It is used to allow newline characters to be placed in
  89. // literal Strings that are passed to the print command.
  90. private static String unescape(String text) {
  91. return text.replaceAll("\\\\n", "\n");
  92. }
  94. } // @members
  96. script: statement*;
  98. statement: assign | combine | define | interactiveStatement | print;
  100. // These kinds of statements only need to be supported
  101. // when reading input from the keyboard.
  102. interactiveStatement: help | list;
  104. // This adds a variable to the map.
  105. // Parts of rule alternatives can be assigned to variables (ex. v)
  106. // that are used to refer to them in rule actions.
  107. // Alternatively rule names (ex. NAME) can be used.
  108. // We could have used $value in place of $v below.
  109. assign: ^(ASSIGN NAME v=value) { variableMap.put($NAME.text, $v.result); };
  111. // This returns a value as a double.
  112. // The value can be a number, a variable name or a function evaluation.
  113. value returns [double result]
  114. : NUMBER { $result = toDouble($NUMBER); }
  115. | NAME { $result = getVariable($NAME); }
  116. | functionEval { $result = $functionEval.result; }
  117. ;
  119. // This returns the result of a function evaluation as a double.
  120. functionEval returns [double result]
  121. : ^(FUNCTION fn=NAME v=NUMBER) {
  122. $result = evalFunction($fn, toDouble($v));
  123. }
  124. | ^(FUNCTION fn=NAME v=NAME) {
  125. $result = evalFunction($fn, getVariable($v));
  126. }
  127. ;
  129. // This builds a Function object and adds it to the function map.
  130. define
  131. : ^(DEFINE name=NAME variable=NAME polynomial) {
  132. define(new Function($name.text, $variable.text, $polynomial.result));
  133. }
  134. ;
  136. // This builds a Polynomial object and returns it.
  137. polynomial returns [Polynomial result]
  138. // The "current" attribute in this rule scope is visible to
  139. // rules invoked by this one, such as term.
  140. scope { Polynomial current; }
  141. @init { $polynomial::current = new Polynomial(); }
  142. // There can be no sign in front of the first term,
  143. // so "" is passed to the term rule.
  144. // The coefficient of the first term can be negative.
  145. // The sign between terms is passed to
  146. // subsequent invocations of the term rule.
  147. : ^(POLYNOMIAL term[""] (s=SIGN term[$s.text])*) {
  148. $result = $polynomial::current;
  149. }
  150. ;
  152. // This builds a Term object and adds it to the current Polynomial.
  153. term[String sign]
  154. @init { boolean negate = "-".equals(sign); }
  155. : ^(TERM coefficient=NUMBER) {
  156. double c = toDouble($coefficient);
  157. if (negate) c = -c; // applies sign to coefficient
  158. $polynomial::current.addTerm(new Term(c));
  159. }
  160. | ^(TERM coefficient=NUMBER? variable=NAME exponent=NUMBER?) {
  161. double c = coefficient == null ? 1.0 : toDouble($coefficient);
  162. if (negate) c = -c; // applies sign to coefficient
  163. double exp = exponent == null ? 1.0 : toDouble($exponent);
  164. $polynomial::current.addTerm(new Term(c, $variable.text, exp));
  165. }
  166. ;
  168. // This outputs help on our language which is useful in interactive mode.
  169. help
  170. : HELP {
  171. outln("In the help below");
  172. outln("* fn stands for function name");
  173. outln("* n stands for a number");
  174. outln("* v stands for variable");
  175. outln("");
  176. outln("To define");
  177. outln("* a variable: v = n");
  178. outln("* a function from a polynomial: fn(v) = polynomial-terms");
  179. outln(" (for example, f(x) = 3x^2 - 4x + 1)");
  180. outln("* a function from adding or subtracting two others: " +
  181. "fn3 = fn1 +|- fn2");
  182. outln(" (for example, h = f + g)");
  183. outln("");
  184. outln("To print");
  185. outln("* a literal string: print \"text\"");
  186. outln("* a number: print n");
  187. outln("* the evaluation of a function: print fn(n | v)");
  188. outln("* the defintion of a function: print fn()");
  189. outln("* the derivative of a function: print fn'()");
  190. outln("* multiple items on the same line: print i1 i2 ... in");
  191. outln("");
  192. outln("To list");
  193. outln("* variables defined: list variables");
  194. outln("* functions defined: list functions");
  195. outln("");
  196. outln("To get help: help or ?");
  197. outln("");
  198. outln("To exit: exit or quit");
  199. }
  200. ;
  202. // This lists all the functions or variables that are currently defined.
  203. list
  204. : ^(LIST FUNCTIONS) {
  205. outln("# of functions defined: " + functionMap.size());
  206. for (Function function : functionMap.values()) {
  207. outln(function);
  208. }
  209. }
  210. | ^(LIST VARIABLES) {
  211. outln("# of variables defined: " + variableMap.size());
  212. for (String name : variableMap.keySet()) {
  213. double value = variableMap.get(name);
  214. outln(name + " = " + value);
  215. }
  216. }
  217. ;
// This adds or subtracts two functions to create a new one.
combine
  : ^(COMBINE fn1=NAME op=SIGN fn2=NAME fn3=NAME) {
      Function f2 = getFunction(fn2);
      Function f3 = getFunction(fn3);
      if ("+".equals($op.text)) {
        // "$fn1.text" is the name of the new function to create.
        define(f2.add($fn1.text, f3));
      } else if ("-".equals($op.text)) {
        define(f2.subtract($fn1.text, f3));
      } else {
        // This should never happen since SIGN is defined to be either "+" or "-".
        throw new RuntimeException(
          "The operator \"" + $op.text +
          "\" cannot be used for combining functions.");
      }
    }
  ;
// This prints a list of printTargets then prints a newline.
print
  : ^(PRINT printTarget*)
    { System.out.println(); }
  ;

// This prints a single printTarget without a newline.
// "out", "unescape", "getVariable", "getFunction", "evalFunction"
// and "toDouble" are methods we wrote that were defined
// in the @members block earlier.
printTarget
  : NUMBER { out($NUMBER); }
  | STRING_LITERAL {
      String s = unescape($STRING_LITERAL.text);
      out(s.substring(1, s.length() - 1)); // removes quotes
    }
  | NAME { out(getVariable($NAME)); }
  | ^(FUNCTION NAME) { out(getFunction($NAME)); }
  // The next line uses the return value named "result"
  // from the earlier rule named "functionEval".
  | functionEval { out($functionEval.result); }
  | derivative
  ;
// This prints the derivative of a function.
// This also could have been done in place in the printTarget rule.
derivative
  : ^(DERIVATIVE NAME) {
      out(getFunction($NAME).getDerivative());
    }
  ;


Part VI - ANTLRWorks

ANTLRWorks is a graphical grammar editor and debugger. It checks for grammar errors, including those beyond the syntax variety, such as conflicting rule alternatives, and highlights them. It can display a syntax diagram for a selected rule, and it provides a debugger that can step through the creation of parse trees and ASTs.

Rectangles in syntax diagrams correspond to fixed vocabulary symbols. Rounded rectangles correspond to variable symbols.

Here's an example of a syntax diagram for a selected lexer rule.

ANTLRWorks Lexer Rule Syntax Diagram

Here's an example of a syntax diagram for a selected parser rule.

ANTLRWorks Parser Rule Syntax Diagram

Here's an example of requesting a grammar check, followed by a successful result.

ANTLRWorks Check Grammar 1
ANTLRWorks Check Grammar 2

Using the ANTLRWorks debugger is simple when the lexer and parser rules are combined in a single grammar file (unlike our example, where they are separate). Press the Debug toolbar button (the one with a bug on it), enter input text or select an input file, select the start rule (which allows debugging a subset of the grammar) and press the OK button. Here's an example of entering the input for a different, simpler grammar that defines the lexer and parser rules in a single file:

ANTLRWorks Debugger 1

The debugger controls and output are displayed at the bottom of the ANTLRWorks window. Here's an example using that same, simpler grammar:

ANTLRWorks Debugger 2

Using the debugger when the lexer and parser rules are in separate files, as in our example, is a bit more complicated. See the ANTLR Wiki page titled "When do I need to use remote debugging?"


Part VII - Putting It All Together

Using Generated Classes

Next we need to write a class to utilize the classes generated by ANTLR. We'll call ours Processor. This class will use MathLexer (extends Lexer), MathParser (extends Parser) and MathTree (extends TreeParser). Note that the classes Lexer, Parser and TreeParser all extend the class BaseRecognizer. Our Processor class will also use other classes we wrote to model our domain. These classes are named Term, Function and Polynomial. We'll support two modes of operation, batch and interactive.
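The domain classes themselves are in the downloadable source rather than in this article. As a rough sketch of their shape, here is what Term might look like, based solely on the constructor call new Term(c, $variable.text, exp) seen in the tree grammar; the evaluate method and toString format are assumptions, not the article's actual implementation:

```java
// Hypothetical sketch of the Term domain class.
// A Term models coefficient * variable^exponent.
public class Term {
    private final double coefficient;
    private final String variable;
    private final double exponent;

    public Term(double coefficient, String variable, double exponent) {
        this.coefficient = coefficient;
        this.variable = variable;
        this.exponent = exponent;
    }

    // Evaluates this term for a given value of its variable.
    public double evaluate(double value) {
        return coefficient * Math.pow(value, exponent);
    }

    @Override
    public String toString() {
        return coefficient + variable + "^" + exponent;
    }
}
```

Function and Polynomial would then hold collections of Terms, with Polynomial exposing the addTerm method called from the tree grammar.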

Here's our Processor class.

package com.ociweb.math;

import java.util.Scanner;
import org.antlr.runtime.*;
import org.antlr.runtime.tree.*;

public class Processor {

  public static void main(String[] args)
  throws IOException, RecognitionException {
    if (args.length == 0) {
      new Processor().processInteractive();
    } else if (args.length == 1) { // name of file to process was passed in
      new Processor().processFile(args[0]);
    } else { // more than one command-line argument
      System.err.println("usage: java com.ociweb.math.Processor [file-name]");
    }
  }

  private void processFile(String filePath)
  throws IOException, RecognitionException {
    CommonTree ast = getAST(new FileReader(filePath));
    //System.out.println(ast.toStringTree()); // for debugging
    processAST(ast);
  }

  private CommonTree getAST(Reader reader)
  throws IOException, RecognitionException {
    MathParser tokenParser = new MathParser(getTokenStream(reader));
    MathParser.script_return parserResult =
      tokenParser.script(); // start rule method
    reader.close();
    return (CommonTree) parserResult.getTree();
  }

  private CommonTokenStream getTokenStream(Reader reader) throws IOException {
    MathLexer lexer = new MathLexer(new ANTLRReaderStream(reader));
    return new CommonTokenStream(lexer);
  }

  private void processAST(CommonTree ast) throws RecognitionException {
    MathTree treeParser = new MathTree(new CommonTreeNodeStream(ast));
    treeParser.script(); // start rule method
  }

  private void processInteractive()
  throws IOException, RecognitionException {
    MathTree treeParser = new MathTree(null); // a TreeNodeStream will be assigned later
    Scanner scanner = new Scanner(;

    while (true) {
      System.out.print("math> ");
      String line = scanner.nextLine().trim();
      if ("quit".equals(line) || "exit".equals(line)) break;
      processLine(treeParser, line);
    }
  }

  // Note that we can't create a new instance of MathTree for each
  // line processed because it maintains the variable and function Maps.
  private void processLine(MathTree treeParser, String line)
  throws RecognitionException {
    // Run the lexer and token parser on the line.
    MathLexer lexer = new MathLexer(new ANTLRStringStream(line));
    MathParser tokenParser = new MathParser(new CommonTokenStream(lexer));
    MathParser.statement_return parserResult =
      tokenParser.statement(); // start rule method

    // Use the token parser to retrieve the AST.
    CommonTree ast = (CommonTree) parserResult.getTree();
    if (ast == null) return; // line is empty

    // Use the tree parser to process the AST.
    treeParser.setTreeNodeStream(new CommonTreeNodeStream(ast));
    treeParser.statement(); // start rule method
  }

} // end of Processor class
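To try the batch mode, you could feed Processor a file such as the following. This is hypothetical input, composed from the commands described in the help text earlier, not a file shipped with the article:

```
f(x) = 3x^2 - 4x + 1
g(y) = -y^2 + 2y - 3
h = f + g
print "f = " f()
print "f(2) = " f(2)
print "f'(x) = " f'()
list functions
```

Running java com.ociweb.math.Processor with that file's path as its single argument would define the functions and print the requested output; running it with no arguments starts the interactive math> prompt instead.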

Ant Tips

Ant is a great tool for automating the tasks used to develop and test grammars, such as regenerating the lexer, parser and tree parser classes and compiling the results. For examples, download the source code from the URL listed at the end of this article and see the build.xml file.
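For instance, a target that runs the ANTLR Tool on a grammar might look something like the following. This is a hypothetical fragment, not taken from the article's build.xml; the target name, path id and file names are assumptions:

```xml
<!-- Hypothetical Ant target: runs the ANTLR Tool on a grammar file. -->
<target name="generate" description="generates lexer/parser classes from the grammar">
  <java classname="org.antlr.Tool" fork="true" failonerror="true">
    <classpath refid="antlr.classpath"/>  <!-- path id assumed defined elsewhere -->
    <arg value="-o"/>
    <arg value="src/com/ociweb/math"/>    <!-- output directory -->
    <arg value="Math.g"/>                 <!-- grammar file -->
  </java>
</target>
```

Making targets like this depend on one another (generate, then compile, then test) lets a single Ant invocation rebuild everything after a grammar change.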


Part VIII - Wrap Up

Hidden Tokens

By default the parser only processes tokens from the default channel. It can however request tokens from other channels such as the hidden channel. Tokens are assigned unique, sequential indexes regardless of the channel to which they are written. This allows parser code to determine the order in which the tokens were encountered, regardless of the channel to which they were written.

Here are some related public constants and methods from the Token class.

Here are some related public methods from the CommonTokenStream class, which implements the TokenStream interface.
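As a sketch of how this fits together: a lexer rule can divert whitespace to the hidden channel so the parser never sees it, while other tools can still recover it. The rule below uses ANTLR 3 syntax; the rule name WS is our choice, not something from our grammar:

```
WS : (' ' | '\t' | '\r' | '\n')+ { $channel = HIDDEN; };
```

On the Java side, Token defines the channel constants DEFAULT_CHANNEL and HIDDEN_CHANNEL, and a token's getChannel and getTokenIndex methods report where it went and where it appeared. CommonTokenStream buffers tokens from every channel, so code that needs the hidden tokens (for example, to preserve comments during translation) can walk the full token list and use the indexes to reconstruct the original order.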

Advanced Topics

We have demonstrated the basics of using ANTLR. For more, see the slides from the presentation on which this article was based; the page hosting the slides also links to the code presented in this article. The slides cover a number of advanced topics beyond the scope of this article.

Projects Using ANTLR

Many programming languages have been implemented using ANTLR. These include Boo, Groovy, Mantra, Nemerle and XRuby.

Many other kinds of tools use ANTLR in their implementation. These include Hibernate (for its HQL to SQL query translator), Intellij IDEA, Jazillian (translates COBOL, C and C++ to Java), JBoss Rules (was Drools), Keynote (from Apple), WebLogic (from Oracle), and many more.


Currently, only one book on ANTLR is available: "The Definitive ANTLR Reference" by Terence Parr, the creator of ANTLR, published by The Pragmatic Programmers. Terence is working on a second book for the same publisher that may be titled "ANTLR Recipes."


There you have it! ANTLR is a great tool for generating custom language parsers. We hope this article will make it easier to get started creating validators, processors and translators.


The Software Engineering Tech Trends (SETT) is a monthly newsletter featuring emerging trends in software engineering. 
