Source Code Cross Referenced for CodeGenerator.java in Parser » antlr-3.0.1 » org.antlr.codegen



0001:        /*
0002:        [The "BSD licence"]
0003:        Copyright (c) 2005-2006 Terence Parr
0004:        All rights reserved.
0005:
0006:        Redistribution and use in source and binary forms, with or without
0007:        modification, are permitted provided that the following conditions
0008:        are met:
0009:        1. Redistributions of source code must retain the above copyright
0010:        notice, this list of conditions and the following disclaimer.
0011:        2. Redistributions in binary form must reproduce the above copyright
0012:        notice, this list of conditions and the following disclaimer in the
0013:        documentation and/or other materials provided with the distribution.
0014:        3. The name of the author may not be used to endorse or promote products
0015:        derived from this software without specific prior written permission.
0016:
0017:        THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
0018:        IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
0019:        OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
0020:        IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
0021:        INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
0022:        NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
0023:        DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
0024:        THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
0025:        (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
0026:        THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
0027:         */
0028:        package org.antlr.codegen;
0029:
0030:        import antlr.RecognitionException;
0031:        import antlr.TokenStreamRewriteEngine;
0032:        import antlr.collections.AST;
0033:        import org.antlr.Tool;
0034:        import org.antlr.analysis.*;
0035:        import org.antlr.misc.BitSet;
0036:        import org.antlr.misc.*;
0037:        import org.antlr.stringtemplate.*;
0038:        import org.antlr.stringtemplate.language.AngleBracketTemplateLexer;
0039:        import org.antlr.tool.*;
0040:
0041:        import java.io.IOException;
0042:        import java.io.StringReader;
0043:        import java.io.Writer;
0044:        import java.util.*;
0045:
0046:        /** ANTLR's code generator.
0047:         *
0048:         *  Generate recognizers derived from grammars.  Language independence is
0049:         *  achieved through the use of StringTemplateGroup objects.  All output
0050:         *  strings are completely encapsulated in the group files such as Java.stg.
0051:         *  Some computations are done that are unused by a particular language.
0052:         *  This generator just computes and sets the values into the templates;
0053:         *  the templates are free to use or not use the information.
0054:         *
0055:         *  To make a new code generation target, define X.stg for language X
0056:         *  by copying from the existing Y.stg most closely related to your language;
0057:         *  e.g., to do CSharp.stg copy Java.stg.  The template group file has a
0058:         *  bunch of templates that are needed by the code generator.  You can add
0059:         *  a new target w/o even recompiling ANTLR itself.  The language=X option
0060:         *  in a grammar file dictates which templates get loaded/used.
0061:         *
0062:         *  Some languages like C need both parser files and header files.  Java
0063:         *  needs a separate file for the cyclic DFA because ANTLR generates
0064:         *  bytecodes directly (which cannot be in the generated parser Java file).
0065:         *
0066:         *  The cyclic DFA can be in the same file as the recognizer, but the
0067:         *  header and output files must be separate; the recognizer goes in the
0068:         *  output file.
0069:         */
0070:        public class CodeGenerator {
0071:            /** When generating SWITCH statements, some targets might need to limit
0072:             *  the size (based upon the number of case labels).  Generally, this
0073:             *  limit will be hit only for lexers where a wildcard in a UNICODE
0074:             *  vocabulary environment would generate a SWITCH with 65000 labels.
0075:             */
0076:            public int MAX_SWITCH_CASE_LABELS = 300;
0077:            public int MIN_SWITCH_ALTS = 3;
0078:            public boolean GENERATE_SWITCHES_WHEN_POSSIBLE = true;
0079:            public static boolean GEN_ACYCLIC_DFA_INLINE = true;
0080:            public static boolean EMIT_TEMPLATE_DELIMITERS = false;
0081:
0082:            public String classpathTemplateRootDirectoryName = "org/antlr/codegen/templates";
0083:
0084:            /** Which grammar are we generating code for?  Each generator
0085:             *  is attached to a specific grammar.
0086:             */
0087:            public Grammar grammar;
0088:
0089:            /** What language are we generating? */
0090:            protected String language;
0091:
0092:            /** The target specifies how to write out files and do other language
0093:             *  specific actions.
0094:             */
0095:            public Target target = null;
0096:
0097:            /** Where are the templates this generator should use to generate code? */
0098:            protected StringTemplateGroup templates;
0099:
0100:            /** The basic output templates without the AST or template-output additions;
0101:             *  these are the templates loaded for the language, such as Java.stg, *and*
0102:             *  the Dbg templates if debugging is turned on.  Used for generating syntactic predicates.
0103:             */
0104:            protected StringTemplateGroup baseTemplates;
0105:
0106:            protected StringTemplate recognizerST;
0107:            protected StringTemplate outputFileST;
0108:            protected StringTemplate headerFileST;
0109:
0110:            /** Used to create unique labels */
0111:            protected int uniqueLabelNumber = 1;
0112:
0113:            /** A reference to the ANTLR tool so we can learn about output directories
0114:             *  and such.
0115:             */
0116:            protected Tool tool;
0117:
0118:            /** Generate debugging event method calls */
0119:            protected boolean debug;
0120:
0121:            /** Create a Tracer object and make the recognizer invoke this. */
0122:            protected boolean trace;
0123:
0124:            /** Track runtime parsing information about decisions etc...
0125:             *  This requires the debugging event mechanism to work.
0126:             */
0127:            protected boolean profile;
0128:
0129:            protected int lineWidth = 72;
0130:
0131:            /** I have factored out the generation of acyclic DFAs to a separate class */
0132:            public ACyclicDFACodeGenerator acyclicDFAGenerator = new ACyclicDFACodeGenerator(
0133:                    this );
0134:
0135:            /** I have factored out the generation of cyclic DFAs to a separate class */
0136:            /*
0137:            public CyclicDFACodeGenerator cyclicDFAGenerator =
0138:            	new CyclicDFACodeGenerator(this);
0139:             */
0140:
0141:            public static final String VOCAB_FILE_EXTENSION = ".tokens";
0142:            protected final static String vocabFilePattern = "<tokens:{<attr.name>=<attr.type>\n}>"
0143:                    + "<literals:{<attr.name>=<attr.type>\n}>";
0144:
0145:            public CodeGenerator(Tool tool, Grammar grammar, String language) {
0146:                this.tool = tool;
0147:                this.grammar = grammar;
0148:                this.language = language;
0149:                loadLanguageTarget(language);
0150:            }
0151:
0152:            protected void loadLanguageTarget(String language) {
0153:                String targetName = "org.antlr.codegen." + language + "Target";
0154:                try {
0155:                    Class c = Class.forName(targetName);
0156:                    target = (Target) c.newInstance();
0157:                } catch (ClassNotFoundException cnfe) {
0158:                    target = new Target(); // use default
0159:                } catch (InstantiationException ie) {
0160:                    ErrorManager.error(
0161:                            ErrorManager.MSG_CANNOT_CREATE_TARGET_GENERATOR,
0162:                            targetName, ie);
0163:                } catch (IllegalAccessException cnfe) {
0164:                    ErrorManager.error(
0165:                            ErrorManager.MSG_CANNOT_CREATE_TARGET_GENERATOR,
0166:                            targetName, cnfe);
0167:                }
0168:            }
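The reflective lookup in `loadLanguageTarget()` follows a naming convention: map a language name to a `<language>Target` class, then fall back to a default when no such class exists. A minimal standalone sketch of that convention (not ANTLR code; the class names and the `"default target"` fallback value are illustrative):

```java
// Illustrative sketch of convention-based reflective loading with a fallback,
// as loadLanguageTarget() above does for org.antlr.codegen.<language>Target.
public class TargetLoaderDemo {
    static Object loadByConvention(String className) {
        try {
            Class<?> c = Class.forName(className);
            return c.getDeclaredConstructor().newInstance();
        } catch (Exception e) {
            return "default target"; // stands in for `new Target()`
        }
    }

    public static void main(String[] args) {
        // An existing class loads and instantiates via reflection:
        System.out.println(loadByConvention("java.util.ArrayList").getClass().getSimpleName());
        // A missing class (an unknown language) falls back to the default:
        System.out.println(loadByConvention("org.antlr.codegen.NoSuchTarget"));
    }
}
```

Note that the real method distinguishes `ClassNotFoundException` (silent fallback) from instantiation and access failures (reported via `ErrorManager`); the sketch collapses those cases for brevity.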
0169:
0170:            /** Load the main <language>.stg template group file */
0171:            public void loadTemplates(String language) {
0172:                // get a group loader containing main templates dir and target subdir
0173:                String templateDirs = classpathTemplateRootDirectoryName + ":"
0174:                        + classpathTemplateRootDirectoryName + "/" + language;
0175:                //System.out.println("targets="+templateDirs.toString());
0176:                StringTemplateGroupLoader loader = new CommonGroupLoader(
0177:                        templateDirs, ErrorManager
0178:                                .getStringTemplateErrorListener());
0179:                StringTemplateGroup.registerGroupLoader(loader);
0180:                StringTemplateGroup
0181:                        .registerDefaultLexer(AngleBracketTemplateLexer.class);
0182:
0183:                // first load main language template
0184:                StringTemplateGroup coreTemplates = StringTemplateGroup
0185:                        .loadGroup(language);
0186:                baseTemplates = coreTemplates;
0187:                if (coreTemplates == null) {
0188:                    ErrorManager.error(
0189:                            ErrorManager.MSG_MISSING_CODE_GEN_TEMPLATES,
0190:                            language);
0191:                    return;
0192:                }
0193:
0194:                // dynamically add subgroups that act like filters to apply to
0195:                // their supergroup.  E.g., Java:Dbg:AST:ASTDbg.
0196:                String outputOption = (String) grammar.getOption("output");
0197:                if (outputOption != null && outputOption.equals("AST")) {
0198:                    if (debug && grammar.type != Grammar.LEXER) {
0199:                        StringTemplateGroup dbgTemplates = StringTemplateGroup
0200:                                .loadGroup("Dbg", coreTemplates);
0201:                        baseTemplates = dbgTemplates;
0202:                        StringTemplateGroup astTemplates = StringTemplateGroup
0203:                                .loadGroup("AST", dbgTemplates);
0204:                        StringTemplateGroup astDbgTemplates = StringTemplateGroup
0205:                                .loadGroup("ASTDbg", astTemplates);
0206:                        templates = astDbgTemplates;
0207:                    } else {
0208:                        templates = StringTemplateGroup.loadGroup("AST",
0209:                                coreTemplates);
0210:                    }
0211:                } else if (outputOption != null
0212:                        && outputOption.equals("template")) {
0213:                    if (debug && grammar.type != Grammar.LEXER) {
0214:                        StringTemplateGroup dbgTemplates = StringTemplateGroup
0215:                                .loadGroup("Dbg", coreTemplates);
0216:                        baseTemplates = dbgTemplates;
0217:                        StringTemplateGroup stTemplates = StringTemplateGroup
0218:                                .loadGroup("ST", dbgTemplates);
0219:                        /*
0220:                        StringTemplateGroup astDbgTemplates =
0221:                        	StringTemplateGroup.loadGroup("STDbg", astTemplates);
0222:                         */
0223:                        templates = stTemplates;
0224:                    } else {
0225:                        templates = StringTemplateGroup.loadGroup("ST",
0226:                                coreTemplates);
0227:                    }
0228:                } else if (debug && grammar.type != Grammar.LEXER) {
0229:                    templates = StringTemplateGroup.loadGroup("Dbg",
0230:                            coreTemplates);
0231:                    baseTemplates = templates;
0232:                } else {
0233:                    templates = coreTemplates;
0234:                }
0235:
0236:                if (EMIT_TEMPLATE_DELIMITERS) {
0237:                    templates.emitDebugStartStopStrings(true);
0238:                    templates
0239:                            .doNotEmitDebugStringsForTemplate("codeFileExtension");
0240:                    templates
0241:                            .doNotEmitDebugStringsForTemplate("headerFileExtension");
0242:                }
0243:            }
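The branching in `loadTemplates()` above amounts to choosing which filter subgroups to stack on the core language group, driven by the `output` option, the debug flag, and the grammar type. Reduced to a pure function (a sketch, not ANTLR API; the colon-joined string merely names the resulting chain):

```java
// Sketch of the subgroup-layering decision in loadTemplates(), as a pure
// function returning the chain of .stg groups stacked on the core group.
public class TemplateChainDemo {
    static String groupChain(String core, String output, boolean debug, boolean isLexer) {
        boolean dbg = debug && !isLexer;  // Dbg templates never apply to lexers
        if ("AST".equals(output))      return dbg ? core + ":Dbg:AST:ASTDbg" : core + ":AST";
        if ("template".equals(output)) return dbg ? core + ":Dbg:ST"         : core + ":ST";
        return dbg ? core + ":Dbg" : core;  // no output option
    }

    public static void main(String[] args) {
        System.out.println(groupChain("Java", "AST", true, false));      // Java:Dbg:AST:ASTDbg
        System.out.println(groupChain("Java", "template", false, false)); // Java:ST
        System.out.println(groupChain("Java", null, false, true));        // Java
    }
}
```

In the real method, each link in the chain is a `StringTemplateGroup.loadGroup(name, supergroup)` call, and `baseTemplates` stops at the Dbg layer so syntactic predicates are generated without AST/template decoration.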
0244:
0245:            /** Given the grammar to which we are attached, walk the AST associated
0246:             *  with that grammar to create NFAs.  Then create the DFAs for all
0247:             *  decision points in the grammar by converting the NFAs to DFAs.
0248:             *  Finally, walk the AST again to generate code.
0249:             *
0250:             *  Either 1 or 2 files are written:
0251:             *
0252:             * 		recognizer: the main parser/lexer/treewalker item
0253:             * 		header file: languages like C/C++ need extern definitions
0254:             *
0255:             *  The target, such as JavaTarget, dictates which files get written.
0256:             */
0257:            public StringTemplate genRecognizer() {
0258:                // LOAD OUTPUT TEMPLATES
0259:                loadTemplates(language);
0260:                if (templates == null) {
0261:                    return null;
0262:                }
0263:
0264:                // CHECK FOR LEFT RECURSION; Make sure we can actually do analysis
0265:                grammar.checkAllRulesForLeftRecursion();
0266:
0267:                // was there a severe problem while reading in grammar?
0268:                if (ErrorManager.doNotAttemptAnalysis()) {
0269:                    return null;
0270:                }
0271:
0272:                // CREATE NFA FROM GRAMMAR, CREATE DFA FROM NFA
0273:                target.performGrammarAnalysis(this, grammar);
0274:
0275:                // some grammar analysis errors will not yield reliable DFA
0276:                if (ErrorManager.doNotAttemptCodeGen()) {
0277:                    return null;
0278:                }
0279:
0280:                // OPTIMIZE DFA
0281:                DFAOptimizer optimizer = new DFAOptimizer(grammar);
0282:                optimizer.optimize();
0283:
0284:                // OUTPUT FILE (contains recognizerST)
0285:                outputFileST = templates.getInstanceOf("outputFile");
0286:
0287:                // HEADER FILE
0288:                if (templates.isDefined("headerFile")) {
0289:                    headerFileST = templates.getInstanceOf("headerFile");
0290:                } else {
0291:                    // create a dummy to avoid null-checks all over code generator
0292:                    headerFileST = new StringTemplate(templates, "");
0293:                    headerFileST.setName("dummy-header-file");
0294:                }
0295:
0296:                boolean filterMode = grammar.getOption("filter") != null
0297:                        && grammar.getOption("filter").equals("true");
0298:                boolean canBacktrack = grammar.getSyntacticPredicates() != null
0299:                        || filterMode;
0300:
0301:                // TODO: move this down further because generating the recognizer
0302:                // alters the model with info on who uses predefined properties etc...
0303:                // The actions here might refer to something.
0304:
0305:                // The only two possible output files are available at this point.
0306:                // Verify action scopes are ok for target and dump actions into output
0307:                // Templates can say <actions.parser.header> for example.
0308:                Map actions = grammar.getActions();
0309:                verifyActionScopesOkForTarget(actions);
0310:                // translate $x::y references
0311:                translateActionAttributeReferences(actions);
0312:                Map actionsForGrammarScope = (Map) actions.get(grammar
0313:                        .getDefaultActionScope(grammar.type));
0314:                if (filterMode
0315:                        && (actionsForGrammarScope == null || !actionsForGrammarScope
0316:                                .containsKey(Grammar.SYNPREDGATE_ACTION_NAME))) {
0317:                    // if filtering, we need to set actions to execute at backtracking
0318:                    // level 1 not 0.  Don't set this action if the user already has, though
0319:                    StringTemplate gateST = templates
0320:                            .getInstanceOf("filteringActionGate");
0321:                    if (actionsForGrammarScope == null) {
0322:                        actionsForGrammarScope = new HashMap();
0323:                        actions.put(
0324:                                grammar.getDefaultActionScope(grammar.type),
0325:                                actionsForGrammarScope);
0326:                    }
0327:                    actionsForGrammarScope.put(Grammar.SYNPREDGATE_ACTION_NAME,
0328:                            gateST);
0329:                }
0330:                headerFileST.setAttribute("actions", actions);
0331:                outputFileST.setAttribute("actions", actions);
0332:
0333:                headerFileST.setAttribute("buildTemplate", Boolean.valueOf(grammar
0334:                        .buildTemplate()));
0335:                outputFileST.setAttribute("buildTemplate", Boolean.valueOf(grammar
0336:                        .buildTemplate()));
0337:                headerFileST.setAttribute("buildAST", Boolean.valueOf(grammar
0338:                        .buildAST()));
0339:                outputFileST.setAttribute("buildAST", Boolean.valueOf(grammar
0340:                        .buildAST()));
0341:
0342:                String rewrite = (String) grammar.getOption("rewrite");
0343:                outputFileST.setAttribute("rewrite", Boolean
0344:                        .valueOf(rewrite != null && rewrite.equals("true")));
0345:                headerFileST.setAttribute("rewrite", Boolean
0346:                        .valueOf(rewrite != null && rewrite.equals("true")));
0347:
0348:                outputFileST.setAttribute("backtracking", Boolean
0349:                        .valueOf(canBacktrack));
0350:                headerFileST.setAttribute("backtracking", Boolean
0351:                        .valueOf(canBacktrack));
0352:                String memoize = (String) grammar.getOption("memoize");
0353:                outputFileST.setAttribute("memoize", Boolean
0354:                        .valueOf(memoize != null && memoize.equals("true")
0355:                                && canBacktrack));
0356:                headerFileST.setAttribute("memoize", Boolean
0357:                        .valueOf(memoize != null && memoize.equals("true")
0358:                                && canBacktrack));
0359:
0360:                outputFileST.setAttribute("trace", Boolean.valueOf(trace));
0361:                headerFileST.setAttribute("trace", Boolean.valueOf(trace));
0362:
0363:                outputFileST.setAttribute("profile", Boolean.valueOf(profile));
0364:                headerFileST.setAttribute("profile", Boolean.valueOf(profile));
0365:
0366:                // RECOGNIZER
0367:                if (grammar.type == Grammar.LEXER) {
0368:                    recognizerST = templates.getInstanceOf("lexer");
0369:                    outputFileST.setAttribute("LEXER", Boolean.valueOf(true));
0370:                    headerFileST.setAttribute("LEXER", Boolean.valueOf(true));
0371:                    recognizerST.setAttribute("filterMode", Boolean
0372:                            .valueOf(filterMode));
0373:                } else if (grammar.type == Grammar.PARSER
0374:                        || grammar.type == Grammar.COMBINED) {
0375:                    recognizerST = templates.getInstanceOf("parser");
0376:                    outputFileST.setAttribute("PARSER", Boolean.valueOf(true));
0377:                    headerFileST.setAttribute("PARSER", Boolean.valueOf(true));
0378:                } else {
0379:                    recognizerST = templates.getInstanceOf("treeParser");
0380:                    outputFileST.setAttribute("TREE_PARSER", Boolean
0381:                            .valueOf(true));
0382:                    headerFileST.setAttribute("TREE_PARSER", Boolean
0383:                            .valueOf(true));
0384:                }
0385:                outputFileST.setAttribute("recognizer", recognizerST);
0386:                headerFileST.setAttribute("recognizer", recognizerST);
0387:                outputFileST.setAttribute("actionScope", grammar
0388:                        .getDefaultActionScope(grammar.type));
0389:                headerFileST.setAttribute("actionScope", grammar
0390:                        .getDefaultActionScope(grammar.type));
0391:
0392:                String targetAppropriateFileNameString = target
0393:                        .getTargetStringLiteralFromString(grammar.getFileName());
0394:                outputFileST.setAttribute("fileName",
0395:                        targetAppropriateFileNameString);
0396:                headerFileST.setAttribute("fileName",
0397:                        targetAppropriateFileNameString);
0398:                outputFileST.setAttribute("ANTLRVersion", Tool.VERSION);
0399:                headerFileST.setAttribute("ANTLRVersion", Tool.VERSION);
0400:                outputFileST.setAttribute("generatedTimestamp", Tool
0401:                        .getCurrentTimeStamp());
0402:                headerFileST.setAttribute("generatedTimestamp", Tool
0403:                        .getCurrentTimeStamp());
0404:
0405:                // GENERATE RECOGNIZER
0406:                // Walk the AST holding the input grammar, this time generating code
0407:                // Decisions are generated by using the precomputed DFAs
0408:                // Fill in the various templates with data
0409:                CodeGenTreeWalker gen = new CodeGenTreeWalker();
0410:                try {
0411:                    gen.grammar((AST) grammar.getGrammarTree(), grammar,
0412:                            recognizerST, outputFileST, headerFileST);
0413:                } catch (RecognitionException re) {
0414:                    ErrorManager.error(ErrorManager.MSG_BAD_AST_STRUCTURE, re);
0415:                }
0416:                genTokenTypeConstants(recognizerST);
0417:                genTokenTypeConstants(outputFileST);
0418:                genTokenTypeConstants(headerFileST);
0419:
0420:                if (grammar.type != Grammar.LEXER) {
0421:                    genTokenTypeNames(recognizerST);
0422:                    genTokenTypeNames(outputFileST);
0423:                    genTokenTypeNames(headerFileST);
0424:                }
0425:
0426:                // Now that we know which synpreds are used, we can set them into the templates
0427:                Set synpredNames = null;
0428:                if (grammar.synPredNamesUsedInDFA.size() > 0) {
0429:                    synpredNames = grammar.synPredNamesUsedInDFA;
0430:                }
0431:                outputFileST.setAttribute("synpreds", synpredNames);
0432:                headerFileST.setAttribute("synpreds", synpredNames);
0433:
0434:                // all recognizers can see Grammar object
0435:                recognizerST.setAttribute("grammar", grammar);
0436:
0437:                // WRITE FILES
0438:                try {
0439:                    target.genRecognizerFile(tool, this, grammar, outputFileST);
0440:                    if (templates.isDefined("headerFile")) {
0441:                        StringTemplate extST = templates
0442:                                .getInstanceOf("headerFileExtension");
0443:                        target.genRecognizerHeaderFile(tool, this, grammar,
0444:                                headerFileST, extST.toString());
0445:                    }
0446:                    // write out the vocab interchange file; used by antlr,
0447:                    // does not change per target
0448:                    StringTemplate tokenVocabSerialization = genTokenVocabOutput();
0449:                    String vocabFileName = getVocabFileName();
0450:                    if (vocabFileName != null) {
0451:                        write(tokenVocabSerialization, vocabFileName);
0452:                    }
0453:                    //System.out.println(outputFileST.getDOTForDependencyGraph(false));
0454:                } catch (IOException ioe) {
0455:                    ErrorManager.error(ErrorManager.MSG_CANNOT_WRITE_FILE,
0456:                            getVocabFileName(), ioe);
0457:                }
0458:                /*
0459:                System.out.println("num obj.prop refs: "+ ASTExpr.totalObjPropRefs);
0460:                System.out.println("num reflection lookups: "+ ASTExpr.totalReflectionLookups);
0461:                 */
0462:
0463:                return outputFileST;
0464:            }
0465:
0466:            /** Some targets have extra action scopes; C++, for example, may have
0467:             *  '@headerfile:name {action}'.  Make sure the target accepts the
0468:             *  scopes in the action table.
0469:             */
0470:            protected void verifyActionScopesOkForTarget(Map actions) {
0471:                Set actionScopeKeySet = actions.keySet();
0472:                for (Iterator it = actionScopeKeySet.iterator(); it.hasNext();) {
0473:                    String scope = (String) it.next();
0474:                    if (!target.isValidActionScope(grammar.type, scope)) {
0475:                        // get any action from the scope to get error location
0476:                        Map scopeActions = (Map) actions.get(scope);
0477:                        GrammarAST actionAST = (GrammarAST) scopeActions
0478:                                .values().iterator().next();
0479:                        ErrorManager.grammarError(
0480:                                ErrorManager.MSG_INVALID_ACTION_SCOPE, grammar,
0481:                                actionAST.getToken(), scope,
0482:                                Grammar.grammarTypeToString[grammar.type]);
0483:                    }
0484:                }
0485:            }
0486:
0487:            /** Actions may reference $x::y attributes; call translateAction on
0488:             *  each action and replace that action in the Map.
0489:             */
0490:            protected void translateActionAttributeReferences(Map actions) {
0491:                Set actionScopeKeySet = actions.keySet();
0492:                for (Iterator it = actionScopeKeySet.iterator(); it.hasNext();) {
0493:                    String scope = (String) it.next();
0494:                    Map scopeActions = (Map) actions.get(scope);
0495:                    translateActionAttributeReferencesForSingleScope(null,
0496:                            scopeActions);
0497:                }
0498:            }
0499:
0500:            /** Used for translating rule @init{...} actions that have no scope */
0501:            protected void translateActionAttributeReferencesForSingleScope(
0502:                    Rule r, Map scopeActions) {
0503:                String ruleName = null;
0504:                if (r != null) {
0505:                    ruleName = r.name;
0506:                }
0507:                Set actionNameSet = scopeActions.keySet();
0508:                for (Iterator nameIT = actionNameSet.iterator(); nameIT
0509:                        .hasNext();) {
0510:                    String name = (String) nameIT.next();
0511:                    GrammarAST actionAST = (GrammarAST) scopeActions.get(name);
0512:                    List chunks = translateAction(ruleName, actionAST);
0513:                    scopeActions.put(name, chunks); // replace with translation
0514:                }
0515:            }
0516:
0517:            /** Error recovery in ANTLR recognizers.
0518:             *
0519:             *  Based upon original ideas:
0520:             *
0521:             *  Algorithms + Data Structures = Programs by Niklaus Wirth
0522:             *
0523:             *  and
0524:             *
0525:             *  A note on error recovery in recursive descent parsers:
0526:             *  http://portal.acm.org/citation.cfm?id=947902.947905
0527:             *
0528:             *  Later, Josef Grosch had some good ideas:
0529:             *  Efficient and Comfortable Error Recovery in Recursive Descent Parsers:
0530:             *  ftp://www.cocolab.com/products/cocktail/doca4.ps/ell.ps.zip
0531:             *
0532:             *  Like Grosch, I implemented local FOLLOW sets that are combined at run-time
0533:             *  upon error to avoid parsing overhead.
0534:             */
0535:            public void generateLocalFOLLOW(GrammarAST referencedElementNode,
0536:                    String referencedElementName, String enclosingRuleName,
0537:                    int elementIndex) {
0538:                NFAState followingNFAState = referencedElementNode.followingNFAState;
0539:                /*
0540:                 System.out.print("compute FOLLOW "+referencedElementNode.toString()+
0541:                 " for "+referencedElementName+"#"+elementIndex +" in "+
0542:                 enclosingRuleName+
0543:                 " line="+referencedElementNode.getLine());
0544:                 */
0545:                LookaheadSet follow = null;
0546:                if (followingNFAState != null) {
0547:                    follow = grammar.LOOK(followingNFAState);
0548:                }
0549:
0550:                if (follow == null) {
0551:                    ErrorManager
0552:                            .internalError("no follow state or cannot compute follow");
0553:                    follow = new LookaheadSet();
0554:                }
0555:                //System.out.println(" "+follow);
0556:
0557:                List tokenTypeList = null;
0558:                long[] words = null;
0559:                if (follow.tokenTypeSet == null) {
0560:                    words = new long[1];
0561:                    tokenTypeList = new ArrayList();
0562:                } else {
0563:                    BitSet bits = BitSet.of(follow.tokenTypeSet);
0564:                    words = bits.toPackedArray();
0565:                    tokenTypeList = follow.tokenTypeSet.toList();
0566:                }
0567:                // use the target to convert to hex strings (typically)
0568:                String[] wordStrings = new String[words.length];
0569:                for (int j = 0; j < words.length; j++) {
0570:                    long w = words[j];
0571:                    wordStrings[j] = target.getTarget64BitStringFromValue(w);
0572:                }
0573:                recognizerST.setAttribute(
0574:                        "bitsets.{name,inName,bits,tokenTypes,tokenIndex}",
0575:                        referencedElementName, enclosingRuleName, wordStrings,
0576:                        tokenTypeList, Utils.integer(elementIndex));
0577:                outputFileST.setAttribute(
0578:                        "bitsets.{name,inName,bits,tokenTypes,tokenIndex}",
0579:                        referencedElementName, enclosingRuleName, wordStrings,
0580:                        tokenTypeList, Utils.integer(elementIndex));
0581:                headerFileST.setAttribute(
0582:                        "bitsets.{name,inName,bits,tokenTypes,tokenIndex}",
0583:                        referencedElementName, enclosingRuleName, wordStrings,
0584:                        tokenTypeList, Utils.integer(elementIndex));
0585:            }
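As a minimal sketch of the bitset handling above (a hypothetical `FollowBitsetDemo`, not ANTLR code): `BitSet.toPackedArray()` folds the FOLLOW token types into 64-bit words, one bit per token type, which the target then renders (typically as hex strings) via `getTarget64BitStringFromValue()`.

```java
public class FollowBitsetDemo {
    // Pack a set of token types into 64-bit words, one bit per type,
    // mirroring what BitSet.toPackedArray() produces above.
    static long[] pack(int[] tokenTypes, int maxType) {
        long[] words = new long[maxType / 64 + 1];
        for (int t : tokenTypes) {
            words[t / 64] |= 1L << (t % 64);
        }
        return words;
    }

    public static void main(String[] args) {
        long[] w = pack(new int[] {4, 70}, 70);
        assert w[0] == (1L << 4); // token type 4 -> bit 4 of word 0
        assert w[1] == (1L << 6); // token type 70 -> bit 6 of word 1
        System.out.println("ok");
    }
}
```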
0586:
0587:            // L O O K A H E A D  D E C I S I O N  G E N E R A T I O N
0588:
0589:            /** Generate code that computes the predicted alt given a DFA.  The
0590:             *  recognizerST can be either the main generated recognizerTemplate
0591:             *  for storage in the main parser file or a separate file.  It's up to
0592:             *  the code that ultimately invokes the codegen.g grammar rule.
0593:             *
0594:             *  Regardless, the output file and header file get a copy of the DFAs.
0595:             */
0596:            public StringTemplate genLookaheadDecision(
0597:                    StringTemplate recognizerST, DFA dfa) {
0598:                StringTemplate decisionST;
0599:                // If we are doing inline DFA and this one is acyclic and LL(*)
0600:                // I have to check for is-non-LL(*) because if non-LL(*) the cyclic
0601:                // check is not done by DFA.verify(); that is, verify() avoids
0602:                // doesStateReachAcceptState() if non-LL(*)
0603:                if (dfa.canInlineDecision()) {
0604:                    decisionST = acyclicDFAGenerator.genFixedLookaheadDecision(
0605:                            getTemplates(), dfa);
0606:                } else {
0607:                    // generate any kind of DFA here (cyclic or acyclic)
0608:                    dfa.createStateTables(this);
0609:                    outputFileST.setAttribute("cyclicDFAs", dfa);
0610:                    headerFileST.setAttribute("cyclicDFAs", dfa);
0611:                    decisionST = templates.getInstanceOf("dfaDecision");
0612:                    String description = dfa.getNFADecisionStartState()
0613:                            .getDescription();
0614:                    description = target
0615:                            .getTargetStringLiteralFromString(description);
0616:                    if (description != null) {
0617:                        decisionST.setAttribute("description", description);
0618:                    }
0619:                    decisionST.setAttribute("decisionNumber", Utils.integer(dfa
0620:                            .getDecisionNumber()));
0621:                }
0622:                return decisionST;
0623:            }
0624:
0625:            /** A special state is huge (too big for state tables) or has a predicated
0626:             *  edge.  Generate a simple if-then-else.  Cannot be an accept state as
0627:             *  they have no emanating edges.  Don't worry about switch vs if-then-else
0628:             *  because if you get here, the state is super complicated and needs an
0629:             *  if-then-else.  This is used by the new DFA scheme created June 2006.
0630:             */
0631:            public StringTemplate generateSpecialState(DFAState s) {
0632:                StringTemplate stateST;
0633:                stateST = templates.getInstanceOf("cyclicDFAState");
0634:                stateST.setAttribute("needErrorClause", Boolean.valueOf(true));
0635:                stateST.setAttribute("semPredState", Boolean.valueOf(s
0636:                        .isResolvedWithPredicates()));
0637:                stateST.setAttribute("stateNumber", s.stateNumber);
0638:                stateST.setAttribute("decisionNumber", s.dfa.decisionNumber);
0639:
0640:                boolean foundGatedPred = false;
0641:                StringTemplate eotST = null;
0642:                for (int i = 0; i < s.getNumberOfTransitions(); i++) {
0643:                    Transition edge = (Transition) s.transition(i);
0644:                    StringTemplate edgeST;
0645:                    if (edge.label.getAtom() == Label.EOT) {
0646:                        // this is the default clause; has to be held until last
0647:                        edgeST = templates.getInstanceOf("eotDFAEdge");
0648:                        stateST.removeAttribute("needErrorClause");
0649:                        eotST = edgeST;
0650:                    } else {
0651:                        edgeST = templates.getInstanceOf("cyclicDFAEdge");
0652:                        StringTemplate exprST = genLabelExpr(templates, edge, 1);
0653:                        edgeST.setAttribute("labelExpr", exprST);
0654:                    }
0655:                    edgeST.setAttribute("edgeNumber", Utils.integer(i + 1));
0656:                    edgeST.setAttribute("targetStateNumber", Utils
0657:                            .integer(edge.target.stateNumber));
0658:                    // stick in any gated predicates for any edge if not already a pred
0659:                    if (!edge.label.isSemanticPredicate()) {
0660:                        DFAState t = (DFAState) edge.target;
0661:                        SemanticContext preds = t
0662:                                .getGatedPredicatesInNFAConfigurations();
0663:                        if (preds != null) {
0664:                            foundGatedPred = true;
0665:                    StringTemplate predST = preds.genExpr(this,
0666:                                    getTemplates(), t.dfa);
0667:                            edgeST
0668:                                    .setAttribute("predicates", predST
0669:                                            .toString());
0670:                        }
0671:                    }
0672:                    if (edge.label.getAtom() != Label.EOT) {
0673:                        stateST.setAttribute("edges", edgeST);
0674:                    }
0675:                }
0676:                if (foundGatedPred) {
0677:                    // state has >= 1 edge with a gated pred (syn or sem)
0678:                    // must rewind input first, set flag.
0679:                    stateST.setAttribute("semPredState", Boolean
0680:                            .valueOf(foundGatedPred));
0681:                }
0682:                if (eotST != null) {
0683:                    stateST.setAttribute("edges", eotST);
0684:                }
0685:                return stateST;
0686:            }
0687:
0688:            /** Generate an expression for traversing an edge. */
0689:            protected StringTemplate genLabelExpr(
0690:                    StringTemplateGroup templates, Transition edge, int k) {
0691:                Label label = edge.label;
0692:                if (label.isSemanticPredicate()) {
0693:                    return genSemanticPredicateExpr(templates, edge);
0694:                }
0695:                if (label.isSet()) {
0696:                    return genSetExpr(templates, label.getSet(), k, true);
0697:                }
0698:                // must be simple label
0699:                StringTemplate eST = templates.getInstanceOf("lookaheadTest");
0700:                eST.setAttribute("atom", getTokenTypeAsTargetLabel(label
0701:                        .getAtom()));
0702:                eST.setAttribute("atomAsInt", Utils.integer(label.getAtom()));
0703:                eST.setAttribute("k", Utils.integer(k));
0704:                return eST;
0705:            }
0706:
0707:            protected StringTemplate genSemanticPredicateExpr(
0708:                    StringTemplateGroup templates, Transition edge) {
0709:                DFA dfa = ((DFAState) edge.target).dfa; // which DFA are we in
0710:                Label label = edge.label;
0711:                SemanticContext semCtx = label.getSemanticContext();
0712:                return semCtx.genExpr(this, templates, dfa);
0713:            }
0714:
0715:            /** For intervals such as [3..3, 30..35], generate an expression that
0716:             *  tests the lookahead similar to LA(1)==3 || (LA(1)>=30&&LA(1)<=35)
0717:             */
0718:            public StringTemplate genSetExpr(StringTemplateGroup templates,
0719:                    IntSet set, int k, boolean partOfDFA) {
0720:                if (!(set instanceof IntervalSet)) {
0721:                    throw new IllegalArgumentException(
0722:                            "unable to generate expressions for non IntervalSet objects");
0723:                }
0724:                IntervalSet iset = (IntervalSet) set;
0725:                if (iset.getIntervals() == null
0726:                        || iset.getIntervals().size() == 0) {
0727:                    StringTemplate emptyST = new StringTemplate(templates, "");
0728:                    emptyST.setName("empty-set-expr");
0729:                    return emptyST;
0730:                }
0731:                String testSTName = "lookaheadTest";
0732:                String testRangeSTName = "lookaheadRangeTest";
0733:                if (!partOfDFA) {
0734:                    testSTName = "isolatedLookaheadTest";
0735:                    testRangeSTName = "isolatedLookaheadRangeTest";
0736:                }
0737:                StringTemplate setST = templates.getInstanceOf("setTest");
0738:                Iterator iter = iset.getIntervals().iterator();
0739:                int rangeNumber = 1;
0740:                while (iter.hasNext()) {
0741:                    Interval I = (Interval) iter.next();
0742:                    int a = I.a;
0743:                    int b = I.b;
0744:                    StringTemplate eST;
0745:                    if (a == b) {
0746:                        eST = templates.getInstanceOf(testSTName);
0747:                        eST.setAttribute("atom", getTokenTypeAsTargetLabel(a));
0748:                        eST.setAttribute("atomAsInt", Utils.integer(a));
0749:                        //eST.setAttribute("k",Utils.integer(k));
0750:                    } else {
0751:                        eST = templates.getInstanceOf(testRangeSTName);
0752:                        eST.setAttribute("lower", getTokenTypeAsTargetLabel(a));
0753:                        eST.setAttribute("lowerAsInt", Utils.integer(a));
0754:                        eST.setAttribute("upper", getTokenTypeAsTargetLabel(b));
0755:                        eST.setAttribute("upperAsInt", Utils.integer(b));
0756:                        eST.setAttribute("rangeNumber", Utils
0757:                                .integer(rangeNumber));
0758:                    }
0759:                    eST.setAttribute("k", Utils.integer(k));
0760:                    setST.setAttribute("ranges", eST);
0761:                    rangeNumber++;
0762:                }
0763:                return setST;
0764:            }
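A sketch of what the emitted expression means at runtime (hypothetical `SetExprDemo`, not generated code): for the interval set [3..3, 30..35] cited in the javadoc, the single-value interval becomes an equality test and the range becomes a bounded comparison.

```java
public class SetExprDemo {
    // The shape of test that genSetExpr() emits for [3..3, 30..35]:
    // LA(1)==3 || (LA(1)>=30 && LA(1)<=35)
    static boolean inSet(int la1) {
        return la1 == 3 || (la1 >= 30 && la1 <= 35);
    }

    public static void main(String[] args) {
        assert inSet(3);
        assert inSet(32);
        assert !inSet(4);   // between the intervals
        assert !inSet(36);  // past the upper bound
        System.out.println("ok");
    }
}
```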
0765:
0766:            // T O K E N  D E F I N I T I O N  G E N E R A T I O N
0767:
0768:            /** Set the tokens and literals attributes in the incoming
0769:             *  code template.  This is not the token vocab interchange file, but
0770:             *  rather a list of token type IDs needed by the recognizer.
0771:             */
0772:            protected void genTokenTypeConstants(StringTemplate code) {
0773:                // make constants for the token types
0774:                Iterator tokenIDs = grammar.getTokenIDs().iterator();
0775:                while (tokenIDs.hasNext()) {
0776:                    String tokenID = (String) tokenIDs.next();
0777:                    int tokenType = grammar.getTokenType(tokenID);
0778:                    if (tokenType == Label.EOF
0779:                            || tokenType >= Label.MIN_TOKEN_TYPE) {
0780:                        // don't do FAUX labels 'cept EOF
0781:                        code.setAttribute("tokens.{name,type}", tokenID, Utils
0782:                                .integer(tokenType));
0783:                    }
0784:                }
0785:            }
0786:
0787:            /** Generate a token names table that maps token type to a printable
0788:             *  name: either the label like INT or the literal like "begin".
0789:             */
0790:            protected void genTokenTypeNames(StringTemplate code) {
0791:                for (int t = Label.MIN_TOKEN_TYPE; t <= grammar
0792:                        .getMaxTokenType(); t++) {
0793:                    String tokenName = grammar.getTokenDisplayName(t);
0794:                    if (tokenName != null) {
0795:                        tokenName = target.getTargetStringLiteralFromString(
0796:                                tokenName, true);
0797:                        code.setAttribute("tokenNames", tokenName);
0798:                    }
0799:                }
0800:            }
0801:
0802:            /** Get a meaningful name for a token type useful during code generation.
0803:             *  Literals without associated names are converted to the string equivalent
0804:             *  of their integer values. Used to generate x==ID and x==34 type comparisons
0805:             *  etc...  Essentially we are looking for the most obvious way to refer
0806:             *  to a token type in the generated code.  If in the lexer, return the
0807:             *  char literal translated to the target language.  For example, ttype=10
0808:             *  will yield '\n' from the getTokenDisplayName method.  That must
0809:             *  be converted to the target language's literals.  For most C-derived
0810:             *  languages no translation is needed.
0811:             */
0812:            public String getTokenTypeAsTargetLabel(int ttype) {
0813:                if (grammar.type == Grammar.LEXER) {
0814:                    String name = grammar.getTokenDisplayName(ttype);
0815:                    return target.getTargetCharLiteralFromANTLRCharLiteral(
0816:                            this, name);
0817:                }
0818:                return target.getTokenTypeAsTargetLabel(this, ttype);
0819:            }
0820:
0821:            /** Generate a token vocab file with all the token names/types.  For example:
0822:             *  ID=7
0823:             *  FOR=8
0824:             *  'for'=8
0825:             *
0826:             *  This is independent of the target language; used by antlr internally
0827:             */
0828:            protected StringTemplate genTokenVocabOutput() {
0829:                StringTemplate vocabFileST = new StringTemplate(
0830:                        vocabFilePattern, AngleBracketTemplateLexer.class);
0831:                vocabFileST.setName("vocab-file");
0832:                // make constants for the token names
0833:                Iterator tokenIDs = grammar.getTokenIDs().iterator();
0834:                while (tokenIDs.hasNext()) {
0835:                    String tokenID = (String) tokenIDs.next();
0836:                    int tokenType = grammar.getTokenType(tokenID);
0837:                    if (tokenType >= Label.MIN_TOKEN_TYPE) {
0838:                        vocabFileST.setAttribute("tokens.{name,type}", tokenID,
0839:                                Utils.integer(tokenType));
0840:                    }
0841:                }
0842:
0843:                // now dump the strings
0844:                Iterator literals = grammar.getStringLiterals().iterator();
0845:                while (literals.hasNext()) {
0846:                    String literal = (String) literals.next();
0847:                    int tokenType = grammar.getTokenType(literal);
0848:                    if (tokenType >= Label.MIN_TOKEN_TYPE) {
0849:                        vocabFileST.setAttribute("tokens.{name,type}", literal,
0850:                                Utils.integer(tokenType));
0851:                    }
0852:                }
0853:
0854:                return vocabFileST;
0855:            }
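The vocab file described in the javadoc is plain name=type pairs, one per line, covering both token IDs and string literals. A minimal sketch of that rendering (hypothetical `VocabDemo`; the real method drives a StringTemplate instead):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class VocabDemo {
    // Render name=type pairs in the .tokens vocab format shown above.
    static String render(Map<String, Integer> tokens) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Integer> e : tokens.entrySet()) {
            sb.append(e.getKey()).append('=').append(e.getValue()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Integer> t = new LinkedHashMap<>();
        t.put("ID", 7);
        t.put("FOR", 8);
        t.put("'for'", 8); // a literal shares its token's type
        assert render(t).equals("ID=7\nFOR=8\n'for'=8\n");
        System.out.println("ok");
    }
}
```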
0856:
0857:            public List translateAction(String ruleName, GrammarAST actionTree) {
0858:                if (actionTree.getType() == ANTLRParser.ARG_ACTION) {
0859:                    return translateArgAction(ruleName, actionTree);
0860:                }
0861:                ActionTranslatorLexer translator = new ActionTranslatorLexer(
0862:                        this, ruleName, actionTree);
0863:                List chunks = translator.translateToChunks();
0864:                chunks = target.postProcessAction(chunks, actionTree.token);
0865:                return chunks;
0866:            }
0867:
0868:            /** Translate an action like [3,"foo",a[3]] and return a List of the
0869:             *  translated actions.  Because actions are translated to a list of
0870:             *  chunks, this returns List<List<String|StringTemplate>>.
0871:             *
0872:             *  Simple ',' separator is assumed.
0873:             */
0874:            public List translateArgAction(String ruleName,
0875:                    GrammarAST actionTree) {
0876:                String actionText = actionTree.token.getText();
0877:                StringTokenizer argTokens = new StringTokenizer(actionText, ",");
0878:                List args = new ArrayList();
0879:                while (argTokens.hasMoreTokens()) {
0880:                    String arg = (String) argTokens.nextToken();
0881:                    antlr.Token actionToken = new antlr.CommonToken(
0882:                            ANTLRParser.ACTION, arg);
0883:                    ActionTranslatorLexer translator = new ActionTranslatorLexer(
0884:                        this, ruleName, actionToken, actionTree.outerAltNum);
0885:                    List chunks = translator.translateToChunks();
0886:                    chunks = target.postProcessAction(chunks, actionToken);
0887:                    args.add(chunks);
0888:                }
0889:                if (args.size() == 0) {
0890:                    return null;
0891:                }
0892:                return args;
0893:            }
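The "simple ',' separator" assumption above is worth seeing concretely: `StringTokenizer` splits on every comma, so an argument list like [3,"foo",a[3]] yields three pieces, but a comma *inside* an argument would also be split. A hypothetical `ArgSplitDemo` (not ANTLR code) showing just the splitting step:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

public class ArgSplitDemo {
    // Naive comma split, as translateArgAction() does before translating
    // each piece as its own ACTION token.
    static List<String> split(String actionText) {
        List<String> args = new ArrayList<>();
        StringTokenizer tok = new StringTokenizer(actionText, ",");
        while (tok.hasMoreTokens()) {
            args.add(tok.nextToken());
        }
        return args;
    }

    public static void main(String[] args) {
        List<String> a = split("3,\"foo\",a[3]");
        assert a.size() == 3;
        assert a.get(2).equals("a[3]");
        // caveat: a comma nested inside an argument is split too
        assert split("a[1,2]").size() == 2;
        System.out.println("ok");
    }
}
```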
0894:
0895:            /** Given a template constructor action like %foo(a={...}) in
0896:             *  an action, translate it to the appropriate template constructor
0897:             *  from the templateLib. This translates a *piece* of the action.
0898:             */
0899:            public StringTemplate translateTemplateConstructor(String ruleName,
0900:                    int outerAltNum, antlr.Token actionToken,
0901:                    String templateActionText) {
0902:                // first, parse with antlr.g
0903:                //System.out.println("translate template: "+templateActionText);
0904:                ANTLRLexer lexer = new ANTLRLexer(new StringReader(
0905:                        templateActionText));
0906:                lexer.setFilename(grammar.getFileName());
0907:                lexer.setTokenObjectClass("antlr.TokenWithIndex");
0908:                TokenStreamRewriteEngine tokenBuffer = new TokenStreamRewriteEngine(
0909:                        lexer);
0910:                tokenBuffer.discard(ANTLRParser.WS);
0911:                tokenBuffer.discard(ANTLRParser.ML_COMMENT);
0912:                tokenBuffer.discard(ANTLRParser.COMMENT);
0913:                tokenBuffer.discard(ANTLRParser.SL_COMMENT);
0914:                ANTLRParser parser = new ANTLRParser(tokenBuffer);
0915:                parser.setFilename(grammar.getFileName());
0916:                parser.setASTNodeClass("org.antlr.tool.GrammarAST");
0917:                try {
0918:                    parser.rewrite_template();
0919:                } catch (RecognitionException re) {
0920:                    ErrorManager.grammarError(
0921:                            ErrorManager.MSG_INVALID_TEMPLATE_ACTION, grammar,
0922:                            actionToken, templateActionText);
0923:                } catch (Exception tse) {
0924:                    ErrorManager.internalError("can't parse template action",
0925:                            tse);
0926:                }
0927:                GrammarAST rewriteTree = (GrammarAST) parser.getAST();
0928:
0929:                // then translate via codegen.g
0930:                CodeGenTreeWalker gen = new CodeGenTreeWalker();
0931:                gen.init(grammar);
0932:                gen.currentRuleName = ruleName;
0933:                gen.outerAltNum = outerAltNum;
0934:                StringTemplate st = null;
0935:                try {
0936:                    st = gen.rewrite_template((AST) rewriteTree);
0937:                } catch (RecognitionException re) {
0938:                    ErrorManager.error(ErrorManager.MSG_BAD_AST_STRUCTURE, re);
0939:                }
0940:                return st;
0941:            }
0942:
0943:            public void issueInvalidScopeError(String x, String y,
0944:                    Rule enclosingRule, antlr.Token actionToken, int outerAltNum) {
0945:                //System.out.println("error $"+x+"::"+y);
0946:                Rule r = grammar.getRule(x);
0947:                AttributeScope scope = grammar.getGlobalScope(x);
0948:                if (scope == null) {
0949:                    if (r != null) {
0950:                        scope = r.ruleScope; // if not global, might be rule scope
0951:                    }
0952:                }
0953:                if (scope == null) {
0954:                    ErrorManager.grammarError(
0955:                            ErrorManager.MSG_UNKNOWN_DYNAMIC_SCOPE, grammar,
0956:                            actionToken, x);
0957:                } else if (scope.getAttribute(y) == null) {
0958:                    ErrorManager.grammarError(
0959:                            ErrorManager.MSG_UNKNOWN_DYNAMIC_SCOPE_ATTRIBUTE,
0960:                            grammar, actionToken, x, y);
0961:                }
0962:            }
0963:
0964:            public void issueInvalidAttributeError(String x, String y,
0965:                    Rule enclosingRule, antlr.Token actionToken, int outerAltNum) {
0966:                //System.out.println("error $"+x+"."+y);
0967:                if (enclosingRule == null) {
0968:                    // action not in a rule
0969:                    ErrorManager.grammarError(
0970:                            ErrorManager.MSG_ATTRIBUTE_REF_NOT_IN_RULE,
0971:                            grammar, actionToken, x, y);
0972:                    return;
0973:                }
0974:
0975:                // action is in a rule
0976:                Grammar.LabelElementPair label = enclosingRule.getRuleLabel(x);
0977:
0978:                if (label != null
0979:                        || enclosingRule.getRuleRefsInAlt(x, outerAltNum) != null) {
0980:                    // $rulelabel.attr or $ruleref.attr; must be unknown attr
0981:                    String refdRuleName = x;
0982:                    if (label != null) {
0983:                        refdRuleName = enclosingRule.getRuleLabel(x).referencedRuleName;
0984:                    }
0985:                    Rule refdRule = grammar.getRule(refdRuleName);
0986:                    AttributeScope scope = refdRule.getAttributeScope(y);
0987:                    if (scope == null) {
0988:                        ErrorManager.grammarError(
0989:                                ErrorManager.MSG_UNKNOWN_RULE_ATTRIBUTE,
0990:                                grammar, actionToken, refdRuleName, y);
0991:                    } else if (scope.isParameterScope) {
0992:                        ErrorManager.grammarError(
0993:                                ErrorManager.MSG_INVALID_RULE_PARAMETER_REF,
0994:                                grammar, actionToken, refdRuleName, y);
0995:                    } else if (scope.isDynamicRuleScope) {
0996:                        ErrorManager
0997:                                .grammarError(
0998:                                        ErrorManager.MSG_INVALID_RULE_SCOPE_ATTRIBUTE_REF,
0999:                                        grammar, actionToken, refdRuleName, y);
1000:                    }
1001:                }
1002:
1003:            }
1004:
1005:            public void issueInvalidAttributeError(String x,
1006:                    Rule enclosingRule, antlr.Token actionToken, int outerAltNum) {
1007:                //System.out.println("error $"+x);
1008:                if (enclosingRule == null) {
1009:                    // action not in a rule
1010:                    ErrorManager.grammarError(
1011:                            ErrorManager.MSG_ATTRIBUTE_REF_NOT_IN_RULE,
1012:                            grammar, actionToken, x);
1013:                    return;
1014:                }
1015:
1016:                // action is in a rule
1017:                Grammar.LabelElementPair label = enclosingRule.getRuleLabel(x);
1018:                AttributeScope scope = enclosingRule.getAttributeScope(x);
1019:
1020:                if (label != null
1021:                        || enclosingRule.getRuleRefsInAlt(x, outerAltNum) != null
1022:                        || enclosingRule.name.equals(x)) {
1023:                    ErrorManager.grammarError(
1024:                            ErrorManager.MSG_ISOLATED_RULE_SCOPE, grammar,
1025:                            actionToken, x);
1026:                } else if (scope != null && scope.isDynamicRuleScope) {
1027:                    ErrorManager.grammarError(
1028:                            ErrorManager.MSG_ISOLATED_RULE_ATTRIBUTE, grammar,
1029:                            actionToken, x);
1030:                } else {
1031:                    ErrorManager.grammarError(
1032:                            ErrorManager.MSG_UNKNOWN_SIMPLE_ATTRIBUTE, grammar,
1033:                            actionToken, x);
1034:                }
1035:            }
1036:
1037:            // M I S C
1038:
1039:            public StringTemplateGroup getTemplates() {
1040:                return templates;
1041:            }
1042:
1043:            public StringTemplateGroup getBaseTemplates() {
1044:                return baseTemplates;
1045:            }
1046:
1047:            public void setDebug(boolean debug) {
1048:                this.debug = debug;
1049:            }
1050:
1051:            public void setTrace(boolean trace) {
1052:                this.trace = trace;
1053:            }
1054:
1055:            public void setProfile(boolean profile) {
1056:                this.profile = profile;
1057:                if (profile) {
1058:                    setDebug(true); // requires debug events
1059:                }
1060:            }
1061:
1062:            public StringTemplate getRecognizerST() {
1063:                return outputFileST;
1064:            }
1065:
1066:            public String getRecognizerFileName(String name, int type) {
1067:                StringTemplate extST = templates
1068:                        .getInstanceOf("codeFileExtension");
1069:                String suffix = Grammar.grammarTypeToFileNameSuffix[type];
1070:                return name + suffix + extST.toString();
1071:            }
1072:
1073:            /** What is the name of the vocab file generated for this grammar?
1074:             *  Returns null if no .tokens file should be generated.
1075:             */
1076:            public String getVocabFileName() {
1077:                if (grammar.isBuiltFromString()) {
1078:                    return null;
1079:                }
1080:                return grammar.name + VOCAB_FILE_EXTENSION;
1081:            }
1082:
1083:            public void write(StringTemplate code, String fileName)
1084:                    throws IOException {
1085:                long start = System.currentTimeMillis();
1086:                Writer w = tool.getOutputFile(grammar, fileName);
1087:                // Write the output to a StringWriter
1088:                StringTemplateWriter wr = templates.getStringTemplateWriter(w);
1089:                wr.setLineWidth(lineWidth);
1090:                code.write(wr);
1091:                w.close();
1092:                long stop = System.currentTimeMillis();
1093:                //System.out.println("render time for "+fileName+": "+(int)(stop-start)+"ms");
1094:            }
1095:
1096:            /** You can generate a switch rather than if-then-else for a DFA state
1097:             *  if there are no semantic predicates and the number of edge label
1098:             *  values is small enough; e.g., don't generate a switch for a state
1099:             *  containing an edge label such as 20..52330 (the resulting bytecodes
1100:             *  would probably overflow the 64k method size limit).
1101:             */
1102:            protected boolean canGenerateSwitch(DFAState s) {
1103:                if (!GENERATE_SWITCHES_WHEN_POSSIBLE) {
1104:                    return false;
1105:                }
1106:                int size = 0;
1107:                for (int i = 0; i < s.getNumberOfTransitions(); i++) {
1108:                    Transition edge = (Transition) s.transition(i);
1109:                    if (edge.label.isSemanticPredicate()) {
1110:                        return false;
1111:                    }
1112:                    // can't do a switch if the edges are going to require predicates
1113:                    if (edge.label.getAtom() == Label.EOT) {
1114:                        int EOTPredicts = ((DFAState) edge.target)
1115:                                .getUniquelyPredictedAlt();
1116:                        if (EOTPredicts == NFA.INVALID_ALT_NUMBER) {
1117:                            // EOT target has to be a predicate then; no unique alt
1118:                            return false;
1119:                        }
1120:                    }
1121:                    // if target is a state with gated preds, we need to use preds on
1122:                    // this edge then to reach it.
1123:                    if (((DFAState) edge.target)
1124:                            .getGatedPredicatesInNFAConfigurations() != null) {
1125:                        return false;
1126:                    }
1127:                    size += edge.label.getSet().size();
1128:                }
1129:                if (s.getNumberOfTransitions() < MIN_SWITCH_ALTS
1130:                        || size > MAX_SWITCH_CASE_LABELS) {
1131:                    return false;
1132:                }
1133:                return true;
1134:            }
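The size gate at the end of `canGenerateSwitch()` can be sketched in isolation (hypothetical `SwitchHeuristicDemo`; the threshold values are illustrative assumptions, not necessarily ANTLR's constants): too few alternatives and a switch is not worth it, too many case labels and the generated method would grow unreasonably large.

```java
public class SwitchHeuristicDemo {
    // Assumed thresholds for illustration only.
    static final int MIN_SWITCH_ALTS = 3;
    static final int MAX_SWITCH_CASE_LABELS = 300;

    // Mirror of the final size check above: require enough alternatives
    // and a bounded total number of case labels.
    static boolean canSwitch(int numTransitions, int totalLabels) {
        return numTransitions >= MIN_SWITCH_ALTS
                && totalLabels <= MAX_SWITCH_CASE_LABELS;
    }

    public static void main(String[] args) {
        assert !canSwitch(2, 10);     // too few alternatives
        assert !canSwitch(5, 52311);  // e.g. a 20..52330 edge label
        assert canSwitch(5, 40);
        System.out.println("ok");
    }
}
```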
1135:
1136:            /** Create a label to track a token / rule reference's result.
1137:             *  Technically, this is a place where I break model-view separation
1138:             *  as I am creating a variable name that could be invalid in a
1139:             *  target language; however, label ::= <ID><INT> is probably ok in
1140:             *  all languages we care about.
1141:             */
1142:            public String createUniqueLabel(String name) {
1143:                return new StringBuffer().append(name).append(
1144:                        uniqueLabelNumber++).toString();
1145:            }
1146:        }