jinja2.lexer Namespace Reference

Classes

class  _Rule
 
class  Failure
 
class  Lexer
 
class  OptionalLStrip
 
class  Token
 
class  TokenStream
 
class  TokenStreamIterator
 

Functions

str _describe_token_type (str token_type)
 
str describe_token ("Token" token)
 
str describe_token_expr (str expr)
 
int count_newlines (str value)
 
t.List[t.Tuple[str, str]] compile_rules ("Environment" environment)
 
"Lexer" get_lexer ("Environment" environment)
 

Variables

_lexer_cache = LRUCache(50)
 
 whitespace_re = re.compile(r"\s+")
 
 newline_re = re.compile(r"(\r\n|\r|\n)")
 
 string_re
 
 integer_re
 
 float_re
 
 TOKEN_ADD = intern("add")
 
 TOKEN_ASSIGN = intern("assign")
 
 TOKEN_COLON = intern("colon")
 
 TOKEN_COMMA = intern("comma")
 
 TOKEN_DIV = intern("div")
 
 TOKEN_DOT = intern("dot")
 
 TOKEN_EQ = intern("eq")
 
 TOKEN_FLOORDIV = intern("floordiv")
 
 TOKEN_GT = intern("gt")
 
 TOKEN_GTEQ = intern("gteq")
 
 TOKEN_LBRACE = intern("lbrace")
 
 TOKEN_LBRACKET = intern("lbracket")
 
 TOKEN_LPAREN = intern("lparen")
 
 TOKEN_LT = intern("lt")
 
 TOKEN_LTEQ = intern("lteq")
 
 TOKEN_MOD = intern("mod")
 
 TOKEN_MUL = intern("mul")
 
 TOKEN_NE = intern("ne")
 
 TOKEN_PIPE = intern("pipe")
 
 TOKEN_POW = intern("pow")
 
 TOKEN_RBRACE = intern("rbrace")
 
 TOKEN_RBRACKET = intern("rbracket")
 
 TOKEN_RPAREN = intern("rparen")
 
 TOKEN_SEMICOLON = intern("semicolon")
 
 TOKEN_SUB = intern("sub")
 
 TOKEN_TILDE = intern("tilde")
 
 TOKEN_WHITESPACE = intern("whitespace")
 
 TOKEN_FLOAT = intern("float")
 
 TOKEN_INTEGER = intern("integer")
 
 TOKEN_NAME = intern("name")
 
 TOKEN_STRING = intern("string")
 
 TOKEN_OPERATOR = intern("operator")
 
 TOKEN_BLOCK_BEGIN = intern("block_begin")
 
 TOKEN_BLOCK_END = intern("block_end")
 
 TOKEN_VARIABLE_BEGIN = intern("variable_begin")
 
 TOKEN_VARIABLE_END = intern("variable_end")
 
 TOKEN_RAW_BEGIN = intern("raw_begin")
 
 TOKEN_RAW_END = intern("raw_end")
 
 TOKEN_COMMENT_BEGIN = intern("comment_begin")
 
 TOKEN_COMMENT_END = intern("comment_end")
 
 TOKEN_COMMENT = intern("comment")
 
 TOKEN_LINESTATEMENT_BEGIN = intern("linestatement_begin")
 
 TOKEN_LINESTATEMENT_END = intern("linestatement_end")
 
 TOKEN_LINECOMMENT_BEGIN = intern("linecomment_begin")
 
 TOKEN_LINECOMMENT_END = intern("linecomment_end")
 
 TOKEN_LINECOMMENT = intern("linecomment")
 
 TOKEN_DATA = intern("data")
 
 TOKEN_INITIAL = intern("initial")
 
 TOKEN_EOF = intern("eof")
 
dict operators
 
dict reverse_operators = {v: k for k, v in operators.items()}
 
 operator_re
 
 ignored_tokens
 
 ignore_if_empty
 

Detailed Description

Implements a combined Jinja / Python lexer. The ``Lexer`` class handles the
preprocessing: it filters out operators that are not allowed in templates
(such as the bitshift operators) and separates template code from Python code
in expressions.
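
As a minimal sketch of how this module is normally reached, assuming only a
default jinja2 ``Environment`` (whose ``lexer`` property and ``lex()`` method
delegate to the cached lexer returned by ``get_lexer``):

from jinja2 import Environment

env = Environment()

# env.lex() yields raw (lineno, token_type, value) tuples produced by the
# environment's cached Lexer; token_type values are the TOKEN_* names below.
for lineno, token_type, value in env.lex("Hello {{ name }}!"):
    print(lineno, token_type, repr(value))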

Function Documentation

◆ compile_rules()

t.List[t.Tuple[str, str]] jinja2.lexer.compile_rules ( "Environment"  environment)
Compiles all the rules from the environment into a list of rules.
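
A small usage sketch, assuming a default ``Environment`` (block, variable and
comment delimiters left at ``{% %}``, ``{{ }}`` and ``{# #}``); each returned
tuple pairs a begin-token name with the escaped start delimiter, ordered so
that the longest delimiters are tried first:

from jinja2 import Environment
from jinja2.lexer import compile_rules

for token_name, pattern in compile_rules(Environment()):
    # e.g. ('comment_begin', escaped '{#'), ('block_begin', escaped '{%'), ...
    print(token_name, pattern)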

◆ count_newlines()

int jinja2.lexer.count_newlines ( str  value)
Count the number of newline characters in the string.  This is
useful for extensions that filter a stream.
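
For example, a ``\r\n`` pair counts as a single newline because the function
matches against ``newline_re``:

from jinja2.lexer import count_newlines

assert count_newlines("a\r\nb\nc") == 2  # "\r\n" and "\n" -> two newlines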

◆ describe_token()

str jinja2.lexer.describe_token ( "Token"  token)
Returns a description of the token.

◆ describe_token_expr()

str jinja2.lexer.describe_token_expr ( str  expr)
Like `describe_token` but for token expressions.
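
A short sketch of both helpers, using the token types defined in this module:

from jinja2.lexer import Token, describe_token, describe_token_expr

tok = Token(1, "name", "items")
print(describe_token(tok))                 # name tokens describe as their value: items
print(describe_token_expr("block_end"))    # -> end of statement block
print(describe_token_expr("name:endfor"))  # -> endfor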

◆ get_lexer()

"Lexer" jinja2.lexer.get_lexer ( "Environment"  environment)
Return a lexer which is probably cached.
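
The cache is keyed on the environment's lexer-related settings (via the
module-level ``_lexer_cache``), so identically configured environments share
one ``Lexer``; a minimal sketch:

from jinja2 import Environment
from jinja2.lexer import get_lexer

# Two default environments hit the same cache entry and get the same Lexer.
assert get_lexer(Environment()) is get_lexer(Environment())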

Variable Documentation

◆ float_re

jinja2.lexer.float_re
Initial value:
= re.compile(
    # floating-point literals: digits with optional "_" separators, plus a
    # decimal part and/or exponent (condensed from the verbose source pattern)
    r"(?<!\.)(\d+_)*\d+((\.(\d+_)*\d+)?e[+\-]?(\d+_)*\d+|\.(\d+_)*\d+)",
    re.IGNORECASE | re.VERBOSE,
)

◆ ignore_if_empty

jinja2.lexer.ignore_if_empty
Initial value:
= frozenset(
    [TOKEN_WHITESPACE, TOKEN_DATA, TOKEN_COMMENT, TOKEN_LINECOMMENT]
)

◆ ignored_tokens

jinja2.lexer.ignored_tokens
Initial value:
= frozenset(
    [
        TOKEN_COMMENT_BEGIN,
        TOKEN_COMMENT,
        TOKEN_COMMENT_END,
        TOKEN_WHITESPACE,
        TOKEN_LINECOMMENT_BEGIN,
        TOKEN_LINECOMMENT_END,
        TOKEN_LINECOMMENT,
    ]
)

◆ integer_re

jinja2.lexer.integer_re
Initial value:
= re.compile(
    # integer literals: binary, octal, hex, or decimal digits with optional
    # "_" separators (condensed from the verbose source pattern)
    r"(0b(_?[0-1])+|0o(_?[0-7])+|0x(_?[\da-f])+|[1-9](_?\d)*|0(_?0)*)",
    re.IGNORECASE | re.VERBOSE,
)

◆ operator_re

jinja2.lexer.operator_re
Initial value:
= re.compile(
    f"({'|'.join(re.escape(x) for x in sorted(operators, key=lambda x: -len(x)))})"
)
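
Because the alternation is built from the operators sorted longest-first,
multi-character operators win over their single-character prefixes; for
example:

from jinja2.lexer import operator_re, operators

match = operator_re.match("**")
print(match.group())             # '**', not two '*' tokens
print(operators[match.group()])  # -> pow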

◆ operators

dict jinja2.lexer.operators
Initial value:
= {
    "+": TOKEN_ADD,
    "-": TOKEN_SUB,
    "/": TOKEN_DIV,
    "//": TOKEN_FLOORDIV,
    "*": TOKEN_MUL,
    "%": TOKEN_MOD,
    "**": TOKEN_POW,
    "~": TOKEN_TILDE,
    "[": TOKEN_LBRACKET,
    "]": TOKEN_RBRACKET,
    "(": TOKEN_LPAREN,
    ")": TOKEN_RPAREN,
    "{": TOKEN_LBRACE,
    "}": TOKEN_RBRACE,
    "==": TOKEN_EQ,
    "!=": TOKEN_NE,
    ">": TOKEN_GT,
    ">=": TOKEN_GTEQ,
    "<": TOKEN_LT,
    "<=": TOKEN_LTEQ,
    "=": TOKEN_ASSIGN,
    ".": TOKEN_DOT,
    ":": TOKEN_COLON,
    "|": TOKEN_PIPE,
    ",": TOKEN_COMMA,
    ";": TOKEN_SEMICOLON,
}

◆ string_re

jinja2.lexer.string_re
Initial value:
= re.compile(
    r"('([^'\\]*(?:\\.[^'\\]*)*)'" r'|"([^"\\]*(?:\\.[^"\\]*)*)")', re.S
)