Qucs-S S-parameter Viewer & RF Synthesis Tools


Public Member Functions
- __init__(self, left, right, lang, **options)
- get_tokens_unprocessed(self, text)

Public Member Functions inherited from pip._vendor.pygments.lexer.Lexer
- __repr__(self)
- add_filter(self, filter_, **options)
- analyse_text(text)
- get_tokens(self, text, unfiltered=False)

Public Member Functions inherited from pip._vendor.pygments.lexer.LexerMeta
- __new__(mcs, name, bases, d)

Public Attributes
- left
- right
- lang

Public Attributes inherited from pip._vendor.pygments.lexer.Lexer
- options
- stripnl
- stripall
- ensurenl
- tabsize
- encoding
- filters

Protected Member Functions
- _find_safe_escape_tokens(self, text)
- _filter_to(self, it, pred)
- _find_escape_tokens(self, text)

Additional Inherited Members

Static Public Attributes inherited from pip._vendor.pygments.lexer.Lexer
- name = None
- list aliases = []
- list filenames = []
- list alias_filenames = []
- list mimetypes = []
- int priority = 0
- url = None
This lexer takes one lexer as argument, the lexer for the language being formatted, and the left and right delimiters for escaped text. First everything is scanned using the language lexer to obtain strings and comments. All other consecutive tokens are merged and the resulting text is scanned for escaped segments, which are given the Token.Escape type. Finally text that is not escaped is scanned again with the language lexer.
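The scan-merge-rescan behavior described above can be sketched with a short usage example. The `|` delimiters and the sample source line are illustrative choices, not part of the API; the imports use the vendored copy documented here, but the standalone `pygments` package exposes the same classes.

```python
# Sketch: wrap a Python lexer so that text between "|" delimiters is
# emitted as Token.Escape instead of being lexed as Python.
from pip._vendor.pygments.formatters.latex import LatexEmbeddedLexer
from pip._vendor.pygments.lexers import PythonLexer
from pip._vendor.pygments.token import Token

lexer = LatexEmbeddedLexer('|', '|', PythonLexer())

# "\alpha" sits between the delimiters, outside any string or comment,
# so it should come back as a single Token.Escape with the delimiters
# stripped; the rest of the line is lexed normally as Python.
tokens = list(lexer.get_tokens_unprocessed('x = |\\alpha| + 1\n'))
escapes = [value for _, ttype, value in tokens if ttype is Token.Escape]
```

Note that delimiters occurring inside strings or comments are deliberately left alone, per the "safe" scan described above.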
pip._vendor.pygments.formatters.latex.LatexEmbeddedLexer.__init__(self, left, right, lang, **options)
This constructor takes arbitrary options as keyword arguments.
Every subclass must first process its own options and then call
the `Lexer` constructor, since it processes the basic
options like `stripnl`.
An example looks like this:
.. sourcecode:: python

    def __init__(self, **options):
        self.compress = options.get('compress', '')
        Lexer.__init__(self, **options)
As these options must all be specifiable as strings (due to the
command line usage), there are various utility functions
available to help with that, see `Utilities`_.
Reimplemented from pip._vendor.pygments.lexer.Lexer.
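The string-to-value conversion mentioned above is handled by helpers in `pygments.util`; a small sketch (the option dict here is just an example of what a command line might produce):

```python
# Sketch: the get_*_opt helpers accept either native values or their
# string spellings, which is what makes string-only CLI options workable.
from pip._vendor.pygments.util import get_bool_opt, get_int_opt

options = {'stripnl': 'false', 'tabsize': '4'}    # as parsed from a CLI
stripnl = get_bool_opt(options, 'stripnl', True)  # 'false' -> False
tabsize = get_int_opt(options, 'tabsize', 8)      # '4' -> 4
```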
pip._vendor.pygments.formatters.latex.LatexEmbeddedLexer._filter_to (protected)

Keep only the tokens that match `pred`; merge the others together.

pip._vendor.pygments.formatters.latex.LatexEmbeddedLexer._find_escape_tokens (protected)

Find escape tokens within text; yield token=None otherwise.

pip._vendor.pygments.formatters.latex.LatexEmbeddedLexer._find_safe_escape_tokens (protected)

Find escape tokens that are not in strings or comments.
pip._vendor.pygments.formatters.latex.LatexEmbeddedLexer.get_tokens_unprocessed(self, text)
This method should process the text and return an iterable of ``(index, tokentype, value)`` tuples where ``index`` is the starting position of the token within the input text. It must be overridden by subclasses. It is recommended to implement it as a generator to maximize effectiveness.
Reimplemented from pip._vendor.pygments.lexer.Lexer.
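A minimal sketch of the override contract, using a hypothetical `WordLexer` (not part of Pygments) that tags words and whitespace runs:

```python
# Sketch: get_tokens_unprocessed implemented as a generator yielding
# (index, tokentype, value) tuples, as the contract above requires.
import re

from pip._vendor.pygments.lexer import Lexer
from pip._vendor.pygments.token import Name, Whitespace

class WordLexer(Lexer):
    """Hypothetical lexer: each run of non-space characters is a Name."""
    name = 'Word'

    def get_tokens_unprocessed(self, text):
        for m in re.finditer(r'\S+|\s+', text):
            ttype = Name if m.group().strip() else Whitespace
            # m.start() is the token's offset into the input text
            yield m.start(), ttype, m.group()
```

For input `'ab cd'` this yields tuples starting at indices 0, 2, and 3, so downstream consumers can map every token back to its exact position in the source.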