Qucs-S S-parameter Viewer & RF Synthesis Tools

pip._vendor.pygments.lexer.Lexer Class Reference


Public Member Functions

- __init__(self, **options)
- __repr__(self)
- add_filter(self, filter_, **options)
- analyse_text(text)
- get_tokens(self, text, unfiltered=False)
- get_tokens_unprocessed(self, text)

Public Member Functions inherited from pip._vendor.pygments.lexer.LexerMeta

- __new__(mcs, name, bases, d)

Public Attributes

- options
- stripnl
- stripall
- ensurenl
- tabsize
- encoding
- filters

Static Public Attributes

- name = None
- list aliases = []
- list filenames = []
- list alias_filenames = []
- list mimetypes = []
- int priority = 0
- url = None
Lexer for a specific language.
See also :doc:`lexerdevelopment`, a high-level guide to writing
lexers.
Lexer classes have attributes used for choosing the most appropriate
lexer based on various criteria.
.. autoattribute:: name
:no-value:
.. autoattribute:: aliases
:no-value:
.. autoattribute:: filenames
:no-value:
.. autoattribute:: alias_filenames
.. autoattribute:: mimetypes
:no-value:
.. autoattribute:: priority
Lexers included in Pygments should have an additional attribute:
.. autoattribute:: url
:no-value:
You can pass options to the constructor. The basic options recognized
by all lexers and processed by the base `Lexer` class are:
``stripnl``
    Strip leading and trailing newlines from the input (default: True).
``stripall``
    Strip all leading and trailing whitespace from the input
    (default: False).
``ensurenl``
    Make sure that the input ends with a newline (default: True). This
    is required for some lexers that consume input linewise.

    .. versionadded:: 1.3
``tabsize``
    If given and greater than 0, expand tabs in the input (default: 0).
``encoding``
    If given, must be an encoding name. This encoding will be used to
    convert the input string to Unicode, if it is not already a Unicode
    string (default: ``'guess'``, which uses a simple UTF-8 / Locale /
    Latin1 detection). Can also be ``'chardet'`` to use the chardet
    library, if it is installed.
``inencoding``
    Overrides the ``encoding`` if given.
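The effect of the basic preprocessing options can be sketched as a standalone helper. This is a simplified illustration of the behavior described above, not the actual Pygments implementation:

```python
def preprocess(text, stripnl=True, stripall=False, ensurenl=True, tabsize=0):
    """Mimic the basic input preprocessing options (simplified sketch)."""
    if stripall:
        # stripall removes all leading/trailing whitespace
        text = text.strip()
    elif stripnl:
        # stripnl removes only leading/trailing newlines
        text = text.strip('\n')
    if tabsize > 0:
        text = text.expandtabs(tabsize)
    if ensurenl and not text.endswith('\n'):
        text += '\n'
    return text
```

For example, `preprocess('\n  x\n\n')` keeps the indentation but drops the surrounding newlines, yielding `'  x\n'`.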
pip._vendor.pygments.lexer.Lexer.__init__(self, **options)
This constructor takes arbitrary options as keyword arguments.
Every subclass must first process its own options and then call
the `Lexer` constructor, since it processes the basic
options like `stripnl`.
An example looks like this:
.. sourcecode:: python

   def __init__(self, **options):
       self.compress = options.get('compress', '')
       Lexer.__init__(self, **options)
As these options must all be specifiable as strings (due to the
command line usage), there are various utility functions
available to help with that, see `Utilities`_.
Reimplemented in pip._vendor.pygments.lexers.python.PythonConsoleLexer, pip._vendor.pygments.lexer.DelegatingLexer, and pip._vendor.pygments.formatters.latex.LatexEmbeddedLexer.
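Because options may arrive as strings from the command line, a coercion helper is typically used. The sketch below illustrates the idea behind the boolean-option helper in `pygments.util`; it is a simplified stand-in, not the library's actual code:

```python
def get_bool_opt(options, name, default=None):
    """Coerce a string-or-bool option value to bool (simplified sketch
    of the option helpers in pygments.util)."""
    value = options.get(name, default)
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        if value.lower() in ('1', 'yes', 'true', 'on'):
            return True
        if value.lower() in ('0', 'no', 'false', 'off'):
            return False
    raise ValueError(f'invalid bool option {name!r}: {value!r}')
```

With this, a subclass constructor can accept both `compress=True` and `compress='yes'` from string-based configuration.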
pip._vendor.pygments.lexer.Lexer.add_filter(self, filter_, **options)
Add a new stream filter to this lexer.
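Filters registered this way are applied, in order, to the token stream produced by the lexer. The following standalone sketch shows the general idea of chaining filters over a `(tokentype, value)` stream; the filter and token types are hypothetical, simplified stand-ins for the Pygments machinery:

```python
def apply_filters(stream, filters):
    """Run a token stream through each filter in order (simplified sketch
    of how Lexer.get_tokens applies self.filters)."""
    for f in filters:
        stream = f(stream)
    return stream

def uppercase_values(stream):
    # Hypothetical filter: uppercase every token value.
    for ttype, value in stream:
        yield ttype, value.upper()

tokens = [('Keyword', 'def'), ('Name', 'f')]
result = list(apply_filters(iter(tokens), [uppercase_values]))
# result == [('Keyword', 'DEF'), ('Name', 'F')]
```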
pip._vendor.pygments.lexer.Lexer.analyse_text(text)
A static method which is called for lexer guessing. It should analyse the text and return a float in the range from ``0.0`` to ``1.0``. If it returns ``0.0``, the lexer will not be selected as the most probable one; if it returns ``1.0``, it will be selected immediately. This is used by `guess_lexer`.

The `LexerMeta` metaclass automatically wraps this function so that it works like a static method (no ``self`` or ``cls`` parameter) and the return value is automatically converted to `float`. If the return value is an object that is boolean `False`, it is treated the same as if the return value were ``0.0``.
Reimplemented in pip._vendor.pygments.lexers.python.PythonLexer, and pip._vendor.pygments.lexers.python.NumPyLexer.
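A typical override accumulates heuristic evidence and returns a capped score. This sketch is a hypothetical heuristic, not the real `PythonLexer.analyse_text`:

```python
def analyse_text(text):
    """Hypothetical lexer-guessing heuristic: score Python-looking text
    in the range 0.0..1.0 (illustrative sketch only)."""
    score = 0.0
    if 'def ' in text or 'class ' in text:
        score += 0.3   # function/class definitions are a strong hint
    if 'import ' in text:
        score += 0.2   # imports add some confidence
    return min(score, 1.0)
```

`guess_lexer` would compare such scores across all candidate lexers and pick the highest.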
pip._vendor.pygments.lexer.Lexer.get_tokens(self, text, unfiltered=False)
This method is the basic interface of a lexer. It is called by the `highlight()` function. It must process the text and return an iterable of ``(tokentype, value)`` pairs from `text`.

Normally, you don't need to override this method. The default implementation processes the options recognized by all lexers (`stripnl`, `stripall` and so on), and then yields all tokens from `get_tokens_unprocessed()`, with the ``index`` dropped.

If `unfiltered` is set to `True`, the filtering mechanism is bypassed even if filters are defined.
pip._vendor.pygments.lexer.Lexer.get_tokens_unprocessed(self, text)
This method should process the text and return an iterable of ``(index, tokentype, value)`` tuples where ``index`` is the starting position of the token within the input text. It must be overridden by subclasses. It is recommended to implement it as a generator to maximize effectiveness.
Reimplemented in pip._vendor.pygments.formatters.latex.LatexEmbeddedLexer, pip._vendor.pygments.lexer.DelegatingLexer, pip._vendor.pygments.lexers.python.NumPyLexer, pip._vendor.pygments.lexer.RegexLexer, pip._vendor.pygments.lexer.ProfilingRegexLexer, and pip._vendor.pygments.lexer.ExtendedRegexLexer.
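The relationship between the two token methods can be sketched with a toy tokenizer. The token type `'Word'` and the whitespace-splitting rule are illustrative assumptions, not Pygments behavior; the point is the ``(index, tokentype, value)`` contract and how `get_tokens` drops the index:

```python
def get_tokens_unprocessed(text):
    """Toy tokenizer: yield (index, tokentype, value) for each
    whitespace-separated word, as a generator (simplified sketch)."""
    pos = 0
    for word in text.split():
        idx = text.index(word, pos)   # starting position within the input
        yield idx, 'Word', word
        pos = idx + len(word)

def get_tokens(text):
    """Mirror the default implementation: drop the index, keep
    (tokentype, value) pairs."""
    for _, ttype, value in get_tokens_unprocessed(text):
        yield ttype, value

list(get_tokens('a bb'))  # [('Word', 'a'), ('Word', 'bb')]
```

Implementing `get_tokens_unprocessed` as a generator, as here, avoids materializing the whole token list for large inputs.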