wg-backend-django/acer-env/lib/python3.10/site-packages/sqlparse/__pycache__/lexer.cpython-310.pyc

34 lines
2.0 KiB

2022-11-30 15:58:16 +07:00
[Binary content: CPython 3.10 bytecode, not human-readable. Recoverable strings identify it as compiled from /home/infidel/Sync/Project/ocp-wg-backend/acer-env/lib/python3.10/site-packages/sqlparse/lexer.py (module docstring: "SQL Lexer"). The module imports io.TextIOBase, sqlparse.tokens, sqlparse.keywords.SQL_REGEX, and sqlparse.utils.consume, and defines:

  - class Lexer — docstring: "Lexer. Empty class. Leaving for
    backwards-compatibility" — with a static method
    get_tokens(text, encoding=None) that returns an iterable of
    (tokentype, value) pairs generated from `text`. It reads file-like
    (TextIOBase) input, accepts str as-is, decodes bytes with the given
    encoding or utf-8 (falling back to unicode-escape on
    UnicodeDecodeError), and raises
    TypeError("Expected text or file-like object, got {!r}") for anything
    else. It then scans the text position by position against SQL_REGEX,
    yielding (action, match) for _TokenType actions, calling callable
    actions on the match, consuming past each match, and yielding
    tokens.Error for unmatched characters.

  - def tokenize(sql, encoding=None) — docstring: "Tokenize *sql* using
    the :class:`Lexer` and return a 2-tuple stream of
    ``(token type, value)`` items." It delegates to Lexer.get_tokens.]
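The scan loop the bytecode implements can be sketched in plain Python. This is a stdlib-only reconstruction based on the docstrings and symbol names visible in the dump, not the installed module's exact source: the toy SQL_REGEX table and the token-type strings here are stand-ins for sqlparse.keywords.SQL_REGEX and sqlparse.tokens, and the real table also routes some matches through callables.

```python
import re
from io import TextIOBase
from itertools import islice

# Toy stand-in for sqlparse.keywords.SQL_REGEX: a list of
# (compiled_pattern.match, token_type) pairs, tried in order.
KEYWORD, NAME, WHITESPACE, PUNCTUATION, ERROR = (
    "Keyword", "Name", "Whitespace", "Punctuation", "Error")

SQL_REGEX = [
    (re.compile(r"SELECT|FROM|WHERE", re.IGNORECASE).match, KEYWORD),
    (re.compile(r"[A-Za-z_][A-Za-z0-9_]*").match, NAME),
    (re.compile(r"\s+").match, WHITESPACE),
    (re.compile(r"[();,*]").match, PUNCTUATION),
]


def consume(iterator, n):
    """Advance *iterator* n steps (the itertools 'consume' recipe;
    sqlparse.utils.consume serves the same purpose)."""
    next(islice(iterator, n, n), None)


def get_tokens(text, encoding=None):
    """Yield (tokentype, value) pairs from *text*, mirroring the
    input handling and scan loop described by the bytecode above."""
    if isinstance(text, TextIOBase):
        text = text.read()
    if isinstance(text, str):
        pass
    elif isinstance(text, bytes):
        if encoding:
            text = text.decode(encoding)
        else:
            try:
                text = text.decode("utf-8")
            except UnicodeDecodeError:
                text = text.decode("unicode-escape")
    else:
        raise TypeError(
            "Expected text or file-like object, got {!r}".format(type(text)))

    iterable = enumerate(text)
    for pos, char in iterable:
        for rexmatch, action in SQL_REGEX:
            m = rexmatch(text, pos)
            if not m:
                continue
            yield action, m.group()
            # Skip the characters this match already covered.
            consume(iterable, m.end() - pos - 1)
            break
        else:
            # No pattern matched at this position.
            yield ERROR, char
```

For example, `list(get_tokens(b"SELECT a FROM t;"))` decodes the bytes and yields keyword, whitespace, name, and punctuation pairs in order, while an unmatched character such as `@` comes back as an Error token.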