wg-backend-django/dell-env/lib/python3.11/site-packages/sqlparse/__pycache__/lexer.cpython-311.pyc


2023-10-30 14:40:43 +07:00
# Reconstructed source for the compiled module above (sqlparse/lexer.py as
# shipped in sqlparse 0.4.x). The .pyc body is binary and unreadable as text;
# the docstrings, imports, and names below are recovered from its constant
# and name tables.

"""SQL Lexer"""

from io import TextIOBase

from sqlparse import tokens
from sqlparse.keywords import SQL_REGEX
from sqlparse.utils import consume


class Lexer:
    """Lexer
    Empty class. Leaving for backwards-compatibility
    """

    @staticmethod
    def get_tokens(text, encoding=None):
        """
        Return an iterable of (tokentype, value) pairs generated from
        `text`. If `unfiltered` is set to `True`, the filtering mechanism
        is bypassed even if filters are defined.

        Also preprocess the text, i.e. expand tabs and strip it if
        wanted and applies registered filters.

        Split ``text`` into (tokentype, text) pairs.

        ``stack`` is the initial stack (default: ``['root']``)
        """
        if isinstance(text, TextIOBase):
            text = text.read()

        if isinstance(text, str):
            pass
        elif isinstance(text, bytes):
            if encoding:
                text = text.decode(encoding)
            else:
                try:
                    text = text.decode('utf-8')
                except UnicodeDecodeError:
                    text = text.decode('unicode-escape')
        else:
            raise TypeError("Expected text or file-like object, got {!r}"
                            .format(type(text)))

        iterable = enumerate(text)
        for pos, char in iterable:
            for rexmatch, action in SQL_REGEX:
                m = rexmatch(text, pos)

                if not m:
                    continue
                elif isinstance(action, tokens._TokenType):
                    yield action, m.group()
                elif callable(action):
                    yield action(m.group())

                consume(iterable, m.end() - pos - 1)
                break
            else:
                yield tokens.Error, char


def tokenize(sql, encoding=None):
    """Tokenize sql.

    Tokenize *sql* using the :class:`Lexer` and return a 2-tuple stream
    of ``(token type, value)`` items.
    """
    return Lexer().get_tokens(sql, encoding)