Vega Strike Python Modules Documentation 0.5.1
Documentation of the "Modules" folder of Vega Strike.
Functions
    def group(choices)
    def any(choices)
    def maybe(choices)
Variables
    string __author__ = 'Ka-Ping Yee <ping@lfw.org>'
    __credits__ = \
    list __all__ = [x for x in dir(token) if x[0] != '_']
    COMMENT = N_TOKENS
    int NL = N_TOKENS+1
    string Whitespace = r'[ \f\t]*'
    string Comment = r'#[^\r\n]*'
    tuple Ignore = Whitespace+any(r'\\\r?\n' + Whitespace)
    string Name = r'[a-zA-Z_]\w*'
    string Hexnumber = r'0[xX][\da-fA-F]*[lL]?'
    string Octnumber = r'0[0-7]*[lL]?'
    string Decnumber = r'[1-9]\d*[lL]?'
    tuple Intnumber = group(Hexnumber, Octnumber, Decnumber)
    string Exponent = r'[eE][-+]?\d+'
    tuple Pointfloat = group(r'\d+\.\d*', r'\.\d+')
    string Expfloat = r'\d+'
    tuple Floatnumber = group(Pointfloat, Expfloat)
    tuple Imagnumber = group(r'\d+[jJ]', Floatnumber + r'[jJ]')
    tuple Number = group(Imagnumber, Floatnumber, Intnumber)
    string Single = r"[^'\\]*(?:\\.[^'\\]*)*'"
    string Double = r'[^"\\]*(?:\\.[^"\\]*)*"'
    string Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''"
    string Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""'
Tokenization help for Python programs.

generate_tokens(readline) is a generator that breaks a stream of text into Python tokens. It accepts a readline-like method which is called repeatedly to get the next line of input (or "" for EOF). It generates 5-tuples with these members:

    the token type (see token.py)
    the token (a string)
    the starting (row, column) indices of the token (a 2-tuple of ints)
    the ending (row, column) indices of the token (a 2-tuple of ints)
    the original line (string)

It is designed to match the working of the Python tokenizer exactly, except that it produces COMMENT tokens for comments and gives type OP for all operators.

The older entry points

    tokenize_loop(readline, tokeneater)
    tokenize(readline, tokeneater=printtoken)

are the same, except that instead of generating tokens, tokeneater is a callback function to which the 5 fields described above are passed as 5 arguments each time a new token is found.
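This page documents a Python 2-era copy of tokenize.py bundled with Vega Strike; the interface described above survives in the modern standard library, so a minimal usage sketch (against today's stdlib tokenize, assumed here to behave as described) looks like this:

```python
import io
import token
import tokenize

# generate_tokens() calls readline once per line until it returns ""
# (EOF); each yielded item unpacks as the documented 5-tuple of
# (type, string, (start row, col), (end row, col), original line).
source = io.StringIO("x = 1  # a comment\n")
for tok_type, tok_str, start, end, line in tokenize.generate_tokens(source.readline):
    print(token.tok_name[tok_type], repr(tok_str), start, end)
```

Note that the comment shows up as a COMMENT token rather than being discarded, exactly as the description promises.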
def tokenize.any(choices)
Definition at line 45 of file tokenize.py.
def tokenize.group(choices)
Definition at line 44 of file tokenize.py.
def tokenize.maybe(choices)
Definition at line 46 of file tokenize.py.
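All three helpers return regular-expression strings, not compiled patterns. In CPython's tokenize.py of this vintage they are one-line combinators; the sketch below reproduces them and rebuilds the Ignore pattern from the variable table:

```python
import re

# Sketch of the regex helpers as they appear in CPython's tokenize.py
# of this era; each builds and returns a pattern *string*.
def group(*choices):
    # Alternation wrapped in a group: group('a', 'b') -> '(a|b)'
    return '(' + '|'.join(choices) + ')'

def any(*choices):  # shadows builtins.any, as in tokenize.py itself
    # Zero or more repetitions: any('a', 'b') -> '(a|b)*'
    return group(*choices) + '*'

def maybe(*choices):
    # Optional occurrence: maybe('a', 'b') -> '(a|b)?'
    return group(*choices) + '?'

# Rebuild Ignore exactly as the variable table shows: optional
# whitespace plus any number of backslash line continuations.
Whitespace = r'[ \f\t]*'
Ignore = Whitespace + any(r'\\\r?\n' + Whitespace)
assert re.match(Ignore, '  \\\n  ')  # consumes a backslash continuation
```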
list __all__ = [x for x in dir(token) if x[0] != '_']
Definition at line 35 of file tokenize.py.
string __author__ = 'Ka-Ping Yee <ping@lfw.org>'
Definition at line 27 of file tokenize.py.
__credits__ = \
Definition at line 28 of file tokenize.py.
COMMENT = N_TOKENS
Definition at line 38 of file tokenize.py.
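COMMENT and NL extend the token-type numbering taken from token.py (hence the values N_TOKENS and N_TOKENS+1): the tokenizer emits them where Python's C tokenizer silently drops comments and blank-line newlines. A quick sketch, assuming the modern stdlib tokenize, which keeps this convention:

```python
import io
import tokenize

# A comment line followed by a blank line yields COMMENT and NL tokens,
# type numbers layered on top of those defined by token.py.
src = io.StringIO("# just a comment\n\n")
types = [tok[0] for tok in tokenize.generate_tokens(src.readline)]
assert tokenize.COMMENT in types
assert tokenize.NL in types
```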
string Comment = r'#[^\r\n]*'
Definition at line 49 of file tokenize.py.
string Decnumber = r'[1-9]\d*[lL]?'
Definition at line 55 of file tokenize.py.
string Double = r'[^"\\]*(?:\\.[^"\\]*)*"'
Definition at line 67 of file tokenize.py.
string Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""'
Definition at line 71 of file tokenize.py.
string Expfloat = r'\d+'
Definition at line 59 of file tokenize.py.
string Exponent = r'[eE][-+]?\d+'
Definition at line 57 of file tokenize.py.
tuple Floatnumber = group(Pointfloat, Expfloat)
Definition at line 60 of file tokenize.py.
string Hexnumber = r'0[xX][\da-fA-F]*[lL]?'
Definition at line 53 of file tokenize.py.
tuple Ignore = Whitespace+any(r'\\\r?\n' + Whitespace)
Definition at line 50 of file tokenize.py.
tuple Imagnumber = group(r'\d+[jJ]', Floatnumber + r'[jJ]')
Definition at line 61 of file tokenize.py.
string Name = r'[a-zA-Z_]\w*'
Definition at line 51 of file tokenize.py.
int NL = N_TOKENS+1
Definition at line 40 of file tokenize.py.
tuple Number = group(Imagnumber, Floatnumber, Intnumber)
Definition at line 62 of file tokenize.py.
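The numeric fragments compose through group() into Intnumber and ultimately Number. A sketch that rebuilds Intnumber from the values listed above and checks a few Python 2-era literals (the l/L long suffix and leading-zero octals are no longer valid Python 3 syntax, but the patterns here still recognize them):

```python
import re

def group(*choices):
    # Same combinator as tokenize.group(): alternation in a group.
    return '(' + '|'.join(choices) + ')'

# Fragments exactly as listed above.
Hexnumber = r'0[xX][\da-fA-F]*[lL]?'
Octnumber = r'0[0-7]*[lL]?'
Decnumber = r'[1-9]\d*[lL]?'
Intnumber = group(Hexnumber, Octnumber, Decnumber)

int_re = re.compile(Intnumber + r'$')  # anchored, to test whole literals
for literal in ('0x1F', '0755', '42', '42L', '0xfeL'):
    assert int_re.match(literal), literal
assert not int_re.match('3.14')  # floats are matched by Floatnumber instead
```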
string Octnumber = r'0[0-7]*[lL]?'
Definition at line 54 of file tokenize.py.
tuple Pointfloat = group(r'\d+\.\d*', r'\.\d+')
Definition at line 58 of file tokenize.py.
string Single = r"[^'\\]*(?:\\.[^'\\]*)*'"
Definition at line 65 of file tokenize.py.
string Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''"
Definition at line 69 of file tokenize.py.
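The Single3 and Double3 patterns match the body of a triple-quoted string: they apply to the text just after the opening quotes and consume through the closing triple, while allowing escaped characters and lone quotes in between. A quick sketch:

```python
import re

# Patterns exactly as listed above; applied to the text that follows an
# opening ''' or """ delimiter.
Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''"
Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""'

assert re.match(Single3, "it's fine'''")      # lone quote allowed inside
assert re.match(Double3, 'say \\" often"""')  # escaped quote allowed inside
assert not re.match(Single3, "never closed")  # no closing ''' -> no match
```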
string Whitespace = r'[ \f\t]*'
Definition at line 48 of file tokenize.py.