`zinteger.py` contains dynamically-generated wrappers for ctypes integers, with the ability to error-check in cases of overflow and underflow, in just under 460 lines of code.
To use `zinteger.py`, place the file in your working directory. You can do so easily with this command:

```sh
wget -O <subfolder>/zinteger.py https://gist.githubusercontent.com/Chubek/27eec6435df6b30684dee201dc5edf4b/raw/83ab6def8aad495dd7ab2eb3a283656ef5d04959/zinteger.py
```
Then you can import one of the following 8 dynamically-generated wrappers. You may perform any of the unary, binary, and in-place operators that `int` objects support, with `int` objects, with the wrappers themselves, or with the ctypes type that the wrapper corresponds to. For example, `ZUint8` can be added, subtracted, etc. with `ZUint8`, `int`, and `c_uint8`/`c_ubyte`.
Example:

```python
from zinteger import ZUint8, ZUint16, ZUint32, ZUint64
from zinteger import ZInt8, ZInt16, ZInt32, ZInt64
# or
from zinteger import *

# unsigned 8-bit integer
u8 = ZUint8(12)
int1 = 12
print(u8 + int1)  # 24
print(u8 * 2)     # 24

# this will warn you of overflow
u16 = ZUint16(0xffff1)
# this will warn you of underflow
i8 = ZInt8(-0x81)
# this will abort because of overflow
i32 = ZInt32(0xffffffff, abort=True)
# this will not warn
i64 = ZInt64(-1)

assert hash(ZUint8(8)) == hash(8), "Error"
assert f"{ZUint8(255):b} + {ZUint8(255):08b}" == f"{255:b} + {255:08b}", "Error"
```
The table below shows the mapping of types between Zinteger objects and ctypes objects. Remember that the third column may differ on your system (the values shown are typical for a 64-bit Linux).

| Zinteger | cint | ctype |
|---|---|---|
| ZUint8 | c_uint8 | c_ubyte |
| ZUint16 | c_uint16 | c_ushort |
| ZUint32 | c_uint32 | c_uint |
| ZUint64 | c_uint64 | c_ulong / c_ulonglong |
| ZInt8 | c_int8 | c_byte |
| ZInt16 | c_int16 | c_short |
| ZInt32 | c_int32 | c_int |
| ZInt64 | c_int64 | c_long / c_longlong |
The difference between `cint` and `ctype` is that `ctype` mirrors the actual identifier for the type in C, whereas `cint` mirrors the typedef from `stdint.h` on your system. The mapping may vary from system to system: `c_uint64` may map to either `c_ulong` or `c_ulonglong`, for example.
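If you want to see what the fixed-width names alias to on your machine, ctypes exposes this directly. A quick check (the output shown is from a typical 64-bit Linux box and may differ on yours):

```python
import ctypes

# The stdint-style names are just aliases for the concrete C types,
# so printing them reveals the mapping on this system.
print(ctypes.c_uint8)   # <class 'ctypes.c_ubyte'>
print(ctypes.c_uint64)  # <class 'ctypes.c_ulonglong'>, or c_ulong elsewhere
```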
This may not seem useful at first, but if you use Python as a utility for low-level programming like I do, it is necessary to set bounds on your integers. It has often happened to me that I mistakenly shifted a number beyond the bit limit required of that integer. My main use for Python these days is as a very apt, able utility belt for systems work. For example, I'm currently coding a bioinformatics ABI in Assembly, and I needed to encode some values. I mistakenly shifted left by 64 bits instead of the 4 I meant to.
Since Python integers are heap-allocated objects rather than fixed-width machine words (more on that in a bit), they are bignums, and a bignum can grow as far as memory allows. You could theoretically store a single integer that fills an entire page of memory.
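That is exactly why the shift mistake above goes unnoticed. A quick illustration:

```python
# Plain Python ints are arbitrary precision, so shifting past the
# 64-bit boundary succeeds silently instead of overflowing:
x = 3 << 64
print(x.bit_length())  # 66 -- quietly outgrew a uint64, no warning
```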
Python's `ctypes` library provides an FFI with C and exposes C's numerical types for use. They are not exactly faster to use, because you still have to convert them to Python's `int` object before operations. But they do the job for type safety.
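Fixed-width does not mean checked, though: ctypes integers wrap around silently, which is the gap zinteger's warnings are meant to fill. For example:

```python
import ctypes

# ctypes gives you fixed-width storage, but out-of-range values are
# truncated without any warning:
print(ctypes.c_uint8(300).value)  # 44 (300 mod 256)
print(ctypes.c_uint8(-1).value)   # 255
```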
You may ask why I did not just use a library off PyPI. Simple code like this does not require a third-party package. I hold the license to this code and I can use it wherever I want. I neither condemn nor condone heavy use of third-party packages in your codebase. Especially for code that you are writing for fun, it's better if you write everything yourself. But that's just my humble opinion and I do not wish to force it on anyone. As I say in the Disclaimer, this code is heavily untested and I myself would use it with caution. Third-party libraries that have been thoroughly tested are a necessity in high-stakes projects. But for educational and recreational projects, absolutely minimal use of them is best.
Metaprograms are programs that emit, generate, compile, and interpret code. Compilers, assemblers, interpreters, etc. are level-1 metaprogrammers, whereas things such as macros, preprocessors, and metaclasses are level-2 metaprogrammers. In GNU Assembler you can define macros with the `.macro` directive; they are expanded during assembling and phased out of the final output, leaving behind only what they emit, such as uninitialized data in the `.bss` section. In C we have the preprocessor (`#define`), which does literal string replacement --- completely unhygienic, the preprocessor just emits the macro. In Rust we have both hygienic macros (`macro_rules!`) and unhygienic ones (idents imported from packages compiled with `--crate-type proc-macro`, themselves marked with `#[proc_macro]`). I am sadly not at all familiar with Lisp, but it indeed has a ginormous macro system. Ruby is another example of a language with macros.

I have coded in Nim a few times, and Nim has two ways of metaprogramming: templates and macros. Templates in Nim are simple unhygienic code emitters, but macros can actively modify the abstract syntax tree. IMO Nim's is the perfect approach: metacode that lets you modify the AST with surgical precision is what you want. C's preprocessor is useful, but it is the high-level counterpart to the `.macro` directive in GNU Assembler. In my project TransGatacca, which I link below, I make good use of GNU Assembler's macro system to have the same code work in two Assembly languages. Now is the time to mention another distinction between kinds of metacoding: GAS macros are heterogeneous metacode, written in a different language than the target language itself, whereas the other macros we mentioned are homogeneous, using the same language. C++ has templates too, like Nim, but I'm a C guy and don't work much with Cooked Pizza Party. Another type of metaprogramming, according to Wikipedia, is multi-staged programming, but again, my stack is C, Rust, x86-64/Aarch64 Assembly, Python, and Go. Don't ask me anything about anything I don't know!
But these are all compile-time metaprograms. Python has no conventional compilation; everything is defined at runtime, so macros don't make sense. What we have in Python are metaclasses, which are a form of runtime metaprogramming. We also have `exec` and `eval`, which can be used as a crude form of macro.
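As a small illustration (a toy example of mine, not from zinteger.py), here is `exec` used as a crude runtime macro that stamps out a function from a source template:

```python
# Build a function from a source-code template at runtime.
def make_adder(n):
    src = f"def add_{n}(x):\n    return x + {n}\n"
    namespace = {}
    exec(src, namespace)  # compile and run the generated source
    return namespace[f"add_{n}"]

add_5 = make_adder(5)
print(add_5(10))  # 15
```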
Everything in Python is an object. Classes are objects that can create instances of objects. User-defined functions are callable objects. Types are type objects: `int`, for example, is a type object. In C, `int` is a keyword, but in Python it is an object. Every object can be modified at runtime, and so can types, user-defined classes, and user-defined functions, which are themselves objects.
You can, theoretically, create a class and programmatically modify it. One may also do that with a function. Python's metaclasses can be used to create object factories, and these objects can be any of the ones mentioned in the page I link below. Python does not emit code; it modifies live objects at runtime. Every object has a namespace dictionary, a name, a qualified name, and special methods which are enclosed within pairs of double underscores (like `__name__` or `__init__`). Callable functions, too, are objects, and they can have their tree modified programmatically. A function has a namespace `__dict__` too, and also a `__defaults__` property which holds the default values of its keyword arguments. Python also has modules, which carry the same things, plus some others such as an `__all__` property which holds what should be imported from the module when `from module import *` is triggered. A module has a `__name__`; every file is a module in itself, and when you run a file with a shebang or by passing it to Python, its name will be `__main__`. A module also has a property for the file it resides in. All of these may or may not be writable. For example, a class's `__dict__` namespace is not directly writable, but a function's is.
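A few of these knobs in action (my own quick demonstration):

```python
def greet(name="world"):
    return f"hello {name}"

print(greet.__name__)      # greet
print(greet.__defaults__)  # ('world',)
greet.__defaults__ = ("Python",)  # a function's defaults are writable
print(greet())             # hello Python

class A:
    pass

# A.__dict__ is a read-only mappingproxy, but setattr still works:
A.x = 2
print(A.x)       # 2
print(__name__)  # __main__ when this file is run directly
```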
We can create descriptors with metaclasses. Descriptors are objects whose attribute access has been overridden through `__get__`, `__set__`, and `__delete__`. When you define these special methods on a class, the behavior with which its instances are accessed changes according to your will. If class `A` has a `__get__` method that invariably returns the literal 2 every time one of its instances is accessed as an attribute, then after we set `B.a = A()` and do `b = B(); print(b.a)`, it will print 2 to stdout.
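Spelled out in code, that example reads:

```python
class A:
    # Non-data descriptor: attribute lookup through an owning class
    # always yields the literal 2.
    def __get__(self, instance, owner):
        return 2

class B:
    a = A()  # the descriptor must live on the class, not the instance

b = B()
print(b.a)  # 2
```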
Decorators can be used to modify the syntax tree of the object they decorate. You can see the example of my `__exec` decorator, which takes a callable object, sets `globals()[first kwarg of func] = func()`, and then returns the function verbatim.
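For a rough idea of the shape of such a decorator (a sketch of mine, not the actual zinteger.py source; how the target name is extracted here is an assumption):

```python
def __exec(func):
    # Assumption: the global name to bind is the default value of the
    # function's first keyword argument.
    name = func.__defaults__[0]
    globals()[name] = func()  # run once, publish the result globally
    return func               # hand the function back untouched
```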
The `__generate*` functions are factories for dunder methods such as `__add__` and the like, and the `__zinteger` decorator sets the main fields of the main classes. It makes use of the `type()` function: `type()` can be used to dynamically generate classes when called with three arguments, as `type(name, bases, namespace)`.
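A minimal sketch of that pattern (illustrative names, not the actual zinteger internals):

```python
def generate_add(bits):
    # Factory for an __add__ dunder that warns past the bit limit.
    def __add__(self, other):
        result = int(self) + int(other)
        if result >= 1 << bits:
            print(f"warning: overflow past {bits} bits")
        return result
    return __add__

# type(name, bases, namespace) builds the class dynamically:
ZUint8Demo = type("ZUint8Demo", (int,), {"__add__": generate_add(8)})

print(ZUint8Demo(200) + 100)  # warns, then prints 300
```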
I recommend giving the code a read; it's merely 460 lines, and I tried to write it clean and concise. I reckon that if I had repeated the same process by hand for each of the 8 classes, the codebase would run to several thousand lines. As a rule of thumb, the smaller the ratio of lines of code to functionality, the better.
The official documentation for Python's object data model, which metaclasses are a part of and which enables runtime metaprogramming, is the best resource to learn more about them.
The code in `zinteger.py` is not fully tested. I provide this code online with no warranty whatsoever. I recommend that everyone write their own utilities instead of using random snippets of code they find online. Please only use this code in high-stakes projects after full scrutiny.
Please take a look at my GitHub profile: my recent project PoxHash, which is a block hash algorithm in C, Rust, Go, Python, Nim, and JS; my utility Bash script DynoFiler; and my current WIP project TransGatacca, which is a bioinformatics ABI in x86-64 and Aarch64 Assembly with planned APIs in C and Rust and bindings in Python. You can contact me at Chubak#7400 on Discord (Discord link in profile). I'm also available for systems, network, and scientific utility projects (rates in profile).
I hope you find this useful, educational and informative.
Thanks, Chubak