FWIW, for C only, I've used libtcc (repo.or.cz/w/tinycc.git) with great success. The API is a joy, as we all expect from a Bellard project. It focuses on compilation speed; the generated code is not at all optimized.
How deterministic is the emit, really? If I feed the same expression tree twice (same node layout, same captures), do I get the exact same bytes out every time (ignoring relocations)? If the output is byte-stable across runs for the same input graph, that opens up memoized JIT paths. Worth checking whether the current implementation already does this or needs a pass to normalise allocation order.
Several possible reasons:
- parallelism
- concurrent machine code gen
- different optimisations for different runs, producing differing machine code order, etc
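Even without byte-stable output, the memoization idea above can still work by keying the cache on a canonical serialization of the input tree rather than on the emitted bytes. A minimal Python sketch (all class and function names here are invented for illustration, not part of any real JIT's API):

```python
import hashlib

class Node:
    """Interior expression-tree node: an operator plus child subtrees."""
    def __init__(self, op, *children):
        self.op = op
        self.children = children

    def canonical(self):
        # Deterministic, layout-independent serialization: depends only on
        # tree structure, never on object ids or allocation order.
        return "(" + self.op + " " + " ".join(c.canonical() for c in self.children) + ")"

class Leaf(Node):
    """Leaf node: a named input."""
    def __init__(self, name):
        super().__init__(name)

    def canonical(self):
        return self.op

_cache = {}

def compile_memoized(tree, backend_compile):
    # Structurally identical trees hash to the same key, so the expensive
    # codegen step runs at most once per distinct expression shape.
    key = hashlib.sha256(tree.canonical().encode()).hexdigest()
    if key not in _cache:
        _cache[key] = backend_compile(tree)
    return _cache[key]
```

Two independently built but structurally identical trees then share one compiled function, regardless of whether the backend's own output is deterministic.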
They built this to translate a search query that is only known at runtime. Presumably they already have an AST or similar, so calling methods as it is being walked isn't any harder than operators.
This line is part of the code that creates an AST-like structure that is then fed into the compiler. The actual multiplication is done by calling the function handle returned from the Compile method.
I think what GP was referring to is that there is nothing stopping the code from being designed so that:
AST<float> p1 = exp.GetP1();
AST<float> rsqr = p1 * p1; // AST<float> implements an operator* overload that produces an AST<float> object
Even if many frown upon operator overloading, due to the annoying magical-ness of the standard library's appropriation of the shift operators (<< and >>) for "piping", it's still what makes many people prefer C++ for vector-math and similar tasks.
So whilst the result isn't a direct multiplication, it should still be an acceptable use, since the resulting code will actually be doing multiplications.
Considering what goes on under the hood, however, I'd guess there are compiler-optimization reasons to keep all the data in the expression object, as in the example, rather than spread it across an allocation tree with lots of pointer-chasing.
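The same pattern is easy to sketch in Python, where the overloaded operator returns a new tree node instead of computing a value, and a separate walk of the tree does the actual work. All names below are invented for illustration; a real JIT would emit machine code from the tree instead of interpreting it:

```python
class Expr:
    """Base class: overloaded operators build AST nodes, no math happens yet."""
    def __mul__(self, other):
        return Mul(self, other)

    def __add__(self, other):
        return Add(self, other)

class Param(Expr):
    """A function parameter, referenced by position."""
    def __init__(self, index):
        self.index = index

    def eval(self, args):
        return args[self.index]

class Mul(Expr):
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs

    def eval(self, args):
        return self.lhs.eval(args) * self.rhs.eval(args)

class Add(Expr):
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs

    def eval(self, args):
        return self.lhs.eval(args) + self.rhs.eval(args)

p1 = Param(0)
rsqr = p1 * p1   # builds Mul(Param(0), Param(0)); the multiply happens at eval time
```

Here "compilation" is just tree-walking evaluation, but the front-end ergonomics are identical to the C++ snippet: `p1 * p1` reads like arithmetic while actually constructing the expression graph.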
Yes, but what I suspect the commenter was saying is that you can build the expression using operator overloading as well, so you can type "a + b", not "a.Add(b)".
I love it when libraries like this do that. z3 in Python is similar: you just build your constraints using normal syntax and it all just works. Great use of operator overloading.
Interesting, this is very similar to llvmlite's IRBuilder, which is a wrapper over LLVM. I am probably going to create something similar for my Python -> C -> assembly JIT.
The LLVM ORC and Clang-REPL projects would be worth checking out if you haven't already: there's a healthy community of high performance computing folks working in this space over at https://compiler-research.org.
In particular, this talk might be interesting:
"Unlocking the Power of C++ as a Service: Uniting Python's Usability with C++'s Performance"
Video: https://www.youtube.com/watch?v=rdfBnGjyFrc Slides: https://llvm.org/devmtg/2023-10/slides/techtalks/Vassilev-Un...
The point is not to compile entire Python programs; the point is to optimize the specific parts of Python code that matter. To illustrate, consider summing 0 to N-1 in Python:
def sum(N):
    x = 0
    for i in range(N):
        x += i
    return x
There's absolutely zero reason why this code has to involve pushing and popping values on the Python virtual-machine stack. It should be compiled to assembly, with a small conversion layer between C values and PyObjects.
The goal is to get to a point where we can even do non-trivial things inside this optimized context.
Python will never be able to go down to assembly entirely, because Python supports doing "weird shit" like dynamically creating modules; hell, even creating a Python file, running eval on that, and loading it as a new module. How are you even going to transpile that to assembly?
So I'm approaching the problem the same way Numba does, but hopefully more modern and simpler (implementation-wise). I'm planning to write it in Rust, and the backend should be agnostic (GCC, Clang, whatever C compiler there is).
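The Python -> C -> native pipeline can be sketched in a few lines using only a C compiler on PATH and ctypes. Everything here is invented for illustration: a real implementation would translate bytecode or an AST rather than hand-writing the C, handle PyObject conversions, and cache the compiled artifacts.

```python
import ctypes
import os
import subprocess
import tempfile

def emit_c_sum(func_name):
    # Hand-written stand-in for what a translator would emit for sum().
    return (
        f"long {func_name}(long n) {{\n"
        "    long x = 0;\n"
        "    for (long i = 0; i < n; i++) x += i;\n"
        "    return x;\n"
        "}\n"
    )

def compile_and_load(c_source, func_name, cc="cc"):
    # Compile the C source to a shared object and load it with ctypes.
    # Assumes a C compiler named `cc` is available on PATH.
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "jit.c")
        lib = os.path.join(d, "jit.so")
        with open(src, "w") as f:
            f.write(c_source)
        subprocess.run([cc, "-O2", "-shared", "-fPIC", "-o", lib, src],
                       check=True)
        dll = ctypes.CDLL(lib)   # dlopen; safe even after the dir is removed
        fn = getattr(dll, func_name)
        fn.restype = ctypes.c_long
        fn.argtypes = [ctypes.c_long]
        return fn
```

Calling `compile_and_load(emit_c_sum("fast_sum"), "fast_sum")(10)` then runs the loop as native code; the hot loop never touches the Python VM stack.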
This looks convenient to use from C++, but the example code it generates is rather suboptimal (see https://godbolt.org/z/3rWceeYoW; no normal compiler would set up and tear down a stack frame for that), so I'm guessing there isn't any support for optimisations? What's the advantage of this over just compiling and then calling dlopen/LoadLibrary on the result?
For simple functions, a C compiler will generate code that is perhaps 50% faster than this standard prologue/epilogue (modern CPUs eat up most of the "bloat", whereas the branch to _any_ function causes some branch-predictor pressure). As the function grows, the gains shrink, as long as the code runs more or less in a straight line and isn't subject to cache misses.
Compared to even an optimized interpreter, this will be somewhere between 4x and 20x faster (mainly due to having far, far smaller branch-predictor costs). So even if it doesn't generate optimal code, it will still be within an order of magnitude of optimal native code, whereas an interpreter will be much further behind.
dlopen/LoadLibrary etc. come with far more memory pressure and OS bookkeeping.
Having written small compilers and other code generators targeting both the JVM and the .NET runtime, I can say that the .NET equivalents have some extra simple options for scenarios like this.
Both have always supported building libraries/assemblies and loading them (the ASM library plus custom classloaders for Java, and AssemblyBuilder in .NET, are about equally capable).
However, .NET also has DynamicMethod, which is specifically built to quickly produce small functions that aren't tied to larger contexts (with an API similar to ASM/AssemblyBuilder).
But an even easier option for stuff exactly like in the article, which people don't seem to widely know about, is that LINQ (yes, that "SQL-like" stuff) actually contains parts for deferred method building that can easily be leveraged to quickly produce customized native-code methods.
The neat part about the LINQ code generator is that you can just drop in plain C# code blocks to be translated by the C# compiler into LINQ snippets, and then with some helpers transform everything into LINQ trees that can then be compiled.
The benefit over ASM/AssemblyBuilder/DynamicMethod is that LINQ nodes are basically a built-in AST that can be leveraged directly, whereas the other APIs require mapping your own ASTs onto the respective stack machines.
The HotSpot JIT compiles Java code to machine code when it detects hot code paths, speeding up execution at runtime: exactly the use case described in the article.
Bing internally uses a better version, but the improvements have not been merged back to GitHub. See https://github.com/BitFunnel/NativeJIT/issues/84#issuecommen...
This is C++, no? Why not use operator overloading for the project?
Expect that you don't, and deoptimise when you do: https://bibliography.selflanguage.org/_static/dynamic-deopti...
It's really not that impossible.
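The guard-and-deoptimize idea from the linked Self work can be shown in a toy Python sketch (all names invented): run a specialized fast path under an assumption check, and permanently fall back to the generic interpreter path the first time the assumption is violated.

```python
def generic_add(a, b):
    # The "interpreter" path: handles every weird case the language allows.
    return a + b

def specialize_int_add():
    """Return a fast path specialized on the assumption 'both args are ints',
    plus its deoptimization state."""
    state = {"deoptimized": False}

    def fast_add(a, b):
        if state["deoptimized"]:
            return generic_add(a, b)
        # Guard: the specialized code is only valid for exact ints.
        if type(a) is int and type(b) is int:
            return a + b            # stands in for emitted native code
        state["deoptimized"] = True  # assumption broken: deopt for good
        return generic_add(a, b)

    return fast_add, state
```

A real JIT would patch or discard the compiled code on deopt rather than checking a flag, but the control flow is the same: speculate on the common case, guard the speculation, and bail out to the slow path when the "weird shit" actually happens.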
https://asm.ow2.io/
https://learn.microsoft.com/en-us/dotnet/api/system.reflecti...
https://learn.microsoft.com/en-us/dotnet/api/system.reflecti...
https://learn.microsoft.com/en-us/dotnet/api/system.linq.exp...
Why?