How realistic is it for the Trifecta Tech implementation to start displacing the "official" implementation used by linux distros, which hasn't seen an upstream release since 2019?
Fedora recently swapped the original Adler zlib implementation with zlib-ng, so that sort of thing isn't impossible. You just need to provide a C ABI compatible with the original one.
The commenters below are confusing two things - Rust binaries can be dynamically linked, but because Rust doesn’t have a stable ABI you can’t do this across compiler versions the way you would with C. So in practice, everything is statically linked.
Rust cannot dynamically link to Rust. It can dynamically link to C and be dynamically linked by C - if you combine the two you can cheat, but it is still C that you are dealing with, not Rust, even if Rust is on both sides.
Rust can absolutely link to Rust libraries dynamically. There is no stable ABI, so it has to be the same compiler version, but it will still be dynamically linked.
You can use dynamic linking in Rust with the C ABI. That means going through the `unsafe` keyword - also known as 'trust me bro'. Static linking directly against Rust source means it is checked by the compiler, so there is no need for unsafe.
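On the consuming side that looks roughly like this - a minimal sketch with a made-up library and symbol name, not anything from the bzip2 crates. The `extern "C"` declaration is just a promise, and nothing verifies it matches what the shared object actually exports, hence the `unsafe`:

```rust
// Hypothetical example: calling a C-ABI function from some shared library.
// The library name and function are invented for illustration.
#[link(name = "demo_compress")] // links against libdemo_compress.so / .dll
unsafe extern "C" {
    // `unsafe extern` needs Rust 1.82+; older editions write plain `extern "C" { ... }`.
    fn demo_crc32(buf: *const u8, len: usize) -> u32;
}

fn checksum(data: &[u8]) -> u32 {
    // "Trust me bro": the compiler cannot check that the declaration above is right.
    unsafe { demo_crc32(data.as_ptr(), data.len()) }
}
```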
What's the reason for using bz2 here? Wouldn't it be faster to do a one-off conversion to zstd? It beats bzip2 in every metric at higher compression levels, as far as I know.
bzip2 (particularly parallel implementations thereof) is already relatively competitive for compression. Decompression time is where it lags behind, because LZ77-based algorithms can be incredibly fast at decompression.
There's certainly a contrast between the "Oops a huge file causes a runtime failure" reported for that crate and a bunch of "Oops we have bounds misses" in C. I wonder how hard anybody worked on trying to exploit the bounds misses to get code execution. It may or may not be impossible to achieve that escalation.
I'd be curious whether they're using the same LLVM codegen backend (with the same optimizations) for the C and Rust versions. If so, where are the speedups coming from?
(i.e., is it some kind of Rust auto-SIMD thing, did they use the opportunity to hand-optimize other parts, is it making use of newer optimized libraries, or... something else?)
C is honestly a pretty bad language for writing modern high performance code. Between C99 and C23, there was a ~20 year gap where the language just didn't add the features needed to idiomatically target many of the new instructions (without inline asm). Just getting good abstract-machine operations for clz/popcnt/clmul/pdep etc. helps a lot for writing this kind of code.
Popcount, clz, and ctz are provided as nonstandard built-ins in GCC (and clang might also support them in GNU mode, but I don't know for sure). PDEP and PEXT do not seem to be, but I think they should be (and PEXT is something that INTERCAL already had, anyway); PDEP and PEXT can be used via intrinsics with -mbmi2 on x86, but are not available for general use. The MOR and MXOR of MMIX are also something that I would want to be available as built-in functions.
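For comparison, Rust exposes most of these as plain methods on the integer types, so no compiler extension is needed - a small illustration, nothing bzip2-specific:

```rust
fn main() {
    let x: u32 = 0b1011_0000;
    // These lower to single instructions (POPCNT/LZCNT/TZCNT) when the target
    // supports them, and to short fallback sequences otherwise.
    println!("popcount = {}", x.count_ones());
    println!("clz      = {}", x.leading_zeros());
    println!("ctz      = {}", x.trailing_zeros());
    // PDEP/PEXT and carry-less multiply still require core::arch intrinsics
    // behind target-feature checks; there are no portable methods for those.
}
```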
I hope they or Prossimo will also look at reimplementing, in a similar fashion, the core Internet protocols - BGP, OSPF, and RIP, other routing implementations, DNS servers, and so on.
Without commenting on whether an LLM is the right approach, I don't think this task is particularly hard to audit. There is almost assuredly a huge test suite for bzip2 archives; fuzzing file formats is very easy; and you can restrict / audit the use of unsafe by the translator.
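As a point of reference, a fuzz harness for a decompressor really is only a few lines. A sketch using cargo-fuzz, assuming the bzip2 crate's `read::BzDecoder` API (this is not the project's actual harness):

```rust
// fuzz/fuzz_targets/decompress.rs (hypothetical cargo-fuzz target)
#![no_main]
use libfuzzer_sys::fuzz_target;
use std::io::Read;

fuzz_target!(|data: &[u8]| {
    // Feed arbitrary bytes in; malformed input must return an error,
    // never crash, hang, or touch memory out of bounds.
    let mut decoder = bzip2::read::BzDecoder::new(data);
    let mut out = Vec::new();
    let _ = decoder.read_to_end(&mut out);
});
```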
I suspect attempting to debug it would be a nightmare, though. Given that the LLM could hallucinate anything anywhere, you'd likely waste a ton of time.
I suspect it would be faster to just try and write a new implementation based on the spec and debug that against the test suite. You’d likely be closer.
In fact, since they used c2rust, they had a perfectly working version from the start. From there they just had to clean up the Rust code and make sure it didn’t break anything. Clearly the best of the three options.
You should of course verify these results in your scenario. However, I somewhat doubt that the person exists who cares greatly about performance, and is still willing to consider bzip2. There isn't a point anywhere in the design space where bzip2 beats zstd. You can get smaller outputs from zstd in 1/20th the time for many common inputs, or you can spend the same amount of time and get a significantly smaller output, and zstd decompression is again 20-50x faster depending. So the speed of your bzip2 implementation hardly seems worth arguing over.
They kicked off the article saying that no one uses bzip2 anymore. A million cycles saved for something no one uses (according to them) is still 0% battery life saved.
It sounds like the main motivation for the conversion was to simplify builds and reduce the chance of security issues. Old parts of protocols that no one pays much attention to anymore do seem to be a common place where those pop up. The performance gain looks more like just a nice side effect of the rewrite; I imagine they were at most targeting performance parity.
The Wikipedia data dumps [0] are multistream bz2. This makes them relatively easy to partially ingest, and I'm happy to be able to remove the C dependency from the Rust code I have that deals with said dumps.
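For anyone curious what partially ingesting means in practice: the multistream dump ships with an index of byte offsets, so you can seek to one stream and decode just that. A rough sketch, assuming the bzip2 crate's `read::BzDecoder` and an offset taken from the index file:

```rust
use std::fs::File;
use std::io::{Read, Seek, SeekFrom};

// Decode a single bzip2 stream that starts at `offset` inside a multistream dump.
// The offsets come from the separate multistream index file Wikipedia publishes.
fn read_one_stream(path: &str, offset: u64) -> std::io::Result<Vec<u8>> {
    let mut file = File::open(path)?;
    file.seek(SeekFrom::Start(offset))?;
    let mut xml = Vec::new();
    // BzDecoder stops at the end of the first stream, so only this chunk is decoded.
    bzip2::read::BzDecoder::new(file).read_to_end(&mut xml)?;
    Ok(xml)
}
```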
The same could be said of many things that, nonetheless, are still used by many, and will continue to be used by many for decades to come. A thing does not need to be best to justify someone wanting to make it a bit better.
“Best” is measured along a lot more axes than just performance. And you don’t always get to choose what format you use. It may be dictated to you by some 3rd party you can’t influence.
So? If I need to consume a resource compressed using bz2, I'm not just going to sit around and wait for them to use zstd. I'm going to break out bz2. If I can use a modern rewrite that's faster, I'll take every advantage I can get.
You know it is just Wirth's law in action: "Software gets slower faster than hardware gets faster." [^1]
It seems to me like binary file format parsing (and construction) is probably a good place for using languages that aren't as prone to buffer-overflows and the like. Especially if it's for a common format and the code might be used in all sorts of security-contexts.
Buffer overflows are more of a library problem than a language problem, though for newer ecosystems like Rust the distinction is kind of lost on people. But the point being: if you rewrote bzip2 using an equivalent of std::Vec, you'd end up in the same place. Unfortunately, the norm among C developers, especially in the past, was to open-code most buffer manipulation, so you wind up with 1000 manually written overflow checks, some of which are wrong or outright missing, as opposed to a single check in a shared implementation. Indeed, even that Rust code had an off-by-one (in "safe" code), it just wasn't considered a security issue because it would result in data corruption, not an overflow.
What Rust-the-language does offer is temporal safety (i.e. the borrow checker), and there's no easy way to get that in C.
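To make the "single check in a shared implementation" point concrete, here is a minimal sketch (hypothetical, not taken from either codebase): the overflow check is written once, inside the buffer type, instead of being repeated - and occasionally botched - at every write site.

```rust
// Hypothetical output buffer with a caller-imposed limit (e.g. a declared block size).
struct OutBuf {
    data: Vec<u8>,
    limit: usize,
}

impl OutBuf {
    /// The only overflow check in the program; every producer goes through it.
    fn put(&mut self, byte: u8) -> Result<(), &'static str> {
        if self.data.len() >= self.limit {
            return Err("output limit exceeded");
        }
        self.data.push(byte);
        Ok(())
    }
}

fn main() {
    let mut out = OutBuf { data: Vec::new(), limit: 4 };
    for &b in b"hello" {
        if out.put(b).is_err() {
            eprintln!("refused to overrun the output buffer");
            break;
        }
    }
}
```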
You're just an end user; you don't have to maintain the suite.
In OSS, every hour of volunteer time is precious manna from heaven, flavored with unicorn tears. So any way to remove toil and introduce automation is gold.
Rust's strict compiler and an appropriate test suite guarantee a level of correctness far beyond C. There's less onus on the reviewer to ensure everything still works as expected when reviewing a pull request.
It's a lot like X11 vs. Wayland. The current graphics developers, who trend younger, don't want to maintain the boomer-written C code in the X server. Too risky and time-consuming. So one of the goals of Wayland is to completely abolish X so it can be replaced with something more long-term maintainable. Turns out, current systems-level developers don't want to maintain boomer-written GNU code or any C code at all, really, for similar reasons. C is inherently problematic because even seasoned developers have trouble avoiding its footguns. So an unstated, but important, goal of Rust is to abolish all critical C code and replace it with Rust code. Ubuntu is on board with this.
> lot of this "rewrite X in Rust" stuff feels like
Indeed. You know the React-Angular-Vue-whatever churn? It appears that the trend of people pushing things because it benefits their careers is coming to the low-level world.
I, for one, still find it mystifying that Linus Torvalds let these people into the kernel. Linus, who famously banned C++ from the kernel not because of C++ in itself, but to keep out C++ programmer culture.
https://uutils.github.io/
The performance boost in tools like ripgrep and tokei is insane compared to the tools they replace (grep and cloc respectively).
How does this interact with dynamic linking? Doesn't the current Rust toolchain mandate static linking?
Use crate-type=["cdylib"]
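On the producing side it looks roughly like this (crate and function names invented for illustration): the crate builds as a C-compatible shared library and exports symbols over the C ABI, i.e. the "provide a C ABI compatible with the original" route mentioned upthread.

```rust
// Cargo.toml for the library crate:
//   [lib]
//   crate-type = ["cdylib"]
//
// The resulting .so/.dylib/.dll exports C-ABI symbols that anything can link against.
#[unsafe(no_mangle)] // spelled #[no_mangle] on editions before 2024
pub extern "C" fn demo_add(a: u32, b: u32) -> u32 {
    a.wrapping_add(b)
}
```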
Ironically, there is one CVE reported in the bzip2 crate [1].
[1] https://app.opencve.io/cve/?product=bzip2&vendor=bzip2_proje...
They're releasing 0.6.0 today :>
Linked from the article is another post on how they used c2rust to do the initial translation.
https://trifectatech.org/blog/translating-bzip2-with-c2rust/
For our purposes, it points out places where the code isn’t very optimal because the C code has no guarantees on the ranges of variables, etc.
It also points out that a lot of people just use ‘int’ even when the number will never be very big.
But with the proper type the Rust compiler can decide to do something else if it will perform better.
So I suspect your idea that it allows unlocking better optimizations through more knowledge is probably the right answer.
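A small, hypothetical illustration of that last point: an index with a C-style `int` type forces a runtime range check, while a type whose range matches the table lets the compiler drop the check entirely.

```rust
// Not from the bzip2 code; just illustrates how a tighter type helps the optimizer.
fn lookup_int(table: &[u32; 256], i: i32) -> u32 {
    // i could be anything, so the bounds check stays (and out-of-range panics).
    table[i as usize]
}

fn lookup_u8(table: &[u32; 256], i: u8) -> u32 {
    // A u8 is always 0..=255, so the check is statically satisfied and elided.
    table[i as usize]
}
```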
https://github.com/immunant/c2rust reportedly works pretty well. Blog post from a few years ago of them transpiling quake3 to rust: https://immunant.com/blog/2020/01/quake3/. The rust produced ain't pretty, but you can then start cleaning it up and making it more "rusty"
https://trifectatech.org/blog/translating-bzip2-with-c2rust/
After the uutils debacle, does anyone still trust these "rewrote in Rust" promotional benchmarks without independent verification?
Which debacle?
It's a 4chan archive (and one of its most robust), and the archived thread was on /g/ last March.
> That the project is not currently as fast as GNU? Where is the lying?
Watch the FOSDEM presentation at 15 minutes in: https://fosdem.org/2025/schedule/event/fosdem-2025-6196-rewr...
The presenter uses uutils sort (on Shakespeare's corpus) to show how much faster it is than coreutils, and /g/ found out it was only faster because it had no locale awareness. That is especially dishonest because the presenter claims drop-in, 1-to-1 compatibility as an explicit goal of the project, so at the very least this discrepancy between the two should have been acknowledged.
1. The uutils project didn’t also make sort faster for all locale cases, even though the majority of people will be using UTF-8, C, or POSIX, where it is indeed faster.
2. There’s a lot of debate about different test cases, which is a never-ending quibble with sorting routines (go look at some of the cutting-edge sorting-algorithm development).
This complaint is hyperfocusing on one of the many utilities they claim to be faster on, and quibbling over what are, to me, important but ultimately minor critiques. I really don’t see the debacle.
As for the license, that’s more your opinion. Rust as a language has generally dual-licensed its code as MIT and Apache-2.0, and most open-source projects follow this tradition. I don’t see the conspiracy that you do. And just so I’m clear, the corporation you’re criticizing here as the amorphous evil entity funding this is Ubuntu, right?
locale != encoding.
Try sorting a phone book with tr_TR.UTF-8 vs en_US.UTF-8.
Then you don’t have a choice.
And if you have to use it, 14% is a really nice speed up.
Counting CPU cycles as if it's an accomplishment seems irrelevant in a world where 50% of modern CPU resources are allocated toward UI eye candy.
That's the kind of attitude that leads to 50% of modern CPU resources being allocated toward UI eye candy.
If modern CPUs are so power efficient and have so many spare cycles to allocate to e.g. eye candy no one asked for, then no one is counting and the comparison is irrelevant.
[0]: https://meta.wikimedia.org/wiki/Data_dump_torrents#English_W...
The attitude that leads to Electron apps replacing native ones, and I hate it. I am not buying better CPUs and more RAM just to have them wasted like this.
In fact Jevons Paradox: When technological progress increases the efficiency with which a resource is used, but the rate of consumption of that resource rises due to increasing demand - essentially, efficiency improvements can lead to increased consumption rather than the intended conservation. [^2][^3]
[^1]: https://www.comp.nus.edu.sg/~damithch/quotes/quote27.htm
[^2]: https://www.greenchoices.org/news/blog-posts/the-jevons-para...
[^3]: https://quickonomics.com/terms/jevons-paradox/
The same goes for exported symbols and for being able to compile to wasm easily.
It's a win-win situation.
And that's assuming they aren't lying about the counting: https://desuarchive.org/g/thread/104831348/#q104831479