This is from 2023 (not a complaint, just observing that the result might be stale and even lower upper bounds may have been achieved).
It's quite curious to consider the connection between compression and intelligence. It's hard to quantify comprehension, i.e. how do you check whether a system actually comprehends some data? Lossless compression rates are very attractive as a proxy, since the task is to lose no data while squeezing it as close as possible to its information content.
It does raise other questions though: which corpus is considered representative? A base model without finetuning might be more vulgar but also more effective at compressing the comparatively vulgar human corpus. The corpus expressed by an RLHF'd (or otherwise reinforced) and pretty-prompted chatbot, on the other hand, will be very good at compressing its own outputs but less good at compressing the actual vile human corpus. Both the base model and the aligned model will be relatively good at compressing each other's output, but each will excel at compressing its own implicit corpus.
Another question: as the bits-per-character upper bound falls monotonically, it will suffer diminishing returns. How does one square that with the proposal that lossless compression corresponds to intelligence? It would clearly not be a linear correspondence, and it suggests that one would need exponentially larger corpora to beat the prior compression rates.
How long can it write before repeating itself?
====
It also raises lots of societal questions: at less than 1 bit per character, how many characters are in Library Genesis / Anna's Archive, etc.?
"Lossless" does not mean that the LLM can accurately reconstruct human-written sentences. Rather, it means that the LLM generates a fully reproducible bitstream based on its own predicted probability distribution.
Reconstructing human-written sentences accurately is impossible because it requires modeling the "true source"—the human brain state (memory, emotion, etc.)—rather than the LLM itself.
Instead, a practical approach is to reconstruct the LLM output itself based on seeds or to store it in a compressible probabilistic structure.
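For what it's worth, here is a minimal sketch of that seed-based idea, with a toy pseudo-random sampler as a hypothetical stand-in for an LLM decoder: if generation is deterministic given a seed, storing only the seed and length is enough to regenerate the model's own output, though it says nothing about reconstructing an arbitrary human-written input.

    # Minimal sketch: a toy deterministic sampler standing in for an LLM.
    # Everything here (generate, vocab) is a hypothetical illustration.
    import random

    def generate(seed, length, vocab="abcdefgh "):
        rng = random.Random(seed)   # fixed seed -> fully reproducible stream
        return "".join(rng.choice(vocab) for _ in range(length))

    seed, length = 42, 32
    original = generate(seed, length)
    # Only (seed, length) need to be stored; the output regenerates exactly.
    assert generate(seed, length) == original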
It's unclear what you claim lossless compression does or doesn't do, especially since you tie in storing an RNG's seed value at the end of your comment.
"LLMZip: Lossless Text Compression Using Large Language Models"
Implies they use the LLM's next-token probability distribution to sort candidate tokens by likelihood: the higher the actual next token from the input stream (human-generated or not) appears in that sorted list, the fewer bits are needed to encode its position counting from the top. So the better the LLM can predict the true probability of the next token, the better it will compress human-generated text in general.
Do you deny LLMs can be used this way for lossless compression?
Such a system can accurately reconstruct the uncompressed original input text (say generated by a human) from its compressed form.
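To make that concrete, here is a minimal sketch of the rank-based idea, with a toy bigram frequency table standing in for the LLM; the function names and the bigram model are my own illustration, not anything from the LLMZip paper. The encoder records each symbol's rank in the model's likelihood-sorted candidate list, and the decoder replays the same ranking to reconstruct the original text exactly.

    # Minimal sketch of rank-based coding with a toy bigram model standing
    # in for an LLM's next-token predictor (all names here are illustrative).
    from collections import defaultdict

    def train_bigram(text):
        counts = defaultdict(lambda: defaultdict(int))
        for a, b in zip(text, text[1:]):
            counts[a][b] += 1
        return counts

    def ranked(counts, prev, alphabet):
        # Candidates sorted by predicted likelihood given the previous symbol,
        # with a fixed tie-break so encoder and decoder agree exactly.
        freq = counts.get(prev, {})
        return sorted(alphabet, key=lambda c: (-freq.get(c, 0), c))

    def encode(text, counts, alphabet):
        # Store the first symbol literally, then the rank of each next symbol.
        ranks = [ranked(counts, p, alphabet).index(c)
                 for p, c in zip(text, text[1:])]
        return text[0], ranks

    def decode(first, ranks, counts, alphabet):
        out = [first]
        for r in ranks:
            out.append(ranked(counts, out[-1], alphabet)[r])
        return "".join(out)

    text = "the better the model, the better the compression"
    counts = train_bigram(text)
    alphabet = sorted(set(text))
    first, ranks = encode(text, counts, alphabet)
    assert decode(first, ranks, counts, alphabet) == text   # lossless round trip
    # Good predictions mean mostly rank 0, which an entropy coder can store
    # in far fewer bits than the raw 8 bits per character.
    print(sum(r == 0 for r in ranks) / len(ranks))

A real implementation would replace the bigram table with the LLM's next-token distribution and entropy-code the ranks (or the probabilities directly); the losslessness comes from the decoder running the exact same model in the exact same order as the encoder.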
Sure, a model-based coder can losslessly compress any token stream.
I just meant that for human-written text, the model’s prediction diverges from how the text was actually produced — so the compression is formally lossless, but not semantically faithful or efficient.