Long Range ZIP (or LZMA RZIP) is a compression program optimised for large files. The larger the file and the more memory you have, the greater the compression advantage it provides, especially once files are larger than 100 MB. The advantage can be chosen to be either size (much smaller than bzip2) or speed (much faster than bzip2).
Compressing directories (recursively) requires lrztar, which first tars the directory and then compresses the single resulting file, just as tar does when compressing with gzip or xz (tar zcf ... and tar Jcf ... respectively).
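Roughly what lrztar does under the hood can be sketched with a plain tar-to-lrzip pipe. This is a minimal sketch, not the actual lrztar script; it assumes lrzip is installed and reads standard input when no input file is named, and the directory "foo" is created here just for illustration.

```shell
# Create a sample directory (placeholder data for the example)
mkdir -p foo
printf 'example\n' > foo/file.txt

# Tar the directory and pipe it into lrzip, naming the output explicitly.
# -q keeps lrzip quiet; -f overwrites an existing foo.tar.lrz.
tar cf - foo | lrzip -q -f -o foo.tar.lrz
```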
This will produce an LZMA compressed archive "foo.tar.lrz" from a directory named "foo".
$ lrztar foo
This will produce an LZMA compressed archive "bar.lrz" from a file named "bar".
$ lrzip bar
For extreme compression, add the -z switch which enables ZPAQ but takes notably longer than lzma.
$ lrztar -z foo
For extremely fast compression and decompression, use the -l switch for LZO.
$ lrzip -l bar
To completely extract an archived directory.
$ lrzuntar foo.tar.lrz
To decompress "bar.lrz" to "bar".
$ lrunzip bar.lrz
Lrzip uses an extended version of rzip, which performs a first-pass long-distance redundancy reduction; the lrzip modifications make it scale according to memory size. The data is then either:
- Compressed by lzma (default) which gives excellent compression at approximately twice the speed of bzip2 compression
- Compressed by a number of other compressors chosen for different reasons, in order of likelihood of usefulness:
- ZPAQ: Extreme compression up to 20% smaller than lzma but ultra slow at compression AND decompression.
- LZO: Extremely fast compression and decompression which, on most machines, compresses faster than the disk can write, making it as fast as (or even faster than) simply copying a large file.
- GZIP: Almost as fast as LZO but with better compression.
- BZIP2: A de facto Linux standard of sorts, but it sits in the middle ground between lzma and gzip, neither here nor there.
- Leaving it uncompressed and rzip prepared. This form substantially improves any compression subsequently performed on the resulting file, in both size and speed (because rzip preparation merges similar compressible blocks of data, creating a smaller file). By "improving" I mean it will either speed up the very slow compressors with minor detriment to compression, or greatly increase the compression of simple compression algorithms.
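The backend compressors above are selected with single-letter switches, per lrzip(1): -z for ZPAQ, -l for LZO, -g for gzip, -b for bzip2, and -n for no backend compression (rzip preparation only). A small sketch, assuming lrzip is installed; "bar" is a throwaway sample file created here:

```shell
# Sample input (placeholder data)
printf 'example data\n' > bar

lrzip -q -f -g bar   # gzip backend: nearly LZO speed, better compression
lrzip -q -f -b bar   # bzip2 backend: middle ground
lrzip -q -f -n bar   # no backend: rzip-prepared output only

# Each run overwrites bar.lrz (-f); -q suppresses progress output.
ls -l bar.lrz
```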
The major disadvantages are:
- The main lrzip application only works on single files so it requires the lrztar wrapper to fake a complete archiver.
- It requires a lot of memory to get the best performance (easily 1 GB of RAM for 1 GB of data), and is not really usable for compression with less than 256 MB. Decompression requires less RAM and works on machines with less memory, though swap may sometimes need to be enabled on such machines for the operating system to cope.
- STDIN/STDOUT works fine for both compression and decompression, but larger files compressed this way will end up less efficiently compressed.
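A round trip through pipes can be sketched as follows. This assumes lrzip reads standard input when no input file is given, and uses lrzcat (the stdout-decompression companion to lrunzip); "bar" is a placeholder file created for the example:

```shell
printf 'pipe me\n' > bar

# Compress from stdin; -o names the output since there is no input filename.
lrzip -q -f -o bar.lrz < bar

# Decompress to stdout and capture it.
lrzcat bar.lrz > bar.out 2>/dev/null

cmp bar bar.out   # files should be identical
```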
The unique feature of lrzip is that it tries to make the most of the available RAM in your system at all times for maximum benefit. By default it chooses the largest compression window possible without running out of memory. It also has a unique "sliding mmap" feature which makes it possible to use a compression window larger than your RAM size, if the file is that large. With the -U option, it implements one large mmap buffer as usual, plus a smaller moving buffer that tracks which part of the file is currently being examined, emulating a much larger single mmapped buffer. Unfortunately, this mode can be many times slower.
See the README.benchmarks included in the source/docs.
See the README included with the source package.