Major changes, including Brotli and Lizard

- update the zstd-mt library
- add brotli v0.6.0
- add lizard v2.0
- lz4, lz5 and lizard now use the xxhash implementation from zstd
- also update the documentation where needed
Tino Reichardt
2017-05-25 18:40:15 +02:00
parent 40e87f615c
commit 5ff0657d9f
173 changed files with 3936 additions and 6591 deletions

View File

@@ -1,7 +1,7 @@
#define MY_VER_MAJOR 17
#define MY_VER_MINOR 00
#define MY_VER_BUILD 0
#define MY_VERSION_NUMBERS "17.00 ZS v1.2.0 R2"
#define MY_VERSION_NUMBERS "17.00 ZS v1.2.0 R3"
#define MY_VERSION MY_VERSION_NUMBERS
#ifdef MY_CPU_NAME
@@ -10,12 +10,12 @@
#define MY_VERSION_CPU MY_VERSION
#endif
#define MY_DATE "2017-05-19"
#define MY_DATE "2017-05-25"
#undef MY_COPYRIGHT
#undef MY_VERSION_COPYRIGHT_DATE
#define MY_AUTHOR_NAME "Igor Pavlov, Tino Reichardt"
#define MY_COPYRIGHT_PD "Igor Pavlov : Public domain"
#define MY_COPYRIGHT_CR "Copyright (c) 1999-2017 Igor Pavlov"
#define MY_COPYRIGHT_CR "Copyright (c) 1999-2017 Igor Pavlov, 2016-2017 Tino Reichardt"
#ifdef USE_COPYRIGHT_CR
#define MY_COPYRIGHT MY_COPYRIGHT_CR

View File

@@ -1,9 +1,9 @@
#define MY_VER_MAJOR 1
#define MY_VER_MINOR 2
#define MY_VER_BUILD 0
#define MY_VERSION_NUMBERS "1.2.0 R2"
#define MY_VERSION "1.2.0 R2"
#define MY_DATE "2017-05-19"
#define MY_VERSION_NUMBERS "1.2.0 R3"
#define MY_VERSION MY_VERSION_NUMBERS
#define MY_DATE "2017-05-25"
#undef MY_COPYRIGHT
#undef MY_VERSION_COPYRIGHT_DATE
#define MY_AUTHOR_NAME "Tino Reichardt"

C/brotli/Brotli-Adjust.sh (Normal file, 36 lines)
View File

@@ -0,0 +1,36 @@
#!/bin/sh
# C/brotli/*
# /TR 2017-05-25
find . -type d -exec chmod 775 {} \;
find . -type f -exec chmod 644 {} \;
chmod +x "$0"
# flatten the public headers: move include/brotli/* next to the sources
mv include/brotli/* .
rm -rf include
# rewrite <brotli/...> includes in sources and top-level headers to the flattened, quoted form
for i in */*.c *.h; do
  sed -i 's|<brotli/port.h>|"port.h"|g' "$i"
  sed -i 's|<brotli/types.h>|"types.h"|g' "$i"
  sed -i 's|<brotli/encode.h>|"encode.h"|g' "$i"
  sed -i 's|<brotli/decode.h>|"decode.h"|g' "$i"
done
# headers inside subdirectories reach the flattened headers one level up
for i in */*.h; do
  sed -i 's|<brotli/port.h>|"../port.h"|g' "$i"
  sed -i 's|<brotli/types.h>|"../types.h"|g' "$i"
  sed -i 's|<brotli/encode.h>|"../encode.h"|g' "$i"
  sed -i 's|<brotli/decode.h>|"../decode.h"|g' "$i"
done
# move the sources up one level with a br_ prefix, fixing their relative includes
cd common
sed -i 's|include "./|include "./common/|g' *.c
for f in *.c; do mv "$f" "../br_$f"; done
cd ../dec
sed -i 's|include "./|include "./dec/|g' *.c
sed -i 's|include "../common|include "./common|g' *.c
for f in *.c; do mv "$f" "../br_$f"; done
cd ../enc
sed -i 's|include "./|include "./enc/|g' *.c
sed -i 's|include "../common|include "./common/|g' *.c
for f in *.c; do mv "$f" "../br_$f"; done
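For orientation, this is the effect of the rewrite on a typical encoder source file; the before/after lines correspond to the first modified-file hunk shown further below.

```c
/* before (upstream layout): */
#include "./backward_references.h"
#include <brotli/types.h>
#include "./command.h"

/* after Brotli-Adjust.sh (flattened layout used in C/brotli): */
#include "./enc/backward_references.h"
#include "types.h"
#include "./enc/command.h"
```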

C/brotli/LICENSE (Normal file, 19 lines)
View File

@@ -0,0 +1,19 @@
Copyright (c) 2009, 2010, 2013-2016 by the Brotli Authors.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

C/brotli/README.md (Normal file, 71 lines)
View File

@@ -0,0 +1,71 @@
<p align="center"><img src="https://brotli.org/brotli.svg" alt="Brotli" width="64"></p>
### Introduction
Brotli is a generic-purpose lossless compression algorithm that compresses data
using a combination of a modern variant of the LZ77 algorithm, Huffman coding
and 2nd order context modeling, with a compression ratio comparable to the best
currently available general-purpose compression methods. It is similar in speed
to deflate but offers denser compression.
The specification of the Brotli Compressed Data Format is defined in [RFC 7932](https://www.ietf.org/rfc/rfc7932.txt).
Brotli is open-sourced under the MIT License, see the LICENSE file.
Brotli mailing list:
https://groups.google.com/forum/#!forum/brotli
[![Build Status](https://travis-ci.org/google/brotli.svg?branch=master)](https://travis-ci.org/google/brotli)
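Since this commit vendors the Brotli C sources under C/brotli, with the include layout flattened by Brotli-Adjust.sh, a minimal round-trip sketch of the one-shot C API follows. It is a sketch only: the flattened include names reflect this commit's layout, and the availability of BrotliEncoderMaxCompressedSize() in the bundled snapshot is assumed.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include "encode.h"   /* <brotli/encode.h> upstream; flattened name in this tree */
#include "decode.h"   /* <brotli/decode.h> upstream */

/* Compress `in`, decompress the result and verify it; returns 1 on success. */
static int brotli_roundtrip(const uint8_t* in, size_t in_size)
{
    size_t enc_size = BrotliEncoderMaxCompressedSize(in_size);  /* worst-case bound */
    uint8_t* enc = malloc(enc_size);
    uint8_t* dec = malloc(in_size);
    int ok = 0;

    if (enc && dec &&
        BrotliEncoderCompress(BROTLI_DEFAULT_QUALITY, BROTLI_DEFAULT_WINDOW,
                              BROTLI_MODE_GENERIC, in_size, in,
                              &enc_size, enc)) {      /* enc_size: in = capacity, out = size */
        size_t dec_size = in_size;                    /* capacity of `dec` */
        ok = BrotliDecoderDecompress(enc_size, enc, &dec_size, dec)
                 == BROTLI_DECODER_RESULT_SUCCESS
             && dec_size == in_size
             && memcmp(in, dec, in_size) == 0;
    }
    free(enc);
    free(dec);
    return ok;
}
```

For large or chunked inputs the streaming entry points (BrotliEncoderCompressStream(), BrotliDecoderDecompressStream()) are the usual choice instead of the one-shot calls.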
### Build instructions
#### Make
To build and run tests, simply do:
$ ./configure && make
If you want to install brotli, use one of the more advanced build systems below.
#### Bazel
See [Bazel](http://www.bazel.build/)
#### CMake
The basic commands to build, test and install brotli are:
$ mkdir out && cd out && ../configure-cmake && make
$ make test
$ make install
You can use other [CMake](https://cmake.org/) configurations. For example, to
build static libraries and use a custom installation directory:
$ mkdir out-static && \
cd out-static && \
../configure-cmake --disable-shared-libs --prefix='/my/prefix/dir/'
$ make install
#### Premake5
See [Premake5](https://premake.github.io/)
#### Python
To install the Python module from source, run the following:
$ python setup.py install
See the [Python readme](python/README.md) for more details on testing
and development.
### Benchmarks
* [Squash Compression Benchmark](https://quixdb.github.io/squash-benchmark/) / [Unstable Squash Compression Benchmark](https://quixdb.github.io/squash-benchmark/unstable/)
* [Large Text Compression Benchmark](http://mattmahoney.net/dc/text.html)
* [Lzturbo Benchmark](https://sites.google.com/site/powturbo/home/benchmark)
### Related projects
Independent [decoder](https://github.com/madler/brotli) implementation by Mark Adler, based entirely on the format specification.
JavaScript port of the brotli [decoder](https://github.com/devongovett/brotli.js). It can be used directly via `npm install brotli`

View File

@@ -6,16 +6,16 @@
/* Function to find backward reference copies. */
#include "./backward_references.h"
#include "./enc/backward_references.h"
#include "../common/constants.h"
#include "../common/dictionary.h"
#include <brotli/types.h>
#include "./command.h"
#include "./dictionary_hash.h"
#include "./memory.h"
#include "./port.h"
#include "./quality.h"
#include "./common//constants.h"
#include "./common//dictionary.h"
#include "types.h"
#include "./enc/command.h"
#include "./enc/dictionary_hash.h"
#include "./enc/memory.h"
#include "./enc/port.h"
#include "./enc/quality.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {
@@ -51,47 +51,47 @@ static BROTLI_INLINE size_t ComputeDistanceCode(size_t distance,
#define HASHER() H2
/* NOLINTNEXTLINE(build/include) */
#include "./backward_references_inc.h"
#include "./enc/backward_references_inc.h"
#undef HASHER
#define HASHER() H3
/* NOLINTNEXTLINE(build/include) */
#include "./backward_references_inc.h"
#include "./enc/backward_references_inc.h"
#undef HASHER
#define HASHER() H4
/* NOLINTNEXTLINE(build/include) */
#include "./backward_references_inc.h"
#include "./enc/backward_references_inc.h"
#undef HASHER
#define HASHER() H5
/* NOLINTNEXTLINE(build/include) */
#include "./backward_references_inc.h"
#include "./enc/backward_references_inc.h"
#undef HASHER
#define HASHER() H6
/* NOLINTNEXTLINE(build/include) */
#include "./backward_references_inc.h"
#include "./enc/backward_references_inc.h"
#undef HASHER
#define HASHER() H40
/* NOLINTNEXTLINE(build/include) */
#include "./backward_references_inc.h"
#include "./enc/backward_references_inc.h"
#undef HASHER
#define HASHER() H41
/* NOLINTNEXTLINE(build/include) */
#include "./backward_references_inc.h"
#include "./enc/backward_references_inc.h"
#undef HASHER
#define HASHER() H42
/* NOLINTNEXTLINE(build/include) */
#include "./backward_references_inc.h"
#include "./enc/backward_references_inc.h"
#undef HASHER
#define HASHER() H54
/* NOLINTNEXTLINE(build/include) */
#include "./backward_references_inc.h"
#include "./enc/backward_references_inc.h"
#undef HASHER
#undef FN

View File

@@ -6,20 +6,20 @@
/* Function to find backward reference copies. */
#include "./backward_references_hq.h"
#include "./enc/backward_references_hq.h"
#include <string.h> /* memcpy, memset */
#include "../common/constants.h"
#include <brotli/types.h>
#include "./command.h"
#include "./fast_log.h"
#include "./find_match_length.h"
#include "./literal_cost.h"
#include "./memory.h"
#include "./port.h"
#include "./prefix.h"
#include "./quality.h"
#include "./common//constants.h"
#include "types.h"
#include "./enc/command.h"
#include "./enc/fast_log.h"
#include "./enc/find_match_length.h"
#include "./enc/literal_cost.h"
#include "./enc/memory.h"
#include "./enc/port.h"
#include "./enc/prefix.h"
#include "./enc/quality.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -6,28 +6,28 @@
/* Functions to estimate the bit cost of Huffman trees. */
#include "./bit_cost.h"
#include "./enc/bit_cost.h"
#include "../common/constants.h"
#include <brotli/types.h>
#include "./fast_log.h"
#include "./histogram.h"
#include "./port.h"
#include "./common//constants.h"
#include "types.h"
#include "./enc/fast_log.h"
#include "./enc/histogram.h"
#include "./enc/port.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {
#endif
#define FN(X) X ## Literal
#include "./bit_cost_inc.h" /* NOLINT(build/include) */
#include "./enc/bit_cost_inc.h" /* NOLINT(build/include) */
#undef FN
#define FN(X) X ## Command
#include "./bit_cost_inc.h" /* NOLINT(build/include) */
#include "./enc/bit_cost_inc.h" /* NOLINT(build/include) */
#undef FN
#define FN(X) X ## Distance
#include "./bit_cost_inc.h" /* NOLINT(build/include) */
#include "./enc/bit_cost_inc.h" /* NOLINT(build/include) */
#undef FN
#if defined(__cplusplus) || defined(c_plusplus)

View File

@@ -6,10 +6,10 @@
/* Bit reading helpers */
#include "./bit_reader.h"
#include "./dec/bit_reader.h"
#include <brotli/types.h>
#include "./port.h"
#include "types.h"
#include "./dec/port.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -6,19 +6,19 @@
/* Block split point selection utilities. */
#include "./block_splitter.h"
#include "./enc/block_splitter.h"
#include <assert.h>
#include <string.h> /* memcpy, memset */
#include "./bit_cost.h"
#include "./cluster.h"
#include "./command.h"
#include "./fast_log.h"
#include "./histogram.h"
#include "./memory.h"
#include "./port.h"
#include "./quality.h"
#include "./enc/bit_cost.h"
#include "./enc/cluster.h"
#include "./enc/command.h"
#include "./enc/fast_log.h"
#include "./enc/histogram.h"
#include "./enc/memory.h"
#include "./enc/port.h"
#include "./enc/quality.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {
@@ -92,19 +92,19 @@ static BROTLI_INLINE double BitCost(size_t count) {
#define FN(X) X ## Literal
#define DataType uint8_t
/* NOLINTNEXTLINE(build/include) */
#include "./block_splitter_inc.h"
#include "./enc/block_splitter_inc.h"
#undef DataType
#undef FN
#define FN(X) X ## Command
#define DataType uint16_t
/* NOLINTNEXTLINE(build/include) */
#include "./block_splitter_inc.h"
#include "./enc/block_splitter_inc.h"
#undef FN
#define FN(X) X ## Distance
/* NOLINTNEXTLINE(build/include) */
#include "./block_splitter_inc.h"
#include "./enc/block_splitter_inc.h"
#undef DataType
#undef FN

View File

@@ -8,19 +8,19 @@
compression algorithms here, just the right ordering of bits to match the
specs. */
#include "./brotli_bit_stream.h"
#include "./enc/brotli_bit_stream.h"
#include <string.h> /* memcpy, memset */
#include "../common/constants.h"
#include <brotli/types.h>
#include "./context.h"
#include "./entropy_encode.h"
#include "./entropy_encode_static.h"
#include "./fast_log.h"
#include "./memory.h"
#include "./port.h"
#include "./write_bits.h"
#include "./common//constants.h"
#include "types.h"
#include "./enc/context.h"
#include "./enc/entropy_encode.h"
#include "./enc/entropy_encode_static.h"
#include "./enc/fast_log.h"
#include "./enc/memory.h"
#include "./enc/port.h"
#include "./enc/write_bits.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {
@@ -926,17 +926,17 @@ static void StoreSymbolWithContext(BlockEncoder* self, size_t symbol,
#define FN(X) X ## Literal
/* NOLINTNEXTLINE(build/include) */
#include "./block_encoder_inc.h"
#include "./enc/block_encoder_inc.h"
#undef FN
#define FN(X) X ## Command
/* NOLINTNEXTLINE(build/include) */
#include "./block_encoder_inc.h"
#include "./enc/block_encoder_inc.h"
#undef FN
#define FN(X) X ## Distance
/* NOLINTNEXTLINE(build/include) */
#include "./block_encoder_inc.h"
#include "./enc/block_encoder_inc.h"
#undef FN
static void JumpToByteBoundary(size_t* storage_ix, uint8_t* storage) {

View File

@@ -6,14 +6,14 @@
/* Functions for clustering similar histograms together. */
#include "./cluster.h"
#include "./enc/cluster.h"
#include <brotli/types.h>
#include "./bit_cost.h" /* BrotliPopulationCost */
#include "./fast_log.h"
#include "./histogram.h"
#include "./memory.h"
#include "./port.h"
#include "types.h"
#include "./enc/bit_cost.h" /* BrotliPopulationCost */
#include "./enc/fast_log.h"
#include "./enc/histogram.h"
#include "./enc/memory.h"
#include "./enc/port.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {
@@ -38,15 +38,15 @@ static BROTLI_INLINE double ClusterCostDiff(size_t size_a, size_t size_b) {
#define CODE(X) X
#define FN(X) X ## Literal
#include "./cluster_inc.h" /* NOLINT(build/include) */
#include "./enc/cluster_inc.h" /* NOLINT(build/include) */
#undef FN
#define FN(X) X ## Command
#include "./cluster_inc.h" /* NOLINT(build/include) */
#include "./enc/cluster_inc.h" /* NOLINT(build/include) */
#undef FN
#define FN(X) X ## Distance
#include "./cluster_inc.h" /* NOLINT(build/include) */
#include "./enc/cluster_inc.h" /* NOLINT(build/include) */
#undef FN
#undef CODE

View File

@@ -12,19 +12,19 @@
Adapted from the CompressFragment() function in
https://github.com/google/snappy/blob/master/snappy.cc */
#include "./compress_fragment.h"
#include "./enc/compress_fragment.h"
#include <string.h> /* memcmp, memcpy, memset */
#include "../common/constants.h"
#include <brotli/types.h>
#include "./brotli_bit_stream.h"
#include "./entropy_encode.h"
#include "./fast_log.h"
#include "./find_match_length.h"
#include "./memory.h"
#include "./port.h"
#include "./write_bits.h"
#include "./common//constants.h"
#include "types.h"
#include "./enc/brotli_bit_stream.h"
#include "./enc/entropy_encode.h"
#include "./enc/fast_log.h"
#include "./enc/find_match_length.h"
#include "./enc/memory.h"
#include "./enc/port.h"
#include "./enc/write_bits.h"
#if defined(__cplusplus) || defined(c_plusplus)

View File

@@ -10,20 +10,20 @@
second pass we emit them into the bit stream using prefix codes built based
on the actual command and literal byte histograms. */
#include "./compress_fragment_two_pass.h"
#include "./enc/compress_fragment_two_pass.h"
#include <string.h> /* memcmp, memcpy, memset */
#include "../common/constants.h"
#include <brotli/types.h>
#include "./bit_cost.h"
#include "./brotli_bit_stream.h"
#include "./entropy_encode.h"
#include "./fast_log.h"
#include "./find_match_length.h"
#include "./memory.h"
#include "./port.h"
#include "./write_bits.h"
#include "./common//constants.h"
#include "types.h"
#include "./enc/bit_cost.h"
#include "./enc/brotli_bit_stream.h"
#include "./enc/entropy_encode.h"
#include "./enc/fast_log.h"
#include "./enc/find_match_length.h"
#include "./enc/memory.h"
#include "./enc/port.h"
#include "./enc/write_bits.h"
#if defined(__cplusplus) || defined(c_plusplus)

View File

@@ -4,7 +4,7 @@
See file LICENSE for detail or copy at https://opensource.org/licenses/MIT
*/
#include <brotli/decode.h>
#include "decode.h"
#ifdef __ARM_NEON__
#include <arm_neon.h>
@@ -13,16 +13,16 @@
#include <stdlib.h> /* free, malloc */
#include <string.h> /* memcpy, memset */
#include "../common/constants.h"
#include "../common/dictionary.h"
#include "../common/version.h"
#include "./bit_reader.h"
#include "./context.h"
#include "./huffman.h"
#include "./port.h"
#include "./prefix.h"
#include "./state.h"
#include "./transform.h"
#include "./common/constants.h"
#include "./common/dictionary.h"
#include "./common/version.h"
#include "./dec/bit_reader.h"
#include "./dec/context.h"
#include "./dec/huffman.h"
#include "./dec/port.h"
#include "./dec/prefix.h"
#include "./dec/state.h"
#include "./dec/transform.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -4,7 +4,7 @@
See file LICENSE for detail or copy at https://opensource.org/licenses/MIT
*/
#include "./dictionary.h"
#include "./common/dictionary.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -6,8 +6,8 @@
/* Hash table on the 4-byte prefixes of static dictionary words. */
#include <brotli/port.h>
#include "./dictionary_hash.h"
#include "port.h"
#include "./enc/dictionary_hash.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -6,31 +6,31 @@
/* Implementation of Brotli compressor. */
#include <brotli/encode.h>
#include "encode.h"
#include <stdlib.h> /* free, malloc */
#include <string.h> /* memcpy, memset */
#include "../common/version.h"
#include "./backward_references.h"
#include "./backward_references_hq.h"
#include "./bit_cost.h"
#include "./brotli_bit_stream.h"
#include "./compress_fragment.h"
#include "./compress_fragment_two_pass.h"
#include "./context.h"
#include "./entropy_encode.h"
#include "./fast_log.h"
#include "./hash.h"
#include "./histogram.h"
#include "./memory.h"
#include "./metablock.h"
#include "./port.h"
#include "./prefix.h"
#include "./quality.h"
#include "./ringbuffer.h"
#include "./utf8_util.h"
#include "./write_bits.h"
#include "./common//version.h"
#include "./enc/backward_references.h"
#include "./enc/backward_references_hq.h"
#include "./enc/bit_cost.h"
#include "./enc/brotli_bit_stream.h"
#include "./enc/compress_fragment.h"
#include "./enc/compress_fragment_two_pass.h"
#include "./enc/context.h"
#include "./enc/entropy_encode.h"
#include "./enc/fast_log.h"
#include "./enc/hash.h"
#include "./enc/histogram.h"
#include "./enc/memory.h"
#include "./enc/metablock.h"
#include "./enc/port.h"
#include "./enc/prefix.h"
#include "./enc/quality.h"
#include "./enc/ringbuffer.h"
#include "./enc/utf8_util.h"
#include "./enc/write_bits.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {
@@ -1792,23 +1792,6 @@ uint32_t BrotliEncoderVersion(void) {
return BROTLI_VERSION;
}
/* DEPRECATED >>> */
size_t BrotliEncoderInputBlockSize(BrotliEncoderState* s) {
return InputBlockSize(s);
}
void BrotliEncoderCopyInputToRingBuffer(BrotliEncoderState* s,
const size_t input_size,
const uint8_t* input_buffer) {
CopyInputToRingBuffer(s, input_size, input_buffer);
}
BROTLI_BOOL BrotliEncoderWriteData(
BrotliEncoderState* s, const BROTLI_BOOL is_last,
const BROTLI_BOOL force_flush, size_t* out_size, uint8_t** output) {
return EncodeData(s, is_last, force_flush, out_size, output);
}
/* <<< DEPRECATED */
#if defined(__cplusplus) || defined(c_plusplus)
} /* extern "C" */
#endif

View File

@@ -6,13 +6,13 @@
/* Entropy encoding (Huffman) utilities. */
#include "./entropy_encode.h"
#include "./enc/entropy_encode.h"
#include <string.h> /* memset */
#include "../common/constants.h"
#include <brotli/types.h>
#include "./port.h"
#include "./common//constants.h"
#include "types.h"
#include "./enc/port.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -6,11 +6,11 @@
/* Build per-context histograms of literals, commands and distance codes. */
#include "./histogram.h"
#include "./enc/histogram.h"
#include "./block_splitter.h"
#include "./command.h"
#include "./context.h"
#include "./enc/block_splitter.h"
#include "./enc/command.h"
#include "./enc/context.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -6,13 +6,13 @@
/* Utilities for building Huffman decoding tables. */
#include "./huffman.h"
#include "./dec/huffman.h"
#include <string.h> /* memcpy, memset */
#include "../common/constants.h"
#include <brotli/types.h>
#include "./port.h"
#include "./common/constants.h"
#include "types.h"
#include "./dec/port.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -7,12 +7,12 @@
/* Literal cost model to allow backward reference replacement to be efficient.
*/
#include "./literal_cost.h"
#include "./enc/literal_cost.h"
#include <brotli/types.h>
#include "./fast_log.h"
#include "./port.h"
#include "./utf8_util.h"
#include "types.h"
#include "./enc/fast_log.h"
#include "./enc/port.h"
#include "./enc/utf8_util.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -7,14 +7,14 @@
/* Algorithms for distributing the literals and commands of a metablock between
block types and contexts. */
#include "./memory.h"
#include "./enc/memory.h"
#include <assert.h>
#include <stdlib.h> /* exit, free, malloc */
#include <string.h> /* memcpy */
#include <brotli/types.h>
#include "./port.h"
#include "types.h"
#include "./enc/port.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -7,19 +7,19 @@
/* Algorithms for distributing the literals and commands of a metablock between
block types and contexts. */
#include "./metablock.h"
#include "./enc/metablock.h"
#include "../common/constants.h"
#include <brotli/types.h>
#include "./bit_cost.h"
#include "./block_splitter.h"
#include "./cluster.h"
#include "./context.h"
#include "./entropy_encode.h"
#include "./histogram.h"
#include "./memory.h"
#include "./port.h"
#include "./quality.h"
#include "./common//constants.h"
#include "types.h"
#include "./enc/bit_cost.h"
#include "./enc/block_splitter.h"
#include "./enc/cluster.h"
#include "./enc/context.h"
#include "./enc/entropy_encode.h"
#include "./enc/histogram.h"
#include "./enc/memory.h"
#include "./enc/port.h"
#include "./enc/quality.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {
@@ -145,15 +145,15 @@ void BrotliBuildMetaBlock(MemoryManager* m,
}
#define FN(X) X ## Literal
#include "./metablock_inc.h" /* NOLINT(build/include) */
#include "./enc/metablock_inc.h" /* NOLINT(build/include) */
#undef FN
#define FN(X) X ## Command
#include "./metablock_inc.h" /* NOLINT(build/include) */
#include "./enc/metablock_inc.h" /* NOLINT(build/include) */
#undef FN
#define FN(X) X ## Distance
#include "./metablock_inc.h" /* NOLINT(build/include) */
#include "./enc/metablock_inc.h" /* NOLINT(build/include) */
#undef FN
#define BROTLI_MAX_STATIC_CONTEXTS 13

View File

@@ -4,12 +4,12 @@
See file LICENSE for detail or copy at https://opensource.org/licenses/MIT
*/
#include "./state.h"
#include "./dec/state.h"
#include <stdlib.h> /* free, malloc */
#include <brotli/types.h>
#include "./huffman.h"
#include "types.h"
#include "./dec/huffman.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -4,12 +4,12 @@
See file LICENSE for detail or copy at https://opensource.org/licenses/MIT
*/
#include "./static_dict.h"
#include "./enc/static_dict.h"
#include "../common/dictionary.h"
#include "./find_match_length.h"
#include "./port.h"
#include "./static_dict_lut.h"
#include "./common//dictionary.h"
#include "./enc/find_match_length.h"
#include "./enc/port.h"
#include "./enc/static_dict_lut.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -6,9 +6,9 @@
/* Heuristics for deciding about the UTF8-ness of strings. */
#include "./utf8_util.h"
#include "./enc/utf8_util.h"
#include <brotli/types.h>
#include "types.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -1,5 +0,0 @@
#include <brotli/encode.h>
#include <brotli/decode.h>
#include <brotli/port.h>
#include <brotli/types.h>

View File

@@ -9,8 +9,8 @@
#ifndef BROTLI_COMMON_DICTIONARY_H_
#define BROTLI_COMMON_DICTIONARY_H_
#include <brotli/port.h>
#include <brotli/types.h>
#include "../port.h"
#include "../types.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -14,6 +14,6 @@
BrotliEncoderVersion methods. */
/* Semantic version, calculated as (MAJOR << 24) | (MINOR << 12) | PATCH */
#define BROTLI_VERSION 0x0006000
#define BROTLI_VERSION 0x1000000
#endif /* BROTLI_COMMON_VERSION_H_ */
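Decoded with the formula stated in the comment above, the two constants read as follows (a worked note, not part of the upstream header):

```c
/* BROTLI_VERSION = (MAJOR << 24) | (MINOR << 12) | PATCH         */
/* old: 0x0006000 = (0 << 24) | (6 << 12) | 0  ->  version 0.6.0  */
/* new: 0x1000000 = (1 << 24) | (0 << 12) | 0  ->  version 1.0.0  */
```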

View File

@@ -11,7 +11,7 @@
#include <string.h> /* memcpy */
#include <brotli/types.h>
#include "../types.h"
#include "./port.h"
#if defined(__cplusplus) || defined(c_plusplus)

View File

@@ -99,7 +99,7 @@
#ifndef BROTLI_DEC_CONTEXT_H_
#define BROTLI_DEC_CONTEXT_H_
#include <brotli/types.h>
#include "../types.h"
enum ContextType {
CONTEXT_LSB6 = 0,

View File

@@ -9,7 +9,7 @@
#ifndef BROTLI_DEC_HUFFMAN_H_
#define BROTLI_DEC_HUFFMAN_H_
#include <brotli/types.h>
#include "../types.h"
#include "./port.h"
#if defined(__cplusplus) || defined(c_plusplus)

View File

@@ -30,7 +30,7 @@
#include <stdio.h>
#endif
#include <brotli/port.h>
#include "../port.h"
#if defined(__arm__) || defined(__thumb__) || \
defined(_M_ARM) || defined(_M_ARMT) || defined(__ARM64_ARCH_8__)

View File

@@ -12,7 +12,7 @@
#define BROTLI_DEC_PREFIX_H_
#include "../common/constants.h"
#include <brotli/types.h>
#include "../types.h"
/* Represents the range of values belonging to a prefix code: */
/* [offset, offset + 2^nbits) */

View File

@@ -11,7 +11,7 @@
#include "../common/constants.h"
#include "../common/dictionary.h"
#include <brotli/types.h>
#include "../types.h"
#include "./bit_reader.h"
#include "./huffman.h"
#include "./port.h"

View File

@@ -9,7 +9,7 @@
#ifndef BROTLI_DEC_TRANSFORM_H_
#define BROTLI_DEC_TRANSFORM_H_
#include <brotli/types.h>
#include "../types.h"
#include "./port.h"
#if defined(__cplusplus) || defined(c_plusplus)

View File

@@ -12,8 +12,8 @@
#ifndef BROTLI_DEC_DECODE_H_
#define BROTLI_DEC_DECODE_H_
#include <brotli/port.h>
#include <brotli/types.h>
#include "port.h"
#include "types.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -11,7 +11,7 @@
#include "../common/constants.h"
#include "../common/dictionary.h"
#include <brotli/types.h>
#include "../types.h"
#include "./command.h"
#include "./hash.h"
#include "./port.h"

View File

@@ -11,7 +11,7 @@
#include "../common/constants.h"
#include "../common/dictionary.h"
#include <brotli/types.h>
#include "../types.h"
#include "./command.h"
#include "./hash.h"
#include "./memory.h"

View File

@@ -9,7 +9,7 @@
#ifndef BROTLI_ENC_BIT_COST_H_
#define BROTLI_ENC_BIT_COST_H_
#include <brotli/types.h>
#include "../types.h"
#include "./fast_log.h"
#include "./histogram.h"
#include "./port.h"

View File

@@ -9,7 +9,7 @@
#ifndef BROTLI_ENC_BLOCK_SPLITTER_H_
#define BROTLI_ENC_BLOCK_SPLITTER_H_
#include <brotli/types.h>
#include "../types.h"
#include "./command.h"
#include "./memory.h"
#include "./port.h"

View File

@@ -16,7 +16,7 @@
#ifndef BROTLI_ENC_BROTLI_BIT_STREAM_H_
#define BROTLI_ENC_BROTLI_BIT_STREAM_H_
#include <brotli/types.h>
#include "../types.h"
#include "./command.h"
#include "./context.h"
#include "./entropy_encode.h"

View File

@@ -9,7 +9,7 @@
#ifndef BROTLI_ENC_CLUSTER_H_
#define BROTLI_ENC_CLUSTER_H_
#include <brotli/types.h>
#include "../types.h"
#include "./histogram.h"
#include "./memory.h"
#include "./port.h"

View File

@@ -10,8 +10,8 @@
#define BROTLI_ENC_COMMAND_H_
#include "../common/constants.h"
#include <brotli/port.h>
#include <brotli/types.h>
#include "../port.h"
#include "../types.h"
#include "./fast_log.h"
#include "./prefix.h"

View File

@@ -12,7 +12,7 @@
#ifndef BROTLI_ENC_COMPRESS_FRAGMENT_H_
#define BROTLI_ENC_COMPRESS_FRAGMENT_H_
#include <brotli/types.h>
#include "../types.h"
#include "./memory.h"
#include "./port.h"

View File

@@ -13,7 +13,7 @@
#ifndef BROTLI_ENC_COMPRESS_FRAGMENT_TWO_PASS_H_
#define BROTLI_ENC_COMPRESS_FRAGMENT_TWO_PASS_H_
#include <brotli/types.h>
#include "../types.h"
#include "./memory.h"
#include "./port.h"

View File

@@ -9,8 +9,8 @@
#ifndef BROTLI_ENC_CONTEXT_H_
#define BROTLI_ENC_CONTEXT_H_
#include <brotli/port.h>
#include <brotli/types.h>
#include "../port.h"
#include "../types.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -9,7 +9,7 @@
#ifndef BROTLI_ENC_DICTIONARY_HASH_H_
#define BROTLI_ENC_DICTIONARY_HASH_H_
#include <brotli/types.h>
#include "../types.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -9,7 +9,7 @@
#ifndef BROTLI_ENC_ENTROPY_ENCODE_H_
#define BROTLI_ENC_ENTROPY_ENCODE_H_
#include <brotli/types.h>
#include "../types.h"
#include "./port.h"
#if defined(__cplusplus) || defined(c_plusplus)

View File

@@ -10,8 +10,8 @@
#define BROTLI_ENC_ENTROPY_ENCODE_STATIC_H_
#include "../common/constants.h"
#include <brotli/port.h>
#include <brotli/types.h>
#include "../port.h"
#include "../types.h"
#include "./write_bits.h"
#if defined(__cplusplus) || defined(c_plusplus)

View File

@@ -11,8 +11,8 @@
#include <math.h>
#include <brotli/types.h>
#include <brotli/port.h>
#include "../types.h"
#include "../port.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -9,7 +9,7 @@
#ifndef BROTLI_ENC_FIND_MATCH_LENGTH_H_
#define BROTLI_ENC_FIND_MATCH_LENGTH_H_
#include <brotli/types.h>
#include "../types.h"
#include "./port.h"
#if defined(__cplusplus) || defined(c_plusplus)

View File

@@ -14,7 +14,7 @@
#include "../common/constants.h"
#include "../common/dictionary.h"
#include <brotli/types.h>
#include "../types.h"
#include "./fast_log.h"
#include "./find_match_length.h"
#include "./memory.h"

View File

@@ -12,7 +12,7 @@
#include <string.h> /* memset */
#include "../common/constants.h"
#include <brotli/types.h>
#include "../types.h"
#include "./block_splitter.h"
#include "./command.h"
#include "./context.h"

View File

@@ -10,7 +10,7 @@
#ifndef BROTLI_ENC_LITERAL_COST_H_
#define BROTLI_ENC_LITERAL_COST_H_
#include <brotli/types.h>
#include "../types.h"
#include "./port.h"
#if defined(__cplusplus) || defined(c_plusplus)

View File

@@ -9,7 +9,7 @@
#ifndef BROTLI_ENC_MEMORY_H_
#define BROTLI_ENC_MEMORY_H_
#include <brotli/types.h>
#include "../types.h"
#include "./port.h"
#if defined(__cplusplus) || defined(c_plusplus)

View File

@@ -10,7 +10,7 @@
#ifndef BROTLI_ENC_METABLOCK_H_
#define BROTLI_ENC_METABLOCK_H_
#include <brotli/types.h>
#include "../types.h"
#include "./block_splitter.h"
#include "./command.h"
#include "./context.h"

View File

@@ -12,8 +12,8 @@
#include <assert.h>
#include <string.h> /* memcpy */
#include <brotli/port.h>
#include <brotli/types.h>
#include "../port.h"
#include "../types.h"
#if defined OS_LINUX || defined OS_CYGWIN
#include <endian.h>

View File

@@ -11,8 +11,8 @@
#define BROTLI_ENC_PREFIX_H_
#include "../common/constants.h"
#include <brotli/port.h>
#include <brotli/types.h>
#include "../port.h"
#include "../types.h"
#include "./fast_log.h"
#if defined(__cplusplus) || defined(c_plusplus)

View File

@@ -10,7 +10,7 @@
#ifndef BROTLI_ENC_QUALITY_H_
#define BROTLI_ENC_QUALITY_H_
#include <brotli/encode.h>
#include "../encode.h"
#define FAST_ONE_PASS_COMPRESSION_QUALITY 0
#define FAST_TWO_PASS_COMPRESSION_QUALITY 1

View File

@@ -11,7 +11,7 @@
#include <string.h> /* memcpy */
#include <brotli/types.h>
#include "../types.h"
#include "./memory.h"
#include "./port.h"
#include "./quality.h"

View File

@@ -10,7 +10,7 @@
#define BROTLI_ENC_STATIC_DICT_H_
#include "../common/dictionary.h"
#include <brotli/types.h>
#include "../types.h"
#include "./port.h"
#if defined(__cplusplus) || defined(c_plusplus)

View File

@@ -9,7 +9,7 @@
#ifndef BROTLI_ENC_STATIC_DICT_LUT_H_
#define BROTLI_ENC_STATIC_DICT_LUT_H_
#include <brotli/types.h>
#include "../types.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {

View File

@@ -9,7 +9,7 @@
#ifndef BROTLI_ENC_UTF8_UTIL_H_
#define BROTLI_ENC_UTF8_UTIL_H_
#include <brotli/types.h>
#include "../types.h"
#include "./port.h"
#if defined(__cplusplus) || defined(c_plusplus)

View File

@@ -12,7 +12,7 @@
#include <assert.h>
#include <stdio.h> /* printf */
#include <brotli/types.h>
#include "../types.h"
#include "./port.h"
#if defined(__cplusplus) || defined(c_plusplus)

View File

@@ -12,8 +12,8 @@
#ifndef BROTLI_ENC_ENCODE_H_
#define BROTLI_ENC_ENCODE_H_
#include <brotli/port.h>
#include <brotli/types.h>
#include "port.h"
#include "types.h"
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {
@@ -36,11 +36,6 @@ extern "C" {
/** Maximal value for ::BROTLI_PARAM_QUALITY parameter. */
#define BROTLI_MAX_QUALITY 11
BROTLI_DEPRECATED static const int kBrotliMinWindowBits =
BROTLI_MIN_WINDOW_BITS;
BROTLI_DEPRECATED static const int kBrotliMaxWindowBits =
BROTLI_MAX_WINDOW_BITS;
/** Options for ::BROTLI_PARAM_MODE parameter. */
typedef enum BrotliEncoderMode {
/**
@@ -228,20 +223,6 @@ BROTLI_ENC_API BrotliEncoderState* BrotliEncoderCreateInstance(
*/
BROTLI_ENC_API void BrotliEncoderDestroyInstance(BrotliEncoderState* state);
/* Calculates maximum input size that can be processed at once. */
BROTLI_DEPRECATED BROTLI_ENC_API size_t BrotliEncoderInputBlockSize(
BrotliEncoderState* state);
/* Copies the given input data to the internal ring buffer. */
BROTLI_DEPRECATED BROTLI_ENC_API void BrotliEncoderCopyInputToRingBuffer(
BrotliEncoderState* state, const size_t input_size,
const uint8_t* input_buffer);
/* Processes the accumulated input. */
BROTLI_DEPRECATED BROTLI_ENC_API BROTLI_BOOL BrotliEncoderWriteData(
BrotliEncoderState* state, const BROTLI_BOOL is_last,
const BROTLI_BOOL force_flush, size_t* out_size, uint8_t** output);
/**
* Prepends imaginary LZ77 dictionary.
*

View File

@@ -117,6 +117,10 @@ OR:
#define BROTLI_INLINE
#endif
#else /* _MSC_VER */
# pragma warning(disable : 4100)
# pragma warning(disable : 4127)
# pragma warning(disable : 4389)
# pragma warning(disable : 4701)
#define BROTLI_INLINE __forceinline
#endif /* _MSC_VER */

C/lizard/LICENSE (Normal file, 25 lines)
View File

@@ -0,0 +1,25 @@
LZ5 Library
Copyright (C) 2011-2016, Yann Collet.
Copyright (C) 2015-2016, Przemyslaw Skibinski <inikep@gmail.com>
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

C/lizard/LZ5_to_LIZ.sh (Normal file, 14 lines)
View File

@@ -0,0 +1,14 @@
#!/bin/sh
# C/lizard/*
# /TR 2017-05-25
# rename the upstream LZ5/LZ5F prefixes and "lz5_ include references to LIZ/LIZF/"liz_
for i in *.c *.h; do
  sed -i 's/LZ5_/LIZ_/g' "$i"
  sed -i 's/LZ5F_/LIZF_/g' "$i"
  sed -i 's/"lz5_/"liz_/g' "$i"
done
# rename the lz5* files themselves to liz*
for f in lz5*; do
  l=$(echo "$f" | sed -e 's/lz5/liz/g')
  mv "$f" "$l"
done

C/lizard/README.md (Normal file, 95 lines)
View File

@@ -0,0 +1,95 @@
LZ5 - efficient compression with very fast decompression
--------------------------------------------------------
LZ5 is a lossless compression algorithm which contains 4 compression methods:
- fastLZ4 : compression levels -10...-19 are designed to give better decompression speed than [LZ4], i.e. over 2000 MB/s
- LZ5v2 : compression levels -20...-29 are designed to give a better ratio than [LZ4] while keeping 75% of its decompression speed
- fastLZ4 + Huffman : compression levels -30...-39 add Huffman coding to fastLZ4
- LZ5v2 + Huffman : compression levels -40...-49 give the best ratio (comparable to [zlib] and low levels of [zstd]/[brotli]) at a decompression speed of 1000 MB/s
The LZ5 library is based on the widely used [LZ4] library by Yann Collet.
The LZ5 library is provided as open-source software under the BSD 2-Clause license.
The high compression/decompression speed is achieved without any SSE or AVX extensions.
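As a rough sketch of how these level ranges are selected through the library's one-shot call: the prototypes below are assumed from the upstream LZ5 v2.0 headers (LZ5_compressBound(), LZ5_compress(), LZ5_decompress_safe()); in this source tree the LZ5_to_LIZ.sh script renames the prefixes to LIZ_, and the header names shown are likewise assumptions.

```c
#include <stdlib.h>
#include <string.h>
#include "lz5_compress.h"     /* assumed upstream header names; liz_*.h after LZ5_to_LIZ.sh */
#include "lz5_decompress.h"

/* Levels follow the ranges above:
   10..19 fastLZ4, 20..29 LZ5v2, 30..39 fastLZ4+Huffman, 40..49 LZ5v2+Huffman. */
static int lz5_roundtrip(const char* src, int src_size, int level)
{
    int   bound = LZ5_compressBound(src_size);
    char* comp  = malloc((size_t)bound);
    char* back  = malloc((size_t)src_size);
    int   ok    = 0;

    if (comp && back) {
        int csize = LZ5_compress(src, comp, src_size, bound, level);
        ok = csize > 0
             && LZ5_decompress_safe(comp, back, csize, src_size) == src_size
             && memcmp(src, back, (size_t)src_size) == 0;
    }
    free(comp);
    free(back);
    return ok;
}
```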
|Branch |Status |
|------------|---------|
|lz5_v1.5 | [![Build Status][travis15Badge]][travisLink] [![Build status][Appveyor15Badge]][AppveyorLink] |
|lz5_v2.0 | [![Build Status][travis20Badge]][travisLink] [![Build status][Appveyor20Badge]][AppveyorLink] |
[travis15Badge]: https://travis-ci.org/inikep/lz5.svg?branch=lz5_v1.5 "Continuous Integration test suite"
[travis20Badge]: https://travis-ci.org/inikep/lz5.svg?branch=lz5_v2.0 "Continuous Integration test suite"
[travisLink]: https://travis-ci.org/inikep/lz5
[Appveyor15Badge]: https://ci.appveyor.com/api/projects/status/o0ib75nwokjiui36/branch/lz5_v1.5?svg=true "Visual test suite"
[Appveyor20Badge]: https://ci.appveyor.com/api/projects/status/o0ib75nwokjiui36/branch/lz5_v2.0?svg=true "Visual test suite"
[AppveyorLink]: https://ci.appveyor.com/project/inikep/lz5
[LZ4]: https://github.com/lz4/lz4
[zlib]: https://github.com/madler/zlib
[zstd]: https://github.com/facebook/zstd
[brotli]: https://github.com/google/brotli
Benchmarks
-------------------------
The following results were obtained with [lzbench](https://github.com/inikep/lzbench) and `-t16,16`,
using one core of an Intel Core i5-4300U under Windows 10 64-bit (MinGW-w64 compilation with gcc 6.2.0),
on [silesia.tar], which contains the tarred files of the [Silesia compression corpus](http://sun.aei.polsl.pl/~sdeor/index.php?page=silesia).
| Compressor name | Compression| Decompress.| Compr. size | Ratio |
| --------------- | -----------| -----------| ----------- | ----- |
| memcpy | 7332 MB/s | 8719 MB/s | 211947520 |100.00 |
| lz4 1.7.3 | 440 MB/s | 2318 MB/s | 100880800 | 47.60 |
| lz4hc 1.7.3 -1 | 98 MB/s | 2121 MB/s | 87591763 | 41.33 |
| lz4hc 1.7.3 -4 | 55 MB/s | 2259 MB/s | 79807909 | 37.65 |
| lz4hc 1.7.3 -9 | 22 MB/s | 2315 MB/s | 77892285 | 36.75 |
| lz4hc 1.7.3 -12 | 17 MB/s | 2323 MB/s | 77849762 | 36.73 |
| lz4hc 1.7.3 -16 | 10 MB/s | 2323 MB/s | 77841782 | 36.73 |
| lz5 2.0 -10 | 346 MB/s | 2610 MB/s | 103402971 | 48.79 |
| lz5 2.0 -12 | 103 MB/s | 2458 MB/s | 86232422 | 40.69 |
| lz5 2.0 -15 | 50 MB/s | 2552 MB/s | 81187330 | 38.31 |
| lz5 2.0 -19 | 3.04 MB/s | 2497 MB/s | 77416400 | 36.53 |
| lz5 2.0 -21 | 157 MB/s | 1795 MB/s | 89239174 | 42.10 |
| lz5 2.0 -23 | 30 MB/s | 1778 MB/s | 81097176 | 38.26 |
| lz5 2.0 -26 | 6.63 MB/s | 1734 MB/s | 74503695 | 35.15 |
| lz5 2.0 -29 | 1.37 MB/s | 1634 MB/s | 68694227 | 32.41 |
| lz5 2.0 -30 | 246 MB/s | 909 MB/s | 85727429 | 40.45 |
| lz5 2.0 -32 | 94 MB/s | 1244 MB/s | 76929454 | 36.30 |
| lz5 2.0 -35 | 47 MB/s | 1435 MB/s | 73850400 | 34.84 |
| lz5 2.0 -39 | 2.94 MB/s | 1502 MB/s | 69807522 | 32.94 |
| lz5 2.0 -41 | 126 MB/s | 961 MB/s | 76100661 | 35.91 |
| lz5 2.0 -43 | 28 MB/s | 1101 MB/s | 70955653 | 33.48 |
| lz5 2.0 -46 | 6.25 MB/s | 1073 MB/s | 65413061 | 30.86 |
| lz5 2.0 -49 | 1.27 MB/s | 1064 MB/s | 60679215 | 28.63 |
| zlib 1.2.8 -1 | 66 MB/s | 244 MB/s | 77259029 | 36.45 |
| zlib 1.2.8 -6 | 20 MB/s | 263 MB/s | 68228431 | 32.19 |
| zlib 1.2.8 -9 | 8.37 MB/s | 266 MB/s | 67644548 | 31.92 |
| zstd 1.1.1 -1 | 235 MB/s | 645 MB/s | 73659468 | 34.75 |
| zstd 1.1.1 -2 | 181 MB/s | 600 MB/s | 70168955 | 33.11 |
| zstd 1.1.1 -5 | 88 MB/s | 565 MB/s | 65002208 | 30.67 |
| zstd 1.1.1 -8 | 31 MB/s | 619 MB/s | 61026497 | 28.79 |
| zstd 1.1.1 -11 | 16 MB/s | 613 MB/s | 59523167 | 28.08 |
| zstd 1.1.1 -15 | 4.97 MB/s | 639 MB/s | 58007773 | 27.37 |
| zstd 1.1.1 -18 | 2.87 MB/s | 583 MB/s | 55294241 | 26.09 |
| zstd 1.1.1 -22 | 1.44 MB/s | 505 MB/s | 52731930 | 24.88 |
| brotli 0.5.2 -0 | 217 MB/s | 244 MB/s | 78226979 | 36.91 |
| brotli 0.5.2 -2 | 96 MB/s | 283 MB/s | 68066621 | 32.11 |
| brotli 0.5.2 -5 | 24 MB/s | 312 MB/s | 60801716 | 28.69 |
| brotli 0.5.2 -8 | 5.56 MB/s | 324 MB/s | 57382470 | 27.07 |
| brotli 0.5.2 -11 | 0.39 MB/s | 266 MB/s | 51138054 | 24.13 |
[silesia.tar]: https://drive.google.com/file/d/0BwX7dtyRLxThenZpYU9zLTZhR1k/view?usp=sharing
Documentation
-------------------------
The raw LZ5 block compression format is detailed within [lz5_Block_format].
To compress an arbitrarily long file or data stream, multiple blocks are required.
Organizing these blocks and providing a common header format to handle their content
is the purpose of the Frame format, defined in [lz5_Frame_format].
Interoperable versions of LZ5 must respect this frame format.
[lz5_Block_format]: doc/lz5_Block_format.md
[lz5_Frame_format]: doc/lz5_Frame_format.md

View File

@@ -31,8 +31,8 @@
You can contact the author at :
- Source repository : https://github.com/Cyan4973/FiniteStateEntropy
****************************************************************** */
#ifndef FSE_H
#define FSE_H
#ifndef LIZFSE_H
#define LIZFSE_H
#if defined (__cplusplus)
extern "C" {
@@ -48,67 +48,67 @@ extern "C" {
/*-****************************************
* FSE simple functions
******************************************/
/*! FSE_compress() :
/*! LIZFSE_compress() :
Compress content of buffer 'src', of size 'srcSize', into destination buffer 'dst'.
'dst' buffer must be already allocated. Compression runs faster is dstCapacity >= FSE_compressBound(srcSize).
'dst' buffer must be already allocated. Compression runs faster is dstCapacity >= LIZFSE_compressBound(srcSize).
@return : size of compressed data (<= dstCapacity).
Special values : if return == 0, srcData is not compressible => Nothing is stored within dst !!!
if return == 1, srcData is a single byte symbol * srcSize times. Use RLE compression instead.
if FSE_isError(return), compression failed (more details using FSE_getErrorName())
if LIZFSE_isError(return), compression failed (more details using LIZFSE_getErrorName())
*/
size_t FSE_compress(void* dst, size_t dstCapacity,
size_t LIZFSE_compress(void* dst, size_t dstCapacity,
const void* src, size_t srcSize);
/*! FSE_decompress():
/*! LIZFSE_decompress():
Decompress FSE data from buffer 'cSrc', of size 'cSrcSize',
into already allocated destination buffer 'dst', of size 'dstCapacity'.
@return : size of regenerated data (<= maxDstSize),
or an error code, which can be tested using FSE_isError() .
or an error code, which can be tested using LIZFSE_isError() .
** Important ** : FSE_decompress() does not decompress non-compressible nor RLE data !!!
** Important ** : LIZFSE_decompress() does not decompress non-compressible nor RLE data !!!
Why ? : making this distinction requires a header.
Header management is intentionally delegated to the user layer, which can better manage special cases.
*/
size_t FSE_decompress(void* dst, size_t dstCapacity,
size_t LIZFSE_decompress(void* dst, size_t dstCapacity,
const void* cSrc, size_t cSrcSize);
/*-*****************************************
* Tool functions
******************************************/
size_t FSE_compressBound(size_t size); /* maximum compressed size */
size_t LIZFSE_compressBound(size_t size); /* maximum compressed size */
/* Error Management */
unsigned FSE_isError(size_t code); /* tells if a return value is an error code */
const char* FSE_getErrorName(size_t code); /* provides error code string (useful for debugging) */
unsigned LIZFSE_isError(size_t code); /* tells if a return value is an error code */
const char* LIZFSE_getErrorName(size_t code); /* provides error code string (useful for debugging) */
/*-*****************************************
* FSE advanced functions
******************************************/
/*! FSE_compress2() :
Same as FSE_compress(), but allows the selection of 'maxSymbolValue' and 'tableLog'
/*! LIZFSE_compress2() :
Same as LIZFSE_compress(), but allows the selection of 'maxSymbolValue' and 'tableLog'
Both parameters can be defined as '0' to mean : use default value
@return : size of compressed data
Special values : if return == 0, srcData is not compressible => Nothing is stored within cSrc !!!
if return == 1, srcData is a single byte symbol * srcSize times. Use RLE compression.
if FSE_isError(return), it's an error code.
if LIZFSE_isError(return), it's an error code.
*/
size_t FSE_compress2 (void* dst, size_t dstSize, const void* src, size_t srcSize, unsigned maxSymbolValue, unsigned tableLog);
size_t LIZFSE_compress2 (void* dst, size_t dstSize, const void* src, size_t srcSize, unsigned maxSymbolValue, unsigned tableLog);
/*-*****************************************
* FSE detailed API
******************************************/
/*!
FSE_compress() does the following:
LIZFSE_compress() does the following:
1. count symbol occurrence from source[] into table count[]
2. normalize counters so that sum(count[]) == Power_of_2 (2^tableLog)
3. save normalized counters to memory buffer using writeNCount()
4. build encoding table 'CTable' from normalized counters
5. encode the data stream using encoding table 'CTable'
FSE_decompress() does the following:
LIZFSE_decompress() does the following:
1. read normalized counters with readNCount()
2. build decoding table 'DTable' from normalized counters
3. decode the data stream using decoding table 'DTable'
@@ -120,128 +120,128 @@ or to save and provide normalized distribution using external method.
/* *** COMPRESSION *** */
/*! FSE_count():
/*! LIZFSE_count():
Provides the precise count of each byte within a table 'count'.
'count' is a table of unsigned int, of minimum size (*maxSymbolValuePtr+1).
*maxSymbolValuePtr will be updated if detected smaller than initial value.
@return : the count of the most frequent symbol (which is not identified).
if return == srcSize, there is only one symbol.
Can also return an error code, which can be tested with FSE_isError(). */
size_t FSE_count(unsigned* count, unsigned* maxSymbolValuePtr, const void* src, size_t srcSize);
Can also return an error code, which can be tested with LIZFSE_isError(). */
size_t LIZFSE_count(unsigned* count, unsigned* maxSymbolValuePtr, const void* src, size_t srcSize);
/*! FSE_optimalTableLog():
/*! LIZFSE_optimalTableLog():
dynamically downsize 'tableLog' when conditions are met.
It saves CPU time, by using smaller tables, while preserving or even improving compression ratio.
@return : recommended tableLog (necessarily <= 'maxTableLog') */
unsigned FSE_optimalTableLog(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue);
unsigned LIZFSE_optimalTableLog(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue);
/*! FSE_normalizeCount():
/*! LIZFSE_normalizeCount():
normalize counts so that sum(count[]) == Power_of_2 (2^tableLog)
'normalizedCounter' is a table of short, of minimum size (maxSymbolValue+1).
@return : tableLog,
or an errorCode, which can be tested using FSE_isError() */
size_t FSE_normalizeCount(short* normalizedCounter, unsigned tableLog, const unsigned* count, size_t srcSize, unsigned maxSymbolValue);
or an errorCode, which can be tested using LIZFSE_isError() */
size_t LIZFSE_normalizeCount(short* normalizedCounter, unsigned tableLog, const unsigned* count, size_t srcSize, unsigned maxSymbolValue);
/*! FSE_NCountWriteBound():
/*! LIZFSE_NCountWriteBound():
Provides the maximum possible size of an FSE normalized table, given 'maxSymbolValue' and 'tableLog'.
Typically useful for allocation purpose. */
size_t FSE_NCountWriteBound(unsigned maxSymbolValue, unsigned tableLog);
size_t LIZFSE_NCountWriteBound(unsigned maxSymbolValue, unsigned tableLog);
/*! FSE_writeNCount():
/*! LIZFSE_writeNCount():
Compactly save 'normalizedCounter' into 'buffer'.
@return : size of the compressed table,
or an errorCode, which can be tested using FSE_isError(). */
size_t FSE_writeNCount (void* buffer, size_t bufferSize, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog);
or an errorCode, which can be tested using LIZFSE_isError(). */
size_t LIZFSE_writeNCount (void* buffer, size_t bufferSize, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog);
/*! Constructor and Destructor of FSE_CTable.
Note that FSE_CTable size depends on 'tableLog' and 'maxSymbolValue' */
typedef unsigned FSE_CTable; /* don't allocate that. It's only meant to be more restrictive than void* */
FSE_CTable* FSE_createCTable (unsigned tableLog, unsigned maxSymbolValue);
void FSE_freeCTable (FSE_CTable* ct);
/*! Constructor and Destructor of LIZFSE_CTable.
Note that LIZFSE_CTable size depends on 'tableLog' and 'maxSymbolValue' */
typedef unsigned LIZFSE_CTable; /* don't allocate that. It's only meant to be more restrictive than void* */
LIZFSE_CTable* LIZFSE_createCTable (unsigned tableLog, unsigned maxSymbolValue);
void LIZFSE_freeCTable (LIZFSE_CTable* ct);
/*! FSE_buildCTable():
Builds `ct`, which must be already allocated, using FSE_createCTable().
@return : 0, or an errorCode, which can be tested using FSE_isError() */
size_t FSE_buildCTable(FSE_CTable* ct, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog);
/*! LIZFSE_buildCTable():
Builds `ct`, which must be already allocated, using LIZFSE_createCTable().
@return : 0, or an errorCode, which can be tested using LIZFSE_isError() */
size_t LIZFSE_buildCTable(LIZFSE_CTable* ct, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog);
/*! FSE_compress_usingCTable():
/*! LIZFSE_compress_usingCTable():
Compress `src` using `ct` into `dst` which must be already allocated.
@return : size of compressed data (<= `dstCapacity`),
or 0 if compressed data could not fit into `dst`,
or an errorCode, which can be tested using FSE_isError() */
size_t FSE_compress_usingCTable (void* dst, size_t dstCapacity, const void* src, size_t srcSize, const FSE_CTable* ct);
or an errorCode, which can be tested using LIZFSE_isError() */
size_t LIZFSE_compress_usingCTable (void* dst, size_t dstCapacity, const void* src, size_t srcSize, const LIZFSE_CTable* ct);
/*!
Tutorial :
----------
The first step is to count all symbols. FSE_count() does this job very fast.
The first step is to count all symbols. LIZFSE_count() does this job very fast.
Result will be saved into 'count', a table of unsigned int, which must be already allocated, and have 'maxSymbolValuePtr[0]+1' cells.
'src' is a table of bytes of size 'srcSize'. All values within 'src' MUST be <= maxSymbolValuePtr[0]
maxSymbolValuePtr[0] will be updated, with its real value (necessarily <= original value)
FSE_count() will return the number of occurrence of the most frequent symbol.
LIZFSE_count() will return the number of occurrence of the most frequent symbol.
This can be used to know if there is a single symbol within 'src', and to quickly evaluate its compressibility.
If there is an error, the function will return an ErrorCode (which can be tested using FSE_isError()).
If there is an error, the function will return an ErrorCode (which can be tested using LIZFSE_isError()).
The next step is to normalize the frequencies.
FSE_normalizeCount() will ensure that sum of frequencies is == 2 ^'tableLog'.
LIZFSE_normalizeCount() will ensure that sum of frequencies is == 2 ^'tableLog'.
It also guarantees a minimum of 1 to any Symbol with frequency >= 1.
You can use 'tableLog'==0 to mean "use default tableLog value".
If you are unsure of which tableLog value to use, you can ask FSE_optimalTableLog(),
If you are unsure of which tableLog value to use, you can ask LIZFSE_optimalTableLog(),
which will provide the optimal valid tableLog given sourceSize, maxSymbolValue, and a user-defined maximum (0 means "default").
The result of FSE_normalizeCount() will be saved into a table,
The result of LIZFSE_normalizeCount() will be saved into a table,
called 'normalizedCounter', which is a table of signed short.
'normalizedCounter' must be already allocated, and have at least 'maxSymbolValue+1' cells.
The return value is tableLog if everything proceeded as expected.
It is 0 if there is a single symbol within distribution.
If there is an error (ex: invalid tableLog value), the function will return an ErrorCode (which can be tested using FSE_isError()).
If there is an error (ex: invalid tableLog value), the function will return an ErrorCode (which can be tested using LIZFSE_isError()).
'normalizedCounter' can be saved in a compact manner to a memory area using FSE_writeNCount().
'normalizedCounter' can be saved in a compact manner to a memory area using LIZFSE_writeNCount().
'buffer' must be already allocated.
For guaranteed success, buffer size must be at least FSE_headerBound().
For guaranteed success, buffer size must be at least LIZFSE_headerBound().
The result of the function is the number of bytes written into 'buffer'.
If there is an error, the function will return an ErrorCode (which can be tested using FSE_isError(); ex : buffer size too small).
If there is an error, the function will return an ErrorCode (which can be tested using LIZFSE_isError(); ex : buffer size too small).
'normalizedCounter' can then be used to create the compression table 'CTable'.
The space required by 'CTable' must be already allocated, using FSE_createCTable().
You can then use FSE_buildCTable() to fill 'CTable'.
If there is an error, both functions will return an ErrorCode (which can be tested using FSE_isError()).
The space required by 'CTable' must be already allocated, using LIZFSE_createCTable().
You can then use LIZFSE_buildCTable() to fill 'CTable'.
If there is an error, both functions will return an ErrorCode (which can be tested using LIZFSE_isError()).
'CTable' can then be used to compress 'src', with FSE_compress_usingCTable().
Similar to FSE_count(), the convention is that 'src' is assumed to be a table of char of size 'srcSize'
'CTable' can then be used to compress 'src', with LIZFSE_compress_usingCTable().
Similar to LIZFSE_count(), the convention is that 'src' is assumed to be a table of char of size 'srcSize'
The function returns the size of compressed data (without header), necessarily <= `dstCapacity`.
If it returns '0', compressed data could not fit into 'dst'.
If there is an error, the function will return an ErrorCode (which can be tested using FSE_isError()).
If there is an error, the function will return an ErrorCode (which can be tested using LIZFSE_isError()).
*/
/* *** DECOMPRESSION *** */
/*! FSE_readNCount():
/*! LIZFSE_readNCount():
Read compactly saved 'normalizedCounter' from 'rBuffer'.
@return : size read from 'rBuffer',
or an errorCode, which can be tested using FSE_isError().
or an errorCode, which can be tested using LIZFSE_isError().
maxSymbolValuePtr[0] and tableLogPtr[0] will also be updated with their respective values */
size_t FSE_readNCount (short* normalizedCounter, unsigned* maxSymbolValuePtr, unsigned* tableLogPtr, const void* rBuffer, size_t rBuffSize);
size_t LIZFSE_readNCount (short* normalizedCounter, unsigned* maxSymbolValuePtr, unsigned* tableLogPtr, const void* rBuffer, size_t rBuffSize);
/*! Constructor and Destructor of FSE_DTable.
/*! Constructor and Destructor of LIZFSE_DTable.
Note that its size depends on 'tableLog' */
typedef unsigned FSE_DTable; /* don't allocate that. It's just a way to be more restrictive than void* */
FSE_DTable* FSE_createDTable(unsigned tableLog);
void FSE_freeDTable(FSE_DTable* dt);
typedef unsigned LIZFSE_DTable; /* don't allocate that. It's just a way to be more restrictive than void* */
LIZFSE_DTable* LIZFSE_createDTable(unsigned tableLog);
void LIZFSE_freeDTable(LIZFSE_DTable* dt);
/*! FSE_buildDTable():
Builds 'dt', which must be already allocated, using FSE_createDTable().
return : 0, or an errorCode, which can be tested using FSE_isError() */
size_t FSE_buildDTable (FSE_DTable* dt, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog);
/*! LIZFSE_buildDTable():
Builds 'dt', which must be already allocated, using LIZFSE_createDTable().
return : 0, or an errorCode, which can be tested using LIZFSE_isError() */
size_t LIZFSE_buildDTable (LIZFSE_DTable* dt, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog);
/*! FSE_decompress_usingDTable():
/*! LIZFSE_decompress_usingDTable():
Decompress compressed source `cSrc` of size `cSrcSize` using `dt`
into `dst` which must be already allocated.
@return : size of regenerated data (necessarily <= `dstCapacity`),
or an errorCode, which can be tested using FSE_isError() */
size_t FSE_decompress_usingDTable(void* dst, size_t dstCapacity, const void* cSrc, size_t cSrcSize, const FSE_DTable* dt);
or an errorCode, which can be tested using LIZFSE_isError() */
size_t LIZFSE_decompress_usingDTable(void* dst, size_t dstCapacity, const void* cSrc, size_t cSrcSize, const LIZFSE_DTable* dt);
/*!
Tutorial :
@@ -251,28 +251,28 @@ Tutorial :
If block is a single repeated byte, use memset() instead )
The first step is to obtain the normalized frequencies of symbols.
This can be performed by FSE_readNCount() if it was saved using FSE_writeNCount().
This can be performed by LIZFSE_readNCount() if it was saved using LIZFSE_writeNCount().
'normalizedCounter' must be already allocated, and have at least 'maxSymbolValuePtr[0]+1' cells of signed short.
In practice, that means it's necessary to know 'maxSymbolValue' beforehand,
or size the table to handle worst case situations (typically 256).
FSE_readNCount() will provide 'tableLog' and 'maxSymbolValue'.
The result of FSE_readNCount() is the number of bytes read from 'rBuffer'.
LIZFSE_readNCount() will provide 'tableLog' and 'maxSymbolValue'.
The result of LIZFSE_readNCount() is the number of bytes read from 'rBuffer'.
Note that 'rBufferSize' must be at least 4 bytes, even if useful information is less than that.
If there is an error, the function will return an error code, which can be tested using FSE_isError().
If there is an error, the function will return an error code, which can be tested using LIZFSE_isError().
The next step is to build the decompression tables 'FSE_DTable' from 'normalizedCounter'.
This is performed by the function FSE_buildDTable().
The space required by 'FSE_DTable' must be already allocated using FSE_createDTable().
If there is an error, the function will return an error code, which can be tested using FSE_isError().
The next step is to build the decompression tables 'LIZFSE_DTable' from 'normalizedCounter'.
This is performed by the function LIZFSE_buildDTable().
The space required by 'LIZFSE_DTable' must be already allocated using LIZFSE_createDTable().
If there is an error, the function will return an error code, which can be tested using LIZFSE_isError().
`FSE_DTable` can then be used to decompress `cSrc`, with FSE_decompress_usingDTable().
`LIZFSE_DTable` can then be used to decompress `cSrc`, with LIZFSE_decompress_usingDTable().
`cSrcSize` must be strictly correct, otherwise decompression will fail.
FSE_decompress_usingDTable() result will tell how many bytes were regenerated (<=`dstCapacity`).
If there is an error, the function will return an error code, which can be tested using FSE_isError(). (ex: dst buffer too small)
LIZFSE_decompress_usingDTable() result will tell how many bytes were regenerated (<=`dstCapacity`).
If there is an error, the function will return an error code, which can be tested using LIZFSE_isError(). (ex: dst buffer too small)
*/
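
As a companion to the tutorial above, a minimal editorial sketch (not part of the commit) that strings the declarations shown in this header together. It assumes the compressed bitstream follows the LIZFSE_writeNCount() header directly and that 'rBuffSize' covers exactly both parts; allocation-failure handling is omitted for brevity.

static size_t sketch_fseDecompressBlock(void* dst, size_t dstCapacity,
                                        const void* rBuffer, size_t rBuffSize)
{
    short normalizedCounter[256];      /* worst case : maxSymbolValue == 255 */
    unsigned maxSymbolValue = 255;
    unsigned tableLog;
    size_t const hSize = LIZFSE_readNCount(normalizedCounter, &maxSymbolValue,
                                           &tableLog, rBuffer, rBuffSize);
    if (LIZFSE_isError(hSize)) return hSize;

    {   LIZFSE_DTable* const dt = LIZFSE_createDTable(tableLog);   /* NULL check omitted */
        size_t result = LIZFSE_buildDTable(dt, normalizedCounter, maxSymbolValue, tableLog);
        if (!LIZFSE_isError(result))
            result = LIZFSE_decompress_usingDTable(dst, dstCapacity,
                             (const char*)rBuffer + hSize, rBuffSize - hSize, dt);
        LIZFSE_freeDTable(dt);
        return result;     /* regenerated size, or an error code */
    }
}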
#ifdef FSE_STATIC_LINKING_ONLY
#ifdef LIZFSE_STATIC_LINKING_ONLY
/* *** Dependency *** */
#include "bitstream.h"
@@ -282,35 +282,35 @@ If there is an error, the function will return an error code, which can be teste
* Static allocation
*******************************************/
/* FSE buffer bounds */
#define FSE_NCOUNTBOUND 512
#define FSE_BLOCKBOUND(size) (size + (size>>7))
#define FSE_COMPRESSBOUND(size) (FSE_NCOUNTBOUND + FSE_BLOCKBOUND(size)) /* Macro version, useful for static allocation */
#define LIZFSE_NCOUNTBOUND 512
#define LIZFSE_BLOCKBOUND(size) (size + (size>>7))
#define LIZFSE_COMPRESSBOUND(size) (LIZFSE_NCOUNTBOUND + LIZFSE_BLOCKBOUND(size)) /* Macro version, useful for static allocation */
/* It is possible to statically allocate FSE CTable/DTable as a table of unsigned using below macros */
#define FSE_CTABLE_SIZE_U32(maxTableLog, maxSymbolValue) (1 + (1<<(maxTableLog-1)) + ((maxSymbolValue+1)*2))
#define FSE_DTABLE_SIZE_U32(maxTableLog) (1 + (1<<maxTableLog))
#define LIZFSE_CTABLE_SIZE_U32(maxTableLog, maxSymbolValue) (1 + (1<<(maxTableLog-1)) + ((maxSymbolValue+1)*2))
#define LIZFSE_DTABLE_SIZE_U32(maxTableLog) (1 + (1<<maxTableLog))
/* *****************************************
* FSE advanced API
*******************************************/
size_t FSE_countFast(unsigned* count, unsigned* maxSymbolValuePtr, const void* src, size_t srcSize);
/**< same as FSE_count(), but blindly trusts that all byte values within src are <= *maxSymbolValuePtr */
size_t LIZFSE_countFast(unsigned* count, unsigned* maxSymbolValuePtr, const void* src, size_t srcSize);
/**< same as LIZFSE_count(), but blindly trusts that all byte values within src are <= *maxSymbolValuePtr */
unsigned FSE_optimalTableLog_internal(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue, unsigned minus);
/**< same as FSE_optimalTableLog(), which used `minus==2` */
unsigned LIZFSE_optimalTableLog_internal(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue, unsigned minus);
/**< same as LIZFSE_optimalTableLog(), which used `minus==2` */
size_t FSE_buildCTable_raw (FSE_CTable* ct, unsigned nbBits);
/**< build a fake FSE_CTable, designed to not compress an input, where each symbol uses nbBits */
size_t LIZFSE_buildCTable_raw (LIZFSE_CTable* ct, unsigned nbBits);
/**< build a fake LIZFSE_CTable, designed to not compress an input, where each symbol uses nbBits */
size_t FSE_buildCTable_rle (FSE_CTable* ct, unsigned char symbolValue);
/**< build a fake FSE_CTable, designed to compress always the same symbolValue */
size_t LIZFSE_buildCTable_rle (LIZFSE_CTable* ct, unsigned char symbolValue);
/**< build a fake LIZFSE_CTable, designed to compress always the same symbolValue */
size_t FSE_buildDTable_raw (FSE_DTable* dt, unsigned nbBits);
/**< build a fake FSE_DTable, designed to read an uncompressed bitstream where each symbol uses nbBits */
size_t LIZFSE_buildDTable_raw (LIZFSE_DTable* dt, unsigned nbBits);
/**< build a fake LIZFSE_DTable, designed to read an uncompressed bitstream where each symbol uses nbBits */
size_t FSE_buildDTable_rle (FSE_DTable* dt, unsigned char symbolValue);
/**< build a fake FSE_DTable, designed to always generate the same symbolValue */
size_t LIZFSE_buildDTable_rle (LIZFSE_DTable* dt, unsigned char symbolValue);
/**< build a fake LIZFSE_DTable, designed to always generate the same symbolValue */
/* *****************************************
@@ -329,16 +329,16 @@ typedef struct
const void* stateTable;
const void* symbolTT;
unsigned stateLog;
} FSE_CState_t;
} LIZFSE_CState_t;
static void FSE_initCState(FSE_CState_t* CStatePtr, const FSE_CTable* ct);
static void LIZFSE_initCState(LIZFSE_CState_t* CStatePtr, const LIZFSE_CTable* ct);
static void FSE_encodeSymbol(BIT_CStream_t* bitC, FSE_CState_t* CStatePtr, unsigned symbol);
static void LIZFSE_encodeSymbol(BIT_CStream_t* bitC, LIZFSE_CState_t* CStatePtr, unsigned symbol);
static void FSE_flushCState(BIT_CStream_t* bitC, const FSE_CState_t* CStatePtr);
static void LIZFSE_flushCState(BIT_CStream_t* bitC, const LIZFSE_CState_t* CStatePtr);
/**<
These functions are inner components of FSE_compress_usingCTable().
These functions are inner components of LIZFSE_compress_usingCTable().
They allow the creation of custom streams, mixing multiple tables and bit sources.
A key property to keep in mind is that encoding and decoding are done **in reverse direction**.
@@ -346,20 +346,20 @@ So the first symbol you will encode is the last you will decode, like a LIFO sta
You will need a few variables to track your CStream. They are :
FSE_CTable ct; // Provided by FSE_buildCTable()
LIZFSE_CTable ct; // Provided by LIZFSE_buildCTable()
BIT_CStream_t bitStream; // bitStream tracking structure
FSE_CState_t state; // State tracking structure (can have several)
LIZFSE_CState_t state; // State tracking structure (can have several)
The first thing to do is to init bitStream and state.
size_t errorCode = BIT_initCStream(&bitStream, dstBuffer, maxDstSize);
FSE_initCState(&state, ct);
LIZFSE_initCState(&state, ct);
Note that BIT_initCStream() can produce an error code, so its result should be tested, using FSE_isError();
Note that BIT_initCStream() can produce an error code, so its result should be tested, using LIZFSE_isError();
You can then encode your input data, byte after byte.
FSE_encodeSymbol() outputs a maximum of 'tableLog' bits at a time.
LIZFSE_encodeSymbol() outputs a maximum of 'tableLog' bits at a time.
Remember decoding will be done in reverse direction.
FSE_encodeByte(&bitStream, &state, symbol);
LIZFSE_encodeByte(&bitStream, &state, symbol);
At any time, you can also add any bit sequence.
Note : maximum allowed nbBits is 25, for compatibility with 32-bits decoders
@@ -371,12 +371,12 @@ Writing data to memory is a manual operation, performed by the flushBits functio
BIT_flushBits(&bitStream);
Your last FSE encoding operation shall be to flush your last state value(s).
FSE_flushState(&bitStream, &state);
LIZFSE_flushState(&bitStream, &state);
Finally, you must close the bitStream.
The function returns the size of CStream in bytes.
If data couldn't fit into dstBuffer, it will return a 0 ( == not compressible)
If there is an error, it returns an errorCode (which can be tested using FSE_isError()).
If there is an error, it returns an errorCode (which can be tested using LIZFSE_isError()).
size_t size = BIT_closeCStream(&bitStream);
*/
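
Putting the pieces above together, a hedged editorial sketch of a single-state encoding loop; it needs the LIZFSE_STATIC_LINKING_ONLY section (bitstream.h) and uses only the calls named in the tutorial. Flushing after every symbol is conservative but safe; the sketch_* name is illustrative.

static size_t sketch_fseEncodeReverse(void* dstBuffer, size_t maxDstSize,
                                      const unsigned char* src, size_t srcSize,
                                      const LIZFSE_CTable* ct)   /* from LIZFSE_buildCTable() */
{
    BIT_CStream_t bitStream;
    LIZFSE_CState_t state;
    size_t const initError = BIT_initCStream(&bitStream, dstBuffer, maxDstSize);
    if (LIZFSE_isError(initError)) return initError;   /* dstBuffer too small */

    LIZFSE_initCState(&state, ct);
    while (srcSize > 0) {
        srcSize--;
        LIZFSE_encodeSymbol(&bitStream, &state, src[srcSize]);   /* last symbol first */
        BIT_flushBits(&bitStream);        /* conservative : flush after every symbol */
    }
    LIZFSE_flushCState(&bitStream, &state);   /* store the final state value */
    return BIT_closeCStream(&bitStream);      /* CStream size, or 0 if it did not fit */
}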
@@ -388,37 +388,37 @@ typedef struct
{
size_t state;
const void* table; /* precise table may vary, depending on U16 */
} FSE_DState_t;
} LIZFSE_DState_t;
static void FSE_initDState(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD, const FSE_DTable* dt);
static void LIZFSE_initDState(LIZFSE_DState_t* DStatePtr, BIT_DStream_t* bitD, const LIZFSE_DTable* dt);
static unsigned char FSE_decodeSymbol(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD);
static unsigned char LIZFSE_decodeSymbol(LIZFSE_DState_t* DStatePtr, BIT_DStream_t* bitD);
static unsigned FSE_endOfDState(const FSE_DState_t* DStatePtr);
static unsigned LIZFSE_endOfDState(const LIZFSE_DState_t* DStatePtr);
/**<
Let's now decompose FSE_decompress_usingDTable() into its unitary components.
Let's now decompose LIZFSE_decompress_usingDTable() into its unitary components.
You will decode FSE-encoded symbols from the bitStream,
and also any other bitFields you put in, **in reverse order**.
You will need a few variables to track your bitStream. They are :
BIT_DStream_t DStream; // Stream context
FSE_DState_t DState; // State context. Multiple ones are possible
FSE_DTable* DTablePtr; // Decoding table, provided by FSE_buildDTable()
LIZFSE_DState_t DState; // State context. Multiple ones are possible
LIZFSE_DTable* DTablePtr; // Decoding table, provided by LIZFSE_buildDTable()
The first thing to do is to init the bitStream.
errorCode = BIT_initDStream(&DStream, srcBuffer, srcSize);
You should then retrieve your initial state(s)
(in reverse flushing order if you have several ones) :
errorCode = FSE_initDState(&DState, &DStream, DTablePtr);
errorCode = LIZFSE_initDState(&DState, &DStream, DTablePtr);
You can then decode your data, symbol after symbol.
For information the maximum number of bits read by FSE_decodeSymbol() is 'tableLog'.
For information the maximum number of bits read by LIZFSE_decodeSymbol() is 'tableLog'.
Keep in mind that symbols are decoded in reverse order, like a LIFO stack (last in, first out).
unsigned char symbol = FSE_decodeSymbol(&DState, &DStream);
unsigned char symbol = LIZFSE_decodeSymbol(&DState, &DStream);
You can retrieve any bitfield you eventually stored into the bitStream (in reverse order)
Note : maximum allowed nbBits is 25, for 32-bits compatibility
@@ -426,7 +426,7 @@ Note : maximum allowed nbBits is 25, for 32-bits compatibility
All above operations only read from local register (which size depends on size_t).
Refueling the register from memory is manually performed by the reload method.
endSignal = FSE_reloadDStream(&DStream);
endSignal = LIZFSE_reloadDStream(&DStream);
BIT_reloadDStream() result tells if there is still some more data to read from DStream.
BIT_DStream_unfinished : there is still some data left into the DStream.
@@ -443,14 +443,14 @@ When it's done, verify decompression is fully completed, by checking both DStrea
Checking if DStream has reached its end is performed by :
BIT_endOfDStream(&DStream);
Check also the states. There might be some symbols left there, if some high probability ones (>50%) are possible.
FSE_endOfDState(&DState);
LIZFSE_endOfDState(&DState);
*/
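
The decoding counterpart, again as an editorial sketch under the same assumptions; the (size_t)-1 failure value is illustrative only, real callers would return a proper error code.

static size_t sketch_fseDecodeForward(unsigned char* dst, size_t dstSize,
                                      const void* srcBuffer, size_t srcSize,
                                      const LIZFSE_DTable* dt)   /* from LIZFSE_buildDTable() */
{
    BIT_DStream_t DStream;
    LIZFSE_DState_t DState;
    size_t i;
    size_t const initError = BIT_initDStream(&DStream, srcBuffer, srcSize);
    if (LIZFSE_isError(initError)) return initError;

    LIZFSE_initDState(&DState, &DStream, dt);
    for (i = 0; i < dstSize; i++) {
        dst[i] = LIZFSE_decodeSymbol(&DState, &DStream);   /* symbols come back in forward order */
        BIT_reloadDStream(&DStream);                       /* refill the local register */
    }
    /* both the bitStream and the state should be fully consumed */
    if (!BIT_endOfDStream(&DStream) || !LIZFSE_endOfDState(&DState))
        return (size_t)-1;    /* illustrative failure value only */
    return dstSize;
}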
/* *****************************************
* FSE unsafe API
*******************************************/
static unsigned char FSE_decodeSymbolFast(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD);
static unsigned char LIZFSE_decodeSymbolFast(LIZFSE_DState_t* DStatePtr, BIT_DStream_t* bitD);
/* faster, but works only if nbBits is always >= 1 (otherwise, result will be corrupted) */
@@ -460,9 +460,9 @@ static unsigned char FSE_decodeSymbolFast(FSE_DState_t* DStatePtr, BIT_DStream_t
typedef struct {
int deltaFindState;
U32 deltaNbBits;
} FSE_symbolCompressionTransform; /* total 8 bytes */
} LIZFSE_symbolCompressionTransform; /* total 8 bytes */
MEM_STATIC void FSE_initCState(FSE_CState_t* statePtr, const FSE_CTable* ct)
MEM_STATIC void LIZFSE_initCState(LIZFSE_CState_t* statePtr, const LIZFSE_CTable* ct)
{
const void* ptr = ct;
const U16* u16ptr = (const U16*) ptr;
@@ -474,13 +474,13 @@ MEM_STATIC void FSE_initCState(FSE_CState_t* statePtr, const FSE_CTable* ct)
}
/*! FSE_initCState2() :
* Same as FSE_initCState(), but the first symbol to include (which will be the last to be read)
/*! LIZFSE_initCState2() :
* Same as LIZFSE_initCState(), but the first symbol to include (which will be the last to be read)
* uses the smallest state value possible, saving the cost of this symbol */
MEM_STATIC void FSE_initCState2(FSE_CState_t* statePtr, const FSE_CTable* ct, U32 symbol)
MEM_STATIC void LIZFSE_initCState2(LIZFSE_CState_t* statePtr, const LIZFSE_CTable* ct, U32 symbol)
{
FSE_initCState(statePtr, ct);
{ const FSE_symbolCompressionTransform symbolTT = ((const FSE_symbolCompressionTransform*)(statePtr->symbolTT))[symbol];
LIZFSE_initCState(statePtr, ct);
{ const LIZFSE_symbolCompressionTransform symbolTT = ((const LIZFSE_symbolCompressionTransform*)(statePtr->symbolTT))[symbol];
const U16* stateTable = (const U16*)(statePtr->stateTable);
U32 nbBitsOut = (U32)((symbolTT.deltaNbBits + (1<<15)) >> 16);
statePtr->value = (nbBitsOut << 16) - symbolTT.deltaNbBits;
@@ -488,16 +488,16 @@ MEM_STATIC void FSE_initCState2(FSE_CState_t* statePtr, const FSE_CTable* ct, U3
}
}
MEM_STATIC void FSE_encodeSymbol(BIT_CStream_t* bitC, FSE_CState_t* statePtr, U32 symbol)
MEM_STATIC void LIZFSE_encodeSymbol(BIT_CStream_t* bitC, LIZFSE_CState_t* statePtr, U32 symbol)
{
const FSE_symbolCompressionTransform symbolTT = ((const FSE_symbolCompressionTransform*)(statePtr->symbolTT))[symbol];
const LIZFSE_symbolCompressionTransform symbolTT = ((const LIZFSE_symbolCompressionTransform*)(statePtr->symbolTT))[symbol];
const U16* const stateTable = (const U16*)(statePtr->stateTable);
U32 nbBitsOut = (U32)((statePtr->value + symbolTT.deltaNbBits) >> 16);
BIT_addBits(bitC, statePtr->value, nbBitsOut);
statePtr->value = stateTable[ (statePtr->value >> nbBitsOut) + symbolTT.deltaFindState];
}
MEM_STATIC void FSE_flushCState(BIT_CStream_t* bitC, const FSE_CState_t* statePtr)
MEM_STATIC void LIZFSE_flushCState(BIT_CStream_t* bitC, const LIZFSE_CState_t* statePtr)
{
BIT_addBits(bitC, statePtr->value, statePtr->stateLog);
BIT_flushBits(bitC);
@@ -508,41 +508,41 @@ MEM_STATIC void FSE_flushCState(BIT_CStream_t* bitC, const FSE_CState_t* statePt
typedef struct {
U16 tableLog;
U16 fastMode;
} FSE_DTableHeader; /* sizeof U32 */
} LIZFSE_DTableHeader; /* sizeof U32 */
typedef struct
{
unsigned short newState;
unsigned char symbol;
unsigned char nbBits;
} FSE_decode_t; /* size == U32 */
} LIZFSE_decode_t; /* size == U32 */
MEM_STATIC void FSE_initDState(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD, const FSE_DTable* dt)
MEM_STATIC void LIZFSE_initDState(LIZFSE_DState_t* DStatePtr, BIT_DStream_t* bitD, const LIZFSE_DTable* dt)
{
const void* ptr = dt;
const FSE_DTableHeader* const DTableH = (const FSE_DTableHeader*)ptr;
const LIZFSE_DTableHeader* const DTableH = (const LIZFSE_DTableHeader*)ptr;
DStatePtr->state = BIT_readBits(bitD, DTableH->tableLog);
BIT_reloadDStream(bitD);
DStatePtr->table = dt + 1;
}
MEM_STATIC BYTE FSE_peekSymbol(const FSE_DState_t* DStatePtr)
MEM_STATIC BYTE LIZFSE_peekSymbol(const LIZFSE_DState_t* DStatePtr)
{
FSE_decode_t const DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
LIZFSE_decode_t const DInfo = ((const LIZFSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
return DInfo.symbol;
}
MEM_STATIC void FSE_updateState(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD)
MEM_STATIC void LIZFSE_updateState(LIZFSE_DState_t* DStatePtr, BIT_DStream_t* bitD)
{
FSE_decode_t const DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
LIZFSE_decode_t const DInfo = ((const LIZFSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
U32 const nbBits = DInfo.nbBits;
size_t const lowBits = BIT_readBits(bitD, nbBits);
DStatePtr->state = DInfo.newState + lowBits;
}
MEM_STATIC BYTE FSE_decodeSymbol(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD)
MEM_STATIC BYTE LIZFSE_decodeSymbol(LIZFSE_DState_t* DStatePtr, BIT_DStream_t* bitD)
{
FSE_decode_t const DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
LIZFSE_decode_t const DInfo = ((const LIZFSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
U32 const nbBits = DInfo.nbBits;
BYTE const symbol = DInfo.symbol;
size_t const lowBits = BIT_readBits(bitD, nbBits);
@@ -551,11 +551,11 @@ MEM_STATIC BYTE FSE_decodeSymbol(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD)
return symbol;
}
/*! FSE_decodeSymbolFast() :
/*! LIZFSE_decodeSymbolFast() :
unsafe, only works if no symbol has a probability > 50% */
MEM_STATIC BYTE FSE_decodeSymbolFast(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD)
MEM_STATIC BYTE LIZFSE_decodeSymbolFast(LIZFSE_DState_t* DStatePtr, BIT_DStream_t* bitD)
{
FSE_decode_t const DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
LIZFSE_decode_t const DInfo = ((const LIZFSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
U32 const nbBits = DInfo.nbBits;
BYTE const symbol = DInfo.symbol;
size_t const lowBits = BIT_readBitsFast(bitD, nbBits);
@@ -564,14 +564,14 @@ MEM_STATIC BYTE FSE_decodeSymbolFast(FSE_DState_t* DStatePtr, BIT_DStream_t* bit
return symbol;
}
MEM_STATIC unsigned FSE_endOfDState(const FSE_DState_t* DStatePtr)
MEM_STATIC unsigned LIZFSE_endOfDState(const LIZFSE_DState_t* DStatePtr)
{
return DStatePtr->state == 0;
}
#ifndef FSE_COMMONDEFS_ONLY
#ifndef LIZFSE_COMMONDEFS_ONLY
/* **************************************************************
* Tuning parameters
@@ -581,48 +581,48 @@ MEM_STATIC unsigned FSE_endOfDState(const FSE_DState_t* DStatePtr)
* Increasing memory usage improves compression ratio
* Reduced memory usage can improve speed, due to cache effect
* Recommended max value is 14, for 16KB, which nicely fits into Intel x86 L1 cache */
#define FSE_MAX_MEMORY_USAGE 14
#define FSE_DEFAULT_MEMORY_USAGE 13
#define LIZFSE_MAX_MEMORY_USAGE 14
#define LIZFSE_DEFAULT_MEMORY_USAGE 13
/*!FSE_MAX_SYMBOL_VALUE :
/*!LIZFSE_MAX_SYMBOL_VALUE :
* Maximum symbol value authorized.
* Required for proper stack allocation */
#define FSE_MAX_SYMBOL_VALUE 255
#define LIZFSE_MAX_SYMBOL_VALUE 255
/* **************************************************************
* template functions type & suffix
****************************************************************/
#define FSE_FUNCTION_TYPE BYTE
#define FSE_FUNCTION_EXTENSION
#define FSE_DECODE_TYPE FSE_decode_t
#define LIZFSE_FUNCTION_TYPE BYTE
#define LIZFSE_FUNCTION_EXTENSION
#define LIZFSE_DECODE_TYPE LIZFSE_decode_t
#endif /* !FSE_COMMONDEFS_ONLY */
#endif /* !LIZFSE_COMMONDEFS_ONLY */
/* ***************************************************************
* Constants
*****************************************************************/
#define FSE_MAX_TABLELOG (FSE_MAX_MEMORY_USAGE-2)
#define FSE_MAX_TABLESIZE (1U<<FSE_MAX_TABLELOG)
#define FSE_MAXTABLESIZE_MASK (FSE_MAX_TABLESIZE-1)
#define FSE_DEFAULT_TABLELOG (FSE_DEFAULT_MEMORY_USAGE-2)
#define FSE_MIN_TABLELOG 5
#define LIZFSE_MAX_TABLELOG (LIZFSE_MAX_MEMORY_USAGE-2)
#define LIZFSE_MAX_TABLESIZE (1U<<LIZFSE_MAX_TABLELOG)
#define LIZFSE_MAXTABLESIZE_MASK (LIZFSE_MAX_TABLESIZE-1)
#define LIZFSE_DEFAULT_TABLELOG (LIZFSE_DEFAULT_MEMORY_USAGE-2)
#define LIZFSE_MIN_TABLELOG 5
#define FSE_TABLELOG_ABSOLUTE_MAX 15
#if FSE_MAX_TABLELOG > FSE_TABLELOG_ABSOLUTE_MAX
# error "FSE_MAX_TABLELOG > FSE_TABLELOG_ABSOLUTE_MAX is not supported"
#define LIZFSE_TABLELOG_ABSOLUTE_MAX 15
#if LIZFSE_MAX_TABLELOG > LIZFSE_TABLELOG_ABSOLUTE_MAX
# error "LIZFSE_MAX_TABLELOG > LIZFSE_TABLELOG_ABSOLUTE_MAX is not supported"
#endif
#define FSE_TABLESTEP(tableSize) ((tableSize>>1) + (tableSize>>3) + 3)
#define LIZFSE_TABLESTEP(tableSize) ((tableSize>>1) + (tableSize>>3) + 3)
#endif /* FSE_STATIC_LINKING_ONLY */
#endif /* LIZFSE_STATIC_LINKING_ONLY */
#if defined (__cplusplus)
}
#endif
#endif /* FSE_H */
#endif /* LIZFSE_H */
@@ -31,8 +31,8 @@
You can contact the author at :
- Source repository : https://github.com/Cyan4973/FiniteStateEntropy
****************************************************************** */
#ifndef HUF_H_298734234
#define HUF_H_298734234
#ifndef LIZHUF_H_298734234
#define LIZHUF_H_298734234
#if defined (__cplusplus)
extern "C" {
@@ -45,65 +45,65 @@ extern "C" {
/* *** simple functions *** */
/**
HUF_compress() :
LIZHUF_compress() :
Compress content from buffer 'src', of size 'srcSize', into buffer 'dst'.
'dst' buffer must be already allocated.
Compression runs faster if `dstCapacity` >= HUF_compressBound(srcSize).
`srcSize` must be <= `HUF_BLOCKSIZE_MAX` == 128 KB.
Compression runs faster if `dstCapacity` >= LIZHUF_compressBound(srcSize).
`srcSize` must be <= `LIZHUF_BLOCKSIZE_MAX` == 128 KB.
@return : size of compressed data (<= `dstCapacity`).
Special values : if return == 0, srcData is not compressible => Nothing is stored within dst !!!
if return == 1, srcData is a single repeated byte symbol (RLE compression).
if HUF_isError(return), compression failed (more details using HUF_getErrorName())
if LIZHUF_isError(return), compression failed (more details using LIZHUF_getErrorName())
*/
size_t HUF_compress(void* dst, size_t dstCapacity,
size_t LIZHUF_compress(void* dst, size_t dstCapacity,
const void* src, size_t srcSize);
/**
HUF_decompress() :
LIZHUF_decompress() :
Decompress HUF data from buffer 'cSrc', of size 'cSrcSize',
into already allocated buffer 'dst', of minimum size 'dstSize'.
`dstSize` : **must** be the ***exact*** size of original (uncompressed) data.
Note : in contrast with FSE, HUF_decompress can regenerate
Note : in contrast with FSE, LIZHUF_decompress can regenerate
RLE (cSrcSize==1) and uncompressed (cSrcSize==dstSize) data,
because it knows size to regenerate.
@return : size of regenerated data (== dstSize),
or an error code, which can be tested using HUF_isError()
or an error code, which can be tested using LIZHUF_isError()
*/
size_t HUF_decompress(void* dst, size_t dstSize,
size_t LIZHUF_decompress(void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize);
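
A small editorial sketch (not part of the commit) of a round trip through the two one-shot entry points above; 'originalSize' must stay within LIZHUF_BLOCKSIZE_MAX (128 KB), and the sketch_* name is illustrative.

#include <stdlib.h>   /* malloc, free - for this sketch only */

static int sketch_hufRoundTrip(const void* original, size_t originalSize)
{
    size_t const cBound = LIZHUF_compressBound(originalSize);
    void* const compressed  = malloc(cBound);
    void* const regenerated = malloc(originalSize);
    int ok = 0;

    if (compressed && regenerated) {
        size_t const cSize = LIZHUF_compress(compressed, cBound, original, originalSize);
        /* cSize == 0 : not compressible (nothing stored) ; cSize == 1 : RLE */
        if (!LIZHUF_isError(cSize) && cSize > 1) {
            size_t const dSize = LIZHUF_decompress(regenerated, originalSize, compressed, cSize);
            ok = !LIZHUF_isError(dSize) && (dSize == originalSize);
        }
    }
    free(compressed);
    free(regenerated);
    return ok;
}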
/* ****************************************
* Tool functions
******************************************/
#define HUF_BLOCKSIZE_MAX (128 * 1024)
size_t HUF_compressBound(size_t size); /**< maximum compressed size (worst case) */
#define LIZHUF_BLOCKSIZE_MAX (128 * 1024)
size_t LIZHUF_compressBound(size_t size); /**< maximum compressed size (worst case) */
/* Error Management */
unsigned HUF_isError(size_t code); /**< tells if a return value is an error code */
const char* HUF_getErrorName(size_t code); /**< provides error code string (useful for debugging) */
unsigned LIZHUF_isError(size_t code); /**< tells if a return value is an error code */
const char* LIZHUF_getErrorName(size_t code); /**< provides error code string (useful for debugging) */
/* *** Advanced function *** */
/** HUF_compress2() :
* Same as HUF_compress(), but offers direct control over `maxSymbolValue` and `tableLog` */
size_t HUF_compress2 (void* dst, size_t dstSize, const void* src, size_t srcSize, unsigned maxSymbolValue, unsigned tableLog);
/** LIZHUF_compress2() :
* Same as LIZHUF_compress(), but offers direct control over `maxSymbolValue` and `tableLog` */
size_t LIZHUF_compress2 (void* dst, size_t dstSize, const void* src, size_t srcSize, unsigned maxSymbolValue, unsigned tableLog);
#ifdef HUF_STATIC_LINKING_ONLY
#ifdef LIZHUF_STATIC_LINKING_ONLY
/* *** Dependencies *** */
#include "mem.h" /* U32 */
/* *** Constants *** */
#define HUF_TABLELOG_ABSOLUTEMAX 16 /* absolute limit of HUF_MAX_TABLELOG. Beyond that value, code does not work */
#define HUF_TABLELOG_MAX 12 /* max configured tableLog (for static allocation); can be modified up to HUF_ABSOLUTEMAX_TABLELOG */
#define HUF_TABLELOG_DEFAULT 11 /* tableLog by default, when not specified */
#define HUF_SYMBOLVALUE_MAX 255
#if (HUF_TABLELOG_MAX > HUF_TABLELOG_ABSOLUTEMAX)
# error "HUF_TABLELOG_MAX is too large !"
#define LIZHUF_TABLELOG_ABSOLUTEMAX 16 /* absolute limit of LIZHUF_MAX_TABLELOG. Beyond that value, code does not work */
#define LIZHUF_TABLELOG_MAX 12 /* max configured tableLog (for static allocation); can be modified up to LIZHUF_ABSOLUTEMAX_TABLELOG */
#define LIZHUF_TABLELOG_DEFAULT 11 /* tableLog by default, when not specified */
#define LIZHUF_SYMBOLVALUE_MAX 255
#if (LIZHUF_TABLELOG_MAX > LIZHUF_TABLELOG_ABSOLUTEMAX)
# error "LIZHUF_TABLELOG_MAX is too large !"
#endif
@@ -111,118 +111,118 @@ size_t HUF_compress2 (void* dst, size_t dstSize, const void* src, size_t srcSize
* Static allocation
******************************************/
/* HUF buffer bounds */
#define HUF_CTABLEBOUND 129
#define HUF_BLOCKBOUND(size) (size + (size>>8) + 8) /* only true if incompressible pre-filtered with fast heuristic */
#define HUF_COMPRESSBOUND(size) (HUF_CTABLEBOUND + HUF_BLOCKBOUND(size)) /* Macro version, useful for static allocation */
#define LIZHUF_CTABLEBOUND 129
#define LIZHUF_BLOCKBOUND(size) (size + (size>>8) + 8) /* only true if incompressible pre-filtered with fast heuristic */
#define LIZHUF_COMPRESSBOUND(size) (LIZHUF_CTABLEBOUND + LIZHUF_BLOCKBOUND(size)) /* Macro version, useful for static allocation */
/* static allocation of HUF's Compression Table */
#define HUF_CREATE_STATIC_CTABLE(name, maxSymbolValue) \
#define LIZHUF_CREATE_STATIC_CTABLE(name, maxSymbolValue) \
U32 name##hb[maxSymbolValue+1]; \
void* name##hv = &(name##hb); \
HUF_CElt* name = (HUF_CElt*)(name##hv) /* no final ; */
LIZHUF_CElt* name = (LIZHUF_CElt*)(name##hv) /* no final ; */
/* static allocation of HUF's DTable */
typedef U32 HUF_DTable;
#define HUF_DTABLE_SIZE(maxTableLog) (1 + (1<<(maxTableLog)))
#define HUF_CREATE_STATIC_DTABLEX2(DTable, maxTableLog) \
HUF_DTable DTable[HUF_DTABLE_SIZE((maxTableLog)-1)] = { ((U32)((maxTableLog)-1)*0x1000001) }
#define HUF_CREATE_STATIC_DTABLEX4(DTable, maxTableLog) \
HUF_DTable DTable[HUF_DTABLE_SIZE(maxTableLog)] = { ((U32)(maxTableLog)*0x1000001) }
typedef U32 LIZHUF_DTable;
#define LIZHUF_DTABLE_SIZE(maxTableLog) (1 + (1<<(maxTableLog)))
#define LIZHUF_CREATE_STATIC_DTABLEX2(DTable, maxTableLog) \
LIZHUF_DTable DTable[LIZHUF_DTABLE_SIZE((maxTableLog)-1)] = { ((U32)((maxTableLog)-1)*0x1000001) }
#define LIZHUF_CREATE_STATIC_DTABLEX4(DTable, maxTableLog) \
LIZHUF_DTable DTable[LIZHUF_DTABLE_SIZE(maxTableLog)] = { ((U32)(maxTableLog)*0x1000001) }
/* ****************************************
* Advanced decompression functions
******************************************/
size_t HUF_decompress4X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< single-symbol decoder */
size_t HUF_decompress4X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< double-symbols decoder */
size_t LIZHUF_decompress4X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< single-symbol decoder */
size_t LIZHUF_decompress4X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< double-symbols decoder */
size_t HUF_decompress4X_DCtx (HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< decodes RLE and uncompressed */
size_t HUF_decompress4X_hufOnly(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< considers RLE and uncompressed as errors */
size_t HUF_decompress4X2_DCtx(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< single-symbol decoder */
size_t HUF_decompress4X4_DCtx(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< double-symbols decoder */
size_t LIZHUF_decompress4X_DCtx (LIZHUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< decodes RLE and uncompressed */
size_t LIZHUF_decompress4X_hufOnly(LIZHUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< considers RLE and uncompressed as errors */
size_t LIZHUF_decompress4X2_DCtx(LIZHUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< single-symbol decoder */
size_t LIZHUF_decompress4X4_DCtx(LIZHUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< double-symbols decoder */
size_t HUF_decompress1X_DCtx (HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);
size_t HUF_decompress1X2_DCtx(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< single-symbol decoder */
size_t HUF_decompress1X4_DCtx(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< double-symbols decoder */
size_t LIZHUF_decompress1X_DCtx (LIZHUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);
size_t LIZHUF_decompress1X2_DCtx(LIZHUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< single-symbol decoder */
size_t LIZHUF_decompress1X4_DCtx(LIZHUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< double-symbols decoder */
/* ****************************************
* HUF detailed API
******************************************/
/*!
HUF_compress() does the following:
1. count symbol occurrence from source[] into table count[] using FSE_count()
2. (optional) refine tableLog using HUF_optimalTableLog()
3. build Huffman table from count using HUF_buildCTable()
4. save Huffman table to memory buffer using HUF_writeCTable()
5. encode the data stream using HUF_compress4X_usingCTable()
LIZHUF_compress() does the following:
1. count symbol occurrence from source[] into table count[] using LIZFSE_count()
2. (optional) refine tableLog using LIZHUF_optimalTableLog()
3. build Huffman table from count using LIZHUF_buildCTable()
4. save Huffman table to memory buffer using LIZHUF_writeCTable()
5. encode the data stream using LIZHUF_compress4X_usingCTable()
The following API allows targeting specific sub-functions for advanced tasks.
For example, it's possible to compress several blocks using the same 'CTable',
or to save and regenerate 'CTable' using external methods.
*/
/* FSE_count() : find it within "fse.h" */
unsigned HUF_optimalTableLog(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue);
typedef struct HUF_CElt_s HUF_CElt; /* incomplete type */
size_t HUF_buildCTable (HUF_CElt* CTable, const unsigned* count, unsigned maxSymbolValue, unsigned maxNbBits);
size_t HUF_writeCTable (void* dst, size_t maxDstSize, const HUF_CElt* CTable, unsigned maxSymbolValue, unsigned huffLog);
size_t HUF_compress4X_usingCTable(void* dst, size_t dstSize, const void* src, size_t srcSize, const HUF_CElt* CTable);
/* LIZFSE_count() : find it within "fse.h" */
unsigned LIZHUF_optimalTableLog(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue);
typedef struct LIZHUF_CElt_s LIZHUF_CElt; /* incomplete type */
size_t LIZHUF_buildCTable (LIZHUF_CElt* CTable, const unsigned* count, unsigned maxSymbolValue, unsigned maxNbBits);
size_t LIZHUF_writeCTable (void* dst, size_t maxDstSize, const LIZHUF_CElt* CTable, unsigned maxSymbolValue, unsigned huffLog);
size_t LIZHUF_compress4X_usingCTable(void* dst, size_t dstSize, const void* src, size_t srcSize, const LIZHUF_CElt* CTable);
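
Tying the five steps together, an editorial sketch (not part of the commit) built from the declarations just listed; it requires LIZHUF_STATIC_LINKING_ONLY, assumes LIZFSE_count() has the same prototype as LIZFSE_countFast(), and assumes LIZHUF_buildCTable() returns the largest code length actually used, as in upstream HUF.

static size_t sketch_hufAdvancedCompress(void* dst, size_t dstCapacity,
                                         const void* src, size_t srcSize)
{
    unsigned count[LIZHUF_SYMBOLVALUE_MAX + 1];
    unsigned maxSymbolValue = LIZHUF_SYMBOLVALUE_MAX;
    unsigned tableLog = LIZHUF_TABLELOG_DEFAULT;
    LIZHUF_CREATE_STATIC_CTABLE(CTable, LIZHUF_SYMBOLVALUE_MAX);
    BYTE* const ostart = (BYTE*)dst;
    size_t hSize, cSize;

    /* 1. histogram */
    {   size_t const largest = LIZFSE_count(count, &maxSymbolValue, src, srcSize);
        if (LIZFSE_isError(largest)) return largest;
    }
    /* 2. refine tableLog */
    tableLog = LIZHUF_optimalTableLog(tableLog, srcSize, maxSymbolValue);
    /* 3. build the Huffman tree */
    {   size_t const maxBits = LIZHUF_buildCTable(CTable, count, maxSymbolValue, tableLog);
        if (LIZHUF_isError(maxBits)) return maxBits;
        tableLog = (unsigned)maxBits;
    }
    /* 4. save the table, then 5. encode the stream right behind it */
    hSize = LIZHUF_writeCTable(ostart, dstCapacity, CTable, maxSymbolValue, tableLog);
    if (LIZHUF_isError(hSize)) return hSize;
    cSize = LIZHUF_compress4X_usingCTable(ostart + hSize, dstCapacity - hSize, src, srcSize, CTable);
    if (LIZHUF_isError(cSize) || (cSize == 0)) return cSize;   /* 0 : did not fit */
    return hSize + cSize;
}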
/*! HUF_readStats() :
Read compact Huffman tree, saved by HUF_writeCTable().
/*! LIZHUF_readStats() :
Read compact Huffman tree, saved by LIZHUF_writeCTable().
`huffWeight` is destination buffer.
@return : size read from `src` , or an error Code .
Note : Needed by HUF_readCTable() and HUF_readDTableXn() . */
size_t HUF_readStats(BYTE* huffWeight, size_t hwSize, U32* rankStats,
Note : Needed by LIZHUF_readCTable() and LIZHUF_readDTableXn() . */
size_t LIZHUF_readStats(BYTE* huffWeight, size_t hwSize, U32* rankStats,
U32* nbSymbolsPtr, U32* tableLogPtr,
const void* src, size_t srcSize);
/** HUF_readCTable() :
* Loading a CTable saved with HUF_writeCTable() */
size_t HUF_readCTable (HUF_CElt* CTable, unsigned maxSymbolValue, const void* src, size_t srcSize);
/** LIZHUF_readCTable() :
* Loading a CTable saved with LIZHUF_writeCTable() */
size_t LIZHUF_readCTable (LIZHUF_CElt* CTable, unsigned maxSymbolValue, const void* src, size_t srcSize);
/*
HUF_decompress() does the following:
LIZHUF_decompress() does the following:
1. select the decompression algorithm (X2, X4) based on pre-computed heuristics
2. build Huffman table from save, using HUF_readDTableXn()
3. decode 1 or 4 segments in parallel using HUF_decompressSXn_usingDTable
2. build Huffman table from save, using LIZHUF_readDTableXn()
3. decode 1 or 4 segments in parallel using LIZHUF_decompressSXn_usingDTable
*/
/** HUF_selectDecoder() :
/** LIZHUF_selectDecoder() :
* Tells which decoder is likely to decode faster,
* based on a set of pre-determined metrics.
* @return : 0==HUF_decompress4X2, 1==HUF_decompress4X4 .
* @return : 0==LIZHUF_decompress4X2, 1==LIZHUF_decompress4X4 .
* Assumption : 0 < cSrcSize < dstSize <= 128 KB */
U32 HUF_selectDecoder (size_t dstSize, size_t cSrcSize);
U32 LIZHUF_selectDecoder (size_t dstSize, size_t cSrcSize);
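
An editorial sketch of that selection step (not part of the commit): pick the decoder with LIZHUF_selectDecoder() and let the _DCtx variants read the table themselves. Sharing one X4-sized static DTable between both decoders mirrors how upstream HUF drives them, but is an assumption here; the same preconditions as above apply (0 < cSrcSize < dstSize <= 128 KB).

static size_t sketch_hufDecompressAuto(void* dst, size_t dstSize,
                                       const void* cSrc, size_t cSrcSize)
{
    LIZHUF_CREATE_STATIC_DTABLEX4(DTable, LIZHUF_TABLELOG_MAX);  /* ~16 KB on stack */
    U32 const algoNb = LIZHUF_selectDecoder(dstSize, cSrcSize);  /* 0 == 4X2, 1 == 4X4 */
    return algoNb ? LIZHUF_decompress4X4_DCtx(DTable, dst, dstSize, cSrc, cSrcSize)
                  : LIZHUF_decompress4X2_DCtx(DTable, dst, dstSize, cSrc, cSrcSize);
}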
size_t HUF_readDTableX2 (HUF_DTable* DTable, const void* src, size_t srcSize);
size_t HUF_readDTableX4 (HUF_DTable* DTable, const void* src, size_t srcSize);
size_t LIZHUF_readDTableX2 (LIZHUF_DTable* DTable, const void* src, size_t srcSize);
size_t LIZHUF_readDTableX4 (LIZHUF_DTable* DTable, const void* src, size_t srcSize);
size_t HUF_decompress4X_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable);
size_t HUF_decompress4X2_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable);
size_t HUF_decompress4X4_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable);
size_t LIZHUF_decompress4X_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const LIZHUF_DTable* DTable);
size_t LIZHUF_decompress4X2_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const LIZHUF_DTable* DTable);
size_t LIZHUF_decompress4X4_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const LIZHUF_DTable* DTable);
/* single stream variants */
size_t HUF_compress1X (void* dst, size_t dstSize, const void* src, size_t srcSize, unsigned maxSymbolValue, unsigned tableLog);
size_t HUF_compress1X_usingCTable(void* dst, size_t dstSize, const void* src, size_t srcSize, const HUF_CElt* CTable);
size_t LIZHUF_compress1X (void* dst, size_t dstSize, const void* src, size_t srcSize, unsigned maxSymbolValue, unsigned tableLog);
size_t LIZHUF_compress1X_usingCTable(void* dst, size_t dstSize, const void* src, size_t srcSize, const LIZHUF_CElt* CTable);
size_t HUF_decompress1X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /* single-symbol decoder */
size_t HUF_decompress1X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /* double-symbol decoder */
size_t LIZHUF_decompress1X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /* single-symbol decoder */
size_t LIZHUF_decompress1X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /* double-symbol decoder */
size_t HUF_decompress1X_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable);
size_t HUF_decompress1X2_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable);
size_t HUF_decompress1X4_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable);
size_t LIZHUF_decompress1X_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const LIZHUF_DTable* DTable);
size_t LIZHUF_decompress1X2_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const LIZHUF_DTable* DTable);
size_t LIZHUF_decompress1X4_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const LIZHUF_DTable* DTable);
#endif /* HUF_STATIC_LINKING_ONLY */
#endif /* LIZHUF_STATIC_LINKING_ONLY */
#if defined (__cplusplus)
}
#endif
#endif /* HUF_H_298734234 */
#endif /* LIZHUF_H_298734234 */
@@ -32,8 +32,8 @@
- LZ5 source repository : https://github.com/inikep/lz5
*/
#ifndef LZ5_COMMON_H_2983
#define LZ5_COMMON_H_2983
#ifndef LIZ_COMMON_H_2983
#define LIZ_COMMON_H_2983
#if defined (__cplusplus)
extern "C" {
@@ -46,21 +46,21 @@ extern "C" {
#include <stdlib.h> /* malloc, calloc, free */
#include <string.h> /* memset, memcpy */
#include <stdint.h> /* intptr_t */
#include "entropy/mem.h"
#include "lz5_compress.h" /* LZ5_GCC_VERSION */
#include "mem.h"
#include "liz_compress.h" /* LIZ_GCC_VERSION */
//#define LZ5_USE_LOGS
#define LZ5_LOG_COMPRESS(...) //printf(__VA_ARGS__)
#define LZ5_LOG_DECOMPRESS(...) //printf(__VA_ARGS__)
//#define LIZ_USE_LOGS
#define LIZ_LOG_COMPRESS(...) //printf(__VA_ARGS__)
#define LIZ_LOG_DECOMPRESS(...) //printf(__VA_ARGS__)
#define LZ5_LOG_COMPRESS_LZ4(...) //printf(__VA_ARGS__)
#define LIZ_LOG_COMPRESS_LZ4(...) //printf(__VA_ARGS__)
#define COMPLOG_CODEWORDS_LZ4(...) //printf(__VA_ARGS__)
#define LZ5_LOG_DECOMPRESS_LZ4(...) //printf(__VA_ARGS__)
#define LIZ_LOG_DECOMPRESS_LZ4(...) //printf(__VA_ARGS__)
#define DECOMPLOG_CODEWORDS_LZ4(...) //printf(__VA_ARGS__)
#define LZ5_LOG_COMPRESS_LZ5v2(...) //printf(__VA_ARGS__)
#define LIZ_LOG_COMPRESS_LZ5v2(...) //printf(__VA_ARGS__)
#define COMPLOG_CODEWORDS_LZ5v2(...) //printf(__VA_ARGS__)
#define LZ5_LOG_DECOMPRESS_LZ5v2(...) //printf(__VA_ARGS__)
#define LIZ_LOG_DECOMPRESS_LZ5v2(...) //printf(__VA_ARGS__)
#define DECOMPLOG_CODEWORDS_LZ5v2(...) //printf(__VA_ARGS__)
@@ -71,25 +71,25 @@ extern "C" {
**************************************/
#define MINMATCH 4
//#define USE_LZ4_ONLY
//#define LZ5_USE_TEST
//#define LIZ_USE_TEST
#define LZ5_DICT_SIZE (1<<24)
#define LIZ_DICT_SIZE (1<<24)
#define WILDCOPYLENGTH 16
#define LASTLITERALS WILDCOPYLENGTH
#define MFLIMIT (WILDCOPYLENGTH+MINMATCH)
#define LZ5_MAX_PRICE (1<<28)
#define LZ5_INIT_LAST_OFFSET 0
#define LZ5_MAX_16BIT_OFFSET (1<<16)
#define LIZ_MAX_PRICE (1<<28)
#define LIZ_INIT_LAST_OFFSET 0
#define LIZ_MAX_16BIT_OFFSET (1<<16)
#define MM_LONGOFF 16
#define LZ5_BLOCK_SIZE_PAD (LZ5_BLOCK_SIZE+32)
#define LZ5_COMPRESS_ADD_BUF (5*LZ5_BLOCK_SIZE_PAD)
#ifndef LZ5_NO_HUFFMAN
#define LZ5_COMPRESS_ADD_HUF HUF_compressBound(LZ5_BLOCK_SIZE_PAD)
#define LZ5_HUF_BLOCK_SIZE LZ5_BLOCK_SIZE
#define LIZ_BLOCK_SIZE_PAD (LIZ_BLOCK_SIZE+32)
#define LIZ_COMPRESS_ADD_BUF (5*LIZ_BLOCK_SIZE_PAD)
#ifndef LIZ_NO_HUFFMAN
#define LIZ_COMPRESS_ADD_HUF LIZHUF_compressBound(LIZ_BLOCK_SIZE_PAD)
#define LIZ_LIZHUF_BLOCK_SIZE LIZ_BLOCK_SIZE
#else
#define LZ5_COMPRESS_ADD_HUF 0
#define LZ5_HUF_BLOCK_SIZE 1
#define LIZ_COMPRESS_ADD_HUF 0
#define LIZ_LIZHUF_BLOCK_SIZE 1
#endif
/* LZ4 codewords */
@@ -104,29 +104,29 @@ extern "C" {
#define ML_RUN_BITS (ML_BITS_LZ5v2 + RUN_BITS_LZ5v2)
#define MAX_SHORT_LITLEN 7
#define MAX_SHORT_MATCHLEN 15
#define LZ5_LAST_LONG_OFF 31
#define LIZ_LAST_LONG_OFF 31
/* header byte */
#define LZ5_FLAG_LITERALS 1
#define LZ5_FLAG_FLAGS 2
#define LZ5_FLAG_OFFSET16 4
#define LZ5_FLAG_OFFSET24 8
#define LZ5_FLAG_LEN 16
#define LZ5_FLAG_UNCOMPRESSED 128
#define LIZ_FLAG_LITERALS 1
#define LIZ_FLAG_FLAGS 2
#define LIZ_FLAG_OFFSET16 4
#define LIZ_FLAG_OFFSET24 8
#define LIZ_FLAG_LEN 16
#define LIZ_FLAG_UNCOMPRESSED 128
/* stream numbers */
#define LZ5_STREAM_LITERALS 0
#define LZ5_STREAM_FLAGS 1
#define LZ5_STREAM_OFFSET16 2
#define LZ5_STREAM_OFFSET24 3
#define LZ5_STREAM_LEN 4
#define LZ5_STREAM_UNCOMPRESSED 5
#define LIZ_STREAM_LITERALS 0
#define LIZ_STREAM_FLAGS 1
#define LIZ_STREAM_OFFSET16 2
#define LIZ_STREAM_OFFSET24 3
#define LIZ_STREAM_LEN 4
#define LIZ_STREAM_UNCOMPRESSED 5
typedef enum { LZ5_parser_fastSmall, LZ5_parser_fast, LZ5_parser_fastBig, LZ5_parser_noChain, LZ5_parser_hashChain, LZ5_parser_priceFast, LZ5_parser_lowestPrice, LZ5_parser_optimalPrice, LZ5_parser_optimalPriceBT } LZ5_parser_type; /* from faster to stronger */
typedef enum { LZ5_coderwords_LZ4, LZ5_coderwords_LZ5v2 } LZ5_decompress_type;
typedef enum { LIZ_parser_fastSmall, LIZ_parser_fast, LIZ_parser_fastBig, LIZ_parser_noChain, LIZ_parser_hashChain, LIZ_parser_priceFast, LIZ_parser_lowestPrice, LIZ_parser_optimalPrice, LIZ_parser_optimalPriceBT } LIZ_parser_type; /* from faster to stronger */
typedef enum { LIZ_coderwords_LZ4, LIZ_coderwords_LZ5v2 } LIZ_decompress_type;
typedef struct
{
U32 windowLog; /* largest match distance : impact decompression buffer size */
@@ -138,12 +138,12 @@ typedef struct
U32 minMatchLongOff; /* min match size with offsets >= 1<<16 */
U32 sufficientLength; /* used only by optimal parser: size of matches which is acceptable: larger == more compression, slower */
U32 fullSearch; /* used only by optimal parser: perform full search of matches: 1 == more compression, slower */
LZ5_parser_type parserType;
LZ5_decompress_type decompressType;
} LZ5_parameters;
LIZ_parser_type parserType;
LIZ_decompress_type decompressType;
} LIZ_parameters;
struct LZ5_stream_s
struct LIZ_stream_s
{
const BYTE* end; /* next block here to continue on current prefix */
const BYTE* base; /* All index relative to this position */
@@ -153,7 +153,7 @@ struct LZ5_stream_s
U32 nextToUpdate; /* index from which to continue dictionary update */
U32 allocatedMemory;
int compressionLevel;
LZ5_parameters params;
LIZ_parameters params;
U32 hashTableSize;
U32 chainTableSize;
U32* chainTable;
@@ -192,7 +192,7 @@ struct LZ5_stream_s
const BYTE* destBase;
};
struct LZ5_dstream_s
struct LIZ_dstream_s
{
const BYTE* offset16Ptr;
const BYTE* offset24Ptr;
@@ -208,72 +208,72 @@ struct LZ5_dstream_s
intptr_t last_off;
};
typedef struct LZ5_dstream_s LZ5_dstream_t;
typedef struct LIZ_dstream_s LIZ_dstream_t;
/* *************************************
* HC Pre-defined compression levels
***************************************/
#define LZ5_WINDOWLOG_LZ4 16
#define LZ5_CHAINLOG_LZ4 LZ5_WINDOWLOG_LZ4
#define LZ5_HASHLOG_LZ4 18
#define LZ5_HASHLOG_LZ4SM 12
#define LIZ_WINDOWLOG_LZ4 16
#define LIZ_CHAINLOG_LZ4 LIZ_WINDOWLOG_LZ4
#define LIZ_HASHLOG_LZ4 18
#define LIZ_HASHLOG_LZ4SM 12
#define LZ5_WINDOWLOG_LZ5v2 22
#define LZ5_CHAINLOG_LZ5v2 LZ5_WINDOWLOG_LZ5v2
#define LZ5_HASHLOG_LZ5v2 18
#define LIZ_WINDOWLOG_LZ5v2 22
#define LIZ_CHAINLOG_LZ5v2 LIZ_WINDOWLOG_LZ5v2
#define LIZ_HASHLOG_LZ5v2 18
static const LZ5_parameters LZ5_defaultParameters[LZ5_MAX_CLEVEL+1-LZ5_MIN_CLEVEL] =
static const LIZ_parameters LIZ_defaultParameters[LIZ_MAX_CLEVEL+1-LIZ_MIN_CLEVEL] =
{
/* windLog, contentLog, HashLog, H3, Snum, SL, MMLongOff, SuffL, FS, Parser function, Decompressor type */
{ LZ5_WINDOWLOG_LZ4, 0, LZ5_HASHLOG_LZ4SM, 0, 0, 0, 0, 0, 0, LZ5_parser_fastSmall, LZ5_coderwords_LZ4 }, // level 10
{ LZ5_WINDOWLOG_LZ4, 0, LZ5_HASHLOG_LZ4, 0, 0, 0, 0, 0, 0, LZ5_parser_fast, LZ5_coderwords_LZ4 }, // level 11
{ LZ5_WINDOWLOG_LZ4, 0, LZ5_HASHLOG_LZ4, 0, 0, 0, 0, 0, 0, LZ5_parser_noChain, LZ5_coderwords_LZ4 }, // level 12
{ LZ5_WINDOWLOG_LZ4, LZ5_CHAINLOG_LZ4, LZ5_HASHLOG_LZ4, 0, 2, 5, 0, 0, 0, LZ5_parser_hashChain, LZ5_coderwords_LZ4 }, // level 13
{ LZ5_WINDOWLOG_LZ4, LZ5_CHAINLOG_LZ4, LZ5_HASHLOG_LZ4, 0, 4, 5, 0, 0, 0, LZ5_parser_hashChain, LZ5_coderwords_LZ4 }, // level 14
{ LZ5_WINDOWLOG_LZ4, LZ5_CHAINLOG_LZ4, LZ5_HASHLOG_LZ4, 0, 8, 5, 0, 0, 0, LZ5_parser_hashChain, LZ5_coderwords_LZ4 }, // level 15
{ LZ5_WINDOWLOG_LZ4, LZ5_CHAINLOG_LZ4, LZ5_HASHLOG_LZ4, 0, 16, 4, 0, 0, 0, LZ5_parser_hashChain, LZ5_coderwords_LZ4 }, // level 16
{ LZ5_WINDOWLOG_LZ4, LZ5_CHAINLOG_LZ4, LZ5_HASHLOG_LZ4, 0, 256, 4, 0, 0, 0, LZ5_parser_hashChain, LZ5_coderwords_LZ4 }, // level 17
{ LZ5_WINDOWLOG_LZ4, LZ5_WINDOWLOG_LZ4+1, LZ5_HASHLOG_LZ4, 16, 16, 4, 0, 1<<10, 1, LZ5_parser_optimalPriceBT, LZ5_coderwords_LZ4 }, // level 18
{ LZ5_WINDOWLOG_LZ4, LZ5_WINDOWLOG_LZ4+1, 23, 16, 256, 4, 0, 1<<10, 1, LZ5_parser_optimalPriceBT, LZ5_coderwords_LZ4 }, // level 19
{ LIZ_WINDOWLOG_LZ4, 0, LIZ_HASHLOG_LZ4SM, 0, 0, 0, 0, 0, 0, LIZ_parser_fastSmall, LIZ_coderwords_LZ4 }, // level 10
{ LIZ_WINDOWLOG_LZ4, 0, LIZ_HASHLOG_LZ4, 0, 0, 0, 0, 0, 0, LIZ_parser_fast, LIZ_coderwords_LZ4 }, // level 11
{ LIZ_WINDOWLOG_LZ4, 0, LIZ_HASHLOG_LZ4, 0, 0, 0, 0, 0, 0, LIZ_parser_noChain, LIZ_coderwords_LZ4 }, // level 12
{ LIZ_WINDOWLOG_LZ4, LIZ_CHAINLOG_LZ4, LIZ_HASHLOG_LZ4, 0, 2, 5, 0, 0, 0, LIZ_parser_hashChain, LIZ_coderwords_LZ4 }, // level 13
{ LIZ_WINDOWLOG_LZ4, LIZ_CHAINLOG_LZ4, LIZ_HASHLOG_LZ4, 0, 4, 5, 0, 0, 0, LIZ_parser_hashChain, LIZ_coderwords_LZ4 }, // level 14
{ LIZ_WINDOWLOG_LZ4, LIZ_CHAINLOG_LZ4, LIZ_HASHLOG_LZ4, 0, 8, 5, 0, 0, 0, LIZ_parser_hashChain, LIZ_coderwords_LZ4 }, // level 15
{ LIZ_WINDOWLOG_LZ4, LIZ_CHAINLOG_LZ4, LIZ_HASHLOG_LZ4, 0, 16, 4, 0, 0, 0, LIZ_parser_hashChain, LIZ_coderwords_LZ4 }, // level 16
{ LIZ_WINDOWLOG_LZ4, LIZ_CHAINLOG_LZ4, LIZ_HASHLOG_LZ4, 0, 256, 4, 0, 0, 0, LIZ_parser_hashChain, LIZ_coderwords_LZ4 }, // level 17
{ LIZ_WINDOWLOG_LZ4, LIZ_WINDOWLOG_LZ4+1, LIZ_HASHLOG_LZ4, 16, 16, 4, 0, 1<<10, 1, LIZ_parser_optimalPriceBT, LIZ_coderwords_LZ4 }, // level 18
{ LIZ_WINDOWLOG_LZ4, LIZ_WINDOWLOG_LZ4+1, 23, 16, 256, 4, 0, 1<<10, 1, LIZ_parser_optimalPriceBT, LIZ_coderwords_LZ4 }, // level 19
/* windLog, contentLog, HashLog, H3, Snum, SL, MMLongOff, SuffL, FS, Parser function, Decompressor type */
{ LZ5_WINDOWLOG_LZ5v2, 0, 14, 0, 1, 5, MM_LONGOFF, 0, 0, LZ5_parser_fastBig, LZ5_coderwords_LZ5v2 }, // level 20
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2, 14, 13, 1, 5, MM_LONGOFF, 0, 0, LZ5_parser_priceFast, LZ5_coderwords_LZ5v2 }, // level 21
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2, LZ5_HASHLOG_LZ5v2, 13, 1, 5, MM_LONGOFF, 0, 0, LZ5_parser_priceFast, LZ5_coderwords_LZ5v2 }, // level 22
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2, LZ5_HASHLOG_LZ5v2, 13, 1, 5, MM_LONGOFF, 64, 0, LZ5_parser_lowestPrice, LZ5_coderwords_LZ5v2 }, // level 23
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2, 23, 16, 2, 5, MM_LONGOFF, 64, 0, LZ5_parser_lowestPrice, LZ5_coderwords_LZ5v2 }, // level 24
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2, 23, 16, 8, 4, MM_LONGOFF, 64, 0, LZ5_parser_lowestPrice, LZ5_coderwords_LZ5v2 }, // level 25
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2+1, 23, 16, 8, 4, MM_LONGOFF, 64, 1, LZ5_parser_optimalPriceBT, LZ5_coderwords_LZ5v2 }, // level 26
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2+1, 23, 16, 128, 4, MM_LONGOFF, 64, 1, LZ5_parser_optimalPriceBT, LZ5_coderwords_LZ5v2 }, // level 27
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2+1, 23, 24, 1<<10, 4, MM_LONGOFF, 1<<10, 1, LZ5_parser_optimalPriceBT, LZ5_coderwords_LZ5v2 }, // level 28
{ 24, 25, 23, 24, 1<<10, 4, MM_LONGOFF, 1<<10, 1, LZ5_parser_optimalPriceBT, LZ5_coderwords_LZ5v2 }, // level 29
#ifndef LZ5_NO_HUFFMAN
{ LIZ_WINDOWLOG_LZ5v2, 0, 14, 0, 1, 5, MM_LONGOFF, 0, 0, LIZ_parser_fastBig, LIZ_coderwords_LZ5v2 }, // level 20
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2, 14, 13, 1, 5, MM_LONGOFF, 0, 0, LIZ_parser_priceFast, LIZ_coderwords_LZ5v2 }, // level 21
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2, LIZ_HASHLOG_LZ5v2, 13, 1, 5, MM_LONGOFF, 0, 0, LIZ_parser_priceFast, LIZ_coderwords_LZ5v2 }, // level 22
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2, LIZ_HASHLOG_LZ5v2, 13, 1, 5, MM_LONGOFF, 64, 0, LIZ_parser_lowestPrice, LIZ_coderwords_LZ5v2 }, // level 23
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2, 23, 16, 2, 5, MM_LONGOFF, 64, 0, LIZ_parser_lowestPrice, LIZ_coderwords_LZ5v2 }, // level 24
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2, 23, 16, 8, 4, MM_LONGOFF, 64, 0, LIZ_parser_lowestPrice, LIZ_coderwords_LZ5v2 }, // level 25
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2+1, 23, 16, 8, 4, MM_LONGOFF, 64, 1, LIZ_parser_optimalPriceBT, LIZ_coderwords_LZ5v2 }, // level 26
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2+1, 23, 16, 128, 4, MM_LONGOFF, 64, 1, LIZ_parser_optimalPriceBT, LIZ_coderwords_LZ5v2 }, // level 27
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2+1, 23, 24, 1<<10, 4, MM_LONGOFF, 1<<10, 1, LIZ_parser_optimalPriceBT, LIZ_coderwords_LZ5v2 }, // level 28
{ 24, 25, 23, 24, 1<<10, 4, MM_LONGOFF, 1<<10, 1, LIZ_parser_optimalPriceBT, LIZ_coderwords_LZ5v2 }, // level 29
#ifndef LIZ_NO_HUFFMAN
/* windLog, contentLog, HashLog, H3, Snum, SL, MMLongOff, SuffL, FS, Parser function, Decompressor type */
{ LZ5_WINDOWLOG_LZ4, 0, LZ5_HASHLOG_LZ4SM, 0, 0, 0, 0, 0, 0, LZ5_parser_fastSmall, LZ5_coderwords_LZ4 }, // level 30
{ LZ5_WINDOWLOG_LZ4, 0, LZ5_HASHLOG_LZ4, 0, 0, 0, 0, 0, 0, LZ5_parser_fast, LZ5_coderwords_LZ4 }, // level 31
{ LZ5_WINDOWLOG_LZ4, 0, 14, 0, 0, 0, 0, 0, 0, LZ5_parser_noChain, LZ5_coderwords_LZ4 }, // level 32
{ LZ5_WINDOWLOG_LZ4, 0, LZ5_HASHLOG_LZ4, 0, 0, 0, 0, 0, 0, LZ5_parser_noChain, LZ5_coderwords_LZ4 }, // level 33
{ LZ5_WINDOWLOG_LZ4, LZ5_CHAINLOG_LZ4, LZ5_HASHLOG_LZ4, 0, 2, 5, 0, 0, 0, LZ5_parser_hashChain, LZ5_coderwords_LZ4 }, // level 34
{ LZ5_WINDOWLOG_LZ4, LZ5_CHAINLOG_LZ4, LZ5_HASHLOG_LZ4, 0, 4, 5, 0, 0, 0, LZ5_parser_hashChain, LZ5_coderwords_LZ4 }, // level 35
{ LZ5_WINDOWLOG_LZ4, LZ5_CHAINLOG_LZ4, LZ5_HASHLOG_LZ4, 0, 8, 5, 0, 0, 0, LZ5_parser_hashChain, LZ5_coderwords_LZ4 }, // level 36
{ LZ5_WINDOWLOG_LZ4, LZ5_CHAINLOG_LZ4, LZ5_HASHLOG_LZ4, 0, 16, 4, 0, 0, 0, LZ5_parser_hashChain, LZ5_coderwords_LZ4 }, // level 37
{ LZ5_WINDOWLOG_LZ4, LZ5_CHAINLOG_LZ4, LZ5_HASHLOG_LZ4, 0, 256, 4, 0, 0, 0, LZ5_parser_hashChain, LZ5_coderwords_LZ4 }, // level 38
{ LZ5_WINDOWLOG_LZ4, LZ5_WINDOWLOG_LZ4+1, 23, 16, 256, 4, 0, 1<<10, 1, LZ5_parser_optimalPriceBT, LZ5_coderwords_LZ4 }, // level 39
{ LIZ_WINDOWLOG_LZ4, 0, LIZ_HASHLOG_LZ4SM, 0, 0, 0, 0, 0, 0, LIZ_parser_fastSmall, LIZ_coderwords_LZ4 }, // level 30
{ LIZ_WINDOWLOG_LZ4, 0, LIZ_HASHLOG_LZ4, 0, 0, 0, 0, 0, 0, LIZ_parser_fast, LIZ_coderwords_LZ4 }, // level 31
{ LIZ_WINDOWLOG_LZ4, 0, 14, 0, 0, 0, 0, 0, 0, LIZ_parser_noChain, LIZ_coderwords_LZ4 }, // level 32
{ LIZ_WINDOWLOG_LZ4, 0, LIZ_HASHLOG_LZ4, 0, 0, 0, 0, 0, 0, LIZ_parser_noChain, LIZ_coderwords_LZ4 }, // level 33
{ LIZ_WINDOWLOG_LZ4, LIZ_CHAINLOG_LZ4, LIZ_HASHLOG_LZ4, 0, 2, 5, 0, 0, 0, LIZ_parser_hashChain, LIZ_coderwords_LZ4 }, // level 34
{ LIZ_WINDOWLOG_LZ4, LIZ_CHAINLOG_LZ4, LIZ_HASHLOG_LZ4, 0, 4, 5, 0, 0, 0, LIZ_parser_hashChain, LIZ_coderwords_LZ4 }, // level 35
{ LIZ_WINDOWLOG_LZ4, LIZ_CHAINLOG_LZ4, LIZ_HASHLOG_LZ4, 0, 8, 5, 0, 0, 0, LIZ_parser_hashChain, LIZ_coderwords_LZ4 }, // level 36
{ LIZ_WINDOWLOG_LZ4, LIZ_CHAINLOG_LZ4, LIZ_HASHLOG_LZ4, 0, 16, 4, 0, 0, 0, LIZ_parser_hashChain, LIZ_coderwords_LZ4 }, // level 37
{ LIZ_WINDOWLOG_LZ4, LIZ_CHAINLOG_LZ4, LIZ_HASHLOG_LZ4, 0, 256, 4, 0, 0, 0, LIZ_parser_hashChain, LIZ_coderwords_LZ4 }, // level 38
{ LIZ_WINDOWLOG_LZ4, LIZ_WINDOWLOG_LZ4+1, 23, 16, 256, 4, 0, 1<<10, 1, LIZ_parser_optimalPriceBT, LIZ_coderwords_LZ4 }, // level 39
/* windLog, contentLog, HashLog, H3, Snum, SL, MMLongOff, SuffL, FS, Parser function, Decompressor type */
{ LZ5_WINDOWLOG_LZ5v2, 0, 14, 0, 1, 5, MM_LONGOFF, 0, 0, LZ5_parser_fastBig, LZ5_coderwords_LZ5v2 }, // level 40
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2, 14, 13, 1, 5, MM_LONGOFF, 0, 0, LZ5_parser_priceFast, LZ5_coderwords_LZ5v2 }, // level 41
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2, LZ5_HASHLOG_LZ5v2, 13, 1, 5, MM_LONGOFF, 0, 0, LZ5_parser_priceFast, LZ5_coderwords_LZ5v2 }, // level 42
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2, LZ5_HASHLOG_LZ5v2, 13, 1, 5, MM_LONGOFF, 64, 0, LZ5_parser_lowestPrice, LZ5_coderwords_LZ5v2 }, // level 43
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2, 23, 16, 2, 5, MM_LONGOFF, 64, 0, LZ5_parser_lowestPrice, LZ5_coderwords_LZ5v2 }, // level 44
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2, 23, 16, 8, 4, MM_LONGOFF, 64, 0, LZ5_parser_lowestPrice, LZ5_coderwords_LZ5v2 }, // level 45
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2, 23, 16, 8, 4, MM_LONGOFF, 64, 0, LZ5_parser_optimalPrice, LZ5_coderwords_LZ5v2 }, // level 46
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2+1, 23, 16, 8, 4, MM_LONGOFF, 64, 1, LZ5_parser_optimalPriceBT, LZ5_coderwords_LZ5v2 }, // level 47
{ LZ5_WINDOWLOG_LZ5v2, LZ5_CHAINLOG_LZ5v2+1, 23, 16, 128, 4, MM_LONGOFF, 64, 1, LZ5_parser_optimalPriceBT, LZ5_coderwords_LZ5v2 }, // level 48
{ 24, 25, 23, 24, 1<<10, 4, MM_LONGOFF, 1<<10, 1, LZ5_parser_optimalPriceBT, LZ5_coderwords_LZ5v2 }, // level 49
{ LIZ_WINDOWLOG_LZ5v2, 0, 14, 0, 1, 5, MM_LONGOFF, 0, 0, LIZ_parser_fastBig, LIZ_coderwords_LZ5v2 }, // level 40
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2, 14, 13, 1, 5, MM_LONGOFF, 0, 0, LIZ_parser_priceFast, LIZ_coderwords_LZ5v2 }, // level 41
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2, LIZ_HASHLOG_LZ5v2, 13, 1, 5, MM_LONGOFF, 0, 0, LIZ_parser_priceFast, LIZ_coderwords_LZ5v2 }, // level 42
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2, LIZ_HASHLOG_LZ5v2, 13, 1, 5, MM_LONGOFF, 64, 0, LIZ_parser_lowestPrice, LIZ_coderwords_LZ5v2 }, // level 43
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2, 23, 16, 2, 5, MM_LONGOFF, 64, 0, LIZ_parser_lowestPrice, LIZ_coderwords_LZ5v2 }, // level 44
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2, 23, 16, 8, 4, MM_LONGOFF, 64, 0, LIZ_parser_lowestPrice, LIZ_coderwords_LZ5v2 }, // level 45
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2, 23, 16, 8, 4, MM_LONGOFF, 64, 0, LIZ_parser_optimalPrice, LIZ_coderwords_LZ5v2 }, // level 46
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2+1, 23, 16, 8, 4, MM_LONGOFF, 64, 1, LIZ_parser_optimalPriceBT, LIZ_coderwords_LZ5v2 }, // level 47
{ LIZ_WINDOWLOG_LZ5v2, LIZ_CHAINLOG_LZ5v2+1, 23, 16, 128, 4, MM_LONGOFF, 64, 1, LIZ_parser_optimalPriceBT, LIZ_coderwords_LZ5v2 }, // level 48
{ 24, 25, 23, 24, 1<<10, 4, MM_LONGOFF, 1<<10, 1, LIZ_parser_optimalPriceBT, LIZ_coderwords_LZ5v2 }, // level 49
#endif
// { 10, 10, 10, 0, 0, 4, 0, 0, 0, LZ5_fast }, // min values
// { 24, 24, 28, 24, 1<<24, 7, 0, 1<<24, 2, LZ5_optimal_price }, // max values
// { 10, 10, 10, 0, 0, 4, 0, 0, 0, LIZ_fast }, // min values
// { 24, 24, 28, 24, 1<<24, 7, 0, 1<<24, 2, LIZ_optimal_price }, // max values
};
@@ -298,8 +298,8 @@ static const LZ5_parameters LZ5_defaultParameters[LZ5_MAX_CLEVEL+1-LZ5_MIN_CLEVE
# endif /* __STDC_VERSION__ */
#endif /* _MSC_VER */
#define LZ5_GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
#if (LZ5_GCC_VERSION >= 302) || (__INTEL_COMPILER >= 800) || defined(__clang__)
#define LIZ_GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
#if (LIZ_GCC_VERSION >= 302) || (__INTEL_COMPILER >= 800) || defined(__clang__)
# define expect(expr,value) (__builtin_expect ((expr),(value)) )
#else
# define expect(expr,value) (expr)
@@ -337,13 +337,13 @@ static const LZ5_parameters LZ5_defaultParameters[LZ5_MAX_CLEVEL+1-LZ5_MIN_CLEVE
#define STEPSIZE sizeof(size_t)
MEM_STATIC void LZ5_copy8(void* dst, const void* src)
MEM_STATIC void LIZ_copy8(void* dst, const void* src)
{
memcpy(dst,src,8);
}
/* customized variant of memcpy, which can overwrite up to 7 bytes beyond dstEnd */
MEM_STATIC void LZ5_wildCopy(void* dstPtr, const void* srcPtr, void* dstEnd)
MEM_STATIC void LIZ_wildCopy(void* dstPtr, const void* srcPtr, void* dstEnd)
{
BYTE* d = (BYTE*)dstPtr;
const BYTE* s = (const BYTE*)srcPtr;
@@ -351,18 +351,18 @@ MEM_STATIC void LZ5_wildCopy(void* dstPtr, const void* srcPtr, void* dstEnd)
#if 0
const size_t l2 = 8 - (((size_t)d) & (sizeof(void*)-1));
LZ5_copy8(d,s); if (d>e-9) return;
LIZ_copy8(d,s); if (d>e-9) return;
d+=l2; s+=l2;
#endif /* join to align */
do { LZ5_copy8(d,s); d+=8; s+=8; } while (d<e);
do { LIZ_copy8(d,s); d+=8; s+=8; } while (d<e);
}
MEM_STATIC void LZ5_wildCopy16(BYTE* dstPtr, const BYTE* srcPtr, BYTE* dstEnd)
MEM_STATIC void LIZ_wildCopy16(BYTE* dstPtr, const BYTE* srcPtr, BYTE* dstEnd)
{
do {
LZ5_copy8(dstPtr, srcPtr);
LZ5_copy8(dstPtr+8, srcPtr+8);
LIZ_copy8(dstPtr, srcPtr);
LIZ_copy8(dstPtr+8, srcPtr+8);
dstPtr += 16;
srcPtr += 16;
}
@@ -370,18 +370,18 @@ MEM_STATIC void LZ5_wildCopy16(BYTE* dstPtr, const BYTE* srcPtr, BYTE* dstEnd)
}
/*
* LZ5_FORCE_SW_BITCOUNT
* LIZ_FORCE_SW_BITCOUNT
* Define this parameter if your target system or compiler does not support hardware bit count
*/
#if defined(_MSC_VER) && defined(_WIN32_WCE) /* Visual Studio for Windows CE does not support Hardware bit count */
# define LZ5_FORCE_SW_BITCOUNT
# define LIZ_FORCE_SW_BITCOUNT
#endif
/* **************************************
* Function body to include for inlining
****************************************/
MEM_STATIC U32 LZ5_highbit32(U32 val)
MEM_STATIC U32 LIZ_highbit32(U32 val)
{
# if defined(_MSC_VER) /* Visual */
unsigned long r=0;
@@ -407,26 +407,26 @@ MEM_STATIC U32 LZ5_highbit32(U32 val)
/*-************************************
* Common functions
**************************************/
MEM_STATIC unsigned LZ5_NbCommonBytes (register size_t val)
MEM_STATIC unsigned LIZ_NbCommonBytes (register size_t val)
{
if (MEM_isLittleEndian()) {
if (MEM_64bits()) {
# if defined(_MSC_VER) && defined(_WIN64) && !defined(LZ5_FORCE_SW_BITCOUNT)
# if defined(_MSC_VER) && defined(_WIN64) && !defined(LIZ_FORCE_SW_BITCOUNT)
unsigned long r = 0;
_BitScanForward64( &r, (U64)val );
return (int)(r>>3);
# elif (defined(__clang__) || (LZ5_GCC_VERSION >= 304)) && !defined(LZ5_FORCE_SW_BITCOUNT)
# elif (defined(__clang__) || (LIZ_GCC_VERSION >= 304)) && !defined(LIZ_FORCE_SW_BITCOUNT)
return (__builtin_ctzll((U64)val) >> 3);
# else
static const int DeBruijnBytePos[64] = { 0, 0, 0, 0, 0, 1, 1, 2, 0, 3, 1, 3, 1, 4, 2, 7, 0, 2, 3, 6, 1, 5, 3, 5, 1, 3, 4, 4, 2, 5, 6, 7, 7, 0, 1, 2, 3, 3, 4, 6, 2, 6, 5, 5, 3, 4, 5, 6, 7, 1, 2, 4, 6, 4, 4, 5, 7, 2, 6, 5, 7, 6, 7, 7 };
return DeBruijnBytePos[((U64)((val & -(long long)val) * 0x0218A392CDABBD3FULL)) >> 58];
# endif
} else /* 32 bits */ {
# if defined(_MSC_VER) && !defined(LZ5_FORCE_SW_BITCOUNT)
# if defined(_MSC_VER) && !defined(LIZ_FORCE_SW_BITCOUNT)
unsigned long r;
_BitScanForward( &r, (U32)val );
return (int)(r>>3);
# elif (defined(__clang__) || (LZ5_GCC_VERSION >= 304)) && !defined(LZ5_FORCE_SW_BITCOUNT)
# elif (defined(__clang__) || (LIZ_GCC_VERSION >= 304)) && !defined(LIZ_FORCE_SW_BITCOUNT)
return (__builtin_ctz((U32)val) >> 3);
# else
static const int DeBruijnBytePos[32] = { 0, 0, 3, 0, 3, 1, 3, 0, 3, 2, 2, 1, 3, 2, 0, 1, 3, 3, 1, 2, 2, 2, 2, 0, 3, 1, 2, 0, 1, 0, 1, 1 };
@@ -435,11 +435,11 @@ MEM_STATIC unsigned LZ5_NbCommonBytes (register size_t val)
}
} else /* Big Endian CPU */ {
if (MEM_64bits()) {
# if defined(_MSC_VER) && defined(_WIN64) && !defined(LZ5_FORCE_SW_BITCOUNT)
# if defined(_MSC_VER) && defined(_WIN64) && !defined(LIZ_FORCE_SW_BITCOUNT)
unsigned long r = 0;
_BitScanReverse64( &r, val );
return (unsigned)(r>>3);
# elif (defined(__clang__) || (LZ5_GCC_VERSION >= 304)) && !defined(LZ5_FORCE_SW_BITCOUNT)
# elif (defined(__clang__) || (LIZ_GCC_VERSION >= 304)) && !defined(LIZ_FORCE_SW_BITCOUNT)
return (__builtin_clzll((U64)val) >> 3);
# else
unsigned r;
@@ -449,11 +449,11 @@ MEM_STATIC unsigned LZ5_NbCommonBytes (register size_t val)
return r;
# endif
} else /* 32 bits */ {
# if defined(_MSC_VER) && !defined(LZ5_FORCE_SW_BITCOUNT)
# if defined(_MSC_VER) && !defined(LIZ_FORCE_SW_BITCOUNT)
unsigned long r = 0;
_BitScanReverse( &r, (unsigned long)val );
return (unsigned)(r>>3);
# elif (defined(__clang__) || (LZ5_GCC_VERSION >= 304)) && !defined(LZ5_FORCE_SW_BITCOUNT)
# elif (defined(__clang__) || (LIZ_GCC_VERSION >= 304)) && !defined(LIZ_FORCE_SW_BITCOUNT)
return (__builtin_clz((U32)val) >> 3);
# else
unsigned r;
@@ -465,14 +465,14 @@ MEM_STATIC unsigned LZ5_NbCommonBytes (register size_t val)
}
}
MEM_STATIC unsigned LZ5_count(const BYTE* pIn, const BYTE* pMatch, const BYTE* pInLimit)
MEM_STATIC unsigned LIZ_count(const BYTE* pIn, const BYTE* pMatch, const BYTE* pInLimit)
{
const BYTE* const pStart = pIn;
while (likely(pIn<pInLimit-(STEPSIZE-1))) {
size_t diff = MEM_readST(pMatch) ^ MEM_readST(pIn);
if (!diff) { pIn+=STEPSIZE; pMatch+=STEPSIZE; continue; }
pIn += LZ5_NbCommonBytes(diff);
pIn += LIZ_NbCommonBytes(diff);
return (unsigned)(pIn - pStart);
}
@@ -483,15 +483,15 @@ MEM_STATIC unsigned LZ5_count(const BYTE* pIn, const BYTE* pMatch, const BYTE* p
}
/* alias to functions with compressionLevel=1 */
int LZ5_sizeofState_MinLevel(void);
int LZ5_compress_MinLevel(const char* source, char* dest, int sourceSize, int maxDestSize);
int LZ5_compress_extState_MinLevel (void* state, const char* source, char* dest, int inputSize, int maxDestSize);
LZ5_stream_t* LZ5_resetStream_MinLevel (LZ5_stream_t* streamPtr);
LZ5_stream_t* LZ5_createStream_MinLevel(void);
int LIZ_sizeofState_MinLevel(void);
int LIZ_compress_MinLevel(const char* source, char* dest, int sourceSize, int maxDestSize);
int LIZ_compress_extState_MinLevel (void* state, const char* source, char* dest, int inputSize, int maxDestSize);
LIZ_stream_t* LIZ_resetStream_MinLevel (LIZ_stream_t* streamPtr);
LIZ_stream_t* LIZ_createStream_MinLevel(void);
#if defined (__cplusplus)
}
#endif
#endif /* LZ5_COMMON_H_2983827168210 */
#endif /* LIZ_COMMON_H_2983827168210 */


@@ -32,8 +32,8 @@
You can contact the author at :
- LZ5 source repository : https://github.com/inikep/lz5
*/
#ifndef LZ5_H_2983
#define LZ5_H_2983
#ifndef LIZ_H_2983
#define LIZ_H_2983
#if defined (__cplusplus)
extern "C" {
@@ -53,12 +53,12 @@ extern "C" {
* Export parameters
*****************************************************************/
/*
* LZ5_DLL_EXPORT :
* LIZ_DLL_EXPORT :
* Enable exporting of functions when building a Windows DLL
*/
#if defined(LZ5_DLL_EXPORT) && (LZ5_DLL_EXPORT==1)
#if defined(LIZ_DLL_EXPORT) && (LIZ_DLL_EXPORT==1)
# define LZ5LIB_API __declspec(dllexport)
#elif defined(LZ5_DLL_IMPORT) && (LZ5_DLL_IMPORT==1)
#elif defined(LIZ_DLL_IMPORT) && (LIZ_DLL_IMPORT==1)
# define LZ5LIB_API __declspec(dllimport) /* It isn't required but allows generating better code, saving a function pointer load from the IAT and an indirect jump. */
#else
# define LZ5LIB_API
@@ -68,47 +68,47 @@ extern "C" {
/*-************************************
* Version
**************************************/
#define LZ5_VERSION_MAJOR 2 /* for breaking interface changes */
#define LZ5_VERSION_MINOR 0 /* for new (non-breaking) interface capabilities */
#define LZ5_VERSION_RELEASE 0 /* for tweaks, bug-fixes, or development */
#define LIZ_VERSION_MAJOR 2 /* for breaking interface changes */
#define LIZ_VERSION_MINOR 0 /* for new (non-breaking) interface capabilities */
#define LIZ_VERSION_RELEASE 0 /* for tweaks, bug-fixes, or development */
#define LZ5_VERSION_NUMBER (LZ5_VERSION_MAJOR *100*100 + LZ5_VERSION_MINOR *100 + LZ5_VERSION_RELEASE)
int LZ5_versionNumber (void);
#define LIZ_VERSION_NUMBER (LIZ_VERSION_MAJOR *100*100 + LIZ_VERSION_MINOR *100 + LIZ_VERSION_RELEASE)
int LIZ_versionNumber (void);
#define LZ5_LIB_VERSION LZ5_VERSION_MAJOR.LZ5_VERSION_MINOR.LZ5_VERSION_RELEASE
#define LZ5_QUOTE(str) #str
#define LZ5_EXPAND_AND_QUOTE(str) LZ5_QUOTE(str)
#define LZ5_VERSION_STRING LZ5_EXPAND_AND_QUOTE(LZ5_LIB_VERSION)
const char* LZ5_versionString (void);
#define LIZ_LIB_VERSION LIZ_VERSION_MAJOR.LIZ_VERSION_MINOR.LIZ_VERSION_RELEASE
#define LIZ_QUOTE(str) #str
#define LIZ_EXPAND_AND_QUOTE(str) LIZ_QUOTE(str)
#define LIZ_VERSION_STRING LIZ_EXPAND_AND_QUOTE(LIZ_LIB_VERSION)
const char* LIZ_versionString (void);
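/* Worked expansion of the version macros above (illustration only, not part of the header):
 *   LIZ_VERSION_NUMBER = 2*100*100 + 0*100 + 0 = 20000
 *   LIZ_VERSION_STRING = LIZ_EXPAND_AND_QUOTE(2.0.0)  ->  "2.0.0"
 */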
typedef struct LZ5_stream_s LZ5_stream_t;
typedef struct LIZ_stream_s LIZ_stream_t;
#define LZ5_MIN_CLEVEL 10 /* minimum compression level */
#ifndef LZ5_NO_HUFFMAN
#define LZ5_MAX_CLEVEL 49 /* maximum compression level */
#define LIZ_MIN_CLEVEL 10 /* minimum compression level */
#ifndef LIZ_NO_HUFFMAN
#define LIZ_MAX_CLEVEL 49 /* maximum compression level */
#else
#define LZ5_MAX_CLEVEL 29 /* maximum compression level */
#define LIZ_MAX_CLEVEL 29 /* maximum compression level */
#endif
#define LZ5_DEFAULT_CLEVEL 17
#define LIZ_DEFAULT_CLEVEL 17
/*-************************************
* Simple Functions
**************************************/
LZ5LIB_API int LZ5_compress (const char* src, char* dst, int srcSize, int maxDstSize, int compressionLevel);
LZ5LIB_API int LIZ_compress (const char* src, char* dst, int srcSize, int maxDstSize, int compressionLevel);
/*
LZ5_compress() :
LIZ_compress() :
Compresses 'sourceSize' bytes from buffer 'source'
into already allocated 'dest' buffer of size 'maxDestSize'.
Compression is guaranteed to succeed if 'maxDestSize' >= LZ5_compressBound(sourceSize).
Compression is guaranteed to succeed if 'maxDestSize' >= LIZ_compressBound(sourceSize).
It also runs faster, so it's a recommended setting.
If the function cannot compress 'source' into a more limited 'dest' budget,
compression stops *immediately*, and the function result is zero.
As a consequence, 'dest' content is not valid.
This function never writes outside the 'dest' buffer, nor reads outside the 'source' buffer.
sourceSize : Max supported value is LZ5_MAX_INPUT_VALUE
sourceSize : Max supported value is LIZ_MAX_INPUT_VALUE
maxDestSize : full or partial size of buffer 'dest' (which must be already allocated)
return : the number of bytes written into buffer 'dest' (necessarily <= maxOutputSize)
or 0 if compression fails
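/* Minimal usage sketch for LIZ_compress() as documented above (illustration only;
 * the include name is an assumption and error handling is kept to the bare minimum). */
#include "lz5.h"   /* assumed header name */

static int compress_into(const char* src, int srcSize,
                         char* dst, int dstCapacity, int level)
{
    int const cSize = LIZ_compress(src, dst, srcSize, dstCapacity, level);
    /* cSize == 0 means 'dst' was too small: the content of 'dst' is not valid. */
    return cSize;
}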
@@ -118,35 +118,35 @@ LZ5_compress() :
/*-************************************
* Advanced Functions
**************************************/
#define LZ5_MAX_INPUT_SIZE 0x7E000000 /* 2 113 929 216 bytes */
#define LZ5_BLOCK_SIZE (1<<17)
#define LZ5_BLOCK64K_SIZE (1<<16)
#define LZ5_COMPRESSBOUND(isize) ((unsigned)(isize) > (unsigned)LZ5_MAX_INPUT_SIZE ? 0 : (isize) + 1 + 1 + ((isize/LZ5_BLOCK_SIZE)+1)*4)
#define LIZ_MAX_INPUT_SIZE 0x7E000000 /* 2 113 929 216 bytes */
#define LIZ_BLOCK_SIZE (1<<17)
#define LIZ_BLOCK64K_SIZE (1<<16)
#define LIZ_COMPRESSBOUND(isize) ((unsigned)(isize) > (unsigned)LIZ_MAX_INPUT_SIZE ? 0 : (isize) + 1 + 1 + ((isize/LIZ_BLOCK_SIZE)+1)*4)
/*!
LZ5_compressBound() :
LIZ_compressBound() :
Provides the maximum size that LZ5 compression may output in a "worst case" scenario (input data not compressible)
This function is primarily useful for memory allocation purposes (destination buffer size).
Macro LZ5_COMPRESSBOUND() is also provided for compilation-time evaluation (stack memory allocation for example).
Note that LZ5_compress() compress faster when dest buffer size is >= LZ5_compressBound(srcSize)
inputSize : max supported value is LZ5_MAX_INPUT_SIZE
Macro LIZ_COMPRESSBOUND() is also provided for compilation-time evaluation (stack memory allocation for example).
Note that LIZ_compress() compresses faster when the dest buffer size is >= LIZ_compressBound(srcSize)
inputSize : max supported value is LIZ_MAX_INPUT_SIZE
return : maximum output size in a "worst case" scenario
or 0, if input size is too large ( > LZ5_MAX_INPUT_SIZE)
or 0, if input size is too large ( > LIZ_MAX_INPUT_SIZE)
*/
LZ5LIB_API int LZ5_compressBound(int inputSize);
LZ5LIB_API int LIZ_compressBound(int inputSize);
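/* Worked example of the bound above (numbers are illustrative, not from the source):
 * with LIZ_BLOCK_SIZE = 1<<17, a 1 MiB input gives
 *   LIZ_COMPRESSBOUND(1048576) = 1048576 + 1 + 1 + ((1048576/131072)+1)*4 = 1048614,
 * so the worst-case overhead stays at a few dozen bytes. A typical allocation pattern: */
#include <stdlib.h>
#include "lz5.h"   /* assumed header name */

static char* alloc_dst_for(int srcSize)
{
    int const bound = LIZ_compressBound(srcSize);  /* 0 if srcSize > LIZ_MAX_INPUT_SIZE */
    return bound ? (char*)malloc((size_t)bound) : NULL;
}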
/*!
LZ5_compress_extState() :
LIZ_compress_extState() :
Same compression function, just using an externally allocated memory space to store compression state.
Use LZ5_sizeofState() to know how much memory must be allocated,
Use LIZ_sizeofState() to know how much memory must be allocated,
and allocate it on 8-bytes boundaries (using malloc() typically).
Then, provide it as 'void* state' to compression function.
*/
LZ5LIB_API int LZ5_sizeofState(int compressionLevel);
LZ5LIB_API int LIZ_sizeofState(int compressionLevel);
LZ5LIB_API int LZ5_compress_extState(void* state, const char* src, char* dst, int srcSize, int maxDstSize, int compressionLevel);
LZ5LIB_API int LIZ_compress_extState(void* state, const char* src, char* dst, int srcSize, int maxDstSize, int compressionLevel);
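/* Sketch of the external-state variant described above (illustration only; malloc()
 * already returns memory aligned to at least 8 bytes, as the comment requires). */
#include <stdlib.h>
#include "lz5.h"   /* assumed header name */

static int compress_with_ext_state(const char* src, char* dst,
                                   int srcSize, int dstCapacity, int level)
{
    void* const state = malloc((size_t)LIZ_sizeofState(level));
    int cSize = 0;
    if (state != NULL) {
        cSize = LIZ_compress_extState(state, src, dst, srcSize, dstCapacity, level);
        free(state);
    }
    return cSize;   /* 0 on failure, as with LIZ_compress() */
}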
@@ -154,48 +154,48 @@ LZ5LIB_API int LZ5_compress_extState(void* state, const char* src, char* dst, in
* Streaming Compression Functions
***********************************************/
/*! LZ5_createStream() will allocate and initialize an `LZ5_stream_t` structure.
* LZ5_freeStream() releases its memory.
/*! LIZ_createStream() will allocate and initialize an `LIZ_stream_t` structure.
* LIZ_freeStream() releases its memory.
* In the context of a DLL (liblz5), please use these methods rather than the static struct.
* They are more future proof, in case of a change of `LZ5_stream_t` size.
* They are more future proof, in case of a change of `LIZ_stream_t` size.
*/
LZ5LIB_API LZ5_stream_t* LZ5_createStream(int compressionLevel);
LZ5LIB_API int LZ5_freeStream (LZ5_stream_t* streamPtr);
LZ5LIB_API LIZ_stream_t* LIZ_createStream(int compressionLevel);
LZ5LIB_API int LIZ_freeStream (LIZ_stream_t* streamPtr);
/*! LZ5_resetStream() :
* Use this function to reset/reuse an allocated `LZ5_stream_t` structure
/*! LIZ_resetStream() :
* Use this function to reset/reuse an allocated `LIZ_stream_t` structure
*/
LZ5LIB_API LZ5_stream_t* LZ5_resetStream (LZ5_stream_t* streamPtr, int compressionLevel);
LZ5LIB_API LIZ_stream_t* LIZ_resetStream (LIZ_stream_t* streamPtr, int compressionLevel);
/*! LZ5_loadDict() :
* Use this function to load a static dictionary into LZ5_stream.
/*! LIZ_loadDict() :
* Use this function to load a static dictionary into LIZ_stream.
* Any previous data will be forgotten, only 'dictionary' will remain in memory.
* Loading a size of 0 is allowed.
* Return : dictionary size, in bytes (necessarily <= LZ5_DICT_SIZE)
* Return : dictionary size, in bytes (necessarily <= LIZ_DICT_SIZE)
*/
LZ5LIB_API int LZ5_loadDict (LZ5_stream_t* streamPtr, const char* dictionary, int dictSize);
LZ5LIB_API int LIZ_loadDict (LIZ_stream_t* streamPtr, const char* dictionary, int dictSize);
/*! LZ5_compress_continue() :
/*! LIZ_compress_continue() :
* Compress buffer content 'src', using data from previously compressed blocks as dictionary to improve compression ratio.
* Important : Previous data blocks are assumed to still be present and unmodified !
* 'dst' buffer must be already allocated.
* If maxDstSize >= LZ5_compressBound(srcSize), compression is guaranteed to succeed, and runs faster.
* If maxDstSize >= LIZ_compressBound(srcSize), compression is guaranteed to succeed, and runs faster.
* If not, and if compressed data cannot fit into 'dst' buffer size, compression stops, and function returns a zero.
*/
LZ5LIB_API int LZ5_compress_continue (LZ5_stream_t* streamPtr, const char* src, char* dst, int srcSize, int maxDstSize);
LZ5LIB_API int LIZ_compress_continue (LIZ_stream_t* streamPtr, const char* src, char* dst, int srcSize, int maxDstSize);
/*! LZ5_saveDict() :
/*! LIZ_saveDict() :
* If previously compressed data block is not guaranteed to remain available at its memory location,
* save it into a safer place (char* safeBuffer).
* Note : you don't need to call LZ5_loadDict() afterwards,
* dictionary is immediately usable, you can therefore call LZ5_compress_continue().
* Note : you don't need to call LIZ_loadDict() afterwards,
* dictionary is immediately usable, you can therefore call LIZ_compress_continue().
* Return : saved dictionary size in bytes (necessarily <= dictSize), or 0 if error.
*/
LZ5LIB_API int LZ5_saveDict (LZ5_stream_t* streamPtr, char* safeBuffer, int dictSize);
LZ5LIB_API int LIZ_saveDict (LIZ_stream_t* streamPtr, char* safeBuffer, int dictSize);
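/* Rough block-by-block sketch of the streaming API above. Illustration only:
 * the header name is assumed, the input blocks are expected to stay in memory
 * unmodified (as the LIZ_compress_continue() note requires), and dictionary
 * saving via LIZ_saveDict() is omitted. */
#include "lz5.h"   /* assumed header name */

static int compress_blocks(const char* const* blocks, const int* blockSizes, int nBlocks,
                           char* dst, int dstCapacity, int level)
{
    LIZ_stream_t* const s = LIZ_createStream(level);
    int total = 0, i;
    if (s == NULL) return 0;
    for (i = 0; i < nBlocks; i++) {
        int const w = LIZ_compress_continue(s, blocks[i], dst + total,
                                            blockSizes[i], dstCapacity - total);
        if (w == 0) { total = 0; break; }   /* compressed data did not fit in 'dst' */
        total += w;
    }
    LIZ_freeStream(s);
    return total;
}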
@@ -205,4 +205,4 @@ LZ5LIB_API int LZ5_saveDict (LZ5_stream_t* streamPtr, char* safeBuffer, int dict
}
#endif
#endif /* LZ5_H_2983827168210 */
#endif /* LIZ_H_2983827168210 */


@@ -1,7 +1,7 @@
#define LZ5_LENGTH_SIZE_LZ4(len) ((len >= (1<<16)+RUN_MASK_LZ4) ? 5 : ((len >= 254+RUN_MASK_LZ4) ? 3 : ((len >= RUN_MASK_LZ4) ? 1 : 0)))
#define LIZ_LENGTH_SIZE_LZ4(len) ((len >= (1<<16)+RUN_MASK_LZ4) ? 5 : ((len >= 254+RUN_MASK_LZ4) ? 3 : ((len >= RUN_MASK_LZ4) ? 1 : 0)))
FORCE_INLINE int LZ5_encodeSequence_LZ4 (
LZ5_stream_t* ctx,
FORCE_INLINE int LIZ_encodeSequence_LZ4 (
LIZ_stream_t* ctx,
const BYTE** ip,
const BYTE** anchor,
size_t matchLength,
@@ -14,7 +14,7 @@ FORCE_INLINE int LZ5_encodeSequence_LZ4 (
COMPLOG_CODEWORDS_LZ4("literal : %u -- match : %u -- offset : %u\n", (U32)(*ip - *anchor), (U32)matchLength, (U32)(*ip-match));
/* Encode Literal length */
// if (ctx->literalsPtr > ctx->literalsEnd - length - LZ5_LENGTH_SIZE_LZ4(length) - 2 - WILDCOPYLENGTH) { LZ5_LOG_COMPRESS_LZ4("encodeSequence overflow1\n"); return 1; } /* Check output limit */
// if (ctx->literalsPtr > ctx->literalsEnd - length - LIZ_LENGTH_SIZE_LZ4(length) - 2 - WILDCOPYLENGTH) { LIZ_LOG_COMPRESS_LZ4("encodeSequence overflow1\n"); return 1; } /* Check output limit */
if (length >= RUN_MASK_LZ4)
{ size_t len = length - RUN_MASK_LZ4;
*token = RUN_MASK_LZ4;
@@ -26,13 +26,13 @@ FORCE_INLINE int LZ5_encodeSequence_LZ4 (
/* Copy Literals */
if (length > 0) {
LZ5_wildCopy(ctx->literalsPtr, *anchor, (ctx->literalsPtr) + length);
#if 0 //def LZ5_USE_HUFFMAN
LIZ_wildCopy(ctx->literalsPtr, *anchor, (ctx->literalsPtr) + length);
#if 0 //def LIZ_USE_HUFFMAN
ctx->litSum += (U32)length;
ctx->litPriceSum += (U32)(length * ctx->log2LitSum);
{ U32 u;
for (u=0; u < length; u++) {
ctx->litPriceSum -= LZ5_highbit32(ctx->litFreq[ctx->literalsPtr[u]]+1);
ctx->litPriceSum -= LIZ_highbit32(ctx->litFreq[ctx->literalsPtr[u]]+1);
ctx->litFreq[ctx->literalsPtr[u]]++;
} }
#endif
@@ -47,7 +47,7 @@ FORCE_INLINE int LZ5_encodeSequence_LZ4 (
/* Encode MatchLength */
length = matchLength - MINMATCH;
// if (ctx->literalsPtr > ctx->literalsEnd - 5 /*LZ5_LENGTH_SIZE_LZ4(length)*/) { LZ5_LOG_COMPRESS_LZ4("encodeSequence overflow2\n"); return 1; } /* Check output limit */
// if (ctx->literalsPtr > ctx->literalsEnd - 5 /*LIZ_LENGTH_SIZE_LZ4(length)*/) { LIZ_LOG_COMPRESS_LZ4("encodeSequence overflow2\n"); return 1; } /* Check output limit */
if (length >= ML_MASK_LZ4) {
*token += (BYTE)(ML_MASK_LZ4<<RUN_BITS_LZ4);
length -= ML_MASK_LZ4;
@@ -57,11 +57,11 @@ FORCE_INLINE int LZ5_encodeSequence_LZ4 (
}
else *token += (BYTE)(length<<RUN_BITS_LZ4);
#ifndef LZ5_NO_HUFFMAN
#ifndef LIZ_NO_HUFFMAN
if (ctx->huffType) {
ctx->flagFreq[*token]++;
ctx->flagSum++;
LZ5_setLog2Prices(ctx);
LIZ_setLog2Prices(ctx);
}
#endif
@@ -73,8 +73,8 @@ FORCE_INLINE int LZ5_encodeSequence_LZ4 (
}
FORCE_INLINE int LZ5_encodeLastLiterals_LZ4 (
LZ5_stream_t* ctx,
FORCE_INLINE int LIZ_encodeLastLiterals_LZ4 (
LIZ_stream_t* ctx,
const BYTE** ip,
const BYTE** anchor)
{
@@ -88,13 +88,13 @@ FORCE_INLINE int LZ5_encodeLastLiterals_LZ4 (
}
#define LZ5_GET_TOKEN_PRICE_LZ4(token) (ctx->log2FlagSum - LZ5_highbit32(ctx->flagFreq[token]+1))
#define LIZ_GET_TOKEN_PRICE_LZ4(token) (ctx->log2FlagSum - LIZ_highbit32(ctx->flagFreq[token]+1))
FORCE_INLINE size_t LZ5_get_price_LZ4(LZ5_stream_t* const ctx, const BYTE *ip, const size_t litLength, U32 offset, size_t matchLength)
FORCE_INLINE size_t LIZ_get_price_LZ4(LIZ_stream_t* const ctx, const BYTE *ip, const size_t litLength, U32 offset, size_t matchLength)
{
size_t price = 0;
BYTE token = 0;
#if 0 //def LZ5_USE_HUFFMAN
#if 0 //def LIZ_USE_HUFFMAN
const BYTE* literals = ip - litLength;
U32 u;
@@ -104,13 +104,13 @@ FORCE_INLINE size_t LZ5_get_price_LZ4(LZ5_stream_t* const ctx, const BYTE *ip, c
const BYTE* literals2 = ctx->cachedLiterals + ctx->cachedLitLength;
price = ctx->cachedPrice + additional * ctx->log2LitSum;
for (u=0; u < additional; u++)
price -= LZ5_highbit32(ctx->litFreq[literals2[u]]+1);
price -= LIZ_highbit32(ctx->litFreq[literals2[u]]+1);
ctx->cachedPrice = (U32)price;
ctx->cachedLitLength = (U32)litLength;
} else {
price = litLength * ctx->log2LitSum;
for (u=0; u < litLength; u++)
price -= LZ5_highbit32(ctx->litFreq[literals[u]]+1);
price -= LIZ_highbit32(ctx->litFreq[literals[u]]+1);
if (litLength >= 12) {
ctx->cachedLiterals = literals;
@@ -140,8 +140,8 @@ FORCE_INLINE size_t LZ5_get_price_LZ4(LZ5_stream_t* const ctx, const BYTE *ip, c
size_t length;
price += 16; /* Encode Offset */
if (offset < 8) return LZ5_MAX_PRICE; // error
if (matchLength < MINMATCH) return LZ5_MAX_PRICE; // error
if (offset < 8) return LIZ_MAX_PRICE; // error
if (matchLength < MINMATCH) return LIZ_MAX_PRICE; // error
length = matchLength - MINMATCH;
if (length >= ML_MASK_LZ4) {
@@ -156,7 +156,7 @@ FORCE_INLINE size_t LZ5_get_price_LZ4(LZ5_stream_t* const ctx, const BYTE *ip, c
if (ctx->huffType) {
if (offset > 0 || matchLength > 0) price += 2;
price += LZ5_GET_TOKEN_PRICE_LZ4(token);
price += LIZ_GET_TOKEN_PRICE_LZ4(token);
} else {
price += 8; // token
}


@@ -1,13 +1,13 @@
#define LZ5_FREQ_DIV 5
#define LIZ_FREQ_DIV 5
FORCE_INLINE void LZ5_setLog2Prices(LZ5_stream_t* ctx)
FORCE_INLINE void LIZ_setLog2Prices(LIZ_stream_t* ctx)
{
ctx->log2LitSum = LZ5_highbit32(ctx->litSum+1);
ctx->log2FlagSum = LZ5_highbit32(ctx->flagSum+1);
ctx->log2LitSum = LIZ_highbit32(ctx->litSum+1);
ctx->log2FlagSum = LIZ_highbit32(ctx->flagSum+1);
}
MEM_STATIC void LZ5_rescaleFreqs(LZ5_stream_t* ctx)
MEM_STATIC void LIZ_rescaleFreqs(LIZ_stream_t* ctx)
{
unsigned u;
@@ -29,19 +29,19 @@ MEM_STATIC void LZ5_rescaleFreqs(LZ5_stream_t* ctx)
ctx->flagSum = 0;
for (u=0; u < 256; u++) {
ctx->litFreq[u] = 1 + (ctx->litFreq[u]>>LZ5_FREQ_DIV);
ctx->litFreq[u] = 1 + (ctx->litFreq[u]>>LIZ_FREQ_DIV);
ctx->litSum += ctx->litFreq[u];
ctx->flagFreq[u] = 1 + (ctx->flagFreq[u]>>LZ5_FREQ_DIV);
ctx->flagFreq[u] = 1 + (ctx->flagFreq[u]>>LIZ_FREQ_DIV);
ctx->flagSum += ctx->flagFreq[u];
}
}
LZ5_setLog2Prices(ctx);
LIZ_setLog2Prices(ctx);
}
FORCE_INLINE int LZ5_encodeSequence_LZ5v2 (
LZ5_stream_t* ctx,
FORCE_INLINE int LIZ_encodeSequence_LZ5v2 (
LIZ_stream_t* ctx,
const BYTE** ip,
const BYTE** anchor,
size_t matchLength,
@@ -51,9 +51,9 @@ FORCE_INLINE int LZ5_encodeSequence_LZ5v2 (
size_t length = (size_t)(*ip - *anchor);
BYTE* token = (ctx->flagsPtr)++;
if (length > 0 || offset < LZ5_MAX_16BIT_OFFSET) {
if (length > 0 || offset < LIZ_MAX_16BIT_OFFSET) {
/* Encode Literal length */
// if ((limitedOutputBuffer) && (ctx->literalsPtr > oend - length - LZ5_LENGTH_SIZE_LZ5v2(length) - WILDCOPYLENGTH)) { LZ5_LOG_COMPRESS_LZ5v2("encodeSequence overflow1\n"); return 1; } /* Check output limit */
// if ((limitedOutputBuffer) && (ctx->literalsPtr > oend - length - LIZ_LENGTH_SIZE_LZ5v2(length) - WILDCOPYLENGTH)) { LIZ_LOG_COMPRESS_LZ5v2("encodeSequence overflow1\n"); return 1; } /* Check output limit */
if (length >= MAX_SHORT_LITLEN)
{ size_t len;
*token = MAX_SHORT_LITLEN;
@@ -65,14 +65,14 @@ FORCE_INLINE int LZ5_encodeSequence_LZ5v2 (
else *token = (BYTE)length;
/* Copy Literals */
LZ5_wildCopy(ctx->literalsPtr, *anchor, (ctx->literalsPtr) + length);
#ifndef LZ5_NO_HUFFMAN
LIZ_wildCopy(ctx->literalsPtr, *anchor, (ctx->literalsPtr) + length);
#ifndef LIZ_NO_HUFFMAN
if (ctx->huffType) {
ctx->litSum += (U32)length;
ctx->litPriceSum += (U32)(length * ctx->log2LitSum);
{ U32 u;
for (u=0; u < length; u++) {
ctx->litPriceSum -= LZ5_highbit32(ctx->litFreq[ctx->literalsPtr[u]]+1);
ctx->litPriceSum -= LIZ_highbit32(ctx->litFreq[ctx->literalsPtr[u]]+1);
ctx->litFreq[ctx->literalsPtr[u]]++;
} }
}
@@ -80,10 +80,10 @@ FORCE_INLINE int LZ5_encodeSequence_LZ5v2 (
ctx->literalsPtr += length;
if (offset >= LZ5_MAX_16BIT_OFFSET) {
if (offset >= LIZ_MAX_16BIT_OFFSET) {
COMPLOG_CODEWORDS_LZ5v2("T32+ literal=%u match=%u offset=%d\n", (U32)length, 0, 0);
*token+=(1<<ML_RUN_BITS);
#ifndef LZ5_NO_HUFFMAN
#ifndef LIZ_NO_HUFFMAN
if (ctx->huffType) {
ctx->flagFreq[*token]++;
ctx->flagSum++;
@@ -94,15 +94,15 @@ FORCE_INLINE int LZ5_encodeSequence_LZ5v2 (
}
/* Encode Offset */
if (offset >= LZ5_MAX_16BIT_OFFSET) // 24-bit offset
if (offset >= LIZ_MAX_16BIT_OFFSET) // 24-bit offset
{
if (matchLength < MM_LONGOFF) printf("ERROR matchLength=%d/%d\n", (int)matchLength, MM_LONGOFF), exit(0);
// if ((limitedOutputBuffer) && (ctx->literalsPtr > oend - 8 /*LZ5_LENGTH_SIZE_LZ5v2(length)*/)) { LZ5_LOG_COMPRESS_LZ5v2("encodeSequence overflow2\n"); return 1; } /* Check output limit */
if (matchLength - MM_LONGOFF >= LZ5_LAST_LONG_OFF)
// if ((limitedOutputBuffer) && (ctx->literalsPtr > oend - 8 /*LIZ_LENGTH_SIZE_LZ5v2(length)*/)) { LIZ_LOG_COMPRESS_LZ5v2("encodeSequence overflow2\n"); return 1; } /* Check output limit */
if (matchLength - MM_LONGOFF >= LIZ_LAST_LONG_OFF)
{
size_t len = matchLength - MM_LONGOFF - LZ5_LAST_LONG_OFF;
*token = LZ5_LAST_LONG_OFF;
size_t len = matchLength - MM_LONGOFF - LIZ_LAST_LONG_OFF;
*token = LIZ_LAST_LONG_OFF;
if (len >= (1<<16)) { *(ctx->literalsPtr) = 255; MEM_writeLE24(ctx->literalsPtr+1, (U32)(len)); ctx->literalsPtr += 4; }
else if (len >= 254) { *(ctx->literalsPtr) = 254; MEM_writeLE16(ctx->literalsPtr+1, (U16)(len)); ctx->literalsPtr += 3; }
else *(ctx->literalsPtr)++ = (BYTE)len;
@@ -137,7 +137,7 @@ FORCE_INLINE int LZ5_encodeSequence_LZ5v2 (
/* Encode MatchLength */
length = matchLength;
// if ((limitedOutputBuffer) && (ctx->literalsPtr > oend - 5 /*LZ5_LENGTH_SIZE_LZ5v2(length)*/)) { LZ5_LOG_COMPRESS_LZ5v2("encodeSequence overflow2\n"); return 1; } /* Check output limit */
// if ((limitedOutputBuffer) && (ctx->literalsPtr > oend - 5 /*LIZ_LENGTH_SIZE_LZ5v2(length)*/)) { LIZ_LOG_COMPRESS_LZ5v2("encodeSequence overflow2\n"); return 1; } /* Check output limit */
if (length >= MAX_SHORT_MATCHLEN) {
*token += (BYTE)(MAX_SHORT_MATCHLEN<<RUN_BITS_LZ5v2);
length -= MAX_SHORT_MATCHLEN;
@@ -148,11 +148,11 @@ FORCE_INLINE int LZ5_encodeSequence_LZ5v2 (
else *token += (BYTE)(length<<RUN_BITS_LZ5v2);
}
#ifndef LZ5_NO_HUFFMAN
#ifndef LIZ_NO_HUFFMAN
if (ctx->huffType) {
ctx->flagFreq[*token]++;
ctx->flagSum++;
LZ5_setLog2Prices(ctx);
LIZ_setLog2Prices(ctx);
}
#endif
@@ -164,8 +164,8 @@ FORCE_INLINE int LZ5_encodeSequence_LZ5v2 (
}
FORCE_INLINE int LZ5_encodeLastLiterals_LZ5v2 (
LZ5_stream_t* ctx,
FORCE_INLINE int LIZ_encodeLastLiterals_LZ5v2 (
LIZ_stream_t* ctx,
const BYTE** ip,
const BYTE** anchor)
{
@@ -178,31 +178,31 @@ FORCE_INLINE int LZ5_encodeLastLiterals_LZ5v2 (
}
#define LZ5_PRICE_MULT 1
#define LZ5_GET_TOKEN_PRICE_LZ5v2(token) (LZ5_PRICE_MULT * (ctx->log2FlagSum - LZ5_highbit32(ctx->flagFreq[token]+1)))
#define LIZ_PRICE_MULT 1
#define LIZ_GET_TOKEN_PRICE_LZ5v2(token) (LIZ_PRICE_MULT * (ctx->log2FlagSum - LIZ_highbit32(ctx->flagFreq[token]+1)))
FORCE_INLINE size_t LZ5_get_price_LZ5v2(LZ5_stream_t* const ctx, int rep, const BYTE *ip, const BYTE *off24pos, size_t litLength, U32 offset, size_t matchLength)
FORCE_INLINE size_t LIZ_get_price_LZ5v2(LIZ_stream_t* const ctx, int rep, const BYTE *ip, const BYTE *off24pos, size_t litLength, U32 offset, size_t matchLength)
{
size_t price = 0;
BYTE token = 0;
#ifndef LZ5_NO_HUFFMAN
#ifndef LIZ_NO_HUFFMAN
const BYTE* literals = ip - litLength;
U32 u;
if ((ctx->huffType) && (ctx->params.parserType != LZ5_parser_lowestPrice)) {
if ((ctx->huffType) && (ctx->params.parserType != LIZ_parser_lowestPrice)) {
if (ctx->cachedLiterals == literals && litLength >= ctx->cachedLitLength) {
size_t const additional = litLength - ctx->cachedLitLength;
// printf("%d ", (int)litLength - (int)ctx->cachedLitLength);
const BYTE* literals2 = ctx->cachedLiterals + ctx->cachedLitLength;
price = ctx->cachedPrice + LZ5_PRICE_MULT * additional * ctx->log2LitSum;
price = ctx->cachedPrice + LIZ_PRICE_MULT * additional * ctx->log2LitSum;
for (u=0; u < additional; u++)
price -= LZ5_PRICE_MULT * LZ5_highbit32(ctx->litFreq[literals2[u]]+1);
price -= LIZ_PRICE_MULT * LIZ_highbit32(ctx->litFreq[literals2[u]]+1);
ctx->cachedPrice = (U32)price;
ctx->cachedLitLength = (U32)litLength;
} else {
price = LZ5_PRICE_MULT * litLength * ctx->log2LitSum;
price = LIZ_PRICE_MULT * litLength * ctx->log2LitSum;
for (u=0; u < litLength; u++)
price -= LZ5_PRICE_MULT * LZ5_highbit32(ctx->litFreq[literals[u]]+1);
price -= LIZ_PRICE_MULT * LIZ_highbit32(ctx->litFreq[literals[u]]+1);
if (litLength >= 12) {
ctx->cachedLiterals = literals;
@@ -222,7 +222,7 @@ FORCE_INLINE size_t LZ5_get_price_LZ5v2(LZ5_stream_t* const ctx, int rep, const
(void)off24pos;
(void)rep;
if (litLength > 0 || offset < LZ5_MAX_16BIT_OFFSET) {
if (litLength > 0 || offset < LIZ_MAX_16BIT_OFFSET) {
/* Encode Literal length */
if (litLength >= MAX_SHORT_LITLEN)
{ size_t len = litLength - MAX_SHORT_LITLEN;
@@ -233,22 +233,22 @@ FORCE_INLINE size_t LZ5_get_price_LZ5v2(LZ5_stream_t* const ctx, int rep, const
}
else token = (BYTE)litLength;
if (offset >= LZ5_MAX_16BIT_OFFSET) {
if (offset >= LIZ_MAX_16BIT_OFFSET) {
token+=(1<<ML_RUN_BITS);
if (ctx->huffType && ctx->params.parserType != LZ5_parser_lowestPrice)
price += LZ5_GET_TOKEN_PRICE_LZ5v2(token);
if (ctx->huffType && ctx->params.parserType != LIZ_parser_lowestPrice)
price += LIZ_GET_TOKEN_PRICE_LZ5v2(token);
else
price += 8;
}
}
/* Encode Offset */
if (offset >= LZ5_MAX_16BIT_OFFSET) { // 24-bit offset
if (matchLength < MM_LONGOFF) return LZ5_MAX_PRICE; // error
if (offset >= LIZ_MAX_16BIT_OFFSET) { // 24-bit offset
if (matchLength < MM_LONGOFF) return LIZ_MAX_PRICE; // error
if (matchLength - MM_LONGOFF >= LZ5_LAST_LONG_OFF) {
size_t len = matchLength - MM_LONGOFF - LZ5_LAST_LONG_OFF;
token = LZ5_LAST_LONG_OFF;
if (matchLength - MM_LONGOFF >= LIZ_LAST_LONG_OFF) {
size_t len = matchLength - MM_LONGOFF - LIZ_LAST_LONG_OFF;
token = LIZ_LAST_LONG_OFF;
if (len >= (1<<16)) price += 32;
else if (len >= 254) price += 24;
else price += 8;
@@ -262,8 +262,8 @@ FORCE_INLINE size_t LZ5_get_price_LZ5v2(LZ5_stream_t* const ctx, int rep, const
if (offset == 0) {
token+=(1<<ML_RUN_BITS);
} else {
if (offset < 8) return LZ5_MAX_PRICE; // error
if (matchLength < MINMATCH) return LZ5_MAX_PRICE; // error
if (offset < 8) return LIZ_MAX_PRICE; // error
if (matchLength < MINMATCH) return LIZ_MAX_PRICE; // error
price += 16;
}
@@ -280,7 +280,7 @@ FORCE_INLINE size_t LZ5_get_price_LZ5v2(LZ5_stream_t* const ctx, int rep, const
}
if (offset > 0 || matchLength > 0) {
int offset_load = LZ5_highbit32(offset);
int offset_load = LIZ_highbit32(offset);
if (ctx->huffType) {
price += ((offset_load>=20) ? ((offset_load-19)*4) : 0);
price += 4 + (matchLength==1);
@@ -288,13 +288,13 @@ FORCE_INLINE size_t LZ5_get_price_LZ5v2(LZ5_stream_t* const ctx, int rep, const
price += ((offset_load>=16) ? ((offset_load-15)*4) : 0);
price += 6 + (matchLength==1);
}
if (ctx->huffType && ctx->params.parserType != LZ5_parser_lowestPrice)
price += LZ5_GET_TOKEN_PRICE_LZ5v2(token);
if (ctx->huffType && ctx->params.parserType != LIZ_parser_lowestPrice)
price += LIZ_GET_TOKEN_PRICE_LZ5v2(token);
else
price += 8;
} else {
if (ctx->huffType && ctx->params.parserType != LZ5_parser_lowestPrice)
price += LZ5_GET_TOKEN_PRICE_LZ5v2(token); // 1=better ratio
if (ctx->huffType && ctx->params.parserType != LIZ_parser_lowestPrice)
price += LIZ_GET_TOKEN_PRICE_LZ5v2(token); // 1=better ratio
}
return price;


@@ -32,26 +32,26 @@
You can contact the author at :
- LZ5 source repository : https://github.com/inikep/lz5
*/
#ifndef LZ5_DECOMPRESS_H_2983
#define LZ5_DECOMPRESS_H_2983
#ifndef LIZ_DECOMPRESS_H_2983
#define LIZ_DECOMPRESS_H_2983
#if defined (__cplusplus)
extern "C" {
#endif
#include "entropy/mem.h" /* U32 */
#include "mem.h" /* U32 */
/*^***************************************************************
* Export parameters
*****************************************************************/
/*
* LZ5_DLL_EXPORT :
* LIZ_DLL_EXPORT :
* Enable exporting of functions when building a Windows DLL
*/
#if defined(LZ5_DLL_EXPORT) && (LZ5_DLL_EXPORT==1)
#if defined(LIZ_DLL_EXPORT) && (LIZ_DLL_EXPORT==1)
# define LZ5DLIB_API __declspec(dllexport)
#elif defined(LZ5_DLL_IMPORT) && (LZ5_DLL_IMPORT==1)
#elif defined(LIZ_DLL_IMPORT) && (LIZ_DLL_IMPORT==1)
# define LZ5DLIB_API __declspec(dllimport) /* It isn't required but allows generating better code, saving a function pointer load from the IAT and an indirect jump. */
#else
# define LZ5DLIB_API
@@ -63,7 +63,7 @@ extern "C" {
**************************************/
/*
LZ5_decompress_safe() :
LIZ_decompress_safe() :
compressedSize : is the precise full size of the compressed block.
maxDecompressedSize : is the size of destination buffer, which must be already allocated.
return : the number of bytes decompressed into destination buffer (necessarily <= maxDecompressedSize)
@@ -72,12 +72,12 @@ LZ5_decompress_safe() :
This function is protected against buffer overflow exploits, including malicious data packets.
It never writes outside output buffer, nor reads outside input buffer.
*/
LZ5DLIB_API int LZ5_decompress_safe (const char* source, char* dest, int compressedSize, int maxDecompressedSize);
LZ5DLIB_API int LIZ_decompress_safe (const char* source, char* dest, int compressedSize, int maxDecompressedSize);
/*!
LZ5_decompress_safe_partial() :
LIZ_decompress_safe_partial() :
This function decompresses a compressed block of size 'compressedSize' at position 'source'
into destination buffer 'dest' of size 'maxDecompressedSize'.
The function tries to stop decompressing operation as soon as 'targetOutputSize' has been reached,
@@ -88,7 +88,7 @@ LZ5_decompress_safe_partial() :
If the source stream is detected malformed, the function will stop decoding and return a negative result.
This function never writes outside of output buffer, and never reads outside of input buffer. It is therefore protected against malicious data packets
*/
LZ5DLIB_API int LZ5_decompress_safe_partial (const char* source, char* dest, int compressedSize, int targetOutputSize, int maxDecompressedSize);
LZ5DLIB_API int LIZ_decompress_safe_partial (const char* source, char* dest, int compressedSize, int targetOutputSize, int maxDecompressedSize);
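/* Minimal counterparts for the two functions documented above (illustration only;
 * the include name is an assumption and the caller must know 'dstCapacity'). */
#include "lz5_decompress.h"   /* assumed header name */

/* Full decode: returns the decompressed size, or a negative code on malformed input. */
static int decode_full(const char* src, int cSize, char* dst, int dstCapacity)
{
    return LIZ_decompress_safe(src, dst, cSize, dstCapacity);
}

/* Partial decode: stops once about 'target' bytes have been produced. */
static int decode_prefix(const char* src, int cSize, char* dst, int dstCapacity, int target)
{
    return LIZ_decompress_safe_partial(src, dst, cSize, target, dstCapacity);
}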
@@ -100,60 +100,60 @@ typedef struct {
size_t extDictSize;
const BYTE* prefixEnd;
size_t prefixSize;
} LZ5_streamDecode_t;
} LIZ_streamDecode_t;
/*
* LZ5_streamDecode_t
* LIZ_streamDecode_t
* information structure to track an LZ5 stream.
* init this structure content using LZ5_setStreamDecode or memset() before first use !
* init this structure content using LIZ_setStreamDecode or memset() before first use !
*
* In the context of a DLL (liblz5) please prefer usage of construction methods below.
* They are more future proof, in case of a change of LZ5_streamDecode_t size in the future.
* LZ5_createStreamDecode will allocate and initialize an LZ5_streamDecode_t structure
* LZ5_freeStreamDecode releases its memory.
* They are more future proof, in case of a change of LIZ_streamDecode_t size in the future.
* LIZ_createStreamDecode will allocate and initialize an LIZ_streamDecode_t structure
* LIZ_freeStreamDecode releases its memory.
*/
LZ5DLIB_API LZ5_streamDecode_t* LZ5_createStreamDecode(void);
LZ5DLIB_API int LZ5_freeStreamDecode (LZ5_streamDecode_t* LZ5_stream);
LZ5DLIB_API LIZ_streamDecode_t* LIZ_createStreamDecode(void);
LZ5DLIB_API int LIZ_freeStreamDecode (LIZ_streamDecode_t* LIZ_stream);
/*! LZ5_setStreamDecode() :
/*! LIZ_setStreamDecode() :
* Use this function to instruct where to find the dictionary.
* Setting a size of 0 is allowed (same effect as reset).
* @return : 1 if OK, 0 if error
*/
LZ5DLIB_API int LZ5_setStreamDecode (LZ5_streamDecode_t* LZ5_streamDecode, const char* dictionary, int dictSize);
LZ5DLIB_API int LIZ_setStreamDecode (LIZ_streamDecode_t* LIZ_streamDecode, const char* dictionary, int dictSize);
/*
*_continue() :
These decoding functions allow decompression of multiple blocks in "streaming" mode.
Previously decoded blocks *must* remain available at the memory position where they were decoded (up to LZ5_DICT_SIZE)
Previously decoded blocks *must* remain available at the memory position where they were decoded (up to LIZ_DICT_SIZE)
In the case of ring buffers, the decoding buffer must be either :
- Exactly same size as encoding buffer, with same update rule (block boundaries at same positions)
In which case, the decoding & encoding ring buffer can have any size, including small ones ( < LZ5_DICT_SIZE).
In which case, the decoding & encoding ring buffer can have any size, including small ones ( < LIZ_DICT_SIZE).
- Larger than encoding buffer, by a minimum of maxBlockSize more bytes.
maxBlockSize is implementation dependent. It's the maximum size you intend to compress into a single block.
In which case, encoding and decoding buffers do not need to be synchronized,
and encoding ring buffer can have any size, including small ones ( < LZ5_DICT_SIZE).
- _At least_ LZ5_DICT_SIZE + 8 bytes + maxBlockSize.
and encoding ring buffer can have any size, including small ones ( < LIZ_DICT_SIZE).
- _At least_ LIZ_DICT_SIZE + 8 bytes + maxBlockSize.
In which case, encoding and decoding buffers do not need to be synchronized,
and encoding ring buffer can have any size, including larger than decoding buffer.
Whenever these conditions are not possible, save the last LZ5_DICT_SIZE of decoded data into a safe buffer,
and indicate where it is saved using LZ5_setStreamDecode()
Whenever these conditions are not possible, save the last LIZ_DICT_SIZE of decoded data into a safe buffer,
and indicate where it is saved using LIZ_setStreamDecode()
*/
LZ5DLIB_API int LZ5_decompress_safe_continue (LZ5_streamDecode_t* LZ5_streamDecode, const char* source, char* dest, int compressedSize, int maxDecompressedSize);
LZ5DLIB_API int LIZ_decompress_safe_continue (LIZ_streamDecode_t* LIZ_streamDecode, const char* source, char* dest, int compressedSize, int maxDecompressedSize);
/*
Advanced decoding functions :
*_usingDict() :
These decoding functions work the same as
a combination of LZ5_setStreamDecode() followed by LZ5_decompress_x_continue()
They are stand-alone. They don't need nor update an LZ5_streamDecode_t structure.
a combination of LIZ_setStreamDecode() followed by LIZ_decompress_x_continue()
They are stand-alone. They neither need nor update an LIZ_streamDecode_t structure.
*/
LZ5DLIB_API int LZ5_decompress_safe_usingDict (const char* source, char* dest, int compressedSize, int maxDecompressedSize, const char* dictStart, int dictSize);
LZ5DLIB_API int LIZ_decompress_safe_usingDict (const char* source, char* dest, int compressedSize, int maxDecompressedSize, const char* dictStart, int dictSize);
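/* Rough sketch of the streaming decode path described above, for the simple case
 * where all decoded blocks stay contiguous in 'dst' (so previously decoded data
 * remains available, as required). Illustration only; header name assumed. */
#include "lz5_decompress.h"   /* assumed header name */

static int decode_blocks(const char* src, const int* cSizes, int nBlocks,
                         char* dst, int dstCapacity)
{
    LIZ_streamDecode_t* const d = LIZ_createStreamDecode();
    int produced = 0, consumed = 0, i;
    if (d == NULL) return -1;
    for (i = 0; i < nBlocks; i++) {
        int const n = LIZ_decompress_safe_continue(d, src + consumed, dst + produced,
                                                   cSizes[i], dstCapacity - produced);
        if (n < 0) { produced = n; break; }   /* malformed block */
        consumed += cSizes[i];
        produced += n;
    }
    LIZ_freeStreamDecode(d);
    return produced;
}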
#if defined (__cplusplus)
}
#endif
#endif /* LZ5_DECOMPRESS_H_2983827168210 */
#endif /* LIZ_DECOMPRESS_H_2983827168210 */


@@ -1,11 +1,11 @@
/*! LZ5_decompress_LZ4() :
/*! LIZ_decompress_LZ4() :
 * This generic decompression function covers all use cases.
 * It shall be instantiated several times, using different sets of directives.
 * Note that it is important that this generic function is really inlined,
 * in order to remove useless branches during compilation optimization.
*/
FORCE_INLINE int LZ5_decompress_LZ4(
LZ5_dstream_t* ctx,
FORCE_INLINE int LIZ_decompress_LZ4(
LIZ_dstream_t* ctx,
BYTE* const dest,
int outputSize, /* this value is the max size of Output Buffer. */
@@ -29,11 +29,11 @@ FORCE_INLINE int LZ5_decompress_LZ4(
const BYTE* const lowLimit = lowPrefix - dictSize;
const BYTE* const dictEnd = (const BYTE*)dictStart + dictSize;
const int checkOffset = (dictSize < (int)(LZ5_DICT_SIZE));
const int checkOffset = (dictSize < (int)(LIZ_DICT_SIZE));
intptr_t length = 0;
(void)compressionLevel;
(void)LZ5_wildCopy;
(void)LIZ_wildCopy;
/* Special cases */
if (unlikely(outputSize==0)) return ((inputSize==1) && (*ctx->flagsPtr==0)) ? 0 : -1; /* Empty output buffer */
@@ -47,7 +47,7 @@ FORCE_INLINE int LZ5_decompress_LZ4(
/* get literal length */
token = *ctx->flagsPtr++;
if ((length=(token & RUN_MASK_LZ4)) == RUN_MASK_LZ4) {
if (unlikely(ctx->literalsPtr > iend - 5)) { LZ5_LOG_DECOMPRESS_LZ4("0"); goto _output_error; }
if (unlikely(ctx->literalsPtr > iend - 5)) { LIZ_LOG_DECOMPRESS_LZ4("0"); goto _output_error; }
length = *ctx->literalsPtr;
if unlikely(length >= 254) {
if (length == 254) {
@@ -60,23 +60,23 @@ FORCE_INLINE int LZ5_decompress_LZ4(
}
length += RUN_MASK_LZ4;
ctx->literalsPtr++;
if (unlikely((size_t)(op+length)<(size_t)(op))) { LZ5_LOG_DECOMPRESS_LZ4("1"); goto _output_error; } /* overflow detection */
if (unlikely((size_t)(ctx->literalsPtr+length)<(size_t)(ctx->literalsPtr))) { LZ5_LOG_DECOMPRESS_LZ4("2"); goto _output_error; } /* overflow detection */
if (unlikely((size_t)(op+length)<(size_t)(op))) { LIZ_LOG_DECOMPRESS_LZ4("1"); goto _output_error; } /* overflow detection */
if (unlikely((size_t)(ctx->literalsPtr+length)<(size_t)(ctx->literalsPtr))) { LIZ_LOG_DECOMPRESS_LZ4("2"); goto _output_error; } /* overflow detection */
}
/* copy literals */
cpy = op + length;
if (unlikely(cpy > oend - WILDCOPYLENGTH || ctx->literalsPtr + length > iend - (2 + WILDCOPYLENGTH))) { LZ5_LOG_DECOMPRESS_LZ4("offset outside buffers\n"); goto _output_error; } /* Error : offset outside buffers */
if (unlikely(cpy > oend - WILDCOPYLENGTH || ctx->literalsPtr + length > iend - (2 + WILDCOPYLENGTH))) { LIZ_LOG_DECOMPRESS_LZ4("offset outside buffers\n"); goto _output_error; } /* Error : offset outside buffers */
#if 1
LZ5_wildCopy16(op, ctx->literalsPtr, cpy);
LIZ_wildCopy16(op, ctx->literalsPtr, cpy);
op = cpy;
ctx->literalsPtr += length;
#else
LZ5_copy8(op, ctx->literalsPtr);
LZ5_copy8(op+8, ctx->literalsPtr+8);
LIZ_copy8(op, ctx->literalsPtr);
LIZ_copy8(op+8, ctx->literalsPtr+8);
if (length > 16)
LZ5_wildCopy16(op + 16, ctx->literalsPtr + 16, cpy);
LIZ_wildCopy16(op + 16, ctx->literalsPtr + 16, cpy);
op = cpy;
ctx->literalsPtr += length;
#endif
@@ -87,12 +87,12 @@ FORCE_INLINE int LZ5_decompress_LZ4(
ctx->literalsPtr += 2;
match = op - offset;
if ((checkOffset) && (unlikely(match < lowLimit))) { LZ5_LOG_DECOMPRESS_LZ4("lowPrefix[%p]-dictSize[%d]=lowLimit[%p] match[%p]=op[%p]-offset[%d]\n", lowPrefix, (int)dictSize, lowLimit, match, op, (int)offset); goto _output_error; } /* Error : offset outside buffers */
if ((checkOffset) && (unlikely(match < lowLimit))) { LIZ_LOG_DECOMPRESS_LZ4("lowPrefix[%p]-dictSize[%d]=lowLimit[%p] match[%p]=op[%p]-offset[%d]\n", lowPrefix, (int)dictSize, lowLimit, match, op, (int)offset); goto _output_error; } /* Error : offset outside buffers */
/* get matchlength */
length = token >> RUN_BITS_LZ4;
if (length == ML_MASK_LZ4) {
if (unlikely(ctx->literalsPtr > iend - 5)) { LZ5_LOG_DECOMPRESS_LZ4("4"); goto _output_error; }
if (unlikely(ctx->literalsPtr > iend - 5)) { LIZ_LOG_DECOMPRESS_LZ4("4"); goto _output_error; }
length = *ctx->literalsPtr;
if unlikely(length >= 254) {
if (length == 254) {
@@ -105,13 +105,13 @@ FORCE_INLINE int LZ5_decompress_LZ4(
}
length += ML_MASK_LZ4;
ctx->literalsPtr++;
if (unlikely((size_t)(op+length)<(size_t)(op))) { LZ5_LOG_DECOMPRESS_LZ4("5"); goto _output_error; } /* overflow detection */
if (unlikely((size_t)(op+length)<(size_t)(op))) { LIZ_LOG_DECOMPRESS_LZ4("5"); goto _output_error; } /* overflow detection */
}
length += MINMATCH;
/* check external dictionary */
if ((dict==usingExtDict) && (match < lowPrefix)) {
if (unlikely(op + length > oend - WILDCOPYLENGTH)) { LZ5_LOG_DECOMPRESS_LZ4("6"); goto _output_error; } /* doesn't respect parsing restriction */
if (unlikely(op + length > oend - WILDCOPYLENGTH)) { LIZ_LOG_DECOMPRESS_LZ4("6"); goto _output_error; } /* doesn't respect parsing restriction */
if (length <= (intptr_t)(lowPrefix - match)) {
/* match can be copied as a single segment from external dictionary */
@@ -136,11 +136,11 @@ FORCE_INLINE int LZ5_decompress_LZ4(
/* copy match within block */
cpy = op + length;
if (unlikely(cpy > oend - WILDCOPYLENGTH)) { LZ5_LOG_DECOMPRESS_LZ4("1match=%p lowLimit=%p\n", match, lowLimit); goto _output_error; } /* Error : offset outside buffers */
LZ5_copy8(op, match);
LZ5_copy8(op+8, match+8);
if (unlikely(cpy > oend - WILDCOPYLENGTH)) { LIZ_LOG_DECOMPRESS_LZ4("1match=%p lowLimit=%p\n", match, lowLimit); goto _output_error; } /* Error : offset outside buffers */
LIZ_copy8(op, match);
LIZ_copy8(op+8, match+8);
if (length > 16)
LZ5_wildCopy16(op + 16, match + 16, cpy);
LIZ_wildCopy16(op + 16, match + 16, cpy);
op = cpy;
if ((partialDecoding) && (op >= oexit)) return (int) (op-dest);
}
@@ -148,7 +148,7 @@ FORCE_INLINE int LZ5_decompress_LZ4(
/* last literals */
length = ctx->literalsEnd - ctx->literalsPtr;
cpy = op + length;
if ((ctx->literalsPtr+length != iend) || (cpy > oend)) { LZ5_LOG_DECOMPRESS_LZ4("9"); goto _output_error; } /* Error : input must be consumed */
if ((ctx->literalsPtr+length != iend) || (cpy > oend)) { LIZ_LOG_DECOMPRESS_LZ4("9"); goto _output_error; } /* Error : input must be consumed */
memcpy(op, ctx->literalsPtr, length);
ctx->literalsPtr += length;
op += length;
@@ -158,7 +158,7 @@ FORCE_INLINE int LZ5_decompress_LZ4(
/* Overflow error detected */
_output_error:
LZ5_LOG_DECOMPRESS_LZ4("_output_error=%d ctx->flagsPtr=%p blockBase=%p\n", (int) (-(ctx->flagsPtr-blockBase))-1, ctx->flagsPtr, blockBase);
LZ5_LOG_DECOMPRESS_LZ4("cpy=%p oend=%p ctx->literalsPtr+length[%d]=%p iend=%p\n", cpy, oend, (int)length, ctx->literalsPtr+length, iend);
LIZ_LOG_DECOMPRESS_LZ4("_output_error=%d ctx->flagsPtr=%p blockBase=%p\n", (int) (-(ctx->flagsPtr-blockBase))-1, ctx->flagsPtr, blockBase);
LIZ_LOG_DECOMPRESS_LZ4("cpy=%p oend=%p ctx->literalsPtr+length[%d]=%p iend=%p\n", cpy, oend, (int)length, ctx->literalsPtr+length, iend);
return (int) (-(ctx->flagsPtr-blockBase))-1;
}


@@ -5,14 +5,14 @@
flag 0-30 - 24-bit offset, 31 match lengths (16-46), no literal length
*/
/*! LZ5_decompress_LZ5v2() :
/*! LIZ_decompress_LZ5v2() :
 * This generic decompression function covers all use cases.
 * It shall be instantiated several times, using different sets of directives.
 * Note that it is important that this generic function is really inlined,
 * in order to remove useless branches during compilation optimization.
*/
FORCE_INLINE int LZ5_decompress_LZ5v2(
LZ5_dstream_t* ctx,
FORCE_INLINE int LIZ_decompress_LZ5v2(
LIZ_dstream_t* ctx,
BYTE* const dest,
int outputSize, /* this value is the max size of Output Buffer. */
@@ -37,12 +37,12 @@ FORCE_INLINE int LZ5_decompress_LZ5v2(
const BYTE* const lowLimit = lowPrefix - dictSize;
const BYTE* const dictEnd = (const BYTE*)dictStart + dictSize;
const int checkOffset = (dictSize < (int)(LZ5_DICT_SIZE));
const int checkOffset = (dictSize < (int)(LIZ_DICT_SIZE));
intptr_t last_off = ctx->last_off;
intptr_t length = 0;
(void)compressionLevel;
(void)LZ5_wildCopy;
(void)LIZ_wildCopy;
/* Special cases */
if (unlikely(outputSize==0)) return ((inputSize==1) && (*ctx->flagsPtr==0)) ? 0 : -1; /* Empty output buffer */
@@ -58,11 +58,11 @@ FORCE_INLINE int LZ5_decompress_LZ5v2(
/* get literal length */
token = *ctx->flagsPtr++;
// LZ5_LOG_DECOMPRESS_LZ5v2("token : %u\n", (U32)token);
// LIZ_LOG_DECOMPRESS_LZ5v2("token : %u\n", (U32)token);
if (token >= 32)
{
if ((length=(token & MAX_SHORT_LITLEN)) == MAX_SHORT_LITLEN) {
if (unlikely(ctx->literalsPtr > iend - 1)) { LZ5_LOG_DECOMPRESS_LZ5v2("1"); goto _output_error; }
if (unlikely(ctx->literalsPtr > iend - 1)) { LIZ_LOG_DECOMPRESS_LZ5v2("1"); goto _output_error; }
length = *ctx->literalsPtr;
if unlikely(length >= 254) {
if (length == 254) {
@@ -75,28 +75,28 @@ FORCE_INLINE int LZ5_decompress_LZ5v2(
}
length += MAX_SHORT_LITLEN;
ctx->literalsPtr++;
if (unlikely((size_t)(op+length)<(size_t)(op))) { LZ5_LOG_DECOMPRESS_LZ5v2("2"); goto _output_error; } /* overflow detection */
if (unlikely((size_t)(ctx->literalsPtr+length)<(size_t)(ctx->literalsPtr))) { LZ5_LOG_DECOMPRESS_LZ5v2("3"); goto _output_error; } /* overflow detection */
if (unlikely((size_t)(op+length)<(size_t)(op))) { LIZ_LOG_DECOMPRESS_LZ5v2("2"); goto _output_error; } /* overflow detection */
if (unlikely((size_t)(ctx->literalsPtr+length)<(size_t)(ctx->literalsPtr))) { LIZ_LOG_DECOMPRESS_LZ5v2("3"); goto _output_error; } /* overflow detection */
}
/* copy literals */
cpy = op + length;
if (unlikely(cpy > oend - WILDCOPYLENGTH || ctx->literalsPtr > iend - WILDCOPYLENGTH)) { LZ5_LOG_DECOMPRESS_LZ5v2("offset outside buffers\n"); goto _output_error; } /* Error : offset outside buffers */
if (unlikely(cpy > oend - WILDCOPYLENGTH || ctx->literalsPtr > iend - WILDCOPYLENGTH)) { LIZ_LOG_DECOMPRESS_LZ5v2("offset outside buffers\n"); goto _output_error; } /* Error : offset outside buffers */
#if 1
LZ5_wildCopy16(op, ctx->literalsPtr, cpy);
LIZ_wildCopy16(op, ctx->literalsPtr, cpy);
op = cpy;
ctx->literalsPtr += length;
#else
LZ5_copy8(op, ctx->literalsPtr);
LZ5_copy8(op+8, ctx->literalsPtr+8);
LIZ_copy8(op, ctx->literalsPtr);
LIZ_copy8(op+8, ctx->literalsPtr+8);
if (length > 16)
LZ5_wildCopy16(op + 16, ctx->literalsPtr + 16, cpy);
LIZ_wildCopy16(op + 16, ctx->literalsPtr + 16, cpy);
op = cpy;
ctx->literalsPtr += length;
#endif
/* get offset */
if (unlikely(ctx->offset16Ptr > ctx->offset16End)) { LZ5_LOG_DECOMPRESS_LZ5v2("(ctx->offset16Ptr > ctx->offset16End\n"); goto _output_error; }
if (unlikely(ctx->offset16Ptr > ctx->offset16End)) { LIZ_LOG_DECOMPRESS_LZ5v2("(ctx->offset16Ptr > ctx->offset16End\n"); goto _output_error; }
#if 1
{ /* branchless */
intptr_t new_off = MEM_readLE16(ctx->offset16Ptr);
@@ -117,7 +117,7 @@ FORCE_INLINE int LZ5_decompress_LZ5v2(
length = (token >> RUN_BITS_LZ5v2) & MAX_SHORT_MATCHLEN;
// printf("length=%d token=%d\n", (int)length, (int)token);
if (length == MAX_SHORT_MATCHLEN) {
if (unlikely(ctx->literalsPtr > iend - 1)) { LZ5_LOG_DECOMPRESS_LZ5v2("6"); goto _output_error; }
if (unlikely(ctx->literalsPtr > iend - 1)) { LIZ_LOG_DECOMPRESS_LZ5v2("6"); goto _output_error; }
length = *ctx->literalsPtr;
if unlikely(length >= 254) {
if (length == 254) {
@@ -130,15 +130,15 @@ FORCE_INLINE int LZ5_decompress_LZ5v2(
}
length += MAX_SHORT_MATCHLEN;
ctx->literalsPtr++;
if (unlikely((size_t)(op+length)<(size_t)(op))) { LZ5_LOG_DECOMPRESS_LZ5v2("7"); goto _output_error; } /* overflow detection */
if (unlikely((size_t)(op+length)<(size_t)(op))) { LIZ_LOG_DECOMPRESS_LZ5v2("7"); goto _output_error; } /* overflow detection */
}
DECOMPLOG_CODEWORDS_LZ5v2("T32+ literal=%u match=%u offset=%d ipos=%d opos=%d\n", (U32)litLength, (U32)length, (int)-last_off, (U32)(ctx->flagsPtr-blockBase), (U32)(op-dest));
}
else
if (token < LZ5_LAST_LONG_OFF)
if (token < LIZ_LAST_LONG_OFF)
{
if (unlikely(ctx->offset24Ptr > ctx->offset24End - 3)) { LZ5_LOG_DECOMPRESS_LZ5v2("8"); goto _output_error; }
if (unlikely(ctx->offset24Ptr > ctx->offset24End - 3)) { LIZ_LOG_DECOMPRESS_LZ5v2("8"); goto _output_error; }
length = token + MM_LONGOFF;
last_off = -(intptr_t)MEM_readLE24(ctx->offset24Ptr);
ctx->offset24Ptr += 3;
@@ -146,7 +146,7 @@ FORCE_INLINE int LZ5_decompress_LZ5v2(
}
else
{
if (unlikely(ctx->literalsPtr > iend - 1)) { LZ5_LOG_DECOMPRESS_LZ5v2("9"); goto _output_error; }
if (unlikely(ctx->literalsPtr > iend - 1)) { LIZ_LOG_DECOMPRESS_LZ5v2("9"); goto _output_error; }
length = *ctx->literalsPtr;
if unlikely(length >= 254) {
if (length == 254) {
@@ -158,20 +158,20 @@ FORCE_INLINE int LZ5_decompress_LZ5v2(
}
}
ctx->literalsPtr++;
length += LZ5_LAST_LONG_OFF + MM_LONGOFF;
length += LIZ_LAST_LONG_OFF + MM_LONGOFF;
if (unlikely(ctx->offset24Ptr > ctx->offset24End - 3)) { LZ5_LOG_DECOMPRESS_LZ5v2("10"); goto _output_error; }
if (unlikely(ctx->offset24Ptr > ctx->offset24End - 3)) { LIZ_LOG_DECOMPRESS_LZ5v2("10"); goto _output_error; }
last_off = -(intptr_t)MEM_readLE24(ctx->offset24Ptr);
ctx->offset24Ptr += 3;
}
match = op + last_off;
if ((checkOffset) && ((unlikely((uintptr_t)(-last_off) > (uintptr_t)op) || (match < lowLimit)))) { LZ5_LOG_DECOMPRESS_LZ5v2("lowPrefix[%p]-dictSize[%d]=lowLimit[%p] match[%p]=op[%p]-last_off[%d]\n", lowPrefix, (int)dictSize, lowLimit, match, op, (int)last_off); goto _output_error; } /* Error : offset outside buffers */
if ((checkOffset) && ((unlikely((uintptr_t)(-last_off) > (uintptr_t)op) || (match < lowLimit)))) { LIZ_LOG_DECOMPRESS_LZ5v2("lowPrefix[%p]-dictSize[%d]=lowLimit[%p] match[%p]=op[%p]-last_off[%d]\n", lowPrefix, (int)dictSize, lowLimit, match, op, (int)last_off); goto _output_error; } /* Error : offset outside buffers */
/* check external dictionary */
if ((dict==usingExtDict) && (match < lowPrefix)) {
if (unlikely(op + length > oend - WILDCOPYLENGTH)) { LZ5_LOG_DECOMPRESS_LZ5v2("12"); goto _output_error; } /* doesn't respect parsing restriction */
if (unlikely(op + length > oend - WILDCOPYLENGTH)) { LIZ_LOG_DECOMPRESS_LZ5v2("12"); goto _output_error; } /* doesn't respect parsing restriction */
if (length <= (intptr_t)(lowPrefix - match)) {
/* match can be copied as a single segment from external dictionary */
@@ -196,18 +196,18 @@ FORCE_INLINE int LZ5_decompress_LZ5v2(
/* copy match within block */
cpy = op + length;
if (unlikely(cpy > oend - WILDCOPYLENGTH)) { LZ5_LOG_DECOMPRESS_LZ5v2("13match=%p lowLimit=%p\n", match, lowLimit); goto _output_error; } /* Error : offset outside buffers */
LZ5_copy8(op, match);
LZ5_copy8(op+8, match+8);
if (unlikely(cpy > oend - WILDCOPYLENGTH)) { LIZ_LOG_DECOMPRESS_LZ5v2("13match=%p lowLimit=%p\n", match, lowLimit); goto _output_error; } /* Error : offset outside buffers */
LIZ_copy8(op, match);
LIZ_copy8(op+8, match+8);
if (length > 16)
LZ5_wildCopy16(op + 16, match + 16, cpy);
LIZ_wildCopy16(op + 16, match + 16, cpy);
op = cpy;
}
/* last literals */
length = ctx->literalsEnd - ctx->literalsPtr;
cpy = op + length;
if ((ctx->literalsPtr+length != iend) || (cpy > oend)) { LZ5_LOG_DECOMPRESS_LZ5v2("14"); goto _output_error; } /* Error : input must be consumed */
if ((ctx->literalsPtr+length != iend) || (cpy > oend)) { LIZ_LOG_DECOMPRESS_LZ5v2("14"); goto _output_error; } /* Error : input must be consumed */
memcpy(op, ctx->literalsPtr, length);
ctx->literalsPtr += length;
op += length;
@@ -218,7 +218,7 @@ FORCE_INLINE int LZ5_decompress_LZ5v2(
/* Overflow error detected */
_output_error:
LZ5_LOG_DECOMPRESS_LZ5v2("_output_error=%d ctx->flagsPtr=%p blockBase=%p\n", (int) (-(ctx->flagsPtr-blockBase))-1, ctx->flagsPtr, blockBase);
LZ5_LOG_DECOMPRESS_LZ5v2("cpy=%p oend=%p ctx->literalsPtr+length[%d]=%p iend=%p\n", cpy, oend, (int)length, ctx->literalsPtr+length, iend);
LIZ_LOG_DECOMPRESS_LZ5v2("_output_error=%d ctx->flagsPtr=%p blockBase=%p\n", (int) (-(ctx->flagsPtr-blockBase))-1, ctx->flagsPtr, blockBase);
LIZ_LOG_DECOMPRESS_LZ5v2("cpy=%p oend=%p ctx->literalsPtr+length[%d]=%p iend=%p\n", cpy, oend, (int)length, ctx->literalsPtr+length, iend);
return (int) (-(ctx->flagsPtr-blockBase))-1;
}


@@ -37,34 +37,34 @@
***************************************/
#include "mem.h"
#include "error_private.h" /* ERR_*, ERROR */
#define FSE_STATIC_LINKING_ONLY /* FSE_MIN_TABLELOG */
#define LIZFSE_STATIC_LINKING_ONLY /* LIZFSE_MIN_TABLELOG */
#include "fse.h"
#define HUF_STATIC_LINKING_ONLY /* HUF_TABLELOG_ABSOLUTEMAX */
#define LIZHUF_STATIC_LINKING_ONLY /* LIZHUF_TABLELOG_ABSOLUTEMAX */
#include "huf.h"
/*-****************************************
* FSE Error Management
******************************************/
unsigned FSE_isError(size_t code) { return ERR_isError(code); }
unsigned LIZFSE_isError(size_t code) { return ERR_isError(code); }
const char* FSE_getErrorName(size_t code) { return ERR_getErrorName(code); }
const char* LIZFSE_getErrorName(size_t code) { return ERR_getErrorName(code); }
/* **************************************************************
* HUF Error Management
****************************************************************/
unsigned HUF_isError(size_t code) { return ERR_isError(code); }
unsigned LIZHUF_isError(size_t code) { return ERR_isError(code); }
const char* HUF_getErrorName(size_t code) { return ERR_getErrorName(code); }
const char* LIZHUF_getErrorName(size_t code) { return ERR_getErrorName(code); }
/*-**************************************************************
* FSE NCount encoding-decoding
****************************************************************/
static short FSE_abs(short a) { return (short)(a<0 ? -a : a); }
static short LIZFSE_abs(short a) { return (short)(a<0 ? -a : a); }
size_t FSE_readNCount (short* normalizedCounter, unsigned* maxSVPtr, unsigned* tableLogPtr,
size_t LIZFSE_readNCount (short* normalizedCounter, unsigned* maxSVPtr, unsigned* tableLogPtr,
const void* headerBuffer, size_t hbSize)
{
const BYTE* const istart = (const BYTE*) headerBuffer;
@@ -80,8 +80,8 @@ size_t FSE_readNCount (short* normalizedCounter, unsigned* maxSVPtr, unsigned* t
if (hbSize < 4) return ERROR(srcSize_wrong);
bitStream = MEM_readLE32(ip);
nbBits = (bitStream & 0xF) + FSE_MIN_TABLELOG; /* extract tableLog */
if (nbBits > FSE_TABLELOG_ABSOLUTE_MAX) return ERROR(tableLog_tooLarge);
nbBits = (bitStream & 0xF) + LIZFSE_MIN_TABLELOG; /* extract tableLog */
if (nbBits > LIZFSE_TABLELOG_ABSOLUTE_MAX) return ERROR(tableLog_tooLarge);
bitStream >>= 4;
bitCount = 4;
*tableLogPtr = nbBits;
@@ -130,7 +130,7 @@ size_t FSE_readNCount (short* normalizedCounter, unsigned* maxSVPtr, unsigned* t
}
count--; /* extra accuracy */
remaining -= FSE_abs(count);
remaining -= LIZFSE_abs(count);
normalizedCounter[charnum++] = count;
previous0 = !count;
while (remaining < threshold) {
@@ -156,13 +156,13 @@ size_t FSE_readNCount (short* normalizedCounter, unsigned* maxSVPtr, unsigned* t
}
/*! HUF_readStats() :
Read compact Huffman tree, saved by HUF_writeCTable().
/*! LIZHUF_readStats() :
Read compact Huffman tree, saved by LIZHUF_writeCTable().
`huffWeight` is destination buffer.
@return : size read from `src` , or an error Code .
Note : Needed by HUF_readCTable() and HUF_readDTableX?() .
Note : Needed by LIZHUF_readCTable() and LIZHUF_readDTableX?() .
*/
size_t HUF_readStats(BYTE* huffWeight, size_t hwSize, U32* rankStats,
size_t LIZHUF_readStats(BYTE* huffWeight, size_t hwSize, U32* rankStats,
U32* nbSymbolsPtr, U32* tableLogPtr,
const void* src, size_t srcSize)
{
@@ -186,22 +186,22 @@ size_t HUF_readStats(BYTE* huffWeight, size_t hwSize, U32* rankStats,
} } }
else { /* header compressed with FSE (normal case) */
if (iSize+1 > srcSize) return ERROR(srcSize_wrong);
oSize = FSE_decompress(huffWeight, hwSize-1, ip+1, iSize); /* max (hwSize-1) values decoded, as last one is implied */
if (FSE_isError(oSize)) return oSize;
oSize = LIZFSE_decompress(huffWeight, hwSize-1, ip+1, iSize); /* max (hwSize-1) values decoded, as last one is implied */
if (LIZFSE_isError(oSize)) return oSize;
}
/* collect weight stats */
memset(rankStats, 0, (HUF_TABLELOG_ABSOLUTEMAX + 1) * sizeof(U32));
memset(rankStats, 0, (LIZHUF_TABLELOG_ABSOLUTEMAX + 1) * sizeof(U32));
weightTotal = 0;
{ U32 n; for (n=0; n<oSize; n++) {
if (huffWeight[n] >= HUF_TABLELOG_ABSOLUTEMAX) return ERROR(corruption_detected);
if (huffWeight[n] >= LIZHUF_TABLELOG_ABSOLUTEMAX) return ERROR(corruption_detected);
rankStats[huffWeight[n]]++;
weightTotal += (1 << huffWeight[n]) >> 1;
} }
/* get last non-null symbol weight (implied, total must be 2^n) */
{ U32 const tableLog = BIT_highbit32(weightTotal) + 1;
if (tableLog > HUF_TABLELOG_ABSOLUTEMAX) return ERROR(corruption_detected);
if (tableLog > LIZHUF_TABLELOG_ABSOLUTEMAX) return ERROR(corruption_detected);
*tableLogPtr = tableLog;
/* determine last weight */
{ U32 const total = 1 << tableLog;

View File

@@ -60,20 +60,20 @@
#include <string.h> /* memcpy, memset */
#include <stdio.h> /* printf (debug) */
#include "bitstream.h"
#define FSE_STATIC_LINKING_ONLY
#define LIZFSE_STATIC_LINKING_ONLY
#include "fse.h"
/* **************************************************************
* Error Management
****************************************************************/
#define FSE_STATIC_ASSERT(c) { enum { FSE_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */
#define LIZFSE_STATIC_ASSERT(c) { enum { LIZFSE_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */
/* **************************************************************
* Complex types
****************************************************************/
typedef U32 CTable_max_t[FSE_CTABLE_SIZE_U32(FSE_MAX_TABLELOG, FSE_MAX_SYMBOL_VALUE)];
typedef U32 CTable_max_t[LIZFSE_CTABLE_SIZE_U32(LIZFSE_MAX_TABLELOG, LIZFSE_MAX_SYMBOL_VALUE)];
/* **************************************************************
@@ -86,32 +86,32 @@ typedef U32 CTable_max_t[FSE_CTABLE_SIZE_U32(FSE_MAX_TABLELOG, FSE_MAX_SYMBOL_VA
*/
/* safety checks */
#ifndef FSE_FUNCTION_EXTENSION
# error "FSE_FUNCTION_EXTENSION must be defined"
#ifndef LIZFSE_FUNCTION_EXTENSION
# error "LIZFSE_FUNCTION_EXTENSION must be defined"
#endif
#ifndef FSE_FUNCTION_TYPE
# error "FSE_FUNCTION_TYPE must be defined"
#ifndef LIZFSE_FUNCTION_TYPE
# error "LIZFSE_FUNCTION_TYPE must be defined"
#endif
/* Function names */
#define FSE_CAT(X,Y) X##Y
#define FSE_FUNCTION_NAME(X,Y) FSE_CAT(X,Y)
#define FSE_TYPE_NAME(X,Y) FSE_CAT(X,Y)
#define LIZFSE_CAT(X,Y) X##Y
#define LIZFSE_FUNCTION_NAME(X,Y) LIZFSE_CAT(X,Y)
#define LIZFSE_TYPE_NAME(X,Y) LIZFSE_CAT(X,Y)
/* Function templates */
size_t FSE_buildCTable(FSE_CTable* ct, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog)
size_t LIZFSE_buildCTable(LIZFSE_CTable* ct, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog)
{
U32 const tableSize = 1 << tableLog;
U32 const tableMask = tableSize - 1;
void* const ptr = ct;
U16* const tableU16 = ( (U16*) ptr) + 2;
void* const FSCT = ((U32*)ptr) + 1 /* header */ + (tableLog ? tableSize>>1 : 1) ;
FSE_symbolCompressionTransform* const symbolTT = (FSE_symbolCompressionTransform*) (FSCT);
U32 const step = FSE_TABLESTEP(tableSize);
U32 cumul[FSE_MAX_SYMBOL_VALUE+2];
LIZFSE_symbolCompressionTransform* const symbolTT = (LIZFSE_symbolCompressionTransform*) (FSCT);
U32 const step = LIZFSE_TABLESTEP(tableSize);
U32 cumul[LIZFSE_MAX_SYMBOL_VALUE+2];
FSE_FUNCTION_TYPE tableSymbol[FSE_MAX_TABLESIZE]; /* memset() is not necessary, even if static analyzer complain about it */
LIZFSE_FUNCTION_TYPE tableSymbol[LIZFSE_MAX_TABLESIZE]; /* memset() is not necessary, even if static analyzer complain about it */
U32 highThreshold = tableSize-1;
/* CTable header */
@@ -127,7 +127,7 @@ size_t FSE_buildCTable(FSE_CTable* ct, const short* normalizedCounter, unsigned
for (u=1; u<=maxSymbolValue+1; u++) {
if (normalizedCounter[u-1]==-1) { /* Low proba symbol */
cumul[u] = cumul[u-1] + 1;
tableSymbol[highThreshold--] = (FSE_FUNCTION_TYPE)(u-1);
tableSymbol[highThreshold--] = (LIZFSE_FUNCTION_TYPE)(u-1);
} else {
cumul[u] = cumul[u-1] + normalizedCounter[u-1];
} }
@@ -140,7 +140,7 @@ size_t FSE_buildCTable(FSE_CTable* ct, const short* normalizedCounter, unsigned
for (symbol=0; symbol<=maxSymbolValue; symbol++) {
int nbOccurences;
for (nbOccurences=0; nbOccurences<normalizedCounter[symbol]; nbOccurences++) {
tableSymbol[position] = (FSE_FUNCTION_TYPE)symbol;
tableSymbol[position] = (LIZFSE_FUNCTION_TYPE)symbol;
position = (position + step) & tableMask;
while (position > highThreshold) position = (position + step) & tableMask; /* Low proba area */
} }
@@ -150,7 +150,7 @@ size_t FSE_buildCTable(FSE_CTable* ct, const short* normalizedCounter, unsigned
/* Build table */
{ U32 u; for (u=0; u<tableSize; u++) {
FSE_FUNCTION_TYPE s = tableSymbol[u]; /* note : static analyzer may not understand tableSymbol is properly initialized */
LIZFSE_FUNCTION_TYPE s = tableSymbol[u]; /* note : static analyzer may not understand tableSymbol is properly initialized */
tableU16[cumul[s]++] = (U16) (tableSize+u); /* TableU16 : sorted by symbol order; gives next state value */
} }
@@ -182,20 +182,20 @@ size_t FSE_buildCTable(FSE_CTable* ct, const short* normalizedCounter, unsigned
#ifndef FSE_COMMONDEFS_ONLY
#ifndef LIZFSE_COMMONDEFS_ONLY
/*-**************************************************************
* FSE NCount encoding-decoding
****************************************************************/
size_t FSE_NCountWriteBound(unsigned maxSymbolValue, unsigned tableLog)
size_t LIZFSE_NCountWriteBound(unsigned maxSymbolValue, unsigned tableLog)
{
size_t maxHeaderSize = (((maxSymbolValue+1) * tableLog) >> 3) + 3;
return maxSymbolValue ? maxHeaderSize : FSE_NCOUNTBOUND; /* maxSymbolValue==0 ? use default */
return maxSymbolValue ? maxHeaderSize : LIZFSE_NCOUNTBOUND; /* maxSymbolValue==0 ? use default */
}
static short FSE_abs(short a) { return (short)(a<0 ? -a : a); }
static short LIZFSE_abs(short a) { return (short)(a<0 ? -a : a); }
static size_t FSE_writeNCount_generic (void* header, size_t headerBufferSize,
static size_t LIZFSE_writeNCount_generic (void* header, size_t headerBufferSize,
const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog,
unsigned writeIsSafe)
{
@@ -214,7 +214,7 @@ static size_t FSE_writeNCount_generic (void* header, size_t headerBufferSize,
bitStream = 0;
bitCount = 0;
/* Table Size */
bitStream += (tableLog-FSE_MIN_TABLELOG) << bitCount;
bitStream += (tableLog-LIZFSE_MIN_TABLELOG) << bitCount;
bitCount += 4;
/* Init */
@@ -252,7 +252,7 @@ static size_t FSE_writeNCount_generic (void* header, size_t headerBufferSize,
} }
{ short count = normalizedCounter[charnum++];
const short max = (short)((2*threshold-1)-remaining);
remaining -= FSE_abs(count);
remaining -= LIZFSE_abs(count);
if (remaining<1) return ERROR(GENERIC);
count++; /* +1 for extra accuracy */
if (count>=threshold) count += max; /* [0..max[ [max..threshold[ (...) [threshold+max 2*threshold[ */
@@ -283,15 +283,15 @@ static size_t FSE_writeNCount_generic (void* header, size_t headerBufferSize,
}
size_t FSE_writeNCount (void* buffer, size_t bufferSize, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog)
size_t LIZFSE_writeNCount (void* buffer, size_t bufferSize, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog)
{
if (tableLog > FSE_MAX_TABLELOG) return ERROR(GENERIC); /* Unsupported */
if (tableLog < FSE_MIN_TABLELOG) return ERROR(GENERIC); /* Unsupported */
if (tableLog > LIZFSE_MAX_TABLELOG) return ERROR(GENERIC); /* Unsupported */
if (tableLog < LIZFSE_MIN_TABLELOG) return ERROR(GENERIC); /* Unsupported */
if (bufferSize < FSE_NCountWriteBound(maxSymbolValue, tableLog))
return FSE_writeNCount_generic(buffer, bufferSize, normalizedCounter, maxSymbolValue, tableLog, 0);
if (bufferSize < LIZFSE_NCountWriteBound(maxSymbolValue, tableLog))
return LIZFSE_writeNCount_generic(buffer, bufferSize, normalizedCounter, maxSymbolValue, tableLog, 0);
return FSE_writeNCount_generic(buffer, bufferSize, normalizedCounter, maxSymbolValue, tableLog, 1);
return LIZFSE_writeNCount_generic(buffer, bufferSize, normalizedCounter, maxSymbolValue, tableLog, 1);
}
@@ -299,14 +299,14 @@ size_t FSE_writeNCount (void* buffer, size_t bufferSize, const short* normalized
/*-**************************************************************
* Counting histogram
****************************************************************/
/*! FSE_count_simple
/*! LIZFSE_count_simple
This function just counts byte values within `src`,
and stores the histogram into table `count`.
This function is unsafe : it doesn't check that all values within `src` can fit into `count`.
For this reason, prefer using a table `count` with 256 elements.
@return : count of most numerous element
*/
static size_t FSE_count_simple(unsigned* count, unsigned* maxSymbolValuePtr,
static size_t LIZFSE_count_simple(unsigned* count, unsigned* maxSymbolValuePtr,
const void* src, size_t srcSize)
{
const BYTE* ip = (const BYTE*)src;
@@ -329,7 +329,7 @@ static size_t FSE_count_simple(unsigned* count, unsigned* maxSymbolValuePtr,
}
static size_t FSE_count_parallel(unsigned* count, unsigned* maxSymbolValuePtr,
static size_t LIZFSE_count_parallel(unsigned* count, unsigned* maxSymbolValuePtr,
const void* source, size_t sourceSize,
unsigned checkMax)
{
@@ -399,20 +399,20 @@ static size_t FSE_count_parallel(unsigned* count, unsigned* maxSymbolValuePtr,
}
/* fast variant (unsafe : won't check if src contains values beyond count[] limit) */
size_t FSE_countFast(unsigned* count, unsigned* maxSymbolValuePtr,
size_t LIZFSE_countFast(unsigned* count, unsigned* maxSymbolValuePtr,
const void* source, size_t sourceSize)
{
if (sourceSize < 1500) return FSE_count_simple(count, maxSymbolValuePtr, source, sourceSize);
return FSE_count_parallel(count, maxSymbolValuePtr, source, sourceSize, 0);
if (sourceSize < 1500) return LIZFSE_count_simple(count, maxSymbolValuePtr, source, sourceSize);
return LIZFSE_count_parallel(count, maxSymbolValuePtr, source, sourceSize, 0);
}
size_t FSE_count(unsigned* count, unsigned* maxSymbolValuePtr,
size_t LIZFSE_count(unsigned* count, unsigned* maxSymbolValuePtr,
const void* source, size_t sourceSize)
{
if (*maxSymbolValuePtr <255)
return FSE_count_parallel(count, maxSymbolValuePtr, source, sourceSize, 1);
return LIZFSE_count_parallel(count, maxSymbolValuePtr, source, sourceSize, 1);
*maxSymbolValuePtr = 255;
return FSE_countFast(count, maxSymbolValuePtr, source, sourceSize);
return LIZFSE_countFast(count, maxSymbolValuePtr, source, sourceSize);
}
@@ -420,36 +420,36 @@ size_t FSE_count(unsigned* count, unsigned* maxSymbolValuePtr,
/*-**************************************************************
* FSE Compression Code
****************************************************************/
/*! FSE_sizeof_CTable() :
FSE_CTable is a variable size structure which contains :
/*! LIZFSE_sizeof_CTable() :
LIZFSE_CTable is a variable size structure which contains :
`U16 tableLog;`
`U16 maxSymbolValue;`
`U16 nextStateNumber[1 << tableLog];` // This size is variable
`FSE_symbolCompressionTransform symbolTT[maxSymbolValue+1];` // This size is variable
`LIZFSE_symbolCompressionTransform symbolTT[maxSymbolValue+1];` // This size is variable
Allocation is manual (C standard does not support variable-size structures).
*/
size_t FSE_sizeof_CTable (unsigned maxSymbolValue, unsigned tableLog)
size_t LIZFSE_sizeof_CTable (unsigned maxSymbolValue, unsigned tableLog)
{
size_t size;
FSE_STATIC_ASSERT((size_t)FSE_CTABLE_SIZE_U32(FSE_MAX_TABLELOG, FSE_MAX_SYMBOL_VALUE)*4 >= sizeof(CTable_max_t)); /* A compilation error here means FSE_CTABLE_SIZE_U32 is not large enough */
if (tableLog > FSE_MAX_TABLELOG) return ERROR(GENERIC);
size = FSE_CTABLE_SIZE_U32 (tableLog, maxSymbolValue) * sizeof(U32);
LIZFSE_STATIC_ASSERT((size_t)LIZFSE_CTABLE_SIZE_U32(LIZFSE_MAX_TABLELOG, LIZFSE_MAX_SYMBOL_VALUE)*4 >= sizeof(CTable_max_t)); /* A compilation error here means LIZFSE_CTABLE_SIZE_U32 is not large enough */
if (tableLog > LIZFSE_MAX_TABLELOG) return ERROR(GENERIC);
size = LIZFSE_CTABLE_SIZE_U32 (tableLog, maxSymbolValue) * sizeof(U32);
return size;
}
FSE_CTable* FSE_createCTable (unsigned maxSymbolValue, unsigned tableLog)
LIZFSE_CTable* LIZFSE_createCTable (unsigned maxSymbolValue, unsigned tableLog)
{
size_t size;
if (tableLog > FSE_TABLELOG_ABSOLUTE_MAX) tableLog = FSE_TABLELOG_ABSOLUTE_MAX;
size = FSE_CTABLE_SIZE_U32 (tableLog, maxSymbolValue) * sizeof(U32);
return (FSE_CTable*)malloc(size);
if (tableLog > LIZFSE_TABLELOG_ABSOLUTE_MAX) tableLog = LIZFSE_TABLELOG_ABSOLUTE_MAX;
size = LIZFSE_CTABLE_SIZE_U32 (tableLog, maxSymbolValue) * sizeof(U32);
return (LIZFSE_CTable*)malloc(size);
}
void FSE_freeCTable (FSE_CTable* ct) { free(ct); }
void LIZFSE_freeCTable (LIZFSE_CTable* ct) { free(ct); }
/* provides the minimum logSize to safely represent a distribution */
static unsigned FSE_minTableLog(size_t srcSize, unsigned maxSymbolValue)
static unsigned LIZFSE_minTableLog(size_t srcSize, unsigned maxSymbolValue)
{
U32 minBitsSrc = BIT_highbit32((U32)(srcSize - 1)) + 1;
U32 minBitsSymbols = BIT_highbit32(maxSymbolValue) + 2;
@@ -457,29 +457,29 @@ static unsigned FSE_minTableLog(size_t srcSize, unsigned maxSymbolValue)
return minBits;
}
unsigned FSE_optimalTableLog_internal(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue, unsigned minus)
unsigned LIZFSE_optimalTableLog_internal(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue, unsigned minus)
{
U32 maxBitsSrc = BIT_highbit32((U32)(srcSize - 1)) - minus;
U32 tableLog = maxTableLog;
U32 minBits = FSE_minTableLog(srcSize, maxSymbolValue);
if (tableLog==0) tableLog = FSE_DEFAULT_TABLELOG;
U32 minBits = LIZFSE_minTableLog(srcSize, maxSymbolValue);
if (tableLog==0) tableLog = LIZFSE_DEFAULT_TABLELOG;
if (maxBitsSrc < tableLog) tableLog = maxBitsSrc; /* Accuracy can be reduced */
if (minBits > tableLog) tableLog = minBits; /* Need a minimum to safely represent all symbol values */
if (tableLog < FSE_MIN_TABLELOG) tableLog = FSE_MIN_TABLELOG;
if (tableLog > FSE_MAX_TABLELOG) tableLog = FSE_MAX_TABLELOG;
if (tableLog < LIZFSE_MIN_TABLELOG) tableLog = LIZFSE_MIN_TABLELOG;
if (tableLog > LIZFSE_MAX_TABLELOG) tableLog = LIZFSE_MAX_TABLELOG;
return tableLog;
}
unsigned FSE_optimalTableLog(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue)
unsigned LIZFSE_optimalTableLog(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue)
{
return FSE_optimalTableLog_internal(maxTableLog, srcSize, maxSymbolValue, 2);
return LIZFSE_optimalTableLog_internal(maxTableLog, srcSize, maxSymbolValue, 2);
}
/* Secondary normalization method.
To be used when primary method fails. */
static size_t FSE_normalizeM2(short* norm, U32 tableLog, const unsigned* count, size_t total, U32 maxSymbolValue)
static size_t LIZFSE_normalizeM2(short* norm, U32 tableLog, const unsigned* count, size_t total, U32 maxSymbolValue)
{
U32 s;
U32 distributed = 0;
@@ -555,15 +555,15 @@ static size_t FSE_normalizeM2(short* norm, U32 tableLog, const unsigned* count,
}
size_t FSE_normalizeCount (short* normalizedCounter, unsigned tableLog,
size_t LIZFSE_normalizeCount (short* normalizedCounter, unsigned tableLog,
const unsigned* count, size_t total,
unsigned maxSymbolValue)
{
/* Sanity checks */
if (tableLog==0) tableLog = FSE_DEFAULT_TABLELOG;
if (tableLog < FSE_MIN_TABLELOG) return ERROR(GENERIC); /* Unsupported size */
if (tableLog > FSE_MAX_TABLELOG) return ERROR(tableLog_tooLarge); /* Unsupported size */
if (tableLog < FSE_minTableLog(total, maxSymbolValue)) return ERROR(GENERIC); /* Too small tableLog, compression potentially impossible */
if (tableLog==0) tableLog = LIZFSE_DEFAULT_TABLELOG;
if (tableLog < LIZFSE_MIN_TABLELOG) return ERROR(GENERIC); /* Unsupported size */
if (tableLog > LIZFSE_MAX_TABLELOG) return ERROR(tableLog_tooLarge); /* Unsupported size */
if (tableLog < LIZFSE_minTableLog(total, maxSymbolValue)) return ERROR(GENERIC); /* Too small tableLog, compression potentially impossible */
{ U32 const rtbTable[] = { 0, 473195, 504333, 520860, 550000, 700000, 750000, 830000 };
@@ -594,8 +594,8 @@ size_t FSE_normalizeCount (short* normalizedCounter, unsigned tableLog,
} }
if (-stillToDistribute >= (normalizedCounter[largest] >> 1)) {
/* corner case, need another normalization method */
size_t errorCode = FSE_normalizeM2(normalizedCounter, tableLog, count, total, maxSymbolValue);
if (FSE_isError(errorCode)) return errorCode;
size_t errorCode = LIZFSE_normalizeM2(normalizedCounter, tableLog, count, total, maxSymbolValue);
if (LIZFSE_isError(errorCode)) return errorCode;
}
else normalizedCounter[largest] += (short)stillToDistribute;
}
@@ -618,8 +618,8 @@ size_t FSE_normalizeCount (short* normalizedCounter, unsigned tableLog,
}
/* fake FSE_CTable, for raw (uncompressed) input */
size_t FSE_buildCTable_raw (FSE_CTable* ct, unsigned nbBits)
/* fake LIZFSE_CTable, for raw (uncompressed) input */
size_t LIZFSE_buildCTable_raw (LIZFSE_CTable* ct, unsigned nbBits)
{
const unsigned tableSize = 1 << nbBits;
const unsigned tableMask = tableSize - 1;
@@ -627,7 +627,7 @@ size_t FSE_buildCTable_raw (FSE_CTable* ct, unsigned nbBits)
void* const ptr = ct;
U16* const tableU16 = ( (U16*) ptr) + 2;
void* const FSCT = ((U32*)ptr) + 1 /* header */ + (tableSize>>1); /* assumption : tableLog >= 1 */
FSE_symbolCompressionTransform* const symbolTT = (FSE_symbolCompressionTransform*) (FSCT);
LIZFSE_symbolCompressionTransform* const symbolTT = (LIZFSE_symbolCompressionTransform*) (FSCT);
unsigned s;
/* Sanity checks */
@@ -653,13 +653,13 @@ size_t FSE_buildCTable_raw (FSE_CTable* ct, unsigned nbBits)
return 0;
}
/* fake FSE_CTable, for rle (100% always same symbol) input */
size_t FSE_buildCTable_rle (FSE_CTable* ct, BYTE symbolValue)
/* fake LIZFSE_CTable, for rle (100% always same symbol) input */
size_t LIZFSE_buildCTable_rle (LIZFSE_CTable* ct, BYTE symbolValue)
{
void* ptr = ct;
U16* tableU16 = ( (U16*) ptr) + 2;
void* FSCTptr = (U32*)ptr + 2;
FSE_symbolCompressionTransform* symbolTT = (FSE_symbolCompressionTransform*) FSCTptr;
LIZFSE_symbolCompressionTransform* symbolTT = (LIZFSE_symbolCompressionTransform*) FSCTptr;
/* header */
tableU16[-2] = (U16) 0;
@@ -677,9 +677,9 @@ size_t FSE_buildCTable_rle (FSE_CTable* ct, BYTE symbolValue)
}
static size_t FSE_compress_usingCTable_generic (void* dst, size_t dstSize,
static size_t LIZFSE_compress_usingCTable_generic (void* dst, size_t dstSize,
const void* src, size_t srcSize,
const FSE_CTable* ct, const unsigned fast)
const LIZFSE_CTable* ct, const unsigned fast)
{
const BYTE* const istart = (const BYTE*) src;
const BYTE* const iend = istart + srcSize;
@@ -687,72 +687,72 @@ static size_t FSE_compress_usingCTable_generic (void* dst, size_t dstSize,
BIT_CStream_t bitC;
FSE_CState_t CState1, CState2;
LIZFSE_CState_t CState1, CState2;
/* init */
if (srcSize <= 2) return 0;
{ size_t const errorCode = BIT_initCStream(&bitC, dst, dstSize);
if (FSE_isError(errorCode)) return 0; }
if (LIZFSE_isError(errorCode)) return 0; }
#define FSE_FLUSHBITS(s) (fast ? BIT_flushBitsFast(s) : BIT_flushBits(s))
#define LIZFSE_FLUSHBITS(s) (fast ? BIT_flushBitsFast(s) : BIT_flushBits(s))
if (srcSize & 1) {
FSE_initCState2(&CState1, ct, *--ip);
FSE_initCState2(&CState2, ct, *--ip);
FSE_encodeSymbol(&bitC, &CState1, *--ip);
FSE_FLUSHBITS(&bitC);
LIZFSE_initCState2(&CState1, ct, *--ip);
LIZFSE_initCState2(&CState2, ct, *--ip);
LIZFSE_encodeSymbol(&bitC, &CState1, *--ip);
LIZFSE_FLUSHBITS(&bitC);
} else {
FSE_initCState2(&CState2, ct, *--ip);
FSE_initCState2(&CState1, ct, *--ip);
LIZFSE_initCState2(&CState2, ct, *--ip);
LIZFSE_initCState2(&CState1, ct, *--ip);
}
/* join to mod 4 */
srcSize -= 2;
if ((sizeof(bitC.bitContainer)*8 > FSE_MAX_TABLELOG*4+7 ) && (srcSize & 2)) { /* test bit 2 */
FSE_encodeSymbol(&bitC, &CState2, *--ip);
FSE_encodeSymbol(&bitC, &CState1, *--ip);
FSE_FLUSHBITS(&bitC);
if ((sizeof(bitC.bitContainer)*8 > LIZFSE_MAX_TABLELOG*4+7 ) && (srcSize & 2)) { /* test bit 2 */
LIZFSE_encodeSymbol(&bitC, &CState2, *--ip);
LIZFSE_encodeSymbol(&bitC, &CState1, *--ip);
LIZFSE_FLUSHBITS(&bitC);
}
/* 2 or 4 encoding per loop */
for ( ; ip>istart ; ) {
FSE_encodeSymbol(&bitC, &CState2, *--ip);
LIZFSE_encodeSymbol(&bitC, &CState2, *--ip);
if (sizeof(bitC.bitContainer)*8 < FSE_MAX_TABLELOG*2+7 ) /* this test must be static */
FSE_FLUSHBITS(&bitC);
if (sizeof(bitC.bitContainer)*8 < LIZFSE_MAX_TABLELOG*2+7 ) /* this test must be static */
LIZFSE_FLUSHBITS(&bitC);
FSE_encodeSymbol(&bitC, &CState1, *--ip);
LIZFSE_encodeSymbol(&bitC, &CState1, *--ip);
if (sizeof(bitC.bitContainer)*8 > FSE_MAX_TABLELOG*4+7 ) { /* this test must be static */
FSE_encodeSymbol(&bitC, &CState2, *--ip);
FSE_encodeSymbol(&bitC, &CState1, *--ip);
if (sizeof(bitC.bitContainer)*8 > LIZFSE_MAX_TABLELOG*4+7 ) { /* this test must be static */
LIZFSE_encodeSymbol(&bitC, &CState2, *--ip);
LIZFSE_encodeSymbol(&bitC, &CState1, *--ip);
}
FSE_FLUSHBITS(&bitC);
LIZFSE_FLUSHBITS(&bitC);
}
FSE_flushCState(&bitC, &CState2);
FSE_flushCState(&bitC, &CState1);
LIZFSE_flushCState(&bitC, &CState2);
LIZFSE_flushCState(&bitC, &CState1);
return BIT_closeCStream(&bitC);
}
size_t FSE_compress_usingCTable (void* dst, size_t dstSize,
size_t LIZFSE_compress_usingCTable (void* dst, size_t dstSize,
const void* src, size_t srcSize,
const FSE_CTable* ct)
const LIZFSE_CTable* ct)
{
const unsigned fast = (dstSize >= FSE_BLOCKBOUND(srcSize));
const unsigned fast = (dstSize >= LIZFSE_BLOCKBOUND(srcSize));
if (fast)
return FSE_compress_usingCTable_generic(dst, dstSize, src, srcSize, ct, 1);
return LIZFSE_compress_usingCTable_generic(dst, dstSize, src, srcSize, ct, 1);
else
return FSE_compress_usingCTable_generic(dst, dstSize, src, srcSize, ct, 0);
return LIZFSE_compress_usingCTable_generic(dst, dstSize, src, srcSize, ct, 0);
}
size_t FSE_compressBound(size_t size) { return FSE_COMPRESSBOUND(size); }
size_t LIZFSE_compressBound(size_t size) { return LIZFSE_COMPRESSBOUND(size); }
size_t FSE_compress2 (void* dst, size_t dstSize, const void* src, size_t srcSize, unsigned maxSymbolValue, unsigned tableLog)
size_t LIZFSE_compress2 (void* dst, size_t dstSize, const void* src, size_t srcSize, unsigned maxSymbolValue, unsigned tableLog)
{
const BYTE* const istart = (const BYTE*) src;
const BYTE* ip = istart;
@@ -761,36 +761,36 @@ size_t FSE_compress2 (void* dst, size_t dstSize, const void* src, size_t srcSize
BYTE* op = ostart;
BYTE* const oend = ostart + dstSize;
U32 count[FSE_MAX_SYMBOL_VALUE+1];
S16 norm[FSE_MAX_SYMBOL_VALUE+1];
U32 count[LIZFSE_MAX_SYMBOL_VALUE+1];
S16 norm[LIZFSE_MAX_SYMBOL_VALUE+1];
CTable_max_t ct;
size_t errorCode;
/* init conditions */
if (srcSize <= 1) return 0; /* Uncompressible */
if (!maxSymbolValue) maxSymbolValue = FSE_MAX_SYMBOL_VALUE;
if (!tableLog) tableLog = FSE_DEFAULT_TABLELOG;
if (!maxSymbolValue) maxSymbolValue = LIZFSE_MAX_SYMBOL_VALUE;
if (!tableLog) tableLog = LIZFSE_DEFAULT_TABLELOG;
/* Scan input and build symbol stats */
errorCode = FSE_count (count, &maxSymbolValue, ip, srcSize);
if (FSE_isError(errorCode)) return errorCode;
errorCode = LIZFSE_count (count, &maxSymbolValue, ip, srcSize);
if (LIZFSE_isError(errorCode)) return errorCode;
if (errorCode == srcSize) return 1;
if (errorCode == 1) return 0; /* each symbol only present once */
if (errorCode < (srcSize >> 7)) return 0; /* Heuristic : not compressible enough */
tableLog = FSE_optimalTableLog(tableLog, srcSize, maxSymbolValue);
errorCode = FSE_normalizeCount (norm, tableLog, count, srcSize, maxSymbolValue);
if (FSE_isError(errorCode)) return errorCode;
tableLog = LIZFSE_optimalTableLog(tableLog, srcSize, maxSymbolValue);
errorCode = LIZFSE_normalizeCount (norm, tableLog, count, srcSize, maxSymbolValue);
if (LIZFSE_isError(errorCode)) return errorCode;
/* Write table description header */
errorCode = FSE_writeNCount (op, oend-op, norm, maxSymbolValue, tableLog);
if (FSE_isError(errorCode)) return errorCode;
errorCode = LIZFSE_writeNCount (op, oend-op, norm, maxSymbolValue, tableLog);
if (LIZFSE_isError(errorCode)) return errorCode;
op += errorCode;
/* Compress */
errorCode = FSE_buildCTable (ct, norm, maxSymbolValue, tableLog);
if (FSE_isError(errorCode)) return errorCode;
errorCode = FSE_compress_usingCTable(op, oend - op, ip, srcSize, ct);
errorCode = LIZFSE_buildCTable (ct, norm, maxSymbolValue, tableLog);
if (LIZFSE_isError(errorCode)) return errorCode;
errorCode = LIZFSE_compress_usingCTable(op, oend - op, ip, srcSize, ct);
if (errorCode == 0) return 0; /* not enough space for compressed data */
op += errorCode;
@@ -801,10 +801,10 @@ size_t FSE_compress2 (void* dst, size_t dstSize, const void* src, size_t srcSize
return op-ostart;
}
size_t FSE_compress (void* dst, size_t dstSize, const void* src, size_t srcSize)
size_t LIZFSE_compress (void* dst, size_t dstSize, const void* src, size_t srcSize)
{
return FSE_compress2(dst, dstSize, src, (U32)srcSize, FSE_MAX_SYMBOL_VALUE, FSE_DEFAULT_TABLELOG);
return LIZFSE_compress2(dst, dstSize, src, (U32)srcSize, LIZFSE_MAX_SYMBOL_VALUE, LIZFSE_DEFAULT_TABLELOG);
}
#endif /* FSE_COMMONDEFS_ONLY */
#endif /* LIZFSE_COMMONDEFS_ONLY */
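For orientation, a hypothetical one-shot compression call through the renamed API (not part of the diff above; header path, allocation strategy and wrapper name are assumptions, the signatures are the ones shown in this file):

#include <stdlib.h>
#include "fse.h"   /* assumed to declare the LIZFSE_* prototypes after this rename */

/* Compress src into a freshly allocated buffer; returns NULL when compression is not worthwhile. */
static void* lizfse_pack(const void* src, size_t srcSize, size_t* cSizePtr)
{
    size_t const bound = LIZFSE_compressBound(srcSize);   /* worst-case destination size */
    void* const dst = malloc(bound);
    if (dst == NULL) return NULL;
    *cSizePtr = LIZFSE_compress(dst, bound, src, srcSize);
    /* 0 means "not compressible", 1 means "src is a single repeated byte" (FSE convention) */
    if (LIZFSE_isError(*cSizePtr) || *cSizePtr <= 1) { free(dst); return NULL; }
    return dst;
}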

View File

@@ -61,24 +61,24 @@
#include <string.h> /* memcpy, memset */
#include <stdio.h> /* printf (debug) */
#include "bitstream.h"
#define FSE_STATIC_LINKING_ONLY
#define LIZFSE_STATIC_LINKING_ONLY
#include "fse.h"
/* **************************************************************
* Error Management
****************************************************************/
#define FSE_isError ERR_isError
#define FSE_STATIC_ASSERT(c) { enum { FSE_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */
#define LIZFSE_isError ERR_isError
#define LIZFSE_STATIC_ASSERT(c) { enum { LIZFSE_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */
/* check and forward error code */
#define CHECK_F(f) { size_t const e = f; if (FSE_isError(e)) return e; }
#define CHECK_F(f) { size_t const e = f; if (LIZFSE_isError(e)) return e; }
/* **************************************************************
* Complex types
****************************************************************/
typedef U32 DTable_max_t[FSE_DTABLE_SIZE_U32(FSE_MAX_TABLELOG)];
typedef U32 DTable_max_t[LIZFSE_DTABLE_SIZE_U32(LIZFSE_MAX_TABLELOG)];
/* **************************************************************
@@ -91,54 +91,54 @@ typedef U32 DTable_max_t[FSE_DTABLE_SIZE_U32(FSE_MAX_TABLELOG)];
*/
/* safety checks */
#ifndef FSE_FUNCTION_EXTENSION
# error "FSE_FUNCTION_EXTENSION must be defined"
#ifndef LIZFSE_FUNCTION_EXTENSION
# error "LIZFSE_FUNCTION_EXTENSION must be defined"
#endif
#ifndef FSE_FUNCTION_TYPE
# error "FSE_FUNCTION_TYPE must be defined"
#ifndef LIZFSE_FUNCTION_TYPE
# error "LIZFSE_FUNCTION_TYPE must be defined"
#endif
/* Function names */
#define FSE_CAT(X,Y) X##Y
#define FSE_FUNCTION_NAME(X,Y) FSE_CAT(X,Y)
#define FSE_TYPE_NAME(X,Y) FSE_CAT(X,Y)
#define LIZFSE_CAT(X,Y) X##Y
#define LIZFSE_FUNCTION_NAME(X,Y) LIZFSE_CAT(X,Y)
#define LIZFSE_TYPE_NAME(X,Y) LIZFSE_CAT(X,Y)
/* Function templates */
FSE_DTable* FSE_createDTable (unsigned tableLog)
LIZFSE_DTable* LIZFSE_createDTable (unsigned tableLog)
{
if (tableLog > FSE_TABLELOG_ABSOLUTE_MAX) tableLog = FSE_TABLELOG_ABSOLUTE_MAX;
return (FSE_DTable*)malloc( FSE_DTABLE_SIZE_U32(tableLog) * sizeof (U32) );
if (tableLog > LIZFSE_TABLELOG_ABSOLUTE_MAX) tableLog = LIZFSE_TABLELOG_ABSOLUTE_MAX;
return (LIZFSE_DTable*)malloc( LIZFSE_DTABLE_SIZE_U32(tableLog) * sizeof (U32) );
}
void FSE_freeDTable (FSE_DTable* dt)
void LIZFSE_freeDTable (LIZFSE_DTable* dt)
{
free(dt);
}
size_t FSE_buildDTable(FSE_DTable* dt, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog)
size_t LIZFSE_buildDTable(LIZFSE_DTable* dt, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog)
{
void* const tdPtr = dt+1; /* because *dt is unsigned, 32-bits aligned on 32-bits */
FSE_DECODE_TYPE* const tableDecode = (FSE_DECODE_TYPE*) (tdPtr);
U16 symbolNext[FSE_MAX_SYMBOL_VALUE+1];
LIZFSE_DECODE_TYPE* const tableDecode = (LIZFSE_DECODE_TYPE*) (tdPtr);
U16 symbolNext[LIZFSE_MAX_SYMBOL_VALUE+1];
U32 const maxSV1 = maxSymbolValue + 1;
U32 const tableSize = 1 << tableLog;
U32 highThreshold = tableSize-1;
/* Sanity Checks */
if (maxSymbolValue > FSE_MAX_SYMBOL_VALUE) return ERROR(maxSymbolValue_tooLarge);
if (tableLog > FSE_MAX_TABLELOG) return ERROR(tableLog_tooLarge);
if (maxSymbolValue > LIZFSE_MAX_SYMBOL_VALUE) return ERROR(maxSymbolValue_tooLarge);
if (tableLog > LIZFSE_MAX_TABLELOG) return ERROR(tableLog_tooLarge);
/* Init, lay down lowprob symbols */
{ FSE_DTableHeader DTableH;
{ LIZFSE_DTableHeader DTableH;
DTableH.tableLog = (U16)tableLog;
DTableH.fastMode = 1;
{ S16 const largeLimit= (S16)(1 << (tableLog-1));
U32 s;
for (s=0; s<maxSV1; s++) {
if (normalizedCounter[s]==-1) {
tableDecode[highThreshold--].symbol = (FSE_FUNCTION_TYPE)s;
tableDecode[highThreshold--].symbol = (LIZFSE_FUNCTION_TYPE)s;
symbolNext[s] = 1;
} else {
if (normalizedCounter[s] >= largeLimit) DTableH.fastMode=0;
@@ -149,12 +149,12 @@ size_t FSE_buildDTable(FSE_DTable* dt, const short* normalizedCounter, unsigned
/* Spread symbols */
{ U32 const tableMask = tableSize-1;
U32 const step = FSE_TABLESTEP(tableSize);
U32 const step = LIZFSE_TABLESTEP(tableSize);
U32 s, position = 0;
for (s=0; s<maxSV1; s++) {
int i;
for (i=0; i<normalizedCounter[s]; i++) {
tableDecode[position].symbol = (FSE_FUNCTION_TYPE)s;
tableDecode[position].symbol = (LIZFSE_FUNCTION_TYPE)s;
position = (position + step) & tableMask;
while (position > highThreshold) position = (position + step) & tableMask; /* lowprob area */
} }
@@ -164,7 +164,7 @@ size_t FSE_buildDTable(FSE_DTable* dt, const short* normalizedCounter, unsigned
/* Build Decoding table */
{ U32 u;
for (u=0; u<tableSize; u++) {
FSE_FUNCTION_TYPE const symbol = (FSE_FUNCTION_TYPE)(tableDecode[u].symbol);
LIZFSE_FUNCTION_TYPE const symbol = (LIZFSE_FUNCTION_TYPE)(tableDecode[u].symbol);
U16 nextState = symbolNext[symbol]++;
tableDecode[u].nbBits = (BYTE) (tableLog - BIT_highbit32 ((U32)nextState) );
tableDecode[u].newState = (U16) ( (nextState << tableDecode[u].nbBits) - tableSize);
@@ -174,17 +174,17 @@ size_t FSE_buildDTable(FSE_DTable* dt, const short* normalizedCounter, unsigned
}
#ifndef FSE_COMMONDEFS_ONLY
#ifndef LIZFSE_COMMONDEFS_ONLY
/*-*******************************************************
* Decompression (Byte symbols)
*********************************************************/
size_t FSE_buildDTable_rle (FSE_DTable* dt, BYTE symbolValue)
size_t LIZFSE_buildDTable_rle (LIZFSE_DTable* dt, BYTE symbolValue)
{
void* ptr = dt;
FSE_DTableHeader* const DTableH = (FSE_DTableHeader*)ptr;
LIZFSE_DTableHeader* const DTableH = (LIZFSE_DTableHeader*)ptr;
void* dPtr = dt + 1;
FSE_decode_t* const cell = (FSE_decode_t*)dPtr;
LIZFSE_decode_t* const cell = (LIZFSE_decode_t*)dPtr;
DTableH->tableLog = 0;
DTableH->fastMode = 0;
@@ -197,12 +197,12 @@ size_t FSE_buildDTable_rle (FSE_DTable* dt, BYTE symbolValue)
}
size_t FSE_buildDTable_raw (FSE_DTable* dt, unsigned nbBits)
size_t LIZFSE_buildDTable_raw (LIZFSE_DTable* dt, unsigned nbBits)
{
void* ptr = dt;
FSE_DTableHeader* const DTableH = (FSE_DTableHeader*)ptr;
LIZFSE_DTableHeader* const DTableH = (LIZFSE_DTableHeader*)ptr;
void* dPtr = dt + 1;
FSE_decode_t* const dinfo = (FSE_decode_t*)dPtr;
LIZFSE_decode_t* const dinfo = (LIZFSE_decode_t*)dPtr;
const unsigned tableSize = 1 << nbBits;
const unsigned tableMask = tableSize - 1;
const unsigned maxSV1 = tableMask+1;
@@ -223,10 +223,10 @@ size_t FSE_buildDTable_raw (FSE_DTable* dt, unsigned nbBits)
return 0;
}
FORCE_INLINE size_t FSE_decompress_usingDTable_generic(
FORCE_INLINE size_t LIZFSE_decompress_usingDTable_generic(
void* dst, size_t maxDstSize,
const void* cSrc, size_t cSrcSize,
const FSE_DTable* dt, const unsigned fast)
const LIZFSE_DTable* dt, const unsigned fast)
{
BYTE* const ostart = (BYTE*) dst;
BYTE* op = ostart;
@@ -234,51 +234,51 @@ FORCE_INLINE size_t FSE_decompress_usingDTable_generic(
BYTE* const olimit = omax-3;
BIT_DStream_t bitD;
FSE_DState_t state1;
FSE_DState_t state2;
LIZFSE_DState_t state1;
LIZFSE_DState_t state2;
/* Init */
CHECK_F(BIT_initDStream(&bitD, cSrc, cSrcSize));
FSE_initDState(&state1, &bitD, dt);
FSE_initDState(&state2, &bitD, dt);
LIZFSE_initDState(&state1, &bitD, dt);
LIZFSE_initDState(&state2, &bitD, dt);
#define FSE_GETSYMBOL(statePtr) fast ? FSE_decodeSymbolFast(statePtr, &bitD) : FSE_decodeSymbol(statePtr, &bitD)
#define LIZFSE_GETSYMBOL(statePtr) fast ? LIZFSE_decodeSymbolFast(statePtr, &bitD) : LIZFSE_decodeSymbol(statePtr, &bitD)
/* 4 symbols per loop */
for ( ; (BIT_reloadDStream(&bitD)==BIT_DStream_unfinished) & (op<olimit) ; op+=4) {
op[0] = FSE_GETSYMBOL(&state1);
op[0] = LIZFSE_GETSYMBOL(&state1);
if (FSE_MAX_TABLELOG*2+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */
if (LIZFSE_MAX_TABLELOG*2+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */
BIT_reloadDStream(&bitD);
op[1] = FSE_GETSYMBOL(&state2);
op[1] = LIZFSE_GETSYMBOL(&state2);
if (FSE_MAX_TABLELOG*4+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */
if (LIZFSE_MAX_TABLELOG*4+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */
{ if (BIT_reloadDStream(&bitD) > BIT_DStream_unfinished) { op+=2; break; } }
op[2] = FSE_GETSYMBOL(&state1);
op[2] = LIZFSE_GETSYMBOL(&state1);
if (FSE_MAX_TABLELOG*2+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */
if (LIZFSE_MAX_TABLELOG*2+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */
BIT_reloadDStream(&bitD);
op[3] = FSE_GETSYMBOL(&state2);
op[3] = LIZFSE_GETSYMBOL(&state2);
}
/* tail */
/* note : BIT_reloadDStream(&bitD) >= FSE_DStream_partiallyFilled; Ends at exactly BIT_DStream_completed */
/* note : BIT_reloadDStream(&bitD) >= LIZFSE_DStream_partiallyFilled; Ends at exactly BIT_DStream_completed */
while (1) {
if (op>(omax-2)) return ERROR(dstSize_tooSmall);
*op++ = FSE_GETSYMBOL(&state1);
*op++ = LIZFSE_GETSYMBOL(&state1);
if (BIT_reloadDStream(&bitD)==BIT_DStream_overflow) {
*op++ = FSE_GETSYMBOL(&state2);
*op++ = LIZFSE_GETSYMBOL(&state2);
break;
}
if (op>(omax-2)) return ERROR(dstSize_tooSmall);
*op++ = FSE_GETSYMBOL(&state2);
*op++ = LIZFSE_GETSYMBOL(&state2);
if (BIT_reloadDStream(&bitD)==BIT_DStream_overflow) {
*op++ = FSE_GETSYMBOL(&state1);
*op++ = LIZFSE_GETSYMBOL(&state1);
break;
} }
@@ -286,44 +286,44 @@ FORCE_INLINE size_t FSE_decompress_usingDTable_generic(
}
size_t FSE_decompress_usingDTable(void* dst, size_t originalSize,
size_t LIZFSE_decompress_usingDTable(void* dst, size_t originalSize,
const void* cSrc, size_t cSrcSize,
const FSE_DTable* dt)
const LIZFSE_DTable* dt)
{
const void* ptr = dt;
const FSE_DTableHeader* DTableH = (const FSE_DTableHeader*)ptr;
const LIZFSE_DTableHeader* DTableH = (const LIZFSE_DTableHeader*)ptr;
const U32 fastMode = DTableH->fastMode;
/* select fast mode (static) */
if (fastMode) return FSE_decompress_usingDTable_generic(dst, originalSize, cSrc, cSrcSize, dt, 1);
return FSE_decompress_usingDTable_generic(dst, originalSize, cSrc, cSrcSize, dt, 0);
if (fastMode) return LIZFSE_decompress_usingDTable_generic(dst, originalSize, cSrc, cSrcSize, dt, 1);
return LIZFSE_decompress_usingDTable_generic(dst, originalSize, cSrc, cSrcSize, dt, 0);
}
size_t FSE_decompress(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize)
size_t LIZFSE_decompress(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize)
{
const BYTE* const istart = (const BYTE*)cSrc;
const BYTE* ip = istart;
short counting[FSE_MAX_SYMBOL_VALUE+1];
short counting[LIZFSE_MAX_SYMBOL_VALUE+1];
DTable_max_t dt; /* Static analyzer seems unable to understand this table will be properly initialized later */
unsigned tableLog;
unsigned maxSymbolValue = FSE_MAX_SYMBOL_VALUE;
unsigned maxSymbolValue = LIZFSE_MAX_SYMBOL_VALUE;
if (cSrcSize<2) return ERROR(srcSize_wrong); /* too small input size */
/* normal FSE decoding mode */
{ size_t const NCountLength = FSE_readNCount (counting, &maxSymbolValue, &tableLog, istart, cSrcSize);
if (FSE_isError(NCountLength)) return NCountLength;
{ size_t const NCountLength = LIZFSE_readNCount (counting, &maxSymbolValue, &tableLog, istart, cSrcSize);
if (LIZFSE_isError(NCountLength)) return NCountLength;
if (NCountLength >= cSrcSize) return ERROR(srcSize_wrong); /* too small input size */
ip += NCountLength;
cSrcSize -= NCountLength;
}
CHECK_F( FSE_buildDTable (dt, counting, maxSymbolValue, tableLog) );
CHECK_F( LIZFSE_buildDTable (dt, counting, maxSymbolValue, tableLog) );
return FSE_decompress_usingDTable (dst, maxDstSize, ip, cSrcSize, dt); /* always return, even if it is an error code */
return LIZFSE_decompress_usingDTable (dst, maxDstSize, ip, cSrcSize, dt); /* always return, even if it is an error code */
}
#endif /* FSE_COMMONDEFS_ONLY */
#endif /* LIZFSE_COMMONDEFS_ONLY */
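The matching decompression side, again as a hypothetical sketch (not part of the diff above); an FSE stream does not store the original size, so the caller is assumed to track it separately:

#include <stddef.h>
#include "fse.h"   /* assumed to declare the LIZFSE_* prototypes after this rename */

/* Regenerate originalSize bytes from an FSE-compressed block; returns 0 on success. */
static int lizfse_unpack(void* dst, size_t originalSize, const void* cSrc, size_t cSrcSize)
{
    size_t const r = LIZFSE_decompress(dst, originalSize, cSrc, cSrcSize);
    if (LIZFSE_isError(r)) return -1;        /* corrupted or truncated input */
    return (r == originalSize) ? 0 : -1;     /* partial output also counts as failure here */
}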

View File

@@ -46,34 +46,34 @@
#include <string.h> /* memcpy, memset */
#include <stdio.h> /* printf (debug) */
#include "bitstream.h"
#define FSE_STATIC_LINKING_ONLY /* FSE_optimalTableLog_internal */
#define LIZFSE_STATIC_LINKING_ONLY /* LIZFSE_optimalTableLog_internal */
#include "fse.h" /* header compression */
#define HUF_STATIC_LINKING_ONLY
#define LIZHUF_STATIC_LINKING_ONLY
#include "huf.h"
/* **************************************************************
* Error Management
****************************************************************/
#define HUF_STATIC_ASSERT(c) { enum { HUF_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */
#define LIZHUF_STATIC_ASSERT(c) { enum { LIZHUF_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */
/* **************************************************************
* Utils
****************************************************************/
unsigned HUF_optimalTableLog(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue)
unsigned LIZHUF_optimalTableLog(unsigned maxTableLog, size_t srcSize, unsigned maxSymbolValue)
{
return FSE_optimalTableLog_internal(maxTableLog, srcSize, maxSymbolValue, 1);
return LIZFSE_optimalTableLog_internal(maxTableLog, srcSize, maxSymbolValue, 1);
}
/* *******************************************************
* HUF : Huffman block compression
*********************************************************/
struct HUF_CElt_s {
struct LIZHUF_CElt_s {
U16 val;
BYTE nbBits;
}; /* typedef'd to HUF_CElt within "huf.h" */
}; /* typedef'd to LIZHUF_CElt within "huf.h" */
typedef struct nodeElt_s {
U32 count;
@@ -82,19 +82,19 @@ typedef struct nodeElt_s {
BYTE nbBits;
} nodeElt;
/*! HUF_writeCTable() :
/*! LIZHUF_writeCTable() :
`CTable` : huffman tree to save, using huf representation.
@return : size of saved CTable */
size_t HUF_writeCTable (void* dst, size_t maxDstSize,
const HUF_CElt* CTable, U32 maxSymbolValue, U32 huffLog)
size_t LIZHUF_writeCTable (void* dst, size_t maxDstSize,
const LIZHUF_CElt* CTable, U32 maxSymbolValue, U32 huffLog)
{
BYTE bitsToWeight[HUF_TABLELOG_MAX + 1];
BYTE huffWeight[HUF_SYMBOLVALUE_MAX];
BYTE bitsToWeight[LIZHUF_TABLELOG_MAX + 1];
BYTE huffWeight[LIZHUF_SYMBOLVALUE_MAX];
BYTE* op = (BYTE*)dst;
U32 n;
/* check conditions */
if (maxSymbolValue > HUF_SYMBOLVALUE_MAX) return ERROR(GENERIC);
if (maxSymbolValue > LIZHUF_SYMBOLVALUE_MAX) return ERROR(GENERIC);
/* convert to weight */
bitsToWeight[0] = 0;
@@ -103,8 +103,8 @@ size_t HUF_writeCTable (void* dst, size_t maxDstSize,
for (n=0; n<maxSymbolValue; n++)
huffWeight[n] = bitsToWeight[CTable[n].nbBits];
{ size_t const size = FSE_compress(op+1, maxDstSize-1, huffWeight, maxSymbolValue);
if (FSE_isError(size)) return size;
{ size_t const size = LIZFSE_compress(op+1, maxDstSize-1, huffWeight, maxSymbolValue);
if (LIZFSE_isError(size)) return size;
if ((size>1) & (size < maxSymbolValue/2)) { /* FSE compressed */
op[0] = (BYTE)size;
return size+1;
@@ -123,21 +123,21 @@ size_t HUF_writeCTable (void* dst, size_t maxDstSize,
}
size_t HUF_readCTable (HUF_CElt* CTable, U32 maxSymbolValue, const void* src, size_t srcSize)
size_t LIZHUF_readCTable (LIZHUF_CElt* CTable, U32 maxSymbolValue, const void* src, size_t srcSize)
{
BYTE huffWeight[HUF_SYMBOLVALUE_MAX + 1];
U32 rankVal[HUF_TABLELOG_ABSOLUTEMAX + 1]; /* large enough for values from 0 to 16 */
BYTE huffWeight[LIZHUF_SYMBOLVALUE_MAX + 1];
U32 rankVal[LIZHUF_TABLELOG_ABSOLUTEMAX + 1]; /* large enough for values from 0 to 16 */
U32 tableLog = 0;
size_t readSize;
U32 nbSymbols = 0;
/*memset(huffWeight, 0, sizeof(huffWeight));*/ /* is not necessary, even though some analyzers complain ... */
/* get symbol weights */
readSize = HUF_readStats(huffWeight, HUF_SYMBOLVALUE_MAX+1, rankVal, &nbSymbols, &tableLog, src, srcSize);
if (HUF_isError(readSize)) return readSize;
readSize = LIZHUF_readStats(huffWeight, LIZHUF_SYMBOLVALUE_MAX+1, rankVal, &nbSymbols, &tableLog, src, srcSize);
if (LIZHUF_isError(readSize)) return readSize;
/* check result */
if (tableLog > HUF_TABLELOG_MAX) return ERROR(tableLog_tooLarge);
if (tableLog > LIZHUF_TABLELOG_MAX) return ERROR(tableLog_tooLarge);
if (nbSymbols > maxSymbolValue+1) return ERROR(maxSymbolValue_tooSmall);
/* Prepare base value per rank */
@@ -155,12 +155,12 @@ size_t HUF_readCTable (HUF_CElt* CTable, U32 maxSymbolValue, const void* src, si
} }
/* fill val */
{ U16 nbPerRank[HUF_TABLELOG_MAX+1] = {0};
U16 valPerRank[HUF_TABLELOG_MAX+1] = {0};
{ U16 nbPerRank[LIZHUF_TABLELOG_MAX+1] = {0};
U16 valPerRank[LIZHUF_TABLELOG_MAX+1] = {0};
{ U32 n; for (n=0; n<nbSymbols; n++) nbPerRank[CTable[n].nbBits]++; }
/* determine starting value per rank */
{ U16 min = 0;
U32 n; for (n=HUF_TABLELOG_MAX; n>0; n--) {
U32 n; for (n=LIZHUF_TABLELOG_MAX; n>0; n--) {
valPerRank[n] = min; /* get starting value within each rank */
min += nbPerRank[n];
min >>= 1;
@@ -173,7 +173,7 @@ size_t HUF_readCTable (HUF_CElt* CTable, U32 maxSymbolValue, const void* src, si
}
static U32 HUF_setMaxHeight(nodeElt* huffNode, U32 lastNonNull, U32 maxNbBits)
static U32 LIZHUF_setMaxHeight(nodeElt* huffNode, U32 lastNonNull, U32 maxNbBits)
{
const U32 largestBits = huffNode[lastNonNull].nbBits;
if (largestBits <= maxNbBits) return largestBits; /* early exit : no elt > maxNbBits */
@@ -195,7 +195,7 @@ static U32 HUF_setMaxHeight(nodeElt* huffNode, U32 lastNonNull, U32 maxNbBits)
/* repay normalized cost */
{ U32 const noSymbol = 0xF0F0F0F0;
U32 rankLast[HUF_TABLELOG_MAX+2];
U32 rankLast[LIZHUF_TABLELOG_MAX+2];
int pos;
/* Get pos of last (smallest) symbol per rank */
@@ -219,7 +219,7 @@ static U32 HUF_setMaxHeight(nodeElt* huffNode, U32 lastNonNull, U32 maxNbBits)
if (highTotal <= lowTotal) break;
} }
/* only triggered when no more rank 1 symbol left => find closest one (note : there is necessarily at least one !) */
while ((nBitsToDecrease<=HUF_TABLELOG_MAX) && (rankLast[nBitsToDecrease] == noSymbol)) /* HUF_MAX_TABLELOG test just to please gcc 5+; but it should not be necessary */
while ((nBitsToDecrease<=LIZHUF_TABLELOG_MAX) && (rankLast[nBitsToDecrease] == noSymbol)) /* LIZHUF_MAX_TABLELOG test just to please gcc 5+; but it should not be necessary */
nBitsToDecrease ++;
totalCost -= 1 << (nBitsToDecrease-1);
if (rankLast[nBitsToDecrease-1] == noSymbol)
@@ -255,7 +255,7 @@ typedef struct {
U32 current;
} rankPos;
static void HUF_sort(nodeElt* huffNode, const U32* count, U32 maxSymbolValue)
static void LIZHUF_sort(nodeElt* huffNode, const U32* count, U32 maxSymbolValue)
{
rankPos rank[32];
U32 n;
@@ -278,10 +278,10 @@ static void HUF_sort(nodeElt* huffNode, const U32* count, U32 maxSymbolValue)
}
#define STARTNODE (HUF_SYMBOLVALUE_MAX+1)
size_t HUF_buildCTable (HUF_CElt* tree, const U32* count, U32 maxSymbolValue, U32 maxNbBits)
#define STARTNODE (LIZHUF_SYMBOLVALUE_MAX+1)
size_t LIZHUF_buildCTable (LIZHUF_CElt* tree, const U32* count, U32 maxSymbolValue, U32 maxNbBits)
{
nodeElt huffNode0[2*HUF_SYMBOLVALUE_MAX+1 +1];
nodeElt huffNode0[2*LIZHUF_SYMBOLVALUE_MAX+1 +1];
nodeElt* huffNode = huffNode0 + 1;
U32 n, nonNullRank;
int lowS, lowN;
@@ -289,12 +289,12 @@ size_t HUF_buildCTable (HUF_CElt* tree, const U32* count, U32 maxSymbolValue, U3
U32 nodeRoot;
/* safety checks */
if (maxNbBits == 0) maxNbBits = HUF_TABLELOG_DEFAULT;
if (maxSymbolValue > HUF_SYMBOLVALUE_MAX) return ERROR(GENERIC);
if (maxNbBits == 0) maxNbBits = LIZHUF_TABLELOG_DEFAULT;
if (maxSymbolValue > LIZHUF_SYMBOLVALUE_MAX) return ERROR(GENERIC);
memset(huffNode0, 0, sizeof(huffNode0));
/* sort, decreasing order */
HUF_sort(huffNode, count, maxSymbolValue);
LIZHUF_sort(huffNode, count, maxSymbolValue);
/* init for parents */
nonNullRank = maxSymbolValue;
@@ -323,12 +323,12 @@ size_t HUF_buildCTable (HUF_CElt* tree, const U32* count, U32 maxSymbolValue, U3
huffNode[n].nbBits = huffNode[ huffNode[n].parent ].nbBits + 1;
/* enforce maxTableLog */
maxNbBits = HUF_setMaxHeight(huffNode, nonNullRank, maxNbBits);
maxNbBits = LIZHUF_setMaxHeight(huffNode, nonNullRank, maxNbBits);
/* fill result into tree (val, nbBits) */
{ U16 nbPerRank[HUF_TABLELOG_MAX+1] = {0};
U16 valPerRank[HUF_TABLELOG_MAX+1] = {0};
if (maxNbBits > HUF_TABLELOG_MAX) return ERROR(GENERIC); /* check fit into table */
{ U16 nbPerRank[LIZHUF_TABLELOG_MAX+1] = {0};
U16 valPerRank[LIZHUF_TABLELOG_MAX+1] = {0};
if (maxNbBits > LIZHUF_TABLELOG_MAX) return ERROR(GENERIC); /* check fit into table */
for (n=0; n<=nonNullRank; n++)
nbPerRank[huffNode[n].nbBits]++;
/* determine starting value per rank */
@@ -347,65 +347,65 @@ size_t HUF_buildCTable (HUF_CElt* tree, const U32* count, U32 maxSymbolValue, U3
return maxNbBits;
}
static void HUF_encodeSymbol(BIT_CStream_t* bitCPtr, U32 symbol, const HUF_CElt* CTable)
static void LIZHUF_encodeSymbol(BIT_CStream_t* bitCPtr, U32 symbol, const LIZHUF_CElt* CTable)
{
BIT_addBitsFast(bitCPtr, CTable[symbol].val, CTable[symbol].nbBits);
}
size_t HUF_compressBound(size_t size) { return HUF_COMPRESSBOUND(size); }
size_t LIZHUF_compressBound(size_t size) { return LIZHUF_COMPRESSBOUND(size); }
#define HUF_FLUSHBITS(s) (fast ? BIT_flushBitsFast(s) : BIT_flushBits(s))
#define LIZHUF_FLUSHBITS(s) (fast ? BIT_flushBitsFast(s) : BIT_flushBits(s))
#define HUF_FLUSHBITS_1(stream) \
if (sizeof((stream)->bitContainer)*8 < HUF_TABLELOG_MAX*2+7) HUF_FLUSHBITS(stream)
#define LIZHUF_FLUSHBITS_1(stream) \
if (sizeof((stream)->bitContainer)*8 < LIZHUF_TABLELOG_MAX*2+7) LIZHUF_FLUSHBITS(stream)
#define HUF_FLUSHBITS_2(stream) \
if (sizeof((stream)->bitContainer)*8 < HUF_TABLELOG_MAX*4+7) HUF_FLUSHBITS(stream)
#define LIZHUF_FLUSHBITS_2(stream) \
if (sizeof((stream)->bitContainer)*8 < LIZHUF_TABLELOG_MAX*4+7) LIZHUF_FLUSHBITS(stream)
size_t HUF_compress1X_usingCTable(void* dst, size_t dstSize, const void* src, size_t srcSize, const HUF_CElt* CTable)
size_t LIZHUF_compress1X_usingCTable(void* dst, size_t dstSize, const void* src, size_t srcSize, const LIZHUF_CElt* CTable)
{
const BYTE* ip = (const BYTE*) src;
BYTE* const ostart = (BYTE*)dst;
BYTE* const oend = ostart + dstSize;
BYTE* op = ostart;
size_t n;
const unsigned fast = (dstSize >= HUF_BLOCKBOUND(srcSize));
const unsigned fast = (dstSize >= LIZHUF_BLOCKBOUND(srcSize));
BIT_CStream_t bitC;
/* init */
if (dstSize < 8) return 0; /* not enough space to compress */
{ size_t const errorCode = BIT_initCStream(&bitC, op, oend-op);
if (HUF_isError(errorCode)) return 0; }
if (LIZHUF_isError(errorCode)) return 0; }
n = srcSize & ~3; /* join to mod 4 */
switch (srcSize & 3)
{
case 3 : HUF_encodeSymbol(&bitC, ip[n+ 2], CTable);
HUF_FLUSHBITS_2(&bitC);
case 2 : HUF_encodeSymbol(&bitC, ip[n+ 1], CTable);
HUF_FLUSHBITS_1(&bitC);
case 1 : HUF_encodeSymbol(&bitC, ip[n+ 0], CTable);
HUF_FLUSHBITS(&bitC);
case 3 : LIZHUF_encodeSymbol(&bitC, ip[n+ 2], CTable);
LIZHUF_FLUSHBITS_2(&bitC);
case 2 : LIZHUF_encodeSymbol(&bitC, ip[n+ 1], CTable);
LIZHUF_FLUSHBITS_1(&bitC);
case 1 : LIZHUF_encodeSymbol(&bitC, ip[n+ 0], CTable);
LIZHUF_FLUSHBITS(&bitC);
case 0 :
default: ;
}
for (; n>0; n-=4) { /* note : n&3==0 at this stage */
HUF_encodeSymbol(&bitC, ip[n- 1], CTable);
HUF_FLUSHBITS_1(&bitC);
HUF_encodeSymbol(&bitC, ip[n- 2], CTable);
HUF_FLUSHBITS_2(&bitC);
HUF_encodeSymbol(&bitC, ip[n- 3], CTable);
HUF_FLUSHBITS_1(&bitC);
HUF_encodeSymbol(&bitC, ip[n- 4], CTable);
HUF_FLUSHBITS(&bitC);
LIZHUF_encodeSymbol(&bitC, ip[n- 1], CTable);
LIZHUF_FLUSHBITS_1(&bitC);
LIZHUF_encodeSymbol(&bitC, ip[n- 2], CTable);
LIZHUF_FLUSHBITS_2(&bitC);
LIZHUF_encodeSymbol(&bitC, ip[n- 3], CTable);
LIZHUF_FLUSHBITS_1(&bitC);
LIZHUF_encodeSymbol(&bitC, ip[n- 4], CTable);
LIZHUF_FLUSHBITS(&bitC);
}
return BIT_closeCStream(&bitC);
}
size_t HUF_compress4X_usingCTable(void* dst, size_t dstSize, const void* src, size_t srcSize, const HUF_CElt* CTable)
size_t LIZHUF_compress4X_usingCTable(void* dst, size_t dstSize, const void* src, size_t srcSize, const LIZHUF_CElt* CTable)
{
size_t const segmentSize = (srcSize+3)/4; /* first 3 segments */
const BYTE* ip = (const BYTE*) src;
@@ -418,32 +418,32 @@ size_t HUF_compress4X_usingCTable(void* dst, size_t dstSize, const void* src, si
if (srcSize < 12) return 0; /* no saving possible : too small input */
op += 6; /* jumpTable */
{ size_t const cSize = HUF_compress1X_usingCTable(op, oend-op, ip, segmentSize, CTable);
if (HUF_isError(cSize)) return cSize;
{ size_t const cSize = LIZHUF_compress1X_usingCTable(op, oend-op, ip, segmentSize, CTable);
if (LIZHUF_isError(cSize)) return cSize;
if (cSize==0) return 0;
MEM_writeLE16(ostart, (U16)cSize);
op += cSize;
}
ip += segmentSize;
{ size_t const cSize = HUF_compress1X_usingCTable(op, oend-op, ip, segmentSize, CTable);
if (HUF_isError(cSize)) return cSize;
{ size_t const cSize = LIZHUF_compress1X_usingCTable(op, oend-op, ip, segmentSize, CTable);
if (LIZHUF_isError(cSize)) return cSize;
if (cSize==0) return 0;
MEM_writeLE16(ostart+2, (U16)cSize);
op += cSize;
}
ip += segmentSize;
{ size_t const cSize = HUF_compress1X_usingCTable(op, oend-op, ip, segmentSize, CTable);
if (HUF_isError(cSize)) return cSize;
{ size_t const cSize = LIZHUF_compress1X_usingCTable(op, oend-op, ip, segmentSize, CTable);
if (LIZHUF_isError(cSize)) return cSize;
if (cSize==0) return 0;
MEM_writeLE16(ostart+4, (U16)cSize);
op += cSize;
}
ip += segmentSize;
{ size_t const cSize = HUF_compress1X_usingCTable(op, oend-op, ip, iend-ip, CTable);
if (HUF_isError(cSize)) return cSize;
{ size_t const cSize = LIZHUF_compress1X_usingCTable(op, oend-op, ip, iend-ip, CTable);
if (LIZHUF_isError(cSize)) return cSize;
if (cSize==0) return 0;
op += cSize;
}
@@ -452,7 +452,7 @@ size_t HUF_compress4X_usingCTable(void* dst, size_t dstSize, const void* src, si
}
static size_t HUF_compress_internal (
static size_t LIZHUF_compress_internal (
void* dst, size_t dstSize,
const void* src, size_t srcSize,
unsigned maxSymbolValue, unsigned huffLog,
@@ -462,43 +462,43 @@ static size_t HUF_compress_internal (
BYTE* const oend = ostart + dstSize;
BYTE* op = ostart;
U32 count[HUF_SYMBOLVALUE_MAX+1];
HUF_CElt CTable[HUF_SYMBOLVALUE_MAX+1];
U32 count[LIZHUF_SYMBOLVALUE_MAX+1];
LIZHUF_CElt CTable[LIZHUF_SYMBOLVALUE_MAX+1];
/* checks & inits */
if (!srcSize) return 0; /* Uncompressed (note : 1 means rle, so first byte must be correct) */
if (!dstSize) return 0; /* cannot fit within dst budget */
if (srcSize > HUF_BLOCKSIZE_MAX) return ERROR(srcSize_wrong); /* current block size limit */
if (huffLog > HUF_TABLELOG_MAX) return ERROR(tableLog_tooLarge);
if (!maxSymbolValue) maxSymbolValue = HUF_SYMBOLVALUE_MAX;
if (!huffLog) huffLog = HUF_TABLELOG_DEFAULT;
if (srcSize > LIZHUF_BLOCKSIZE_MAX) return ERROR(srcSize_wrong); /* current block size limit */
if (huffLog > LIZHUF_TABLELOG_MAX) return ERROR(tableLog_tooLarge);
if (!maxSymbolValue) maxSymbolValue = LIZHUF_SYMBOLVALUE_MAX;
if (!huffLog) huffLog = LIZHUF_TABLELOG_DEFAULT;
/* Scan input and build symbol stats */
{ size_t const largest = FSE_count (count, &maxSymbolValue, (const BYTE*)src, srcSize);
if (HUF_isError(largest)) return largest;
{ size_t const largest = LIZFSE_count (count, &maxSymbolValue, (const BYTE*)src, srcSize);
if (LIZHUF_isError(largest)) return largest;
if (largest == srcSize) { *ostart = ((const BYTE*)src)[0]; return 1; } /* single symbol, rle */
if (largest <= (srcSize >> 7)+1) return 0; /* Fast heuristic : not compressible enough */
}
/* Build Huffman Tree */
huffLog = HUF_optimalTableLog(huffLog, srcSize, maxSymbolValue);
{ size_t const maxBits = HUF_buildCTable (CTable, count, maxSymbolValue, huffLog);
if (HUF_isError(maxBits)) return maxBits;
huffLog = LIZHUF_optimalTableLog(huffLog, srcSize, maxSymbolValue);
{ size_t const maxBits = LIZHUF_buildCTable (CTable, count, maxSymbolValue, huffLog);
if (LIZHUF_isError(maxBits)) return maxBits;
huffLog = (U32)maxBits;
}
/* Write table description header */
{ size_t const hSize = HUF_writeCTable (op, dstSize, CTable, maxSymbolValue, huffLog);
if (HUF_isError(hSize)) return hSize;
{ size_t const hSize = LIZHUF_writeCTable (op, dstSize, CTable, maxSymbolValue, huffLog);
if (LIZHUF_isError(hSize)) return hSize;
if (hSize + 12 >= srcSize) return 0; /* not useful to try compression */
op += hSize;
}
/* Compress */
{ size_t const cSize = (singleStream) ?
HUF_compress1X_usingCTable(op, oend - op, src, srcSize, CTable) : /* single segment */
HUF_compress4X_usingCTable(op, oend - op, src, srcSize, CTable);
if (HUF_isError(cSize)) return cSize;
LIZHUF_compress1X_usingCTable(op, oend - op, src, srcSize, CTable) : /* single segment */
LIZHUF_compress4X_usingCTable(op, oend - op, src, srcSize, CTable);
if (LIZHUF_isError(cSize)) return cSize;
if (cSize==0) return 0; /* uncompressible */
op += cSize;
}
@@ -511,22 +511,22 @@ static size_t HUF_compress_internal (
}
size_t HUF_compress1X (void* dst, size_t dstSize,
size_t LIZHUF_compress1X (void* dst, size_t dstSize,
const void* src, size_t srcSize,
unsigned maxSymbolValue, unsigned huffLog)
{
return HUF_compress_internal(dst, dstSize, src, srcSize, maxSymbolValue, huffLog, 1);
return LIZHUF_compress_internal(dst, dstSize, src, srcSize, maxSymbolValue, huffLog, 1);
}
size_t HUF_compress2 (void* dst, size_t dstSize,
size_t LIZHUF_compress2 (void* dst, size_t dstSize,
const void* src, size_t srcSize,
unsigned maxSymbolValue, unsigned huffLog)
{
return HUF_compress_internal(dst, dstSize, src, srcSize, maxSymbolValue, huffLog, 0);
return LIZHUF_compress_internal(dst, dstSize, src, srcSize, maxSymbolValue, huffLog, 0);
}
size_t HUF_compress (void* dst, size_t maxDstSize, const void* src, size_t srcSize)
size_t LIZHUF_compress (void* dst, size_t maxDstSize, const void* src, size_t srcSize)
{
return HUF_compress2(dst, maxDstSize, src, (U32)srcSize, 255, HUF_TABLELOG_DEFAULT);
return LIZHUF_compress2(dst, maxDstSize, src, (U32)srcSize, 255, LIZHUF_TABLELOG_DEFAULT);
}
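Analogous to the FSE sketch earlier, a hypothetical one-shot Huffman call with the renamed prefix (not part of the diff above; the wrapper name, include path and error policy are assumptions):

#include <stddef.h>
#include "huf.h"   /* assumed to declare the LIZHUF_* prototypes after this rename */

/* Compress one block; dst should hold at least LIZHUF_compressBound(srcSize) bytes,
 * and srcSize must stay within LIZHUF_BLOCKSIZE_MAX (checked by LIZHUF_compress_internal above). */
static size_t lizhuf_pack(void* dst, size_t dstCapacity, const void* src, size_t srcSize)
{
    size_t const cSize = LIZHUF_compress(dst, dstCapacity, src, srcSize);
    if (LIZHUF_isError(cSize)) return 0;   /* collapse errors into "keep the block uncompressed" */
    return cSize;                          /* 0 = not compressible, 1 = single repeated byte (RLE) */
}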

View File

@@ -54,14 +54,14 @@
#include <string.h> /* memcpy, memset */
#include "bitstream.h" /* BIT_* */
#include "fse.h" /* header compression */
#define HUF_STATIC_LINKING_ONLY
#define LIZHUF_STATIC_LINKING_ONLY
#include "huf.h"
/* **************************************************************
* Error Management
****************************************************************/
#define HUF_STATIC_ASSERT(c) { enum { HUF_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */
#define LIZHUF_STATIC_ASSERT(c) { enum { LIZHUF_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */
/*-***************************/
@@ -70,7 +70,7 @@
typedef struct { BYTE maxTableLog; BYTE tableType; BYTE tableLog; BYTE reserved; } DTableDesc;
static DTableDesc HUF_getDTableDesc(const HUF_DTable* table)
static DTableDesc LIZHUF_getDTableDesc(const LIZHUF_DTable* table)
{
DTableDesc dtd;
memcpy(&dtd, table, sizeof(dtd));
@@ -82,26 +82,26 @@ static DTableDesc HUF_getDTableDesc(const HUF_DTable* table)
/* single-symbol decoding */
/*-***************************/
typedef struct { BYTE byte; BYTE nbBits; } HUF_DEltX2; /* single-symbol decoding */
typedef struct { BYTE byte; BYTE nbBits; } LIZHUF_DEltX2; /* single-symbol decoding */
size_t HUF_readDTableX2 (HUF_DTable* DTable, const void* src, size_t srcSize)
size_t LIZHUF_readDTableX2 (LIZHUF_DTable* DTable, const void* src, size_t srcSize)
{
BYTE huffWeight[HUF_SYMBOLVALUE_MAX + 1];
U32 rankVal[HUF_TABLELOG_ABSOLUTEMAX + 1]; /* large enough for values from 0 to 16 */
BYTE huffWeight[LIZHUF_SYMBOLVALUE_MAX + 1];
U32 rankVal[LIZHUF_TABLELOG_ABSOLUTEMAX + 1]; /* large enough for values from 0 to 16 */
U32 tableLog = 0;
U32 nbSymbols = 0;
size_t iSize;
void* const dtPtr = DTable + 1;
HUF_DEltX2* const dt = (HUF_DEltX2*)dtPtr;
LIZHUF_DEltX2* const dt = (LIZHUF_DEltX2*)dtPtr;
HUF_STATIC_ASSERT(sizeof(DTableDesc) == sizeof(HUF_DTable));
LIZHUF_STATIC_ASSERT(sizeof(DTableDesc) == sizeof(LIZHUF_DTable));
/* memset(huffWeight, 0, sizeof(huffWeight)); */ /* is not necessary, even though some analyzers complain ... */
iSize = HUF_readStats(huffWeight, HUF_SYMBOLVALUE_MAX + 1, rankVal, &nbSymbols, &tableLog, src, srcSize);
if (HUF_isError(iSize)) return iSize;
iSize = LIZHUF_readStats(huffWeight, LIZHUF_SYMBOLVALUE_MAX + 1, rankVal, &nbSymbols, &tableLog, src, srcSize);
if (LIZHUF_isError(iSize)) return iSize;
/* Table header */
{ DTableDesc dtd = HUF_getDTableDesc(DTable);
{ DTableDesc dtd = LIZHUF_getDTableDesc(DTable);
if (tableLog > (U32)(dtd.maxTableLog+1)) return ERROR(tableLog_tooLarge); /* DTable too small, huffman tree cannot fit in */
dtd.tableType = 0;
dtd.tableLog = (BYTE)tableLog;
@@ -122,7 +122,7 @@ size_t HUF_readDTableX2 (HUF_DTable* DTable, const void* src, size_t srcSize)
U32 const w = huffWeight[n];
U32 const length = (1 << w) >> 1;
U32 i;
HUF_DEltX2 D;
LIZHUF_DEltX2 D;
D.byte = (BYTE)n; D.nbBits = (BYTE)(tableLog + 1 - w);
for (i = rankVal[w]; i < rankVal[w] + length; i++)
dt[i] = D;
@@ -133,7 +133,7 @@ size_t HUF_readDTableX2 (HUF_DTable* DTable, const void* src, size_t srcSize)
}
static BYTE HUF_decodeSymbolX2(BIT_DStream_t* Dstream, const HUF_DEltX2* dt, const U32 dtLog)
static BYTE LIZHUF_decodeSymbolX2(BIT_DStream_t* Dstream, const LIZHUF_DEltX2* dt, const U32 dtLog)
{
size_t const val = BIT_lookBitsFast(Dstream, dtLog); /* note : dtLog >= 1 */
BYTE const c = dt[val].byte;
@@ -141,57 +141,57 @@ static BYTE HUF_decodeSymbolX2(BIT_DStream_t* Dstream, const HUF_DEltX2* dt, con
return c;
}
#define HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr) \
*ptr++ = HUF_decodeSymbolX2(DStreamPtr, dt, dtLog)
#define LIZHUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr) \
*ptr++ = LIZHUF_decodeSymbolX2(DStreamPtr, dt, dtLog)
#define HUF_DECODE_SYMBOLX2_1(ptr, DStreamPtr) \
if (MEM_64bits() || (HUF_TABLELOG_MAX<=12)) \
HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr)
#define LIZHUF_DECODE_SYMBOLX2_1(ptr, DStreamPtr) \
if (MEM_64bits() || (LIZHUF_TABLELOG_MAX<=12)) \
LIZHUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr)
#define HUF_DECODE_SYMBOLX2_2(ptr, DStreamPtr) \
#define LIZHUF_DECODE_SYMBOLX2_2(ptr, DStreamPtr) \
if (MEM_64bits()) \
HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr)
LIZHUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr)
static inline size_t HUF_decodeStreamX2(BYTE* p, BIT_DStream_t* const bitDPtr, BYTE* const pEnd, const HUF_DEltX2* const dt, const U32 dtLog)
static inline size_t LIZHUF_decodeStreamX2(BYTE* p, BIT_DStream_t* const bitDPtr, BYTE* const pEnd, const LIZHUF_DEltX2* const dt, const U32 dtLog)
{
BYTE* const pStart = p;
/* up to 4 symbols at a time */
while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p <= pEnd-4)) {
HUF_DECODE_SYMBOLX2_2(p, bitDPtr);
HUF_DECODE_SYMBOLX2_1(p, bitDPtr);
HUF_DECODE_SYMBOLX2_2(p, bitDPtr);
HUF_DECODE_SYMBOLX2_0(p, bitDPtr);
LIZHUF_DECODE_SYMBOLX2_2(p, bitDPtr);
LIZHUF_DECODE_SYMBOLX2_1(p, bitDPtr);
LIZHUF_DECODE_SYMBOLX2_2(p, bitDPtr);
LIZHUF_DECODE_SYMBOLX2_0(p, bitDPtr);
}
/* closer to the end */
while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p < pEnd))
HUF_DECODE_SYMBOLX2_0(p, bitDPtr);
LIZHUF_DECODE_SYMBOLX2_0(p, bitDPtr);
/* no more data to retrieve from bitstream, hence no need to reload */
while (p < pEnd)
HUF_DECODE_SYMBOLX2_0(p, bitDPtr);
LIZHUF_DECODE_SYMBOLX2_0(p, bitDPtr);
return pEnd-pStart;
}
static size_t HUF_decompress1X2_usingDTable_internal(
static size_t LIZHUF_decompress1X2_usingDTable_internal(
void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable)
const LIZHUF_DTable* DTable)
{
BYTE* op = (BYTE*)dst;
BYTE* const oend = op + dstSize;
const void* dtPtr = DTable + 1;
const HUF_DEltX2* const dt = (const HUF_DEltX2*)dtPtr;
const LIZHUF_DEltX2* const dt = (const LIZHUF_DEltX2*)dtPtr;
BIT_DStream_t bitD;
DTableDesc const dtd = HUF_getDTableDesc(DTable);
DTableDesc const dtd = LIZHUF_getDTableDesc(DTable);
U32 const dtLog = dtd.tableLog;
{ size_t const errorCode = BIT_initDStream(&bitD, cSrc, cSrcSize);
if (HUF_isError(errorCode)) return errorCode; }
if (LIZHUF_isError(errorCode)) return errorCode; }
HUF_decodeStreamX2(op, &bitD, oend, dt, dtLog);
LIZHUF_decodeStreamX2(op, &bitD, oend, dt, dtLog);
/* check */
if (!BIT_endOfDStream(&bitD)) return ERROR(corruption_detected);
@@ -199,39 +199,39 @@ static size_t HUF_decompress1X2_usingDTable_internal(
return dstSize;
}
size_t HUF_decompress1X2_usingDTable(
size_t LIZHUF_decompress1X2_usingDTable(
void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable)
const LIZHUF_DTable* DTable)
{
DTableDesc dtd = HUF_getDTableDesc(DTable);
DTableDesc dtd = LIZHUF_getDTableDesc(DTable);
if (dtd.tableType != 0) return ERROR(GENERIC);
return HUF_decompress1X2_usingDTable_internal(dst, dstSize, cSrc, cSrcSize, DTable);
return LIZHUF_decompress1X2_usingDTable_internal(dst, dstSize, cSrc, cSrcSize, DTable);
}
size_t HUF_decompress1X2_DCtx (HUF_DTable* DCtx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
size_t LIZHUF_decompress1X2_DCtx (LIZHUF_DTable* DCtx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
const BYTE* ip = (const BYTE*) cSrc;
size_t const hSize = HUF_readDTableX2 (DCtx, cSrc, cSrcSize);
if (HUF_isError(hSize)) return hSize;
size_t const hSize = LIZHUF_readDTableX2 (DCtx, cSrc, cSrcSize);
if (LIZHUF_isError(hSize)) return hSize;
if (hSize >= cSrcSize) return ERROR(srcSize_wrong);
ip += hSize; cSrcSize -= hSize;
return HUF_decompress1X2_usingDTable_internal (dst, dstSize, ip, cSrcSize, DCtx);
return LIZHUF_decompress1X2_usingDTable_internal (dst, dstSize, ip, cSrcSize, DCtx);
}
size_t HUF_decompress1X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
size_t LIZHUF_decompress1X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
HUF_CREATE_STATIC_DTABLEX2(DTable, HUF_TABLELOG_MAX);
return HUF_decompress1X2_DCtx (DTable, dst, dstSize, cSrc, cSrcSize);
LIZHUF_CREATE_STATIC_DTABLEX2(DTable, LIZHUF_TABLELOG_MAX);
return LIZHUF_decompress1X2_DCtx (DTable, dst, dstSize, cSrc, cSrcSize);
}
static size_t HUF_decompress4X2_usingDTable_internal(
static size_t LIZHUF_decompress4X2_usingDTable_internal(
void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable)
const LIZHUF_DTable* DTable)
{
/* Check */
if (cSrcSize < 10) return ERROR(corruption_detected); /* strict minimum : jump table + 1 byte per stream */
@@ -240,7 +240,7 @@ static size_t HUF_decompress4X2_usingDTable_internal(
BYTE* const ostart = (BYTE*) dst;
BYTE* const oend = ostart + dstSize;
const void* const dtPtr = DTable + 1;
const HUF_DEltX2* const dt = (const HUF_DEltX2*)dtPtr;
const LIZHUF_DEltX2* const dt = (const LIZHUF_DEltX2*)dtPtr;
/* Init */
BIT_DStream_t bitD1;
@@ -264,38 +264,38 @@ static size_t HUF_decompress4X2_usingDTable_internal(
BYTE* op3 = opStart3;
BYTE* op4 = opStart4;
U32 endSignal;
DTableDesc const dtd = HUF_getDTableDesc(DTable);
DTableDesc const dtd = LIZHUF_getDTableDesc(DTable);
U32 const dtLog = dtd.tableLog;
if (length4 > cSrcSize) return ERROR(corruption_detected); /* overflow */
{ size_t const errorCode = BIT_initDStream(&bitD1, istart1, length1);
if (HUF_isError(errorCode)) return errorCode; }
if (LIZHUF_isError(errorCode)) return errorCode; }
{ size_t const errorCode = BIT_initDStream(&bitD2, istart2, length2);
if (HUF_isError(errorCode)) return errorCode; }
if (LIZHUF_isError(errorCode)) return errorCode; }
{ size_t const errorCode = BIT_initDStream(&bitD3, istart3, length3);
if (HUF_isError(errorCode)) return errorCode; }
if (LIZHUF_isError(errorCode)) return errorCode; }
{ size_t const errorCode = BIT_initDStream(&bitD4, istart4, length4);
if (HUF_isError(errorCode)) return errorCode; }
if (LIZHUF_isError(errorCode)) return errorCode; }
/* 16-32 symbols per loop (4-8 symbols per stream) */
endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4);
for ( ; (endSignal==BIT_DStream_unfinished) && (op4<(oend-7)) ; ) {
HUF_DECODE_SYMBOLX2_2(op1, &bitD1);
HUF_DECODE_SYMBOLX2_2(op2, &bitD2);
HUF_DECODE_SYMBOLX2_2(op3, &bitD3);
HUF_DECODE_SYMBOLX2_2(op4, &bitD4);
HUF_DECODE_SYMBOLX2_1(op1, &bitD1);
HUF_DECODE_SYMBOLX2_1(op2, &bitD2);
HUF_DECODE_SYMBOLX2_1(op3, &bitD3);
HUF_DECODE_SYMBOLX2_1(op4, &bitD4);
HUF_DECODE_SYMBOLX2_2(op1, &bitD1);
HUF_DECODE_SYMBOLX2_2(op2, &bitD2);
HUF_DECODE_SYMBOLX2_2(op3, &bitD3);
HUF_DECODE_SYMBOLX2_2(op4, &bitD4);
HUF_DECODE_SYMBOLX2_0(op1, &bitD1);
HUF_DECODE_SYMBOLX2_0(op2, &bitD2);
HUF_DECODE_SYMBOLX2_0(op3, &bitD3);
HUF_DECODE_SYMBOLX2_0(op4, &bitD4);
LIZHUF_DECODE_SYMBOLX2_2(op1, &bitD1);
LIZHUF_DECODE_SYMBOLX2_2(op2, &bitD2);
LIZHUF_DECODE_SYMBOLX2_2(op3, &bitD3);
LIZHUF_DECODE_SYMBOLX2_2(op4, &bitD4);
LIZHUF_DECODE_SYMBOLX2_1(op1, &bitD1);
LIZHUF_DECODE_SYMBOLX2_1(op2, &bitD2);
LIZHUF_DECODE_SYMBOLX2_1(op3, &bitD3);
LIZHUF_DECODE_SYMBOLX2_1(op4, &bitD4);
LIZHUF_DECODE_SYMBOLX2_2(op1, &bitD1);
LIZHUF_DECODE_SYMBOLX2_2(op2, &bitD2);
LIZHUF_DECODE_SYMBOLX2_2(op3, &bitD3);
LIZHUF_DECODE_SYMBOLX2_2(op4, &bitD4);
LIZHUF_DECODE_SYMBOLX2_0(op1, &bitD1);
LIZHUF_DECODE_SYMBOLX2_0(op2, &bitD2);
LIZHUF_DECODE_SYMBOLX2_0(op3, &bitD3);
LIZHUF_DECODE_SYMBOLX2_0(op4, &bitD4);
endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4);
}
@@ -306,10 +306,10 @@ static size_t HUF_decompress4X2_usingDTable_internal(
/* note : op4 supposed already verified within main loop */
/* finish bitStreams one by one */
HUF_decodeStreamX2(op1, &bitD1, opStart2, dt, dtLog);
HUF_decodeStreamX2(op2, &bitD2, opStart3, dt, dtLog);
HUF_decodeStreamX2(op3, &bitD3, opStart4, dt, dtLog);
HUF_decodeStreamX2(op4, &bitD4, oend, dt, dtLog);
LIZHUF_decodeStreamX2(op1, &bitD1, opStart2, dt, dtLog);
LIZHUF_decodeStreamX2(op2, &bitD2, opStart3, dt, dtLog);
LIZHUF_decodeStreamX2(op3, &bitD3, opStart4, dt, dtLog);
LIZHUF_decodeStreamX2(op4, &bitD4, oend, dt, dtLog);
/* check */
endSignal = BIT_endOfDStream(&bitD1) & BIT_endOfDStream(&bitD2) & BIT_endOfDStream(&bitD3) & BIT_endOfDStream(&bitD4);
@@ -321,50 +321,50 @@ static size_t HUF_decompress4X2_usingDTable_internal(
}
size_t HUF_decompress4X2_usingDTable(
size_t LIZHUF_decompress4X2_usingDTable(
void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable)
const LIZHUF_DTable* DTable)
{
DTableDesc dtd = HUF_getDTableDesc(DTable);
DTableDesc dtd = LIZHUF_getDTableDesc(DTable);
if (dtd.tableType != 0) return ERROR(GENERIC);
return HUF_decompress4X2_usingDTable_internal(dst, dstSize, cSrc, cSrcSize, DTable);
return LIZHUF_decompress4X2_usingDTable_internal(dst, dstSize, cSrc, cSrcSize, DTable);
}
size_t HUF_decompress4X2_DCtx (HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
size_t LIZHUF_decompress4X2_DCtx (LIZHUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
const BYTE* ip = (const BYTE*) cSrc;
size_t const hSize = HUF_readDTableX2 (dctx, cSrc, cSrcSize);
if (HUF_isError(hSize)) return hSize;
size_t const hSize = LIZHUF_readDTableX2 (dctx, cSrc, cSrcSize);
if (LIZHUF_isError(hSize)) return hSize;
if (hSize >= cSrcSize) return ERROR(srcSize_wrong);
ip += hSize; cSrcSize -= hSize;
return HUF_decompress4X2_usingDTable_internal (dst, dstSize, ip, cSrcSize, dctx);
return LIZHUF_decompress4X2_usingDTable_internal (dst, dstSize, ip, cSrcSize, dctx);
}
size_t HUF_decompress4X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
size_t LIZHUF_decompress4X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
HUF_CREATE_STATIC_DTABLEX2(DTable, HUF_TABLELOG_MAX);
return HUF_decompress4X2_DCtx(DTable, dst, dstSize, cSrc, cSrcSize);
LIZHUF_CREATE_STATIC_DTABLEX2(DTable, LIZHUF_TABLELOG_MAX);
return LIZHUF_decompress4X2_DCtx(DTable, dst, dstSize, cSrc, cSrcSize);
}
/* *************************/
/* double-symbols decoding */
/* *************************/
typedef struct { U16 sequence; BYTE nbBits; BYTE length; } HUF_DEltX4; /* double-symbols decoding */
typedef struct { U16 sequence; BYTE nbBits; BYTE length; } LIZHUF_DEltX4; /* double-symbols decoding */
typedef struct { BYTE symbol; BYTE weight; } sortedSymbol_t;
static void HUF_fillDTableX4Level2(HUF_DEltX4* DTable, U32 sizeLog, const U32 consumed,
static void LIZHUF_fillDTableX4Level2(LIZHUF_DEltX4* DTable, U32 sizeLog, const U32 consumed,
const U32* rankValOrigin, const int minWeight,
const sortedSymbol_t* sortedSymbols, const U32 sortedListSize,
U32 nbBitsBaseline, U16 baseSeq)
{
HUF_DEltX4 DElt;
U32 rankVal[HUF_TABLELOG_ABSOLUTEMAX + 1];
LIZHUF_DEltX4 DElt;
U32 rankVal[LIZHUF_TABLELOG_ABSOLUTEMAX + 1];
/* get pre-calculated rankVal */
memcpy(rankVal, rankValOrigin, sizeof(rankVal));
@@ -398,14 +398,14 @@ static void HUF_fillDTableX4Level2(HUF_DEltX4* DTable, U32 sizeLog, const U32 co
} }
}
typedef U32 rankVal_t[HUF_TABLELOG_ABSOLUTEMAX][HUF_TABLELOG_ABSOLUTEMAX + 1];
typedef U32 rankVal_t[LIZHUF_TABLELOG_ABSOLUTEMAX][LIZHUF_TABLELOG_ABSOLUTEMAX + 1];
static void HUF_fillDTableX4(HUF_DEltX4* DTable, const U32 targetLog,
static void LIZHUF_fillDTableX4(LIZHUF_DEltX4* DTable, const U32 targetLog,
const sortedSymbol_t* sortedList, const U32 sortedListSize,
const U32* rankStart, rankVal_t rankValOrigin, const U32 maxWeight,
const U32 nbBitsBaseline)
{
U32 rankVal[HUF_TABLELOG_ABSOLUTEMAX + 1];
U32 rankVal[LIZHUF_TABLELOG_ABSOLUTEMAX + 1];
const int scaleLog = nbBitsBaseline - targetLog; /* note : targetLog >= srcLog, hence scaleLog <= 1 */
const U32 minBits = nbBitsBaseline - maxWeight;
U32 s;
@@ -425,12 +425,12 @@ static void HUF_fillDTableX4(HUF_DEltX4* DTable, const U32 targetLog,
int minWeight = nbBits + scaleLog;
if (minWeight < 1) minWeight = 1;
sortedRank = rankStart[minWeight];
HUF_fillDTableX4Level2(DTable+start, targetLog-nbBits, nbBits,
LIZHUF_fillDTableX4Level2(DTable+start, targetLog-nbBits, nbBits,
rankValOrigin[nbBits], minWeight,
sortedList+sortedRank, sortedListSize-sortedRank,
nbBitsBaseline, symbol);
} else {
HUF_DEltX4 DElt;
LIZHUF_DEltX4 DElt;
MEM_writeLE16(&(DElt.sequence), symbol);
DElt.nbBits = (BYTE)(nbBits);
DElt.length = 1;
@@ -442,27 +442,27 @@ static void HUF_fillDTableX4(HUF_DEltX4* DTable, const U32 targetLog,
}
}
size_t HUF_readDTableX4 (HUF_DTable* DTable, const void* src, size_t srcSize)
size_t LIZHUF_readDTableX4 (LIZHUF_DTable* DTable, const void* src, size_t srcSize)
{
BYTE weightList[HUF_SYMBOLVALUE_MAX + 1];
sortedSymbol_t sortedSymbol[HUF_SYMBOLVALUE_MAX + 1];
U32 rankStats[HUF_TABLELOG_ABSOLUTEMAX + 1] = { 0 };
U32 rankStart0[HUF_TABLELOG_ABSOLUTEMAX + 2] = { 0 };
BYTE weightList[LIZHUF_SYMBOLVALUE_MAX + 1];
sortedSymbol_t sortedSymbol[LIZHUF_SYMBOLVALUE_MAX + 1];
U32 rankStats[LIZHUF_TABLELOG_ABSOLUTEMAX + 1] = { 0 };
U32 rankStart0[LIZHUF_TABLELOG_ABSOLUTEMAX + 2] = { 0 };
U32* const rankStart = rankStart0+1;
rankVal_t rankVal;
U32 tableLog, maxW, sizeOfSort, nbSymbols;
DTableDesc dtd = HUF_getDTableDesc(DTable);
DTableDesc dtd = LIZHUF_getDTableDesc(DTable);
U32 const maxTableLog = dtd.maxTableLog;
size_t iSize;
void* dtPtr = DTable+1; /* force compiler to avoid strict-aliasing */
HUF_DEltX4* const dt = (HUF_DEltX4*)dtPtr;
LIZHUF_DEltX4* const dt = (LIZHUF_DEltX4*)dtPtr;
HUF_STATIC_ASSERT(sizeof(HUF_DEltX4) == sizeof(HUF_DTable)); /* if compilation fails here, assertion is false */
if (maxTableLog > HUF_TABLELOG_ABSOLUTEMAX) return ERROR(tableLog_tooLarge);
LIZHUF_STATIC_ASSERT(sizeof(LIZHUF_DEltX4) == sizeof(LIZHUF_DTable)); /* if compilation fails here, assertion is false */
if (maxTableLog > LIZHUF_TABLELOG_ABSOLUTEMAX) return ERROR(tableLog_tooLarge);
/* memset(weightList, 0, sizeof(weightList)); */ /* is not necessary, even though some analyzer complain ... */
iSize = HUF_readStats(weightList, HUF_SYMBOLVALUE_MAX + 1, rankStats, &nbSymbols, &tableLog, src, srcSize);
if (HUF_isError(iSize)) return iSize;
iSize = LIZHUF_readStats(weightList, LIZHUF_SYMBOLVALUE_MAX + 1, rankStats, &nbSymbols, &tableLog, src, srcSize);
if (LIZHUF_isError(iSize)) return iSize;
/* check result */
if (tableLog > maxTableLog) return ERROR(tableLog_tooLarge); /* DTable can't fit code depth */
@@ -511,7 +511,7 @@ size_t HUF_readDTableX4 (HUF_DTable* DTable, const void* src, size_t srcSize)
rankValPtr[w] = rankVal0[w] >> consumed;
} } } }
HUF_fillDTableX4(dt, maxTableLog,
LIZHUF_fillDTableX4(dt, maxTableLog,
sortedSymbol, sizeOfSort,
rankStart0, rankVal, maxW,
tableLog+1);
@@ -523,7 +523,7 @@ size_t HUF_readDTableX4 (HUF_DTable* DTable, const void* src, size_t srcSize)
}
static U32 HUF_decodeSymbolX4(void* op, BIT_DStream_t* DStream, const HUF_DEltX4* dt, const U32 dtLog)
static U32 LIZHUF_decodeSymbolX4(void* op, BIT_DStream_t* DStream, const LIZHUF_DEltX4* dt, const U32 dtLog)
{
size_t const val = BIT_lookBitsFast(DStream, dtLog); /* note : dtLog >= 1 */
memcpy(op, dt+val, 2);
@@ -531,7 +531,7 @@ static U32 HUF_decodeSymbolX4(void* op, BIT_DStream_t* DStream, const HUF_DEltX4
return dt[val].length;
}
static U32 HUF_decodeLastSymbolX4(void* op, BIT_DStream_t* DStream, const HUF_DEltX4* dt, const U32 dtLog)
static U32 LIZHUF_decodeLastSymbolX4(void* op, BIT_DStream_t* DStream, const LIZHUF_DEltX4* dt, const U32 dtLog)
{
size_t const val = BIT_lookBitsFast(DStream, dtLog); /* note : dtLog >= 1 */
memcpy(op, dt+val, 1);
@@ -546,62 +546,62 @@ static U32 HUF_decodeLastSymbolX4(void* op, BIT_DStream_t* DStream, const HUF_DE
}
#define HUF_DECODE_SYMBOLX4_0(ptr, DStreamPtr) \
ptr += HUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog)
#define LIZHUF_DECODE_SYMBOLX4_0(ptr, DStreamPtr) \
ptr += LIZHUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog)
#define HUF_DECODE_SYMBOLX4_1(ptr, DStreamPtr) \
if (MEM_64bits() || (HUF_TABLELOG_MAX<=12)) \
ptr += HUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog)
#define LIZHUF_DECODE_SYMBOLX4_1(ptr, DStreamPtr) \
if (MEM_64bits() || (LIZHUF_TABLELOG_MAX<=12)) \
ptr += LIZHUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog)
#define HUF_DECODE_SYMBOLX4_2(ptr, DStreamPtr) \
#define LIZHUF_DECODE_SYMBOLX4_2(ptr, DStreamPtr) \
if (MEM_64bits()) \
ptr += HUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog)
ptr += LIZHUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog)
static inline size_t HUF_decodeStreamX4(BYTE* p, BIT_DStream_t* bitDPtr, BYTE* const pEnd, const HUF_DEltX4* const dt, const U32 dtLog)
static inline size_t LIZHUF_decodeStreamX4(BYTE* p, BIT_DStream_t* bitDPtr, BYTE* const pEnd, const LIZHUF_DEltX4* const dt, const U32 dtLog)
{
BYTE* const pStart = p;
/* up to 8 symbols at a time */
while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) & (p < pEnd-(sizeof(bitDPtr->bitContainer)-1))) {
HUF_DECODE_SYMBOLX4_2(p, bitDPtr);
HUF_DECODE_SYMBOLX4_1(p, bitDPtr);
HUF_DECODE_SYMBOLX4_2(p, bitDPtr);
HUF_DECODE_SYMBOLX4_0(p, bitDPtr);
LIZHUF_DECODE_SYMBOLX4_2(p, bitDPtr);
LIZHUF_DECODE_SYMBOLX4_1(p, bitDPtr);
LIZHUF_DECODE_SYMBOLX4_2(p, bitDPtr);
LIZHUF_DECODE_SYMBOLX4_0(p, bitDPtr);
}
/* closer to end : up to 2 symbols at a time */
while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) & (p <= pEnd-2))
HUF_DECODE_SYMBOLX4_0(p, bitDPtr);
LIZHUF_DECODE_SYMBOLX4_0(p, bitDPtr);
while (p <= pEnd-2)
HUF_DECODE_SYMBOLX4_0(p, bitDPtr); /* no need to reload : reached the end of DStream */
LIZHUF_DECODE_SYMBOLX4_0(p, bitDPtr); /* no need to reload : reached the end of DStream */
if (p < pEnd)
p += HUF_decodeLastSymbolX4(p, bitDPtr, dt, dtLog);
p += LIZHUF_decodeLastSymbolX4(p, bitDPtr, dt, dtLog);
return p-pStart;
}
static size_t HUF_decompress1X4_usingDTable_internal(
static size_t LIZHUF_decompress1X4_usingDTable_internal(
void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable)
const LIZHUF_DTable* DTable)
{
BIT_DStream_t bitD;
/* Init */
{ size_t const errorCode = BIT_initDStream(&bitD, cSrc, cSrcSize);
if (HUF_isError(errorCode)) return errorCode;
if (LIZHUF_isError(errorCode)) return errorCode;
}
/* decode */
{ BYTE* const ostart = (BYTE*) dst;
BYTE* const oend = ostart + dstSize;
const void* const dtPtr = DTable+1; /* force compiler to not use strict-aliasing */
const HUF_DEltX4* const dt = (const HUF_DEltX4*)dtPtr;
DTableDesc const dtd = HUF_getDTableDesc(DTable);
HUF_decodeStreamX4(ostart, &bitD, oend, dt, dtd.tableLog);
const LIZHUF_DEltX4* const dt = (const LIZHUF_DEltX4*)dtPtr;
DTableDesc const dtd = LIZHUF_getDTableDesc(DTable);
LIZHUF_decodeStreamX4(ostart, &bitD, oend, dt, dtd.tableLog);
}
/* check */
@@ -611,38 +611,38 @@ static size_t HUF_decompress1X4_usingDTable_internal(
return dstSize;
}
size_t HUF_decompress1X4_usingDTable(
size_t LIZHUF_decompress1X4_usingDTable(
void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable)
const LIZHUF_DTable* DTable)
{
DTableDesc dtd = HUF_getDTableDesc(DTable);
DTableDesc dtd = LIZHUF_getDTableDesc(DTable);
if (dtd.tableType != 1) return ERROR(GENERIC);
return HUF_decompress1X4_usingDTable_internal(dst, dstSize, cSrc, cSrcSize, DTable);
return LIZHUF_decompress1X4_usingDTable_internal(dst, dstSize, cSrc, cSrcSize, DTable);
}
size_t HUF_decompress1X4_DCtx (HUF_DTable* DCtx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
size_t LIZHUF_decompress1X4_DCtx (LIZHUF_DTable* DCtx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
const BYTE* ip = (const BYTE*) cSrc;
size_t const hSize = HUF_readDTableX4 (DCtx, cSrc, cSrcSize);
if (HUF_isError(hSize)) return hSize;
size_t const hSize = LIZHUF_readDTableX4 (DCtx, cSrc, cSrcSize);
if (LIZHUF_isError(hSize)) return hSize;
if (hSize >= cSrcSize) return ERROR(srcSize_wrong);
ip += hSize; cSrcSize -= hSize;
return HUF_decompress1X4_usingDTable_internal (dst, dstSize, ip, cSrcSize, DCtx);
return LIZHUF_decompress1X4_usingDTable_internal (dst, dstSize, ip, cSrcSize, DCtx);
}
size_t HUF_decompress1X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
size_t LIZHUF_decompress1X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
HUF_CREATE_STATIC_DTABLEX4(DTable, HUF_TABLELOG_MAX);
return HUF_decompress1X4_DCtx(DTable, dst, dstSize, cSrc, cSrcSize);
LIZHUF_CREATE_STATIC_DTABLEX4(DTable, LIZHUF_TABLELOG_MAX);
return LIZHUF_decompress1X4_DCtx(DTable, dst, dstSize, cSrc, cSrcSize);
}
static size_t HUF_decompress4X4_usingDTable_internal(
static size_t LIZHUF_decompress4X4_usingDTable_internal(
void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable)
const LIZHUF_DTable* DTable)
{
if (cSrcSize < 10) return ERROR(corruption_detected); /* strict minimum : jump table + 1 byte per stream */
@@ -650,7 +650,7 @@ static size_t HUF_decompress4X4_usingDTable_internal(
BYTE* const ostart = (BYTE*) dst;
BYTE* const oend = ostart + dstSize;
const void* const dtPtr = DTable+1;
const HUF_DEltX4* const dt = (const HUF_DEltX4*)dtPtr;
const LIZHUF_DEltX4* const dt = (const LIZHUF_DEltX4*)dtPtr;
/* Init */
BIT_DStream_t bitD1;
@@ -674,38 +674,38 @@ static size_t HUF_decompress4X4_usingDTable_internal(
BYTE* op3 = opStart3;
BYTE* op4 = opStart4;
U32 endSignal;
DTableDesc const dtd = HUF_getDTableDesc(DTable);
DTableDesc const dtd = LIZHUF_getDTableDesc(DTable);
U32 const dtLog = dtd.tableLog;
if (length4 > cSrcSize) return ERROR(corruption_detected); /* overflow */
{ size_t const errorCode = BIT_initDStream(&bitD1, istart1, length1);
if (HUF_isError(errorCode)) return errorCode; }
if (LIZHUF_isError(errorCode)) return errorCode; }
{ size_t const errorCode = BIT_initDStream(&bitD2, istart2, length2);
if (HUF_isError(errorCode)) return errorCode; }
if (LIZHUF_isError(errorCode)) return errorCode; }
{ size_t const errorCode = BIT_initDStream(&bitD3, istart3, length3);
if (HUF_isError(errorCode)) return errorCode; }
if (LIZHUF_isError(errorCode)) return errorCode; }
{ size_t const errorCode = BIT_initDStream(&bitD4, istart4, length4);
if (HUF_isError(errorCode)) return errorCode; }
if (LIZHUF_isError(errorCode)) return errorCode; }
/* 16-32 symbols per loop (4-8 symbols per stream) */
endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4);
for ( ; (endSignal==BIT_DStream_unfinished) & (op4<(oend-(sizeof(bitD4.bitContainer)-1))) ; ) {
HUF_DECODE_SYMBOLX4_2(op1, &bitD1);
HUF_DECODE_SYMBOLX4_2(op2, &bitD2);
HUF_DECODE_SYMBOLX4_2(op3, &bitD3);
HUF_DECODE_SYMBOLX4_2(op4, &bitD4);
HUF_DECODE_SYMBOLX4_1(op1, &bitD1);
HUF_DECODE_SYMBOLX4_1(op2, &bitD2);
HUF_DECODE_SYMBOLX4_1(op3, &bitD3);
HUF_DECODE_SYMBOLX4_1(op4, &bitD4);
HUF_DECODE_SYMBOLX4_2(op1, &bitD1);
HUF_DECODE_SYMBOLX4_2(op2, &bitD2);
HUF_DECODE_SYMBOLX4_2(op3, &bitD3);
HUF_DECODE_SYMBOLX4_2(op4, &bitD4);
HUF_DECODE_SYMBOLX4_0(op1, &bitD1);
HUF_DECODE_SYMBOLX4_0(op2, &bitD2);
HUF_DECODE_SYMBOLX4_0(op3, &bitD3);
HUF_DECODE_SYMBOLX4_0(op4, &bitD4);
LIZHUF_DECODE_SYMBOLX4_2(op1, &bitD1);
LIZHUF_DECODE_SYMBOLX4_2(op2, &bitD2);
LIZHUF_DECODE_SYMBOLX4_2(op3, &bitD3);
LIZHUF_DECODE_SYMBOLX4_2(op4, &bitD4);
LIZHUF_DECODE_SYMBOLX4_1(op1, &bitD1);
LIZHUF_DECODE_SYMBOLX4_1(op2, &bitD2);
LIZHUF_DECODE_SYMBOLX4_1(op3, &bitD3);
LIZHUF_DECODE_SYMBOLX4_1(op4, &bitD4);
LIZHUF_DECODE_SYMBOLX4_2(op1, &bitD1);
LIZHUF_DECODE_SYMBOLX4_2(op2, &bitD2);
LIZHUF_DECODE_SYMBOLX4_2(op3, &bitD3);
LIZHUF_DECODE_SYMBOLX4_2(op4, &bitD4);
LIZHUF_DECODE_SYMBOLX4_0(op1, &bitD1);
LIZHUF_DECODE_SYMBOLX4_0(op2, &bitD2);
LIZHUF_DECODE_SYMBOLX4_0(op3, &bitD3);
LIZHUF_DECODE_SYMBOLX4_0(op4, &bitD4);
endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4);
}
@@ -717,10 +717,10 @@ static size_t HUF_decompress4X4_usingDTable_internal(
/* note : op4 already verified within main loop */
/* finish bitStreams one by one */
HUF_decodeStreamX4(op1, &bitD1, opStart2, dt, dtLog);
HUF_decodeStreamX4(op2, &bitD2, opStart3, dt, dtLog);
HUF_decodeStreamX4(op3, &bitD3, opStart4, dt, dtLog);
HUF_decodeStreamX4(op4, &bitD4, oend, dt, dtLog);
LIZHUF_decodeStreamX4(op1, &bitD1, opStart2, dt, dtLog);
LIZHUF_decodeStreamX4(op2, &bitD2, opStart3, dt, dtLog);
LIZHUF_decodeStreamX4(op3, &bitD3, opStart4, dt, dtLog);
LIZHUF_decodeStreamX4(op4, &bitD4, oend, dt, dtLog);
/* check */
{ U32 const endCheck = BIT_endOfDStream(&bitD1) & BIT_endOfDStream(&bitD2) & BIT_endOfDStream(&bitD3) & BIT_endOfDStream(&bitD4);
@@ -732,33 +732,33 @@ static size_t HUF_decompress4X4_usingDTable_internal(
}
size_t HUF_decompress4X4_usingDTable(
size_t LIZHUF_decompress4X4_usingDTable(
void* dst, size_t dstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable)
const LIZHUF_DTable* DTable)
{
DTableDesc dtd = HUF_getDTableDesc(DTable);
DTableDesc dtd = LIZHUF_getDTableDesc(DTable);
if (dtd.tableType != 1) return ERROR(GENERIC);
return HUF_decompress4X4_usingDTable_internal(dst, dstSize, cSrc, cSrcSize, DTable);
return LIZHUF_decompress4X4_usingDTable_internal(dst, dstSize, cSrc, cSrcSize, DTable);
}
size_t HUF_decompress4X4_DCtx (HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
size_t LIZHUF_decompress4X4_DCtx (LIZHUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
const BYTE* ip = (const BYTE*) cSrc;
size_t hSize = HUF_readDTableX4 (dctx, cSrc, cSrcSize);
if (HUF_isError(hSize)) return hSize;
size_t hSize = LIZHUF_readDTableX4 (dctx, cSrc, cSrcSize);
if (LIZHUF_isError(hSize)) return hSize;
if (hSize >= cSrcSize) return ERROR(srcSize_wrong);
ip += hSize; cSrcSize -= hSize;
return HUF_decompress4X4_usingDTable_internal(dst, dstSize, ip, cSrcSize, dctx);
return LIZHUF_decompress4X4_usingDTable_internal(dst, dstSize, ip, cSrcSize, dctx);
}
size_t HUF_decompress4X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
size_t LIZHUF_decompress4X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
HUF_CREATE_STATIC_DTABLEX4(DTable, HUF_TABLELOG_MAX);
return HUF_decompress4X4_DCtx(DTable, dst, dstSize, cSrc, cSrcSize);
LIZHUF_CREATE_STATIC_DTABLEX4(DTable, LIZHUF_TABLELOG_MAX);
return LIZHUF_decompress4X4_DCtx(DTable, dst, dstSize, cSrc, cSrcSize);
}
@@ -766,22 +766,22 @@ size_t HUF_decompress4X4 (void* dst, size_t dstSize, const void* cSrc, size_t cS
/* Generic decompression selector */
/* ********************************/
size_t HUF_decompress1X_usingDTable(void* dst, size_t maxDstSize,
size_t LIZHUF_decompress1X_usingDTable(void* dst, size_t maxDstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable)
const LIZHUF_DTable* DTable)
{
DTableDesc const dtd = HUF_getDTableDesc(DTable);
return dtd.tableType ? HUF_decompress1X4_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable) :
HUF_decompress1X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable);
DTableDesc const dtd = LIZHUF_getDTableDesc(DTable);
return dtd.tableType ? LIZHUF_decompress1X4_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable) :
LIZHUF_decompress1X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable);
}
size_t HUF_decompress4X_usingDTable(void* dst, size_t maxDstSize,
size_t LIZHUF_decompress4X_usingDTable(void* dst, size_t maxDstSize,
const void* cSrc, size_t cSrcSize,
const HUF_DTable* DTable)
const LIZHUF_DTable* DTable)
{
DTableDesc const dtd = HUF_getDTableDesc(DTable);
return dtd.tableType ? HUF_decompress4X4_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable) :
HUF_decompress4X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable);
DTableDesc const dtd = LIZHUF_getDTableDesc(DTable);
return dtd.tableType ? LIZHUF_decompress4X4_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable) :
LIZHUF_decompress4X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable);
}
@@ -807,12 +807,12 @@ static const algo_time_t algoTime[16 /* Quantization */][3 /* single, double, qu
{{ 722,128}, {1891,145}, {1936,146}}, /* Q ==15 : 93-99% */
};
/** HUF_selectDecoder() :
/** LIZHUF_selectDecoder() :
* Tells which decoder is likely to decode faster,
* based on a set of pre-determined metrics.
* @return : 0==HUF_decompress4X2, 1==HUF_decompress4X4 .
* @return : 0==LIZHUF_decompress4X2, 1==LIZHUF_decompress4X4 .
* Assumption : 0 < cSrcSize < dstSize <= 128 KB */
U32 HUF_selectDecoder (size_t dstSize, size_t cSrcSize)
U32 LIZHUF_selectDecoder (size_t dstSize, size_t cSrcSize)
{
/* decoder timing evaluation */
U32 const Q = (U32)(cSrcSize * 16 / dstSize); /* Q < 16 since dstSize > cSrcSize */
@@ -827,9 +827,9 @@ U32 HUF_selectDecoder (size_t dstSize, size_t cSrcSize)
typedef size_t (*decompressionAlgo)(void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);
size_t HUF_decompress (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
size_t LIZHUF_decompress (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
static const decompressionAlgo decompress[2] = { HUF_decompress4X2, HUF_decompress4X4 };
static const decompressionAlgo decompress[2] = { LIZHUF_decompress4X2, LIZHUF_decompress4X4 };
/* validation checks */
if (dstSize == 0) return ERROR(dstSize_tooSmall);
@@ -837,12 +837,12 @@ size_t HUF_decompress (void* dst, size_t dstSize, const void* cSrc, size_t cSrcS
if (cSrcSize == dstSize) { memcpy(dst, cSrc, dstSize); return dstSize; } /* not compressed */
if (cSrcSize == 1) { memset(dst, *(const BYTE*)cSrc, dstSize); return dstSize; } /* RLE */
{ U32 const algoNb = HUF_selectDecoder(dstSize, cSrcSize);
{ U32 const algoNb = LIZHUF_selectDecoder(dstSize, cSrcSize);
return decompress[algoNb](dst, dstSize, cSrc, cSrcSize);
}
}
size_t HUF_decompress4X_DCtx (HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
size_t LIZHUF_decompress4X_DCtx (LIZHUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
/* validation checks */
if (dstSize == 0) return ERROR(dstSize_tooSmall);
@@ -850,25 +850,25 @@ size_t HUF_decompress4X_DCtx (HUF_DTable* dctx, void* dst, size_t dstSize, const
if (cSrcSize == dstSize) { memcpy(dst, cSrc, dstSize); return dstSize; } /* not compressed */
if (cSrcSize == 1) { memset(dst, *(const BYTE*)cSrc, dstSize); return dstSize; } /* RLE */
{ U32 const algoNb = HUF_selectDecoder(dstSize, cSrcSize);
return algoNb ? HUF_decompress4X4_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) :
HUF_decompress4X2_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) ;
{ U32 const algoNb = LIZHUF_selectDecoder(dstSize, cSrcSize);
return algoNb ? LIZHUF_decompress4X4_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) :
LIZHUF_decompress4X2_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) ;
}
}
size_t HUF_decompress4X_hufOnly (HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
size_t LIZHUF_decompress4X_hufOnly (LIZHUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
/* validation checks */
if (dstSize == 0) return ERROR(dstSize_tooSmall);
if ((cSrcSize >= dstSize) || (cSrcSize <= 1)) return ERROR(corruption_detected); /* invalid */
{ U32 const algoNb = HUF_selectDecoder(dstSize, cSrcSize);
return algoNb ? HUF_decompress4X4_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) :
HUF_decompress4X2_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) ;
{ U32 const algoNb = LIZHUF_selectDecoder(dstSize, cSrcSize);
return algoNb ? LIZHUF_decompress4X4_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) :
LIZHUF_decompress4X2_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) ;
}
}
size_t HUF_decompress1X_DCtx (HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
size_t LIZHUF_decompress1X_DCtx (LIZHUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
/* validation checks */
if (dstSize == 0) return ERROR(dstSize_tooSmall);
@@ -876,8 +876,8 @@ size_t HUF_decompress1X_DCtx (HUF_DTable* dctx, void* dst, size_t dstSize, const
if (cSrcSize == dstSize) { memcpy(dst, cSrc, dstSize); return dstSize; } /* not compressed */
if (cSrcSize == 1) { memset(dst, *(const BYTE*)cSrc, dstSize); return dstSize; } /* RLE */
{ U32 const algoNb = HUF_selectDecoder(dstSize, cSrcSize);
return algoNb ? HUF_decompress1X4_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) :
HUF_decompress1X2_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) ;
{ U32 const algoNb = LIZHUF_selectDecoder(dstSize, cSrcSize);
return algoNb ? LIZHUF_decompress1X4_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) :
LIZHUF_decompress1X2_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) ;
}
}
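Again the prefix is the only change on the decoding side: LIZHUF_selectDecoder keeps its heuristic (Q = 16*cSrcSize/dstSize chooses the single- or double-symbol decoder) and LIZHUF_decompress keeps the contract of regenerating exactly dstSize bytes. A small round-trip sketch, assuming the caller has recorded the original size, as LIZ_writeStream later in this commit does with a 3-byte length field; the helper name is illustrative:
/* Illustrative sketch, not part of this commit. */
#include "huf.h"
static int unpack_block(void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
    size_t const r = LIZHUF_decompress(dst, dstSize, cSrc, cSrcSize);
    if (LIZHUF_isError(r)) return -1;     /* corrupted input, or dstSize too small */
    return (r == dstSize) ? 0 : -1;       /* success: exactly dstSize bytes regenerated */
}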

View File

@@ -21,7 +21,7 @@
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOTLZ5_hash4Ptr
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOTLIZ_hash4Ptr
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
@@ -36,36 +36,36 @@
/* *************************************
* Includes
***************************************/
#include "lz5_compress.h"
#include "lz5_common.h"
#include "liz_compress.h"
#include "liz_common.h"
#include <stdio.h>
#include <stdint.h> // intptr_t
#ifndef USE_LZ4_ONLY
#ifdef LZ5_USE_TEST
#ifdef LIZ_USE_TEST
#include "test/lz5_common_test.h"
#include "test/lz5_compress_test.h"
#else
#include "lz5_compress_lz5v2.h"
#include "liz_compress_lz5v2.h"
#endif
#endif
#include "lz5_compress_lz4.h"
#include "entropy/huf.h"
#include "liz_compress_lz4.h"
#include "huf.h"
/* *************************************
* Local Macros
***************************************/
#define DELTANEXT(p) chainTable[(p) & contentMask]
#define LZ5_MINIMAL_HUFF_GAIN(comprSize) (comprSize + (comprSize/8) + 512)
#define LZ5_MINIMAL_BLOCK_GAIN(comprSize) (comprSize + (comprSize/32) + 512)
#define LIZ_MINIMAL_HUFF_GAIN(comprSize) (comprSize + (comprSize/8) + 512)
#define LIZ_MINIMAL_BLOCK_GAIN(comprSize) (comprSize + (comprSize/32) + 512)
/*-************************************
* Local Utils
**************************************/
int LZ5_versionNumber (void) { return LZ5_VERSION_NUMBER; }
int LZ5_compressBound(int isize) { return LZ5_COMPRESSBOUND(isize); }
int LZ5_sizeofState_MinLevel() { return LZ5_sizeofState(LZ5_MIN_CLEVEL); }
int LIZ_versionNumber (void) { return LIZ_VERSION_NUMBER; }
int LIZ_compressBound(int isize) { return LIZ_COMPRESSBOUND(isize); }
int LIZ_sizeofState_MinLevel() { return LIZ_sizeofState(LIZ_MIN_CLEVEL); }
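The two *_GAIN macros above are the acceptance thresholds applied later in this file: LIZ_writeStream keeps the Huffman-coded form of a stream only if LIZ_MINIMAL_HUFF_GAIN(comprStreamLen) stays below the raw stream length, and LIZ_writeBlock applies the same idea per block with the /32 variant. A worked check of the Huffman threshold for an assumed 100 000-byte stream, not part of this commit:
/* Worked check of LIZ_MINIMAL_HUFF_GAIN; illustrative only. */
#include <stdio.h>
#include <stdint.h>
#define LIZ_MINIMAL_HUFF_GAIN(comprSize) (comprSize + (comprSize/8) + 512)
int main(void)
{
    uint32_t const streamLen = 100000;   /* assumed uncompressed stream size */
    uint32_t c;
    for (c = 88430; c <= 88436; c++)
        printf("comprStreamLen=%u -> %s\n", (unsigned)c,
               LIZ_MINIMAL_HUFF_GAIN(c) < streamLen ? "keep Huffman" : "store raw");
    /* Prints "keep Huffman" up to 88433 and "store raw" from 88434 on:
       the coded stream must undercut the raw one by roughly one ninth plus ~450 bytes. */
    return 0;
}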
@@ -80,31 +80,31 @@ static const U64 prime7bytes = 58295818150454627ULL;
#if MINMATCH == 3
static const U32 prime3bytes = 506832829U;
static U32 LZ5_hash3(U32 u, U32 h) { return (u * prime3bytes) << (32-24) >> (32-h) ; }
static size_t LZ5_hash3Ptr(const void* ptr, U32 h) { return LZ5_hash3(MEM_read32(ptr), h); }
static U32 LIZ_hash3(U32 u, U32 h) { return (u * prime3bytes) << (32-24) >> (32-h) ; }
static size_t LIZ_hash3Ptr(const void* ptr, U32 h) { return LIZ_hash3(MEM_read32(ptr), h); }
#endif
static U32 LZ5_hash4(U32 u, U32 h) { return (u * prime4bytes) >> (32-h) ; }
static size_t LZ5_hash4Ptr(const void* ptr, U32 h) { return LZ5_hash4(MEM_read32(ptr), h); }
static U32 LIZ_hash4(U32 u, U32 h) { return (u * prime4bytes) >> (32-h) ; }
static size_t LIZ_hash4Ptr(const void* ptr, U32 h) { return LIZ_hash4(MEM_read32(ptr), h); }
static size_t LZ5_hash5(U64 u, U32 h) { return (size_t)((u * prime5bytes) << (64-40) >> (64-h)) ; }
static size_t LZ5_hash5Ptr(const void* p, U32 h) { return LZ5_hash5(MEM_read64(p), h); }
static size_t LIZ_hash5(U64 u, U32 h) { return (size_t)((u * prime5bytes) << (64-40) >> (64-h)) ; }
static size_t LIZ_hash5Ptr(const void* p, U32 h) { return LIZ_hash5(MEM_read64(p), h); }
static size_t LZ5_hash6(U64 u, U32 h) { return (size_t)((u * prime6bytes) << (64-48) >> (64-h)) ; }
static size_t LZ5_hash6Ptr(const void* p, U32 h) { return LZ5_hash6(MEM_read64(p), h); }
static size_t LIZ_hash6(U64 u, U32 h) { return (size_t)((u * prime6bytes) << (64-48) >> (64-h)) ; }
static size_t LIZ_hash6Ptr(const void* p, U32 h) { return LIZ_hash6(MEM_read64(p), h); }
static size_t LZ5_hash7(U64 u, U32 h) { return (size_t)((u * prime7bytes) << (64-56) >> (64-h)) ; }
static size_t LZ5_hash7Ptr(const void* p, U32 h) { return LZ5_hash7(MEM_read64(p), h); }
static size_t LIZ_hash7(U64 u, U32 h) { return (size_t)((u * prime7bytes) << (64-56) >> (64-h)) ; }
static size_t LIZ_hash7Ptr(const void* p, U32 h) { return LIZ_hash7(MEM_read64(p), h); }
static size_t LZ5_hashPtr(const void* p, U32 hBits, U32 mls)
static size_t LIZ_hashPtr(const void* p, U32 hBits, U32 mls)
{
switch(mls)
{
default:
case 4: return LZ5_hash4Ptr(p, hBits);
case 5: return LZ5_hash5Ptr(p, hBits);
case 6: return LZ5_hash6Ptr(p, hBits);
case 7: return LZ5_hash7Ptr(p, hBits);
case 4: return LIZ_hash4Ptr(p, hBits);
case 5: return LIZ_hash5Ptr(p, hBits);
case 6: return LIZ_hash6Ptr(p, hBits);
case 7: return LIZ_hash7Ptr(p, hBits);
}
}
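The hash functions above are the usual LZ4-family multiplicative hashes, renamed but otherwise untouched: multiply by a large odd constant and keep h high-order bits of the product; in the 3-byte variant only the low 24 bits of the input influence the result. A self-contained arithmetic check that mirrors LIZ_hash3 using the prime3bytes constant shown above (the sample input and h value are arbitrary):
/* Mirrors LIZ_hash3() above; illustrative only, not part of this commit. */
#include <stdint.h>
#include <stdio.h>
static uint32_t hash3(uint32_t u, uint32_t h)
{
    return (u * 506832829U) << (32 - 24) >> (32 - h);   /* prime3bytes from the hunk above */
}
int main(void)
{
    uint32_t const u = 0x00636261;   /* bytes "abc" read little-endian; the top byte is ignored */
    uint32_t const h = 15;           /* index into a table of 2^15 slots */
    printf("%u\n", (unsigned)hash3(u, h));   /* always lands in [0, 32767] */
    return 0;
}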
@@ -114,76 +114,76 @@ static size_t LZ5_hashPtr(const void* p, U32 hBits, U32 mls)
/**************************************
* Internal functions
**************************************/
/** LZ5_count_2segments() :
/** LIZ_count_2segments() :
* can count match length with `ip` & `match` in 2 different segments.
* convention : on reaching mEnd, match count continue starting from iStart
*/
static size_t LZ5_count_2segments(const BYTE* ip, const BYTE* match, const BYTE* iEnd, const BYTE* mEnd, const BYTE* iStart)
static size_t LIZ_count_2segments(const BYTE* ip, const BYTE* match, const BYTE* iEnd, const BYTE* mEnd, const BYTE* iStart)
{
const BYTE* const vEnd = MIN( ip + (mEnd - match), iEnd);
size_t const matchLength = LZ5_count(ip, match, vEnd);
size_t const matchLength = LIZ_count(ip, match, vEnd);
if (match + matchLength != mEnd) return matchLength;
return matchLength + LZ5_count(ip+matchLength, iStart, iEnd);
return matchLength + LIZ_count(ip+matchLength, iStart, iEnd);
}
void LZ5_initBlock(LZ5_stream_t* ctx)
void LIZ_initBlock(LIZ_stream_t* ctx)
{
ctx->offset16Ptr = ctx->offset16Base;
ctx->offset24Ptr = ctx->offset24Base;
ctx->lenPtr = ctx->lenBase;
ctx->literalsPtr = ctx->literalsBase;
ctx->flagsPtr = ctx->flagsBase;
ctx->last_off = LZ5_INIT_LAST_OFFSET; /* reset last offset */
ctx->last_off = LIZ_INIT_LAST_OFFSET; /* reset last offset */
}
FORCE_INLINE int LZ5_writeStream(int useHuff, LZ5_stream_t* ctx, BYTE* streamPtr, uint32_t streamLen, BYTE** op, BYTE* oend)
FORCE_INLINE int LIZ_writeStream(int useHuff, LIZ_stream_t* ctx, BYTE* streamPtr, uint32_t streamLen, BYTE** op, BYTE* oend)
{
if (useHuff && streamLen > 1024) {
#ifndef LZ5_NO_HUFFMAN
#ifndef LIZ_NO_HUFFMAN
int useHuffBuf;
if (*op + 6 > oend) { LZ5_LOG_COMPRESS("*op[%p] + 6 > oend[%p]\n", *op, oend); return -1; }
if (*op + 6 > oend) { LIZ_LOG_COMPRESS("*op[%p] + 6 > oend[%p]\n", *op, oend); return -1; }
useHuffBuf = ((size_t)(oend - (*op + 6)) < HUF_compressBound(streamLen));
useHuffBuf = ((size_t)(oend - (*op + 6)) < LIZHUF_compressBound(streamLen));
if (useHuffBuf) {
if (streamLen > LZ5_BLOCK_SIZE) { LZ5_LOG_COMPRESS("streamLen[%d] > LZ5_BLOCK_SIZE\n", streamLen); return -1; }
ctx->comprStreamLen = (U32)HUF_compress(ctx->huffBase, ctx->huffEnd - ctx->huffBase, streamPtr, streamLen);
if (streamLen > LIZ_BLOCK_SIZE) { LIZ_LOG_COMPRESS("streamLen[%d] > LIZ_BLOCK_SIZE\n", streamLen); return -1; }
ctx->comprStreamLen = (U32)LIZHUF_compress(ctx->huffBase, ctx->huffEnd - ctx->huffBase, streamPtr, streamLen);
} else {
ctx->comprStreamLen = (U32)HUF_compress(*op + 6, oend - (*op + 6), streamPtr, streamLen);
ctx->comprStreamLen = (U32)LIZHUF_compress(*op + 6, oend - (*op + 6), streamPtr, streamLen);
}
if (!HUF_isError(ctx->comprStreamLen)) {
if (ctx->comprStreamLen > 0 && (LZ5_MINIMAL_HUFF_GAIN(ctx->comprStreamLen) < streamLen)) { /* compressible */
if (!LIZHUF_isError(ctx->comprStreamLen)) {
if (ctx->comprStreamLen > 0 && (LIZ_MINIMAL_HUFF_GAIN(ctx->comprStreamLen) < streamLen)) { /* compressible */
MEM_writeLE24(*op, streamLen);
MEM_writeLE24(*op+3, ctx->comprStreamLen);
if (useHuffBuf) {
if ((size_t)(oend - (*op + 6)) < ctx->comprStreamLen) { LZ5_LOG_COMPRESS("*op[%p] oend[%p] comprStreamLen[%d]\n", *op, oend, (int)ctx->comprStreamLen); return -1; }
if ((size_t)(oend - (*op + 6)) < ctx->comprStreamLen) { LIZ_LOG_COMPRESS("*op[%p] oend[%p] comprStreamLen[%d]\n", *op, oend, (int)ctx->comprStreamLen); return -1; }
memcpy(*op + 6, ctx->huffBase, ctx->comprStreamLen);
}
*op += ctx->comprStreamLen + 6;
LZ5_LOG_COMPRESS("HUF_compress streamLen=%d comprStreamLen=%d\n", (int)streamLen, (int)ctx->comprStreamLen);
LIZ_LOG_COMPRESS("LIZHUF_compress streamLen=%d comprStreamLen=%d\n", (int)streamLen, (int)ctx->comprStreamLen);
return 1;
} else { LZ5_LOG_COMPRESS("HUF_compress ERROR comprStreamLen=%d streamLen=%d\n", (int)ctx->comprStreamLen, (int)streamLen); }
} else { LZ5_LOG_COMPRESS("HUF_compress ERROR %d: %s\n", (int)ctx->comprStreamLen, HUF_getErrorName(ctx->comprStreamLen)); }
} else { LIZ_LOG_COMPRESS("LIZHUF_compress ERROR comprStreamLen=%d streamLen=%d\n", (int)ctx->comprStreamLen, (int)streamLen); }
} else { LIZ_LOG_COMPRESS("LIZHUF_compress ERROR %d: %s\n", (int)ctx->comprStreamLen, LIZHUF_getErrorName(ctx->comprStreamLen)); }
#else
LZ5_LOG_COMPRESS("compiled with LZ5_NO_HUFFMAN\n");
LIZ_LOG_COMPRESS("compiled with LIZ_NO_HUFFMAN\n");
(void)ctx;
return -1;
#endif
} else ctx->comprStreamLen = 0;
if (*op + 3 + streamLen > oend) { LZ5_LOG_COMPRESS("*op[%p] + 3 + streamLen[%d] > oend[%p]\n", *op, streamLen, oend); return -1; }
if (*op + 3 + streamLen > oend) { LIZ_LOG_COMPRESS("*op[%p] + 3 + streamLen[%d] > oend[%p]\n", *op, streamLen, oend); return -1; }
MEM_writeLE24(*op, streamLen);
*op += 3;
memcpy(*op, streamPtr, streamLen);
*op += streamLen;
LZ5_LOG_COMPRESS("Uncompressed streamLen=%d\n", (int)streamLen);
LIZ_LOG_COMPRESS("Uncompressed streamLen=%d\n", (int)streamLen);
return 0;
}
int LZ5_writeBlock(LZ5_stream_t* ctx, const BYTE* ip, uint32_t inputSize, BYTE** op, BYTE* oend)
int LIZ_writeBlock(LIZ_stream_t* ctx, const BYTE* ip, uint32_t inputSize, BYTE** op, BYTE* oend)
{
int res;
uint32_t flagsLen = (uint32_t)(ctx->flagsPtr - ctx->flagsBase);
@@ -192,7 +192,7 @@ int LZ5_writeBlock(LZ5_stream_t* ctx, const BYTE* ip, uint32_t inputSize, BYTE**
uint32_t offset16Len = (uint32_t)(ctx->offset16Ptr - ctx->offset16Base);
uint32_t offset24Len = (uint32_t)(ctx->offset24Ptr - ctx->offset24Base);
uint32_t sum = flagsLen + literalsLen + lenLen + offset16Len + offset24Len;
#ifdef LZ5_USE_LOGS
#ifdef LIZ_USE_LOGS
uint32_t comprFlagsLen, comprLiteralsLen;
#endif
@@ -203,40 +203,40 @@ int LZ5_writeBlock(LZ5_stream_t* ctx, const BYTE* ip, uint32_t inputSize, BYTE**
*start = 0;
*op += 1;
res = LZ5_writeStream(0, ctx, ctx->lenBase, lenLen, op, oend);
if (res < 0) goto _output_error; else *start += (BYTE)(res*LZ5_FLAG_LEN);
res = LIZ_writeStream(0, ctx, ctx->lenBase, lenLen, op, oend);
if (res < 0) goto _output_error; else *start += (BYTE)(res*LIZ_FLAG_LEN);
res = LZ5_writeStream(ctx->huffType&LZ5_FLAG_OFFSET16, ctx, ctx->offset16Base, offset16Len, op, oend);
if (res < 0) goto _output_error; else *start += (BYTE)(res*LZ5_FLAG_OFFSET16);
res = LIZ_writeStream(ctx->huffType&LIZ_FLAG_OFFSET16, ctx, ctx->offset16Base, offset16Len, op, oend);
if (res < 0) goto _output_error; else *start += (BYTE)(res*LIZ_FLAG_OFFSET16);
res = LZ5_writeStream(ctx->huffType&LZ5_FLAG_OFFSET24, ctx, ctx->offset24Base, offset24Len, op, oend);
if (res < 0) goto _output_error; else *start += (BYTE)(res*LZ5_FLAG_OFFSET24);
res = LIZ_writeStream(ctx->huffType&LIZ_FLAG_OFFSET24, ctx, ctx->offset24Base, offset24Len, op, oend);
if (res < 0) goto _output_error; else *start += (BYTE)(res*LIZ_FLAG_OFFSET24);
res = LZ5_writeStream(ctx->huffType&LZ5_FLAG_FLAGS, ctx, ctx->flagsBase, flagsLen, op, oend);
if (res < 0) goto _output_error; else *start += (BYTE)(res*LZ5_FLAG_FLAGS);
#ifdef LZ5_USE_LOGS
res = LIZ_writeStream(ctx->huffType&LIZ_FLAG_FLAGS, ctx, ctx->flagsBase, flagsLen, op, oend);
if (res < 0) goto _output_error; else *start += (BYTE)(res*LIZ_FLAG_FLAGS);
#ifdef LIZ_USE_LOGS
comprFlagsLen = ctx->comprStreamLen;
#endif
res = LZ5_writeStream(ctx->huffType&LZ5_FLAG_LITERALS, ctx, ctx->literalsBase, literalsLen, op, oend);
if (res < 0) goto _output_error; else *start += (BYTE)(res*LZ5_FLAG_LITERALS);
#ifdef LZ5_USE_LOGS
res = LIZ_writeStream(ctx->huffType&LIZ_FLAG_LITERALS, ctx, ctx->literalsBase, literalsLen, op, oend);
if (res < 0) goto _output_error; else *start += (BYTE)(res*LIZ_FLAG_LITERALS);
#ifdef LIZ_USE_LOGS
comprLiteralsLen = ctx->comprStreamLen;
sum = (int)(*op-start);
#endif
if (LZ5_MINIMAL_BLOCK_GAIN((uint32_t)(*op-start)) > inputSize) goto _write_uncompressed;
if (LIZ_MINIMAL_BLOCK_GAIN((uint32_t)(*op-start)) > inputSize) goto _write_uncompressed;
LZ5_LOG_COMPRESS("%d: total=%d block=%d flagsLen[%.2f%%]=%d comprFlagsLen[%.2f%%]=%d literalsLen[%.2f%%]=%d comprLiteralsLen[%.2f%%]=%d lenLen=%d offset16Len[%.2f%%]=%d offset24Len[%.2f%%]=%d\n", (int)(ip - ctx->srcBase),
LIZ_LOG_COMPRESS("%d: total=%d block=%d flagsLen[%.2f%%]=%d comprFlagsLen[%.2f%%]=%d literalsLen[%.2f%%]=%d comprLiteralsLen[%.2f%%]=%d lenLen=%d offset16Len[%.2f%%]=%d offset24Len[%.2f%%]=%d\n", (int)(ip - ctx->srcBase),
(int)(*op - ctx->destBase), sum, (flagsLen*100.0)/sum, flagsLen, (comprFlagsLen*100.0)/sum, comprFlagsLen, (literalsLen*100.0)/sum, literalsLen, (comprLiteralsLen*100.0)/sum, comprLiteralsLen,
lenLen, (offset16Len*100.0)/sum, offset16Len, (offset24Len*100.0)/sum, offset24Len);
return 0;
_write_uncompressed:
LZ5_LOG_COMPRESS("%d: total=%d block=%d UNCOMPRESSED inputSize=%u outSize=%d\n", (int)(ip - ctx->srcBase),
LIZ_LOG_COMPRESS("%d: total=%d block=%d UNCOMPRESSED inputSize=%u outSize=%d\n", (int)(ip - ctx->srcBase),
(int)(*op - ctx->destBase), (int)(*op-start), inputSize, (int)(oend-start));
if ((uint32_t)(oend - start) < inputSize + 4) goto _output_error;
*start = LZ5_FLAG_UNCOMPRESSED;
*start = LIZ_FLAG_UNCOMPRESSED;
*op = start + 1;
MEM_writeLE24(*op, inputSize);
*op += 3;
@@ -245,42 +245,42 @@ _write_uncompressed:
return 0;
_output_error:
LZ5_LOG_COMPRESS("LZ5_writeBlock ERROR size=%d/%d flagsLen=%d literalsLen=%d lenLen=%d offset16Len=%d offset24Len=%d\n", (int)(*op-start), (int)(oend-start), flagsLen, literalsLen, lenLen, offset16Len, offset24Len);
LIZ_LOG_COMPRESS("LIZ_writeBlock ERROR size=%d/%d flagsLen=%d literalsLen=%d lenLen=%d offset16Len=%d offset24Len=%d\n", (int)(*op-start), (int)(oend-start), flagsLen, literalsLen, lenLen, offset16Len, offset24Len);
return 1;
}
FORCE_INLINE int LZ5_encodeSequence (
LZ5_stream_t* ctx,
FORCE_INLINE int LIZ_encodeSequence (
LIZ_stream_t* ctx,
const BYTE** ip,
const BYTE** anchor,
size_t matchLength,
const BYTE* const match)
{
#ifdef USE_LZ4_ONLY
return LZ5_encodeSequence_LZ4(ctx, ip, anchor, matchLength, match);
return LIZ_encodeSequence_LZ4(ctx, ip, anchor, matchLength, match);
#else
if (ctx->params.decompressType == LZ5_coderwords_LZ4)
return LZ5_encodeSequence_LZ4(ctx, ip, anchor, matchLength, match);
if (ctx->params.decompressType == LIZ_coderwords_LZ4)
return LIZ_encodeSequence_LZ4(ctx, ip, anchor, matchLength, match);
return LZ5_encodeSequence_LZ5v2(ctx, ip, anchor, matchLength, match);
return LIZ_encodeSequence_LZ5v2(ctx, ip, anchor, matchLength, match);
#endif
}
FORCE_INLINE int LZ5_encodeLastLiterals (
LZ5_stream_t* ctx,
FORCE_INLINE int LIZ_encodeLastLiterals (
LIZ_stream_t* ctx,
const BYTE** ip,
const BYTE** anchor)
{
LZ5_LOG_COMPRESS("LZ5_encodeLastLiterals LZ5_coderwords_LZ4=%d\n", ctx->params.decompressType == LZ5_coderwords_LZ4);
LIZ_LOG_COMPRESS("LIZ_encodeLastLiterals LIZ_coderwords_LZ4=%d\n", ctx->params.decompressType == LIZ_coderwords_LZ4);
#ifdef USE_LZ4_ONLY
return LZ5_encodeLastLiterals_LZ4(ctx, ip, anchor);
return LIZ_encodeLastLiterals_LZ4(ctx, ip, anchor);
#else
if (ctx->params.decompressType == LZ5_coderwords_LZ4)
return LZ5_encodeLastLiterals_LZ4(ctx, ip, anchor);
if (ctx->params.decompressType == LIZ_coderwords_LZ4)
return LIZ_encodeLastLiterals_LZ4(ctx, ip, anchor);
return LZ5_encodeLastLiterals_LZ5v2(ctx, ip, anchor);
return LIZ_encodeLastLiterals_LZ5v2(ctx, ip, anchor);
#endif
}
@@ -288,83 +288,83 @@ FORCE_INLINE int LZ5_encodeLastLiterals (
/**************************************
* Include parsers
**************************************/
#include "lz5_parser_hashchain.h"
#include "lz5_parser_nochain.h"
#include "lz5_parser_fast.h"
#include "lz5_parser_fastsmall.h"
#include "lz5_parser_fastbig.h"
#include "liz_parser_hashchain.h"
#include "liz_parser_nochain.h"
#include "liz_parser_fast.h"
#include "liz_parser_fastsmall.h"
#include "liz_parser_fastbig.h"
#ifndef USE_LZ4_ONLY
#include "lz5_parser_optimal.h"
#include "lz5_parser_lowestprice.h"
#include "lz5_parser_pricefast.h"
#include "liz_parser_optimal.h"
#include "liz_parser_lowestprice.h"
#include "liz_parser_pricefast.h"
#endif
int LZ5_verifyCompressionLevel(int compressionLevel)
int LIZ_verifyCompressionLevel(int compressionLevel)
{
(void)LZ5_hashPtr;
(void)LZ5_wildCopy16;
if (compressionLevel > LZ5_MAX_CLEVEL) compressionLevel = LZ5_MAX_CLEVEL;
if (compressionLevel < LZ5_MIN_CLEVEL) compressionLevel = LZ5_DEFAULT_CLEVEL;
(void)LIZ_hashPtr;
(void)LIZ_wildCopy16;
if (compressionLevel > LIZ_MAX_CLEVEL) compressionLevel = LIZ_MAX_CLEVEL;
if (compressionLevel < LIZ_MIN_CLEVEL) compressionLevel = LIZ_DEFAULT_CLEVEL;
return compressionLevel;
}
int LZ5_sizeofState(int compressionLevel)
int LIZ_sizeofState(int compressionLevel)
{
LZ5_parameters params;
LIZ_parameters params;
U32 hashTableSize, chainTableSize;
compressionLevel = LZ5_verifyCompressionLevel(compressionLevel);
params = LZ5_defaultParameters[compressionLevel - LZ5_MIN_CLEVEL];
compressionLevel = LIZ_verifyCompressionLevel(compressionLevel);
params = LIZ_defaultParameters[compressionLevel - LIZ_MIN_CLEVEL];
// hashTableSize = (U32)(sizeof(U32)*(((size_t)1 << params.hashLog3)+((size_t)1 << params.hashLog)));
hashTableSize = (U32)(sizeof(U32)*(((size_t)1 << params.hashLog)));
chainTableSize = (U32)(sizeof(U32)*((size_t)1 << params.contentLog));
return sizeof(LZ5_stream_t) + hashTableSize + chainTableSize + LZ5_COMPRESS_ADD_BUF + (int)LZ5_COMPRESS_ADD_HUF;
return sizeof(LIZ_stream_t) + hashTableSize + chainTableSize + LIZ_COMPRESS_ADD_BUF + (int)LIZ_COMPRESS_ADD_HUF;
}
static void LZ5_init(LZ5_stream_t* ctx, const BYTE* start)
static void LIZ_init(LIZ_stream_t* ctx, const BYTE* start)
{
MEM_INIT((void*)ctx->hashTable, 0, ctx->hashTableSize);
MEM_INIT(ctx->chainTable, 0x01, ctx->chainTableSize);
// printf("memset hashTable=%p hashEnd=%p chainTable=%p chainEnd=%p\n", ctx->hashTable, ((BYTE*)ctx->hashTable) + ctx->hashTableSize, ctx->chainTable, ((BYTE*)ctx->chainTable)+ctx->chainTableSize);
ctx->nextToUpdate = LZ5_DICT_SIZE;
ctx->base = start - LZ5_DICT_SIZE;
ctx->nextToUpdate = LIZ_DICT_SIZE;
ctx->base = start - LIZ_DICT_SIZE;
ctx->end = start;
ctx->dictBase = start - LZ5_DICT_SIZE;
ctx->dictLimit = LZ5_DICT_SIZE;
ctx->lowLimit = LZ5_DICT_SIZE;
ctx->last_off = LZ5_INIT_LAST_OFFSET;
ctx->dictBase = start - LIZ_DICT_SIZE;
ctx->dictLimit = LIZ_DICT_SIZE;
ctx->lowLimit = LIZ_DICT_SIZE;
ctx->last_off = LIZ_INIT_LAST_OFFSET;
ctx->litSum = 0;
}
/* if ctx==NULL memory is allocated and returned as value */
LZ5_stream_t* LZ5_initStream(LZ5_stream_t* ctx, int compressionLevel)
LIZ_stream_t* LIZ_initStream(LIZ_stream_t* ctx, int compressionLevel)
{
LZ5_parameters params;
LIZ_parameters params;
U32 hashTableSize, chainTableSize;
void *tempPtr;
compressionLevel = LZ5_verifyCompressionLevel(compressionLevel);
params = LZ5_defaultParameters[compressionLevel - LZ5_MIN_CLEVEL];
compressionLevel = LIZ_verifyCompressionLevel(compressionLevel);
params = LIZ_defaultParameters[compressionLevel - LIZ_MIN_CLEVEL];
// hashTableSize = (U32)(sizeof(U32)*(((size_t)1 << params.hashLog3)+((size_t)1 << params.hashLog)));
hashTableSize = (U32)(sizeof(U32)*(((size_t)1 << params.hashLog)));
chainTableSize = (U32)(sizeof(U32)*((size_t)1 << params.contentLog));
if (!ctx)
{
ctx = (LZ5_stream_t*)malloc(sizeof(LZ5_stream_t) + hashTableSize + chainTableSize + LZ5_COMPRESS_ADD_BUF + LZ5_COMPRESS_ADD_HUF);
if (!ctx) { printf("ERROR: Cannot allocate %d MB (compressionLevel=%d)\n", (int)(sizeof(LZ5_stream_t) + hashTableSize + chainTableSize)>>20, compressionLevel); return 0; }
LZ5_LOG_COMPRESS("Allocated %d MB (compressionLevel=%d)\n", (int)(sizeof(LZ5_stream_t) + hashTableSize + chainTableSize)>>20, compressionLevel);
ctx->allocatedMemory = sizeof(LZ5_stream_t) + hashTableSize + chainTableSize + LZ5_COMPRESS_ADD_BUF + (U32)LZ5_COMPRESS_ADD_HUF;
// printf("malloc from=%p to=%p hashTable=%p hashEnd=%p chainTable=%p chainEnd=%p\n", ctx, ((BYTE*)ctx)+sizeof(LZ5_stream_t) + hashTableSize + chainTableSize, ctx->hashTable, ((BYTE*)ctx->hashTable) + hashTableSize, ctx->chainTable, ((BYTE*)ctx->chainTable)+chainTableSize);
ctx = (LIZ_stream_t*)malloc(sizeof(LIZ_stream_t) + hashTableSize + chainTableSize + LIZ_COMPRESS_ADD_BUF + LIZ_COMPRESS_ADD_HUF);
if (!ctx) { printf("ERROR: Cannot allocate %d MB (compressionLevel=%d)\n", (int)(sizeof(LIZ_stream_t) + hashTableSize + chainTableSize)>>20, compressionLevel); return 0; }
LIZ_LOG_COMPRESS("Allocated %d MB (compressionLevel=%d)\n", (int)(sizeof(LIZ_stream_t) + hashTableSize + chainTableSize)>>20, compressionLevel);
ctx->allocatedMemory = sizeof(LIZ_stream_t) + hashTableSize + chainTableSize + LIZ_COMPRESS_ADD_BUF + (U32)LIZ_COMPRESS_ADD_HUF;
// printf("malloc from=%p to=%p hashTable=%p hashEnd=%p chainTable=%p chainEnd=%p\n", ctx, ((BYTE*)ctx)+sizeof(LIZ_stream_t) + hashTableSize + chainTableSize, ctx->hashTable, ((BYTE*)ctx->hashTable) + hashTableSize, ctx->chainTable, ((BYTE*)ctx->chainTable)+chainTableSize);
}
tempPtr = ctx;
ctx->hashTable = (U32*)(tempPtr) + sizeof(LZ5_stream_t)/4;
ctx->hashTable = (U32*)(tempPtr) + sizeof(LIZ_stream_t)/4;
ctx->hashTableSize = hashTableSize;
ctx->chainTable = ctx->hashTable + hashTableSize/4;
ctx->chainTableSize = chainTableSize;
@@ -373,44 +373,44 @@ LZ5_stream_t* LZ5_initStream(LZ5_stream_t* ctx, int compressionLevel)
if (compressionLevel < 30)
ctx->huffType = 0;
else
ctx->huffType = LZ5_FLAG_LITERALS + LZ5_FLAG_FLAGS; // + LZ5_FLAG_OFFSET16 + LZ5_FLAG_OFFSET24;
ctx->huffType = LIZ_FLAG_LITERALS + LIZ_FLAG_FLAGS; // + LIZ_FLAG_OFFSET16 + LIZ_FLAG_OFFSET24;
ctx->literalsBase = (BYTE*)ctx->hashTable + ctx->hashTableSize + ctx->chainTableSize;
ctx->flagsBase = ctx->literalsEnd = ctx->literalsBase + LZ5_BLOCK_SIZE_PAD;
ctx->lenBase = ctx->flagsEnd = ctx->flagsBase + LZ5_BLOCK_SIZE_PAD;
ctx->offset16Base = ctx->lenEnd = ctx->lenBase + LZ5_BLOCK_SIZE_PAD;
ctx->offset24Base = ctx->offset16End = ctx->offset16Base + LZ5_BLOCK_SIZE_PAD;
ctx->huffBase = ctx->offset24End = ctx->offset24Base + LZ5_BLOCK_SIZE_PAD;
ctx->huffEnd = ctx->huffBase + LZ5_COMPRESS_ADD_HUF;
ctx->flagsBase = ctx->literalsEnd = ctx->literalsBase + LIZ_BLOCK_SIZE_PAD;
ctx->lenBase = ctx->flagsEnd = ctx->flagsBase + LIZ_BLOCK_SIZE_PAD;
ctx->offset16Base = ctx->lenEnd = ctx->lenBase + LIZ_BLOCK_SIZE_PAD;
ctx->offset24Base = ctx->offset16End = ctx->offset16Base + LIZ_BLOCK_SIZE_PAD;
ctx->huffBase = ctx->offset24End = ctx->offset24Base + LIZ_BLOCK_SIZE_PAD;
ctx->huffEnd = ctx->huffBase + LIZ_COMPRESS_ADD_HUF;
return ctx;
}
LZ5_stream_t* LZ5_createStream(int compressionLevel)
LIZ_stream_t* LIZ_createStream(int compressionLevel)
{
LZ5_stream_t* ctx = LZ5_initStream(NULL, compressionLevel);
// if (ctx) printf("LZ5_createStream ctx=%p ctx->compressionLevel=%d\n", ctx, ctx->compressionLevel);
LIZ_stream_t* ctx = LIZ_initStream(NULL, compressionLevel);
// if (ctx) printf("LIZ_createStream ctx=%p ctx->compressionLevel=%d\n", ctx, ctx->compressionLevel);
return ctx;
}
/* initialization */
LZ5_stream_t* LZ5_resetStream(LZ5_stream_t* ctx, int compressionLevel)
LIZ_stream_t* LIZ_resetStream(LIZ_stream_t* ctx, int compressionLevel)
{
size_t wanted = LZ5_sizeofState(compressionLevel);
size_t wanted = LIZ_sizeofState(compressionLevel);
// printf("LZ5_resetStream ctx=%p cLevel=%d have=%d wanted=%d min=%d\n", ctx, compressionLevel, (int)have, (int)wanted, (int)sizeof(LZ5_stream_t));
// printf("LIZ_resetStream ctx=%p cLevel=%d have=%d wanted=%d min=%d\n", ctx, compressionLevel, (int)have, (int)wanted, (int)sizeof(LIZ_stream_t));
if (ctx->allocatedMemory < wanted)
{
// printf("REALLOC ctx=%p cLevel=%d have=%d wanted=%d\n", ctx, compressionLevel, (int)have, (int)wanted);
LZ5_freeStream(ctx);
ctx = LZ5_createStream(compressionLevel);
LIZ_freeStream(ctx);
ctx = LIZ_createStream(compressionLevel);
}
else
{
LZ5_initStream(ctx, compressionLevel);
LIZ_initStream(ctx, compressionLevel);
}
if (ctx) ctx->base = NULL;
@@ -418,33 +418,33 @@ LZ5_stream_t* LZ5_resetStream(LZ5_stream_t* ctx, int compressionLevel)
}
int LZ5_freeStream(LZ5_stream_t* ctx)
int LIZ_freeStream(LIZ_stream_t* ctx)
{
if (ctx) {
// printf("LZ5_freeStream ctx=%p ctx->compressionLevel=%d\n", ctx, ctx->compressionLevel);
// printf("LIZ_freeStream ctx=%p ctx->compressionLevel=%d\n", ctx, ctx->compressionLevel);
free(ctx);
}
return 0;
}
int LZ5_loadDict(LZ5_stream_t* LZ5_streamPtr, const char* dictionary, int dictSize)
int LIZ_loadDict(LIZ_stream_t* LIZ_streamPtr, const char* dictionary, int dictSize)
{
LZ5_stream_t* ctxPtr = (LZ5_stream_t*) LZ5_streamPtr;
if (dictSize > LZ5_DICT_SIZE) {
dictionary += dictSize - LZ5_DICT_SIZE;
dictSize = LZ5_DICT_SIZE;
LIZ_stream_t* ctxPtr = (LIZ_stream_t*) LIZ_streamPtr;
if (dictSize > LIZ_DICT_SIZE) {
dictionary += dictSize - LIZ_DICT_SIZE;
dictSize = LIZ_DICT_SIZE;
}
LZ5_init (ctxPtr, (const BYTE*)dictionary);
if (dictSize >= HASH_UPDATE_LIMIT) LZ5_Insert (ctxPtr, (const BYTE*)dictionary + (dictSize - (HASH_UPDATE_LIMIT-1)));
LIZ_init (ctxPtr, (const BYTE*)dictionary);
if (dictSize >= HASH_UPDATE_LIMIT) LIZ_Insert (ctxPtr, (const BYTE*)dictionary + (dictSize - (HASH_UPDATE_LIMIT-1)));
ctxPtr->end = (const BYTE*)dictionary + dictSize;
return dictSize;
}
static void LZ5_setExternalDict(LZ5_stream_t* ctxPtr, const BYTE* newBlock)
static void LIZ_setExternalDict(LIZ_stream_t* ctxPtr, const BYTE* newBlock)
{
if (ctxPtr->end >= ctxPtr->base + HASH_UPDATE_LIMIT) LZ5_Insert (ctxPtr, ctxPtr->end - (HASH_UPDATE_LIMIT-1)); /* Referencing remaining dictionary content */
if (ctxPtr->end >= ctxPtr->base + HASH_UPDATE_LIMIT) LIZ_Insert (ctxPtr, ctxPtr->end - (HASH_UPDATE_LIMIT-1)); /* Referencing remaining dictionary content */
/* Only one memory segment for extDict, so any previous extDict is lost at this stage */
ctxPtr->lowLimit = ctxPtr->dictLimit;
ctxPtr->dictLimit = (U32)(ctxPtr->end - ctxPtr->base);
@@ -456,12 +456,12 @@ static void LZ5_setExternalDict(LZ5_stream_t* ctxPtr, const BYTE* newBlock)
/* dictionary saving */
int LZ5_saveDict (LZ5_stream_t* LZ5_streamPtr, char* safeBuffer, int dictSize)
int LIZ_saveDict (LIZ_stream_t* LIZ_streamPtr, char* safeBuffer, int dictSize)
{
LZ5_stream_t* const ctx = (LZ5_stream_t*)LZ5_streamPtr;
LIZ_stream_t* const ctx = (LIZ_stream_t*)LIZ_streamPtr;
int const prefixSize = (int)(ctx->end - (ctx->base + ctx->dictLimit));
//printf("LZ5_saveDict dictSize=%d prefixSize=%d ctx->dictLimit=%d\n", dictSize, prefixSize, (int)ctx->dictLimit);
if (dictSize > LZ5_DICT_SIZE) dictSize = LZ5_DICT_SIZE;
//printf("LIZ_saveDict dictSize=%d prefixSize=%d ctx->dictLimit=%d\n", dictSize, prefixSize, (int)ctx->dictLimit);
if (dictSize > LIZ_DICT_SIZE) dictSize = LIZ_DICT_SIZE;
if (dictSize < 4) dictSize = 0;
if (dictSize > prefixSize) dictSize = prefixSize;
memmove(safeBuffer, ctx->end - dictSize, dictSize);
@@ -472,18 +472,18 @@ int LZ5_saveDict (LZ5_stream_t* LZ5_streamPtr, char* safeBuffer, int dictSize)
ctx->lowLimit = endIndex - dictSize;
if (ctx->nextToUpdate < ctx->dictLimit) ctx->nextToUpdate = ctx->dictLimit;
}
//printf("2LZ5_saveDict dictSize=%d\n", dictSize);
//printf("2LIZ_saveDict dictSize=%d\n", dictSize);
return dictSize;
}
FORCE_INLINE int LZ5_compress_generic (
FORCE_INLINE int LIZ_compress_generic (
void* ctxvoid,
const char* source,
char* dest,
int inputSize,
int maxOutputSize)
{
LZ5_stream_t* ctx = (LZ5_stream_t*) ctxvoid;
LIZ_stream_t* ctx = (LIZ_stream_t*) ctxvoid;
size_t dictSize = (size_t)(ctx->end - ctx->base) - ctx->dictLimit;
const BYTE* ip = (const BYTE*) source;
BYTE* op = (BYTE*) dest;
@@ -491,7 +491,7 @@ FORCE_INLINE int LZ5_compress_generic (
int res;
(void)dictSize;
LZ5_LOG_COMPRESS("LZ5_compress_generic source=%p inputSize=%d dest=%p maxOutputSize=%d cLevel=%d dictBase=%p dictSize=%d\n", source, inputSize, dest, maxOutputSize, ctx->compressionLevel, ctx->dictBase, (int)dictSize);
LIZ_LOG_COMPRESS("LIZ_compress_generic source=%p inputSize=%d dest=%p maxOutputSize=%d cLevel=%d dictBase=%p dictSize=%d\n", source, inputSize, dest, maxOutputSize, ctx->compressionLevel, ctx->dictBase, (int)dictSize);
*op++ = (BYTE)ctx->compressionLevel;
maxOutputSize--; // can be lower than 0
ctx->end += inputSize;
@@ -500,77 +500,77 @@ FORCE_INLINE int LZ5_compress_generic (
while (inputSize > 0)
{
int inputPart = MIN(LZ5_BLOCK_SIZE, inputSize);
int inputPart = MIN(LIZ_BLOCK_SIZE, inputSize);
if (ctx->huffType) LZ5_rescaleFreqs(ctx);
LZ5_initBlock(ctx);
if (ctx->huffType) LIZ_rescaleFreqs(ctx);
LIZ_initBlock(ctx);
ctx->diffBase = ip;
switch(ctx->params.parserType)
{
default:
case LZ5_parser_fastSmall:
res = LZ5_compress_fastSmall(ctx, ip, ip+inputPart); break;
case LZ5_parser_fast:
res = LZ5_compress_fast(ctx, ip, ip+inputPart); break;
case LZ5_parser_noChain:
res = LZ5_compress_noChain(ctx, ip, ip+inputPart); break;
case LZ5_parser_hashChain:
res = LZ5_compress_hashChain(ctx, ip, ip+inputPart); break;
case LIZ_parser_fastSmall:
res = LIZ_compress_fastSmall(ctx, ip, ip+inputPart); break;
case LIZ_parser_fast:
res = LIZ_compress_fast(ctx, ip, ip+inputPart); break;
case LIZ_parser_noChain:
res = LIZ_compress_noChain(ctx, ip, ip+inputPart); break;
case LIZ_parser_hashChain:
res = LIZ_compress_hashChain(ctx, ip, ip+inputPart); break;
#ifndef USE_LZ4_ONLY
case LZ5_parser_fastBig:
res = LZ5_compress_fastBig(ctx, ip, ip+inputPart); break;
case LZ5_parser_priceFast:
res = LZ5_compress_priceFast(ctx, ip, ip+inputPart); break;
case LZ5_parser_lowestPrice:
res = LZ5_compress_lowestPrice(ctx, ip, ip+inputPart); break;
case LZ5_parser_optimalPrice:
case LZ5_parser_optimalPriceBT:
res = LZ5_compress_optimalPrice(ctx, ip, ip+inputPart); break;
case LIZ_parser_fastBig:
res = LIZ_compress_fastBig(ctx, ip, ip+inputPart); break;
case LIZ_parser_priceFast:
res = LIZ_compress_priceFast(ctx, ip, ip+inputPart); break;
case LIZ_parser_lowestPrice:
res = LIZ_compress_lowestPrice(ctx, ip, ip+inputPart); break;
case LIZ_parser_optimalPrice:
case LIZ_parser_optimalPriceBT:
res = LIZ_compress_optimalPrice(ctx, ip, ip+inputPart); break;
#else
case LZ5_parser_priceFast:
case LZ5_parser_lowestPrice:
case LZ5_parser_optimalPrice:
case LZ5_parser_optimalPriceBT:
case LIZ_parser_priceFast:
case LIZ_parser_lowestPrice:
case LIZ_parser_optimalPrice:
case LIZ_parser_optimalPriceBT:
res = 0;
#endif
}
LZ5_LOG_COMPRESS("LZ5_compress_generic res=%d inputPart=%d \n", res, inputPart);
LIZ_LOG_COMPRESS("LIZ_compress_generic res=%d inputPart=%d \n", res, inputPart);
if (res <= 0) return res;
if (LZ5_writeBlock(ctx, ip, inputPart, &op, oend)) goto _output_error;
if (LIZ_writeBlock(ctx, ip, inputPart, &op, oend)) goto _output_error;
ip += inputPart;
inputSize -= inputPart;
LZ5_LOG_COMPRESS("LZ5_compress_generic in=%d out=%d\n", (int)(ip-(const BYTE*)source), (int)(op-(BYTE*)dest));
LIZ_LOG_COMPRESS("LIZ_compress_generic in=%d out=%d\n", (int)(ip-(const BYTE*)source), (int)(op-(BYTE*)dest));
}
LZ5_LOG_COMPRESS("LZ5_compress_generic total=%d\n", (int)(op-(BYTE*)dest));
LIZ_LOG_COMPRESS("LIZ_compress_generic total=%d\n", (int)(op-(BYTE*)dest));
return (int)(op-(BYTE*)dest);
_output_error:
LZ5_LOG_COMPRESS("LZ5_compress_generic ERROR\n");
LIZ_LOG_COMPRESS("LIZ_compress_generic ERROR\n");
return 0;
}
int LZ5_compress_continue (LZ5_stream_t* ctxPtr,
int LIZ_compress_continue (LIZ_stream_t* ctxPtr,
const char* source, char* dest,
int inputSize, int maxOutputSize)
{
/* auto-init if forgotten */
if (ctxPtr->base == NULL) LZ5_init (ctxPtr, (const BYTE*) source);
if (ctxPtr->base == NULL) LIZ_init (ctxPtr, (const BYTE*) source);
/* Check overflow */
if ((size_t)(ctxPtr->end - ctxPtr->base) > 2 GB) {
size_t dictSize = (size_t)(ctxPtr->end - ctxPtr->base) - ctxPtr->dictLimit;
if (dictSize > LZ5_DICT_SIZE) dictSize = LZ5_DICT_SIZE;
LZ5_loadDict((LZ5_stream_t*)ctxPtr, (const char*)(ctxPtr->end) - dictSize, (int)dictSize);
if (dictSize > LIZ_DICT_SIZE) dictSize = LIZ_DICT_SIZE;
LIZ_loadDict((LIZ_stream_t*)ctxPtr, (const char*)(ctxPtr->end) - dictSize, (int)dictSize);
}
/* Check if blocks follow each other */
if ((const BYTE*)source != ctxPtr->end)
LZ5_setExternalDict(ctxPtr, (const BYTE*)source);
LIZ_setExternalDict(ctxPtr, (const BYTE*)source);
/* Check overlapping input/dictionary space */
{ const BYTE* sourceEnd = (const BYTE*) source + inputSize;
@@ -583,32 +583,32 @@ int LZ5_compress_continue (LZ5_stream_t* ctxPtr,
}
}
return LZ5_compress_generic (ctxPtr, source, dest, inputSize, maxOutputSize);
return LIZ_compress_generic (ctxPtr, source, dest, inputSize, maxOutputSize);
}
int LZ5_compress_extState (void* state, const char* src, char* dst, int srcSize, int maxDstSize, int compressionLevel)
int LIZ_compress_extState (void* state, const char* src, char* dst, int srcSize, int maxDstSize, int compressionLevel)
{
LZ5_stream_t* ctx = (LZ5_stream_t*) state;
LIZ_stream_t* ctx = (LIZ_stream_t*) state;
if (((size_t)(state)&(sizeof(void*)-1)) != 0) return 0; /* Error : state is not aligned for pointers (32 or 64 bits) */
/* initialize stream */
LZ5_initStream(ctx, compressionLevel);
LZ5_init ((LZ5_stream_t*)state, (const BYTE*)src);
LIZ_initStream(ctx, compressionLevel);
LIZ_init ((LIZ_stream_t*)state, (const BYTE*)src);
return LZ5_compress_generic (state, src, dst, srcSize, maxDstSize);
return LIZ_compress_generic (state, src, dst, srcSize, maxDstSize);
}
int LZ5_compress(const char* src, char* dst, int srcSize, int maxDstSize, int compressionLevel)
int LIZ_compress(const char* src, char* dst, int srcSize, int maxDstSize, int compressionLevel)
{
int cSize;
LZ5_stream_t* statePtr = LZ5_createStream(compressionLevel);
LIZ_stream_t* statePtr = LIZ_createStream(compressionLevel);
if (!statePtr) return 0;
cSize = LZ5_compress_extState(statePtr, src, dst, srcSize, maxDstSize, compressionLevel);
cSize = LIZ_compress_extState(statePtr, src, dst, srcSize, maxDstSize, compressionLevel);
LZ5_freeStream(statePtr);
LIZ_freeStream(statePtr);
return cSize;
}
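A minimal one-shot round trip through the renamed entry points looks as follows. This is only a sketch: it assumes the liz_compress.h / liz_decompress.h header names used later in this commit, and the output-buffer margin is a guess, since no compress-bound helper is visible in these hunks (LIZ_decompress_safe() is defined in the decompression file below).

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "liz_compress.h"     /* LIZ_compress(), LIZ_MIN_CLEVEL */
#include "liz_decompress.h"   /* LIZ_decompress_safe() */

/* Sketch: compress a small buffer and decompress it again.
 * The dstCapacity margin is an assumption, not a documented bound. */
static int liz_roundtrip(void)
{
    const char src[] = "Lizard one-shot round trip, Lizard one-shot round trip.";
    int srcSize = (int)sizeof(src);
    int dstCapacity = srcSize + srcSize / 2 + 64;   /* assumed worst-case margin */
    char* dst = (char*)malloc((size_t)dstCapacity);
    char* out = (char*)malloc((size_t)srcSize);
    int cSize, dSize, ok = 0;

    if (dst && out) {
        cSize = LIZ_compress(src, dst, srcSize, dstCapacity, LIZ_MIN_CLEVEL);  /* 0 = failure */
        if (cSize > 0) {
            dSize = LIZ_decompress_safe(dst, out, cSize, srcSize);             /* <0 = failure */
            ok = (dSize == srcSize) && (memcmp(src, out, (size_t)srcSize) == 0);
            printf("in=%d compressed=%d ok=%d\n", srcSize, cSize, ok);
        }
    }
    free(dst); free(out);
    return ok ? 0 : 1;
}

Note that the first output byte stores the compression level (written in LIZ_compress_generic() above and read back at the start of LIZ_decompress_generic() below), which is how the decompressor selects the matching parameter set without any extra framing.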
@@ -616,22 +616,22 @@ int LZ5_compress(const char* src, char* dst, int srcSize, int maxDstSize, int co
/**************************************
* Level1 functions
**************************************/
int LZ5_compress_extState_MinLevel(void* state, const char* source, char* dest, int inputSize, int maxOutputSize)
int LIZ_compress_extState_MinLevel(void* state, const char* source, char* dest, int inputSize, int maxOutputSize)
{
return LZ5_compress_extState(state, source, dest, inputSize, maxOutputSize, LZ5_MIN_CLEVEL);
return LIZ_compress_extState(state, source, dest, inputSize, maxOutputSize, LIZ_MIN_CLEVEL);
}
int LZ5_compress_MinLevel(const char* source, char* dest, int inputSize, int maxOutputSize)
int LIZ_compress_MinLevel(const char* source, char* dest, int inputSize, int maxOutputSize)
{
return LZ5_compress(source, dest, inputSize, maxOutputSize, LZ5_MIN_CLEVEL);
return LIZ_compress(source, dest, inputSize, maxOutputSize, LIZ_MIN_CLEVEL);
}
LZ5_stream_t* LZ5_createStream_MinLevel(void)
LIZ_stream_t* LIZ_createStream_MinLevel(void)
{
return LZ5_createStream(LZ5_MIN_CLEVEL);
return LIZ_createStream(LIZ_MIN_CLEVEL);
}
LZ5_stream_t* LZ5_resetStream_MinLevel(LZ5_stream_t* LZ5_stream)
LIZ_stream_t* LIZ_resetStream_MinLevel(LIZ_stream_t* LIZ_stream)
{
return LZ5_resetStream (LZ5_stream, LZ5_MIN_CLEVEL);
return LIZ_resetStream (LIZ_stream, LIZ_MIN_CLEVEL);
}
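The streaming compressor combines in the usual pattern: create a stream at a given level, feed consecutive chunks with LIZ_compress_continue(), and free it when done. A sketch, with chunk framing and error reporting kept minimal and the same header-name assumption as above:

#include "liz_compress.h"

/* Sketch: compress one large in-memory buffer in fixed-size chunks so that
 * later chunks can reference matches in earlier ones.  The chunks stay in
 * place, which is the simple case LIZ_compress_continue() handles directly;
 * if the input buffer were reused between calls, LIZ_saveDict()/LIZ_loadDict()
 * would be needed to carry up to LIZ_DICT_SIZE bytes of history instead. */
static int compress_chunked(const char* src, int srcSize,
                            char* dst, int dstCapacity,
                            int chunkSize, int compressionLevel)
{
    LIZ_stream_t* s = LIZ_createStream(compressionLevel);
    int inPos = 0, outPos = 0;

    if (!s) return -1;
    while (inPos < srcSize) {
        int inPart = (srcSize - inPos < chunkSize) ? (srcSize - inPos) : chunkSize;
        int cSize = LIZ_compress_continue(s, src + inPos, dst + outPos,
                                          inPart, dstCapacity - outPos);
        if (cSize <= 0) { LIZ_freeStream(s); return -1; }
        /* a real container would also record cSize per chunk here */
        inPos += inPart;
        outPos += cSize;
    }
    LIZ_freeStream(s);
    return outPos;
}

Because the context keeps ctx->base/ctx->end across calls, later chunks can reference matches in earlier ones (up to the window), which is the point of the _continue variant over repeated one-shot calls.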
View File
@@ -36,13 +36,13 @@
/**************************************
* Includes
**************************************/
//#define LZ5_STATS 1 // 0=simple stats, 1=more, 2=full
#ifdef LZ5_STATS
//#define LIZ_STATS 1 // 0=simple stats, 1=more, 2=full
#ifdef LIZ_STATS
#include "test/lz5_stats.h"
#endif
#include "lz5_compress.h"
#include "lz5_decompress.h"
#include "lz5_common.h"
#include "liz_compress.h"
#include "liz_decompress.h"
#include "liz_common.h"
#include <stdio.h> // printf
#include <stdint.h> // intptr_t
@@ -53,23 +53,23 @@
typedef enum { noDict = 0, withPrefix64k, usingExtDict } dict_directive;
typedef enum { full = 0, partial = 1 } earlyEnd_directive;
#include "lz5_decompress_lz4.h"
#include "liz_decompress_lz4.h"
#ifndef USE_LZ4_ONLY
#ifdef LZ5_USE_TEST
#ifdef LIZ_USE_TEST
#include "test/lz5_common_test.h"
#include "test/lz5_decompress_test.h"
#else
#include "lz5_decompress_lz5v2.h"
#include "liz_decompress_lz5v2.h"
#endif
#endif
#include "entropy/huf.h"
#include "huf.h"
/*-*****************************
* Decompression functions
*******************************/
FORCE_INLINE size_t LZ5_readStream(int flag, const BYTE** ip, const BYTE* const iend, BYTE* op, BYTE* const oend, const BYTE** streamPtr, const BYTE** streamEnd, int streamFlag)
FORCE_INLINE size_t LIZ_readStream(int flag, const BYTE** ip, const BYTE* const iend, BYTE* op, BYTE* const oend, const BYTE** streamPtr, const BYTE** streamEnd, int streamFlag)
{
if (!flag) {
if (*ip > iend - 3) return 0;
@@ -77,36 +77,36 @@ FORCE_INLINE size_t LZ5_readStream(int flag, const BYTE** ip, const BYTE* const
*streamEnd = *streamPtr + MEM_readLE24(*ip);
if (*streamEnd < *streamPtr) return 0;
*ip = *streamEnd;
#ifdef LZ5_STATS
#ifdef LIZ_STATS
uncompr_stream[streamFlag] += *streamEnd-*streamPtr;
#else
(void)streamFlag;
#endif
return 1;
} else {
#ifndef LZ5_NO_HUFFMAN
#ifndef LIZ_NO_HUFFMAN
size_t res, streamLen, comprStreamLen;
if (*ip > iend - 6) return 0;
streamLen = MEM_readLE24(*ip);
comprStreamLen = MEM_readLE24(*ip + 3);
// printf("LZ5_readStream ip=%p iout=%p iend=%p streamLen=%d comprStreamLen=%d\n", *ip, *ip + 6 + comprStreamLen, iend, (int)streamLen, (int)comprStreamLen);
// printf("LIZ_readStream ip=%p iout=%p iend=%p streamLen=%d comprStreamLen=%d\n", *ip, *ip + 6 + comprStreamLen, iend, (int)streamLen, (int)comprStreamLen);
if ((op > oend - streamLen) || (*ip + comprStreamLen > iend - 6)) return 0;
res = HUF_decompress(op, streamLen, *ip + 6, comprStreamLen);
if (HUF_isError(res) || (res != streamLen)) return 0;
res = LIZHUF_decompress(op, streamLen, *ip + 6, comprStreamLen);
if (LIZHUF_isError(res) || (res != streamLen)) return 0;
*ip += comprStreamLen + 6;
*streamPtr = op;
*streamEnd = *streamPtr + streamLen;
#ifdef LZ5_STATS
#ifdef LIZ_STATS
compr_stream[streamFlag] += comprStreamLen + 6;
decompr_stream[streamFlag] += *streamEnd-*streamPtr;
#endif
return 1;
#else
fprintf(stderr, "compiled with LZ5_NO_HUFFMAN\n");
fprintf(stderr, "compiled with LIZ_NO_HUFFMAN\n");
(void)op; (void)oend;
return 0;
#endif
@@ -114,7 +114,7 @@ FORCE_INLINE size_t LZ5_readStream(int flag, const BYTE** ip, const BYTE* const
}
FORCE_INLINE int LZ5_decompress_generic(
FORCE_INLINE int LIZ_decompress_generic(
const char* source,
char* const dest,
int inputSize,
@@ -133,29 +133,29 @@ FORCE_INLINE int LZ5_decompress_generic(
BYTE* op = (BYTE*) dest;
BYTE* const oend = op + outputSize;
BYTE* oexit = op + targetOutputSize;
LZ5_parameters params;
LZ5_dstream_t ctx;
LIZ_parameters params;
LIZ_dstream_t ctx;
BYTE* decompFlagsBase, *decompOff24Base, *decompOff16Base, *decompLiteralsBase = NULL;
int res, compressionLevel;
if (inputSize < 1) { LZ5_LOG_DECOMPRESS("inputSize=%d outputSize=%d targetOutputSize=%d partialDecoding=%d\n", inputSize, outputSize, targetOutputSize, partialDecoding); return 0; }
if (inputSize < 1) { LIZ_LOG_DECOMPRESS("inputSize=%d outputSize=%d targetOutputSize=%d partialDecoding=%d\n", inputSize, outputSize, targetOutputSize, partialDecoding); return 0; }
compressionLevel = *ip++;
if (compressionLevel < LZ5_MIN_CLEVEL || compressionLevel > LZ5_MAX_CLEVEL) {
LZ5_LOG_DECOMPRESS("ERROR LZ5_decompress_generic inputSize=%d compressionLevel=%d\n", inputSize, compressionLevel);
if (compressionLevel < LIZ_MIN_CLEVEL || compressionLevel > LIZ_MAX_CLEVEL) {
LIZ_LOG_DECOMPRESS("ERROR LIZ_decompress_generic inputSize=%d compressionLevel=%d\n", inputSize, compressionLevel);
return -1;
}
LZ5_LOG_DECOMPRESS("LZ5_decompress_generic ip=%p inputSize=%d targetOutputSize=%d dest=%p outputSize=%d cLevel=%d dict=%d dictSize=%d dictStart=%p partialDecoding=%d\n", ip, inputSize, targetOutputSize, dest, outputSize, compressionLevel, dict, (int)dictSize, dictStart, partialDecoding);
LIZ_LOG_DECOMPRESS("LIZ_decompress_generic ip=%p inputSize=%d targetOutputSize=%d dest=%p outputSize=%d cLevel=%d dict=%d dictSize=%d dictStart=%p partialDecoding=%d\n", ip, inputSize, targetOutputSize, dest, outputSize, compressionLevel, dict, (int)dictSize, dictStart, partialDecoding);
decompLiteralsBase = (BYTE*)malloc(4*LZ5_HUF_BLOCK_SIZE);
decompLiteralsBase = (BYTE*)malloc(4*LIZ_LIZHUF_BLOCK_SIZE);
if (!decompLiteralsBase) return -1;
decompFlagsBase = decompLiteralsBase + LZ5_HUF_BLOCK_SIZE;
decompOff24Base = decompFlagsBase + LZ5_HUF_BLOCK_SIZE;
decompOff16Base = decompOff24Base + LZ5_HUF_BLOCK_SIZE;
decompFlagsBase = decompLiteralsBase + LIZ_LIZHUF_BLOCK_SIZE;
decompOff24Base = decompFlagsBase + LIZ_LIZHUF_BLOCK_SIZE;
decompOff16Base = decompOff24Base + LIZ_LIZHUF_BLOCK_SIZE;
#ifdef LZ5_STATS
#ifdef LIZ_STATS
init_stats();
#endif
(void)istart;
@@ -163,63 +163,63 @@ FORCE_INLINE int LZ5_decompress_generic(
while (ip < iend)
{
res = *ip++;
if (res == LZ5_FLAG_UNCOMPRESSED) /* uncompressed */
if (res == LIZ_FLAG_UNCOMPRESSED) /* uncompressed */
{
uint32_t length;
if (ip > iend - 3) { LZ5_LOG_DECOMPRESS("UNCOMPRESSED ip[%p] > iend[%p] - 3\n", ip, iend); goto _output_error; }
if (ip > iend - 3) { LIZ_LOG_DECOMPRESS("UNCOMPRESSED ip[%p] > iend[%p] - 3\n", ip, iend); goto _output_error; }
length = MEM_readLE24(ip);
ip += 3;
// printf("%d: total=%d block=%d UNCOMPRESSED op=%p oexit=%p oend=%p\n", (int)(op-(BYTE*)dest) ,(int)(ip-istart), length, op, oexit, oend);
if (ip + length > iend || op + length > oend) { LZ5_LOG_DECOMPRESS("UNCOMPRESSED ip[%p]+length[%d] > iend[%p]\n", ip, length, iend); goto _output_error; }
if (ip + length > iend || op + length > oend) { LIZ_LOG_DECOMPRESS("UNCOMPRESSED ip[%p]+length[%d] > iend[%p]\n", ip, length, iend); goto _output_error; }
memcpy(op, ip, length);
op += length;
ip += length;
if ((partialDecoding) && (op >= oexit)) break;
#ifdef LZ5_STATS
uncompr_stream[LZ5_STREAM_UNCOMPRESSED] += length;
#ifdef LIZ_STATS
uncompr_stream[LIZ_STREAM_UNCOMPRESSED] += length;
#endif
continue;
}
if (res&LZ5_FLAG_LEN) {
LZ5_LOG_DECOMPRESS("res=%d\n", res); goto _output_error;
if (res&LIZ_FLAG_LEN) {
LIZ_LOG_DECOMPRESS("res=%d\n", res); goto _output_error;
}
if (ip > iend - 5*3) goto _output_error;
ctx.lenPtr = (const BYTE*)ip + 3;
ctx.lenEnd = ctx.lenPtr + MEM_readLE24(ip);
if (ctx.lenEnd < ctx.lenPtr || (ctx.lenEnd > iend - 3)) goto _output_error;
#ifdef LZ5_STATS
uncompr_stream[LZ5_STREAM_LEN] += ctx.lenEnd-ctx.lenPtr + 3;
#ifdef LIZ_STATS
uncompr_stream[LIZ_STREAM_LEN] += ctx.lenEnd-ctx.lenPtr + 3;
#endif
ip = ctx.lenEnd;
{ size_t streamLen;
#ifdef LZ5_USE_LOGS
#ifdef LIZ_USE_LOGS
const BYTE* ipos;
size_t comprFlagsLen, comprLiteralsLen, total;
#endif
streamLen = LZ5_readStream(res&LZ5_FLAG_OFFSET16, &ip, iend, decompOff16Base, decompOff16Base + LZ5_HUF_BLOCK_SIZE, &ctx.offset16Ptr, &ctx.offset16End, LZ5_STREAM_OFFSET16);
streamLen = LIZ_readStream(res&LIZ_FLAG_OFFSET16, &ip, iend, decompOff16Base, decompOff16Base + LIZ_LIZHUF_BLOCK_SIZE, &ctx.offset16Ptr, &ctx.offset16End, LIZ_STREAM_OFFSET16);
if (streamLen == 0) goto _output_error;
streamLen = LZ5_readStream(res&LZ5_FLAG_OFFSET24, &ip, iend, decompOff24Base, decompOff24Base + LZ5_HUF_BLOCK_SIZE, &ctx.offset24Ptr, &ctx.offset24End, LZ5_STREAM_OFFSET24);
streamLen = LIZ_readStream(res&LIZ_FLAG_OFFSET24, &ip, iend, decompOff24Base, decompOff24Base + LIZ_LIZHUF_BLOCK_SIZE, &ctx.offset24Ptr, &ctx.offset24End, LIZ_STREAM_OFFSET24);
if (streamLen == 0) goto _output_error;
#ifdef LZ5_USE_LOGS
#ifdef LIZ_USE_LOGS
ipos = ip;
streamLen = LZ5_readStream(res&LZ5_FLAG_FLAGS, &ip, iend, decompFlagsBase, decompFlagsBase + LZ5_HUF_BLOCK_SIZE, &ctx.flagsPtr, &ctx.flagsEnd, LZ5_STREAM_FLAGS);
streamLen = LIZ_readStream(res&LIZ_FLAG_FLAGS, &ip, iend, decompFlagsBase, decompFlagsBase + LIZ_LIZHUF_BLOCK_SIZE, &ctx.flagsPtr, &ctx.flagsEnd, LIZ_STREAM_FLAGS);
if (streamLen == 0) goto _output_error;
streamLen = (size_t)(ctx.flagsEnd-ctx.flagsPtr);
comprFlagsLen = ((size_t)(ip - ipos) + 3 >= streamLen) ? 0 : (size_t)(ip - ipos);
ipos = ip;
#else
streamLen = LZ5_readStream(res&LZ5_FLAG_FLAGS, &ip, iend, decompFlagsBase, decompFlagsBase + LZ5_HUF_BLOCK_SIZE, &ctx.flagsPtr, &ctx.flagsEnd, LZ5_STREAM_FLAGS);
streamLen = LIZ_readStream(res&LIZ_FLAG_FLAGS, &ip, iend, decompFlagsBase, decompFlagsBase + LIZ_LIZHUF_BLOCK_SIZE, &ctx.flagsPtr, &ctx.flagsEnd, LIZ_STREAM_FLAGS);
if (streamLen == 0) goto _output_error;
#endif
streamLen = LZ5_readStream(res&LZ5_FLAG_LITERALS, &ip, iend, decompLiteralsBase, decompLiteralsBase + LZ5_HUF_BLOCK_SIZE, &ctx.literalsPtr, &ctx.literalsEnd, LZ5_STREAM_LITERALS);
streamLen = LIZ_readStream(res&LIZ_FLAG_LITERALS, &ip, iend, decompLiteralsBase, decompLiteralsBase + LIZ_LIZHUF_BLOCK_SIZE, &ctx.literalsPtr, &ctx.literalsEnd, LIZ_STREAM_LITERALS);
if (streamLen == 0) goto _output_error;
#ifdef LZ5_USE_LOGS
#ifdef LIZ_USE_LOGS
streamLen = (size_t)(ctx.literalsEnd-ctx.literalsPtr);
comprLiteralsLen = ((size_t)(ip - ipos) + 3 >= streamLen) ? 0 : (size_t)(ip - ipos);
total = (size_t)(ip-(ctx.lenEnd-1));
@@ -227,22 +227,22 @@ FORCE_INLINE int LZ5_decompress_generic(
if (ip > iend) goto _output_error;
LZ5_LOG_DECOMPRESS("%d: total=%d block=%d flagsLen=%d(HUF=%d) literalsLen=%d(HUF=%d) offset16Len=%d offset24Len=%d lengthsLen=%d \n", (int)(op-(BYTE*)dest) ,(int)(ip-istart), (int)total,
LIZ_LOG_DECOMPRESS("%d: total=%d block=%d flagsLen=%d(HUF=%d) literalsLen=%d(HUF=%d) offset16Len=%d offset24Len=%d lengthsLen=%d \n", (int)(op-(BYTE*)dest) ,(int)(ip-istart), (int)total,
(int)(ctx.flagsEnd-ctx.flagsPtr), (int)comprFlagsLen, (int)(ctx.literalsEnd-ctx.literalsPtr), (int)comprLiteralsLen,
(int)(ctx.offset16End-ctx.offset16Ptr), (int)(ctx.offset24End-ctx.offset24Ptr), (int)(ctx.lenEnd-ctx.lenPtr));
}
ctx.last_off = -LZ5_INIT_LAST_OFFSET;
params = LZ5_defaultParameters[compressionLevel - LZ5_MIN_CLEVEL];
if (params.decompressType == LZ5_coderwords_LZ4)
res = LZ5_decompress_LZ4(&ctx, op, outputSize, partialDecoding, targetOutputSize, dict, lowPrefix, dictStart, dictSize, compressionLevel);
ctx.last_off = -LIZ_INIT_LAST_OFFSET;
params = LIZ_defaultParameters[compressionLevel - LIZ_MIN_CLEVEL];
if (params.decompressType == LIZ_coderwords_LZ4)
res = LIZ_decompress_LZ4(&ctx, op, outputSize, partialDecoding, targetOutputSize, dict, lowPrefix, dictStart, dictSize, compressionLevel);
else
#ifdef USE_LZ4_ONLY
res = LZ5_decompress_LZ4(&ctx, op, outputSize, partialDecoding, targetOutputSize, dict, lowPrefix, dictStart, dictSize, compressionLevel);
res = LIZ_decompress_LZ4(&ctx, op, outputSize, partialDecoding, targetOutputSize, dict, lowPrefix, dictStart, dictSize, compressionLevel);
#else
res = LZ5_decompress_LZ5v2(&ctx, op, outputSize, partialDecoding, targetOutputSize, dict, lowPrefix, dictStart, dictSize, compressionLevel);
res = LIZ_decompress_LZ5v2(&ctx, op, outputSize, partialDecoding, targetOutputSize, dict, lowPrefix, dictStart, dictSize, compressionLevel);
#endif
LZ5_LOG_DECOMPRESS("LZ5_decompress_generic res=%d inputSize=%d\n", res, (int)(ctx.literalsEnd-ctx.lenEnd));
LIZ_LOG_DECOMPRESS("LIZ_decompress_generic res=%d inputSize=%d\n", res, (int)(ctx.literalsEnd-ctx.lenEnd));
if (res <= 0) { free(decompLiteralsBase); return res; }
@@ -251,29 +251,29 @@ FORCE_INLINE int LZ5_decompress_generic(
if ((partialDecoding) && (op >= oexit)) break;
}
#ifdef LZ5_STATS
#ifdef LIZ_STATS
print_stats();
#endif
LZ5_LOG_DECOMPRESS("LZ5_decompress_generic total=%d\n", (int)(op-(BYTE*)dest));
LIZ_LOG_DECOMPRESS("LIZ_decompress_generic total=%d\n", (int)(op-(BYTE*)dest));
free(decompLiteralsBase);
return (int)(op-(BYTE*)dest);
_output_error:
LZ5_LOG_DECOMPRESS("LZ5_decompress_generic ERROR\n");
LIZ_LOG_DECOMPRESS("LIZ_decompress_generic ERROR\n");
free(decompLiteralsBase);
return -1;
}
int LZ5_decompress_safe(const char* source, char* dest, int compressedSize, int maxDecompressedSize)
int LIZ_decompress_safe(const char* source, char* dest, int compressedSize, int maxDecompressedSize)
{
return LZ5_decompress_generic(source, dest, compressedSize, maxDecompressedSize, full, 0, noDict, (BYTE*)dest, NULL, 0);
return LIZ_decompress_generic(source, dest, compressedSize, maxDecompressedSize, full, 0, noDict, (BYTE*)dest, NULL, 0);
}
int LZ5_decompress_safe_partial(const char* source, char* dest, int compressedSize, int targetOutputSize, int maxDecompressedSize)
int LIZ_decompress_safe_partial(const char* source, char* dest, int compressedSize, int targetOutputSize, int maxDecompressedSize)
{
return LZ5_decompress_generic(source, dest, compressedSize, maxDecompressedSize, partial, targetOutputSize, noDict, (BYTE*)dest, NULL, 0);
return LIZ_decompress_generic(source, dest, compressedSize, maxDecompressedSize, partial, targetOutputSize, noDict, (BYTE*)dest, NULL, 0);
}
@@ -282,32 +282,32 @@ int LZ5_decompress_safe_partial(const char* source, char* dest, int compressedSi
/*
* If you prefer dynamic allocation methods,
* LZ5_createStreamDecode()
* provides a pointer (void*) towards an initialized LZ5_streamDecode_t structure.
* LIZ_createStreamDecode()
* provides a pointer (void*) towards an initialized LIZ_streamDecode_t structure.
*/
LZ5_streamDecode_t* LZ5_createStreamDecode(void)
LIZ_streamDecode_t* LIZ_createStreamDecode(void)
{
LZ5_streamDecode_t* lz5s = (LZ5_streamDecode_t*) ALLOCATOR(1, sizeof(LZ5_streamDecode_t));
(void)LZ5_count; /* unused function 'LZ5_count' */
LIZ_streamDecode_t* lz5s = (LIZ_streamDecode_t*) ALLOCATOR(1, sizeof(LIZ_streamDecode_t));
(void)LIZ_count; /* unused function 'LIZ_count' */
return lz5s;
}
int LZ5_freeStreamDecode (LZ5_streamDecode_t* LZ5_stream)
int LIZ_freeStreamDecode (LIZ_streamDecode_t* LIZ_stream)
{
FREEMEM(LZ5_stream);
FREEMEM(LIZ_stream);
return 0;
}
/*!
* LZ5_setStreamDecode() :
* LIZ_setStreamDecode() :
* Use this function to instruct where to find the dictionary.
* This function is not necessary if previous data is still available where it was decoded.
* Loading a size of 0 is allowed (same effect as no dictionary).
* Return : 1 if OK, 0 if error
*/
int LZ5_setStreamDecode (LZ5_streamDecode_t* LZ5_streamDecode, const char* dictionary, int dictSize)
int LIZ_setStreamDecode (LIZ_streamDecode_t* LIZ_streamDecode, const char* dictionary, int dictSize)
{
LZ5_streamDecode_t* lz5sd = (LZ5_streamDecode_t*) LZ5_streamDecode;
LIZ_streamDecode_t* lz5sd = (LIZ_streamDecode_t*) LIZ_streamDecode;
lz5sd->prefixSize = (size_t) dictSize;
lz5sd->prefixEnd = (const BYTE*) dictionary + dictSize;
lz5sd->externalDict = NULL;
@@ -320,15 +320,15 @@ int LZ5_setStreamDecode (LZ5_streamDecode_t* LZ5_streamDecode, const char* dicti
These decoding functions allow decompression of multiple blocks in "streaming" mode.
Previously decoded blocks must still be available at the memory position where they were decoded.
If it's not possible, save the relevant part of decoded data into a safe buffer,
and indicate where it stands using LZ5_setStreamDecode()
and indicate where it stands using LIZ_setStreamDecode()
*/
int LZ5_decompress_safe_continue (LZ5_streamDecode_t* LZ5_streamDecode, const char* source, char* dest, int compressedSize, int maxOutputSize)
int LIZ_decompress_safe_continue (LIZ_streamDecode_t* LIZ_streamDecode, const char* source, char* dest, int compressedSize, int maxOutputSize)
{
LZ5_streamDecode_t* lz5sd = (LZ5_streamDecode_t*) LZ5_streamDecode;
LIZ_streamDecode_t* lz5sd = (LIZ_streamDecode_t*) LIZ_streamDecode;
int result;
if (lz5sd->prefixEnd == (BYTE*)dest) {
result = LZ5_decompress_generic(source, dest, compressedSize, maxOutputSize,
result = LIZ_decompress_generic(source, dest, compressedSize, maxOutputSize,
full, 0, usingExtDict, lz5sd->prefixEnd - lz5sd->prefixSize, lz5sd->externalDict, lz5sd->extDictSize);
if (result <= 0) return result;
lz5sd->prefixSize += result;
@@ -336,7 +336,7 @@ int LZ5_decompress_safe_continue (LZ5_streamDecode_t* LZ5_streamDecode, const ch
} else {
lz5sd->extDictSize = lz5sd->prefixSize;
lz5sd->externalDict = lz5sd->prefixEnd - lz5sd->extDictSize;
result = LZ5_decompress_generic(source, dest, compressedSize, maxOutputSize,
result = LIZ_decompress_generic(source, dest, compressedSize, maxOutputSize,
full, 0, usingExtDict, (BYTE*)dest, lz5sd->externalDict, lz5sd->extDictSize);
if (result <= 0) return result;
lz5sd->prefixSize = result;
@@ -354,22 +354,22 @@ Advanced decoding functions :
the dictionary must be explicitly provided within parameters
*/
int LZ5_decompress_safe_usingDict(const char* source, char* dest, int compressedSize, int maxOutputSize, const char* dictStart, int dictSize)
int LIZ_decompress_safe_usingDict(const char* source, char* dest, int compressedSize, int maxOutputSize, const char* dictStart, int dictSize)
{
if (dictSize==0)
return LZ5_decompress_generic(source, dest, compressedSize, maxOutputSize, full, 0, noDict, (BYTE*)dest, NULL, 0);
return LIZ_decompress_generic(source, dest, compressedSize, maxOutputSize, full, 0, noDict, (BYTE*)dest, NULL, 0);
if (dictStart+dictSize == dest)
{
if (dictSize >= (int)(LZ5_DICT_SIZE - 1))
return LZ5_decompress_generic(source, dest, compressedSize, maxOutputSize, full, 0, withPrefix64k, (BYTE*)dest-LZ5_DICT_SIZE, NULL, 0);
return LZ5_decompress_generic(source, dest, compressedSize, maxOutputSize, full, 0, noDict, (BYTE*)dest-dictSize, NULL, 0);
if (dictSize >= (int)(LIZ_DICT_SIZE - 1))
return LIZ_decompress_generic(source, dest, compressedSize, maxOutputSize, full, 0, withPrefix64k, (BYTE*)dest-LIZ_DICT_SIZE, NULL, 0);
return LIZ_decompress_generic(source, dest, compressedSize, maxOutputSize, full, 0, noDict, (BYTE*)dest-dictSize, NULL, 0);
}
return LZ5_decompress_generic(source, dest, compressedSize, maxOutputSize, full, 0, usingExtDict, (BYTE*)dest, (const BYTE*)dictStart, dictSize);
return LIZ_decompress_generic(source, dest, compressedSize, maxOutputSize, full, 0, usingExtDict, (BYTE*)dest, (const BYTE*)dictStart, dictSize);
}
/* debug function */
int LZ5_decompress_safe_forceExtDict(const char* source, char* dest, int compressedSize, int maxOutputSize, const char* dictStart, int dictSize)
int LIZ_decompress_safe_forceExtDict(const char* source, char* dest, int compressedSize, int maxOutputSize, const char* dictStart, int dictSize)
{
return LZ5_decompress_generic(source, dest, compressedSize, maxOutputSize, full, 0, usingExtDict, (BYTE*)dest, (const BYTE*)dictStart, dictSize);
return LIZ_decompress_generic(source, dest, compressedSize, maxOutputSize, full, 0, usingExtDict, (BYTE*)dest, (const BYTE*)dictStart, dictSize);
}
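Read together, LIZ_setStreamDecode() and LIZ_decompress_safe_continue() form the decoding counterpart of chunked compression: each block is decoded into a growing output buffer that doubles as the dictionary for the next block. A sketch, with the per-block framing left abstract (next_block() is a hypothetical helper, not part of this commit, and the header name is assumed as before):

#include "liz_decompress.h"

/* Hypothetical framing callback: yields one compressed block at a time.
 * It stands in for whatever container actually carries the blocks. */
extern int next_block(const char** block, int* blockSize);

/* Sketch: decode a sequence of blocks produced by LIZ_compress_continue().
 * Decoded data accumulates in dst and must stay there, because later blocks
 * may reference it (see the streaming note above). */
static int decode_chunked(char* dst, int dstCapacity)
{
    LIZ_streamDecode_t* sd = LIZ_createStreamDecode();
    const char* block;
    int blockSize, outPos = 0;

    if (!sd) return -1;
    LIZ_setStreamDecode(sd, dst, 0);        /* size 0: start with no dictionary */

    while (next_block(&block, &blockSize)) {
        int r = LIZ_decompress_safe_continue(sd, block, dst + outPos,
                                             blockSize, dstCapacity - outPos);
        if (r <= 0) { LIZ_freeStreamDecode(sd); return -1; }
        outPos += r;
    }
    LIZ_freeStreamDecode(sd);
    return outPos;
}

When previously decoded data cannot stay in place, LIZ_decompress_safe_usingDict() takes the history explicitly instead, as noted above the advanced decoding functions.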
View File

File diff suppressed because it is too large

View File
@@ -1,45 +1,45 @@
#define LZ5_FAST_MIN_OFFSET 8
#define LZ5_FAST_LONGOFF_MM 0 /* not used with offsets > 1<<16 */
#define LIZ_FAST_MIN_OFFSET 8
#define LIZ_FAST_LONGOFF_MM 0 /* not used with offsets > 1<<16 */
/**************************************
* Hash Functions
**************************************/
static size_t LZ5_hashPosition(const void* p)
static size_t LIZ_hashPosition(const void* p)
{
if (MEM_64bits())
return LZ5_hash5Ptr(p, LZ5_HASHLOG_LZ4);
return LZ5_hash4Ptr(p, LZ5_HASHLOG_LZ4);
return LIZ_hash5Ptr(p, LIZ_HASHLOG_LZ4);
return LIZ_hash4Ptr(p, LIZ_HASHLOG_LZ4);
}
static void LZ5_putPositionOnHash(const BYTE* p, size_t h, U32* hashTable, const BYTE* srcBase)
static void LIZ_putPositionOnHash(const BYTE* p, size_t h, U32* hashTable, const BYTE* srcBase)
{
hashTable[h] = (U32)(p-srcBase);
}
static void LZ5_putPosition(const BYTE* p, U32* hashTable, const BYTE* srcBase)
static void LIZ_putPosition(const BYTE* p, U32* hashTable, const BYTE* srcBase)
{
size_t const h = LZ5_hashPosition(p);
LZ5_putPositionOnHash(p, h, hashTable, srcBase);
size_t const h = LIZ_hashPosition(p);
LIZ_putPositionOnHash(p, h, hashTable, srcBase);
}
static U32 LZ5_getPositionOnHash(size_t h, U32* hashTable)
static U32 LIZ_getPositionOnHash(size_t h, U32* hashTable)
{
return hashTable[h];
}
static U32 LZ5_getPosition(const BYTE* p, U32* hashTable)
static U32 LIZ_getPosition(const BYTE* p, U32* hashTable)
{
size_t const h = LZ5_hashPosition(p);
return LZ5_getPositionOnHash(h, hashTable);
size_t const h = LIZ_hashPosition(p);
return LIZ_getPositionOnHash(h, hashTable);
}
static const U32 LZ5_skipTrigger = 6; /* Increase this value ==> compression run slower on incompressible data */
static const U32 LZ5_minLength = (MFLIMIT+1);
static const U32 LIZ_skipTrigger = 6; /* Increase this value ==> compression run slower on incompressible data */
static const U32 LIZ_minLength = (MFLIMIT+1);
FORCE_INLINE int LZ5_compress_fast(
LZ5_stream_t* const ctx,
FORCE_INLINE int LIZ_compress_fast(
LIZ_stream_t* const ctx,
const BYTE* ip,
const BYTE* const iend)
{
@@ -57,17 +57,17 @@ FORCE_INLINE int LZ5_compress_fast(
size_t forwardH, matchIndex;
const U32 maxDistance = (1 << ctx->params.windowLog) - 1;
// fprintf(stderr, "base=%p LZ5_stream_t=%d inputSize=%d maxOutputSize=%d\n", base, sizeof(LZ5_stream_t), inputSize, maxOutputSize);
// fprintf(stderr, "base=%p LIZ_stream_t=%d inputSize=%d maxOutputSize=%d\n", base, sizeof(LIZ_stream_t), inputSize, maxOutputSize);
// fprintf(stderr, "ip=%d base=%p lowPrefixPtr=%p dictBase=%d lowLimit=%p op=%p\n", ip, base, lowPrefixPtr, lowLimit, dictBase, op);
/* Init conditions */
if ((U32)(iend-ip) > (U32)LZ5_MAX_INPUT_SIZE) goto _output_error; /* Unsupported inputSize, too large (or negative) */
if ((U32)(iend-ip) > (U32)LIZ_MAX_INPUT_SIZE) goto _output_error; /* Unsupported inputSize, too large (or negative) */
if ((U32)(iend-ip) < LZ5_minLength) goto _last_literals; /* Input too small, no compression (all literals) */
if ((U32)(iend-ip) < LIZ_minLength) goto _last_literals; /* Input too small, no compression (all literals) */
/* First Byte */
LZ5_putPosition(ip, ctx->hashTable, base);
ip++; forwardH = LZ5_hashPosition(ip);
LIZ_putPosition(ip, ctx->hashTable, base);
ip++; forwardH = LIZ_hashPosition(ip);
/* Main Loop */
for ( ; ; ) {
@@ -78,35 +78,35 @@ FORCE_INLINE int LZ5_compress_fast(
/* Find a match */
{ const BYTE* forwardIp = ip;
unsigned step = 1;
unsigned searchMatchNb = acceleration << LZ5_skipTrigger;
unsigned searchMatchNb = acceleration << LIZ_skipTrigger;
while (1) {
size_t const h = forwardH;
ip = forwardIp;
forwardIp += step;
step = (searchMatchNb++ >> LZ5_skipTrigger);
step = (searchMatchNb++ >> LIZ_skipTrigger);
if (unlikely(forwardIp > mflimit)) goto _last_literals;
matchIndex = LZ5_getPositionOnHash(h, ctx->hashTable);
forwardH = LZ5_hashPosition(forwardIp);
LZ5_putPositionOnHash(ip, h, ctx->hashTable, base);
matchIndex = LIZ_getPositionOnHash(h, ctx->hashTable);
forwardH = LIZ_hashPosition(forwardIp);
LIZ_putPositionOnHash(ip, h, ctx->hashTable, base);
if ((matchIndex < lowLimit) || (base + matchIndex + maxDistance < ip)) continue;
if (matchIndex >= dictLimit) {
match = base + matchIndex;
#if LZ5_FAST_MIN_OFFSET > 0
if ((U32)(ip - match) >= LZ5_FAST_MIN_OFFSET)
#if LIZ_FAST_MIN_OFFSET > 0
if ((U32)(ip - match) >= LIZ_FAST_MIN_OFFSET)
#endif
if (MEM_read32(match) == MEM_read32(ip))
{
int back = 0;
matchLength = LZ5_count(ip+MINMATCH, match+MINMATCH, matchlimit);
matchLength = LIZ_count(ip+MINMATCH, match+MINMATCH, matchlimit);
while ((ip+back > anchor) && (match+back > lowPrefixPtr) && (ip[back-1] == match[back-1])) back--;
matchLength -= back;
#if LZ5_FAST_LONGOFF_MM > 0
if ((matchLength >= LZ5_FAST_LONGOFF_MM) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
#if LIZ_FAST_LONGOFF_MM > 0
if ((matchLength >= LIZ_FAST_LONGOFF_MM) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
#endif
{
ip += back;
@@ -116,20 +116,20 @@ FORCE_INLINE int LZ5_compress_fast(
}
} else {
match = dictBase + matchIndex;
#if LZ5_FAST_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LZ5_FAST_MIN_OFFSET)
#if LIZ_FAST_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LIZ_FAST_MIN_OFFSET)
#endif
if ((U32)((dictLimit-1) - matchIndex) >= 3) /* intentional overflow */
if (MEM_read32(match) == MEM_read32(ip)) {
const U32 newLowLimit = (lowLimit + maxDistance >= (U32)(ip-base)) ? lowLimit : (U32)(ip - base) - maxDistance;
int back = 0;
matchLength = LZ5_count_2segments(ip+MINMATCH, match+MINMATCH, matchlimit, dictEnd, lowPrefixPtr);
matchLength = LIZ_count_2segments(ip+MINMATCH, match+MINMATCH, matchlimit, dictEnd, lowPrefixPtr);
while ((ip+back > anchor) && (matchIndex+back > newLowLimit) && (ip[back-1] == match[back-1])) back--;
matchLength -= back;
match = base + matchIndex + back;
#if LZ5_FAST_LONGOFF_MM > 0
if ((matchLength >= LZ5_FAST_LONGOFF_MM) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
#if LIZ_FAST_LONGOFF_MM > 0
if ((matchLength >= LIZ_FAST_LONGOFF_MM) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
#endif
{
ip += back;
@@ -141,43 +141,43 @@ FORCE_INLINE int LZ5_compress_fast(
}
_next_match:
if (LZ5_encodeSequence_LZ4(ctx, &ip, &anchor, matchLength+MINMATCH, match)) goto _output_error;
if (LIZ_encodeSequence_LZ4(ctx, &ip, &anchor, matchLength+MINMATCH, match)) goto _output_error;
/* Test end of chunk */
if (ip > mflimit) break;
/* Fill table */
LZ5_putPosition(ip-2, ctx->hashTable, base);
LIZ_putPosition(ip-2, ctx->hashTable, base);
/* Test next position */
matchIndex = LZ5_getPosition(ip, ctx->hashTable);
LZ5_putPosition(ip, ctx->hashTable, base);
matchIndex = LIZ_getPosition(ip, ctx->hashTable);
LIZ_putPosition(ip, ctx->hashTable, base);
if (matchIndex >= lowLimit && (base + matchIndex + maxDistance >= ip))
{
if (matchIndex >= dictLimit) {
match = base + matchIndex;
#if LZ5_FAST_MIN_OFFSET > 0
if ((U32)(ip - match) >= LZ5_FAST_MIN_OFFSET)
#if LIZ_FAST_MIN_OFFSET > 0
if ((U32)(ip - match) >= LIZ_FAST_MIN_OFFSET)
#endif
if (MEM_read32(match) == MEM_read32(ip))
{
matchLength = LZ5_count(ip+MINMATCH, match+MINMATCH, matchlimit);
#if LZ5_FAST_LONGOFF_MM > 0
if ((matchLength >= LZ5_FAST_LONGOFF_MM) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
matchLength = LIZ_count(ip+MINMATCH, match+MINMATCH, matchlimit);
#if LIZ_FAST_LONGOFF_MM > 0
if ((matchLength >= LIZ_FAST_LONGOFF_MM) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
#endif
goto _next_match;
}
} else {
match = dictBase + matchIndex;
#if LZ5_FAST_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LZ5_FAST_MIN_OFFSET)
#if LIZ_FAST_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LIZ_FAST_MIN_OFFSET)
#endif
if ((U32)((dictLimit-1) - matchIndex) >= 3) /* intentional overflow */
if (MEM_read32(match) == MEM_read32(ip)) {
matchLength = LZ5_count_2segments(ip+MINMATCH, match+MINMATCH, matchlimit, dictEnd, lowPrefixPtr);
matchLength = LIZ_count_2segments(ip+MINMATCH, match+MINMATCH, matchlimit, dictEnd, lowPrefixPtr);
match = base + matchIndex;
#if LZ5_FAST_LONGOFF_MM > 0
if ((matchLength >= LZ5_FAST_LONGOFF_MM) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
#if LIZ_FAST_LONGOFF_MM > 0
if ((matchLength >= LIZ_FAST_LONGOFF_MM) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
#endif
goto _next_match;
}
@@ -185,13 +185,13 @@ _next_match:
}
/* Prepare next loop */
forwardH = LZ5_hashPosition(++ip);
forwardH = LIZ_hashPosition(++ip);
}
_last_literals:
/* Encode Last Literals */
ip = iend;
if (LZ5_encodeLastLiterals_LZ4(ctx, &ip, &anchor)) goto _output_error;
if (LIZ_encodeLastLiterals_LZ4(ctx, &ip, &anchor)) goto _output_error;
/* End */
return 1;
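One detail of the fast parser above that is easy to miss: step = (searchMatchNb++ >> LIZ_skipTrigger) makes the scan accelerate over incompressible data. With acceleration 1, searchMatchNb starts at 1 << 6 = 64, so roughly the first 64 failed attempts advance one byte each, the next 64 advance two bytes each, and so on; larger acceleration values start further along that schedule. A standalone sketch of the progression (plain arithmetic only, no library code):

#include <stdio.h>

int main(void)
{
    const unsigned skipTrigger = 6;               /* mirrors LIZ_skipTrigger */
    unsigned acceleration = 1;
    unsigned searchMatchNb = acceleration << skipTrigger;
    unsigned scanned = 0, step = 1;
    int attempt;

    for (attempt = 0; attempt < 256; attempt++) { /* pretend every attempt misses */
        scanned += step;                          /* forwardIp += step */
        step = searchMatchNb++ >> skipTrigger;
        if ((attempt & 63) == 0)
            printf("attempt=%3d next step=%u scanned=%u bytes\n",
                   attempt, step, scanned);
    }
    return 0;
}

This is also why increasing LIZ_skipTrigger (per its comment) slows compression on incompressible data: a larger shift value needs more failed attempts before the step grows, so the parser skips ahead less aggressively.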
View File
@@ -1,39 +1,39 @@
#define LZ5_FASTBIG_LONGOFF_MM MM_LONGOFF
#define LIZ_FASTBIG_LONGOFF_MM MM_LONGOFF
/**************************************
* Hash Functions
**************************************/
static size_t LZ5_hashPositionHLog(const void* p, int hashLog)
static size_t LIZ_hashPositionHLog(const void* p, int hashLog)
{
if (MEM_64bits())
return LZ5_hash5Ptr(p, hashLog);
return LZ5_hash4Ptr(p, hashLog);
return LIZ_hash5Ptr(p, hashLog);
return LIZ_hash4Ptr(p, hashLog);
}
static void LZ5_putPositionOnHashHLog(const BYTE* p, size_t h, U32* hashTable, const BYTE* srcBase)
static void LIZ_putPositionOnHashHLog(const BYTE* p, size_t h, U32* hashTable, const BYTE* srcBase)
{
hashTable[h] = (U32)(p-srcBase);
}
static void LZ5_putPositionHLog(const BYTE* p, U32* hashTable, const BYTE* srcBase, int hashLog)
static void LIZ_putPositionHLog(const BYTE* p, U32* hashTable, const BYTE* srcBase, int hashLog)
{
size_t const h = LZ5_hashPositionHLog(p, hashLog);
LZ5_putPositionOnHashHLog(p, h, hashTable, srcBase);
size_t const h = LIZ_hashPositionHLog(p, hashLog);
LIZ_putPositionOnHashHLog(p, h, hashTable, srcBase);
}
static U32 LZ5_getPositionOnHashHLog(size_t h, U32* hashTable)
static U32 LIZ_getPositionOnHashHLog(size_t h, U32* hashTable)
{
return hashTable[h];
}
static U32 LZ5_getPositionHLog(const BYTE* p, U32* hashTable, int hashLog)
static U32 LIZ_getPositionHLog(const BYTE* p, U32* hashTable, int hashLog)
{
size_t const h = LZ5_hashPositionHLog(p, hashLog);
return LZ5_getPositionOnHashHLog(h, hashTable);
size_t const h = LIZ_hashPositionHLog(p, hashLog);
return LIZ_getPositionOnHashHLog(h, hashTable);
}
FORCE_INLINE int LZ5_compress_fastBig(
LZ5_stream_t* const ctx,
FORCE_INLINE int LIZ_compress_fastBig(
LIZ_stream_t* const ctx,
const BYTE* ip,
const BYTE* const iend)
{
@@ -52,17 +52,17 @@ FORCE_INLINE int LZ5_compress_fastBig(
const int hashLog = ctx->params.hashLog;
const U32 maxDistance = (1 << ctx->params.windowLog) - 1;
// fprintf(stderr, "base=%p LZ5_stream_t=%d inputSize=%d maxOutputSize=%d\n", base, sizeof(LZ5_stream_t), inputSize, maxOutputSize);
// fprintf(stderr, "base=%p LIZ_stream_t=%d inputSize=%d maxOutputSize=%d\n", base, sizeof(LIZ_stream_t), inputSize, maxOutputSize);
// fprintf(stderr, "ip=%d base=%p lowPrefixPtr=%p dictBase=%d lowLimit=%p op=%p\n", ip, base, lowPrefixPtr, lowLimit, dictBase, op);
/* Init conditions */
if ((U32)(iend-ip) > (U32)LZ5_MAX_INPUT_SIZE) goto _output_error; /* Unsupported inputSize, too large (or negative) */
if ((U32)(iend-ip) > (U32)LIZ_MAX_INPUT_SIZE) goto _output_error; /* Unsupported inputSize, too large (or negative) */
if ((U32)(iend-ip) < LZ5_minLength) goto _last_literals; /* Input too small, no compression (all literals) */
if ((U32)(iend-ip) < LIZ_minLength) goto _last_literals; /* Input too small, no compression (all literals) */
/* First Byte */
LZ5_putPositionHLog(ip, ctx->hashTable, base, hashLog);
ip++; forwardH = LZ5_hashPositionHLog(ip, hashLog);
LIZ_putPositionHLog(ip, ctx->hashTable, base, hashLog);
ip++; forwardH = LIZ_hashPositionHLog(ip, hashLog);
/* Main Loop */
for ( ; ; ) {
@@ -73,32 +73,32 @@ FORCE_INLINE int LZ5_compress_fastBig(
/* Find a match */
{ const BYTE* forwardIp = ip;
unsigned step = 1;
unsigned searchMatchNb = acceleration << LZ5_skipTrigger;
unsigned searchMatchNb = acceleration << LIZ_skipTrigger;
while (1) {
size_t const h = forwardH;
ip = forwardIp;
forwardIp += step;
step = (searchMatchNb++ >> LZ5_skipTrigger);
step = (searchMatchNb++ >> LIZ_skipTrigger);
if (unlikely(forwardIp > mflimit)) goto _last_literals;
matchIndex = LZ5_getPositionOnHashHLog(h, ctx->hashTable);
forwardH = LZ5_hashPositionHLog(forwardIp, hashLog);
LZ5_putPositionOnHashHLog(ip, h, ctx->hashTable, base);
matchIndex = LIZ_getPositionOnHashHLog(h, ctx->hashTable);
forwardH = LIZ_hashPositionHLog(forwardIp, hashLog);
LIZ_putPositionOnHashHLog(ip, h, ctx->hashTable, base);
if ((matchIndex < lowLimit) || (base + matchIndex + maxDistance < ip)) continue;
if (matchIndex >= dictLimit) {
match = base + matchIndex;
if ((U32)(ip - match) >= LZ5_FAST_MIN_OFFSET)
if ((U32)(ip - match) >= LIZ_FAST_MIN_OFFSET)
if (MEM_read32(match) == MEM_read32(ip))
{
int back = 0;
matchLength = LZ5_count(ip+MINMATCH, match+MINMATCH, matchlimit);
matchLength = LIZ_count(ip+MINMATCH, match+MINMATCH, matchlimit);
while ((ip+back > anchor) && (match+back > lowPrefixPtr) && (ip[back-1] == match[back-1])) back--;
matchLength -= back;
if ((matchLength >= LZ5_FASTBIG_LONGOFF_MM) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
if ((matchLength >= LIZ_FASTBIG_LONGOFF_MM) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
{
ip += back;
match += back;
@@ -107,17 +107,17 @@ FORCE_INLINE int LZ5_compress_fastBig(
}
} else {
match = dictBase + matchIndex;
if ((U32)(ip - (base + matchIndex)) >= LZ5_FAST_MIN_OFFSET)
if ((U32)(ip - (base + matchIndex)) >= LIZ_FAST_MIN_OFFSET)
if ((U32)((dictLimit-1) - matchIndex) >= 3) /* intentional overflow */
if (MEM_read32(match) == MEM_read32(ip)) {
const U32 newLowLimit = (lowLimit + maxDistance >= (U32)(ip-base)) ? lowLimit : (U32)(ip - base) - maxDistance;
int back = 0;
matchLength = LZ5_count_2segments(ip+MINMATCH, match+MINMATCH, matchlimit, dictEnd, lowPrefixPtr);
matchLength = LIZ_count_2segments(ip+MINMATCH, match+MINMATCH, matchlimit, dictEnd, lowPrefixPtr);
while ((ip+back > anchor) && (matchIndex+back > newLowLimit) && (ip[back-1] == match[back-1])) back--;
matchLength -= back;
match = base + matchIndex + back;
if ((matchLength >= LZ5_FASTBIG_LONGOFF_MM) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
if ((matchLength >= LIZ_FASTBIG_LONGOFF_MM) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
{
ip += back;
break;
@@ -128,49 +128,49 @@ FORCE_INLINE int LZ5_compress_fastBig(
}
_next_match:
if (LZ5_encodeSequence_LZ5v2(ctx, &ip, &anchor, matchLength+MINMATCH, match)) goto _output_error;
if (LIZ_encodeSequence_LZ5v2(ctx, &ip, &anchor, matchLength+MINMATCH, match)) goto _output_error;
/* Test end of chunk */
if (ip > mflimit) break;
/* Fill table */
LZ5_putPositionHLog(ip-2, ctx->hashTable, base, hashLog);
LIZ_putPositionHLog(ip-2, ctx->hashTable, base, hashLog);
/* Test next position */
matchIndex = LZ5_getPositionHLog(ip, ctx->hashTable, hashLog);
LZ5_putPositionHLog(ip, ctx->hashTable, base, hashLog);
matchIndex = LIZ_getPositionHLog(ip, ctx->hashTable, hashLog);
LIZ_putPositionHLog(ip, ctx->hashTable, base, hashLog);
if (matchIndex >= lowLimit && (base + matchIndex + maxDistance >= ip))
{
if (matchIndex >= dictLimit) {
match = base + matchIndex;
if ((U32)(ip - match) >= LZ5_FAST_MIN_OFFSET)
if ((U32)(ip - match) >= LIZ_FAST_MIN_OFFSET)
if (MEM_read32(match) == MEM_read32(ip))
{
matchLength = LZ5_count(ip+MINMATCH, match+MINMATCH, matchlimit);
if ((matchLength >= LZ5_FASTBIG_LONGOFF_MM) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
matchLength = LIZ_count(ip+MINMATCH, match+MINMATCH, matchlimit);
if ((matchLength >= LIZ_FASTBIG_LONGOFF_MM) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
goto _next_match;
}
} else {
match = dictBase + matchIndex;
if ((U32)(ip - (base + matchIndex)) >= LZ5_FAST_MIN_OFFSET)
if ((U32)(ip - (base + matchIndex)) >= LIZ_FAST_MIN_OFFSET)
if ((U32)((dictLimit-1) - matchIndex) >= 3) /* intentional overflow */
if (MEM_read32(match) == MEM_read32(ip)) {
matchLength = LZ5_count_2segments(ip+MINMATCH, match+MINMATCH, matchlimit, dictEnd, lowPrefixPtr);
matchLength = LIZ_count_2segments(ip+MINMATCH, match+MINMATCH, matchlimit, dictEnd, lowPrefixPtr);
match = base + matchIndex;
if ((matchLength >= LZ5_FASTBIG_LONGOFF_MM) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
if ((matchLength >= LIZ_FASTBIG_LONGOFF_MM) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
goto _next_match;
}
}
}
/* Prepare next loop */
forwardH = LZ5_hashPositionHLog(++ip, hashLog);
forwardH = LIZ_hashPositionHLog(++ip, hashLog);
}
_last_literals:
/* Encode Last Literals */
ip = iend;
if (LZ5_encodeLastLiterals_LZ5v2(ctx, &ip, &anchor)) goto _output_error;
if (LIZ_encodeLastLiterals_LZ5v2(ctx, &ip, &anchor)) goto _output_error;
/* End */
return 1;
View File
@@ -1,38 +1,38 @@
/**************************************
* Hash Functions
**************************************/
static size_t LZ5_hashPositionSmall(const void* p)
static size_t LIZ_hashPositionSmall(const void* p)
{
if (MEM_64bits())
return LZ5_hash5Ptr(p, LZ5_HASHLOG_LZ4SM);
return LZ5_hash4Ptr(p, LZ5_HASHLOG_LZ4SM);
return LIZ_hash5Ptr(p, LIZ_HASHLOG_LZ4SM);
return LIZ_hash4Ptr(p, LIZ_HASHLOG_LZ4SM);
}
static void LZ5_putPositionOnHashSmall(const BYTE* p, size_t h, U32* hashTable, const BYTE* srcBase)
static void LIZ_putPositionOnHashSmall(const BYTE* p, size_t h, U32* hashTable, const BYTE* srcBase)
{
hashTable[h] = (U32)(p-srcBase);
}
static void LZ5_putPositionSmall(const BYTE* p, U32* hashTable, const BYTE* srcBase)
static void LIZ_putPositionSmall(const BYTE* p, U32* hashTable, const BYTE* srcBase)
{
size_t const h = LZ5_hashPositionSmall(p);
LZ5_putPositionOnHashSmall(p, h, hashTable, srcBase);
size_t const h = LIZ_hashPositionSmall(p);
LIZ_putPositionOnHashSmall(p, h, hashTable, srcBase);
}
static U32 LZ5_getPositionOnHashSmall(size_t h, U32* hashTable)
static U32 LIZ_getPositionOnHashSmall(size_t h, U32* hashTable)
{
return hashTable[h];
}
static U32 LZ5_getPositionSmall(const BYTE* p, U32* hashTable)
static U32 LIZ_getPositionSmall(const BYTE* p, U32* hashTable)
{
size_t const h = LZ5_hashPositionSmall(p);
return LZ5_getPositionOnHashSmall(h, hashTable);
size_t const h = LIZ_hashPositionSmall(p);
return LIZ_getPositionOnHashSmall(h, hashTable);
}
FORCE_INLINE int LZ5_compress_fastSmall(
LZ5_stream_t* const ctx,
FORCE_INLINE int LIZ_compress_fastSmall(
LIZ_stream_t* const ctx,
const BYTE* ip,
const BYTE* const iend)
{
@@ -50,17 +50,17 @@ FORCE_INLINE int LZ5_compress_fastSmall(
size_t forwardH, matchIndex;
const U32 maxDistance = (1 << ctx->params.windowLog) - 1;
// fprintf(stderr, "base=%p LZ5_stream_t=%d inputSize=%d maxOutputSize=%d\n", base, sizeof(LZ5_stream_t), inputSize, maxOutputSize);
// fprintf(stderr, "base=%p LIZ_stream_t=%d inputSize=%d maxOutputSize=%d\n", base, sizeof(LIZ_stream_t), inputSize, maxOutputSize);
// fprintf(stderr, "ip=%d base=%p lowPrefixPtr=%p dictBase=%d lowLimit=%p op=%p\n", ip, base, lowPrefixPtr, lowLimit, dictBase, op);
/* Init conditions */
if ((U32)(iend-ip) > (U32)LZ5_MAX_INPUT_SIZE) goto _output_error; /* Unsupported inputSize, too large (or negative) */
if ((U32)(iend-ip) > (U32)LIZ_MAX_INPUT_SIZE) goto _output_error; /* Unsupported inputSize, too large (or negative) */
if ((U32)(iend-ip) < LZ5_minLength) goto _last_literals; /* Input too small, no compression (all literals) */
if ((U32)(iend-ip) < LIZ_minLength) goto _last_literals; /* Input too small, no compression (all literals) */
/* First Byte */
LZ5_putPositionSmall(ip, ctx->hashTable, base);
ip++; forwardH = LZ5_hashPositionSmall(ip);
LIZ_putPositionSmall(ip, ctx->hashTable, base);
ip++; forwardH = LIZ_hashPositionSmall(ip);
/* Main Loop */
for ( ; ; ) {
@@ -71,35 +71,35 @@ FORCE_INLINE int LZ5_compress_fastSmall(
/* Find a match */
{ const BYTE* forwardIp = ip;
unsigned step = 1;
unsigned searchMatchNb = acceleration << LZ5_skipTrigger;
unsigned searchMatchNb = acceleration << LIZ_skipTrigger;
while (1) {
size_t const h = forwardH;
ip = forwardIp;
forwardIp += step;
step = (searchMatchNb++ >> LZ5_skipTrigger);
step = (searchMatchNb++ >> LIZ_skipTrigger);
if (unlikely(forwardIp > mflimit)) goto _last_literals;
matchIndex = LZ5_getPositionOnHashSmall(h, ctx->hashTable);
forwardH = LZ5_hashPositionSmall(forwardIp);
LZ5_putPositionOnHashSmall(ip, h, ctx->hashTable, base);
matchIndex = LIZ_getPositionOnHashSmall(h, ctx->hashTable);
forwardH = LIZ_hashPositionSmall(forwardIp);
LIZ_putPositionOnHashSmall(ip, h, ctx->hashTable, base);
if ((matchIndex < lowLimit) || (base + matchIndex + maxDistance < ip)) continue;
if (matchIndex >= dictLimit) {
match = base + matchIndex;
#if LZ5_FAST_MIN_OFFSET > 0
if ((U32)(ip - match) >= LZ5_FAST_MIN_OFFSET)
#if LIZ_FAST_MIN_OFFSET > 0
if ((U32)(ip - match) >= LIZ_FAST_MIN_OFFSET)
#endif
if (MEM_read32(match) == MEM_read32(ip))
{
int back = 0;
matchLength = LZ5_count(ip+MINMATCH, match+MINMATCH, matchlimit);
matchLength = LIZ_count(ip+MINMATCH, match+MINMATCH, matchlimit);
while ((ip+back > anchor) && (match+back > lowPrefixPtr) && (ip[back-1] == match[back-1])) back--;
matchLength -= back;
#if LZ5_FAST_LONGOFF_MM > 0
if ((matchLength >= LZ5_FAST_LONGOFF_MM) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
#if LIZ_FAST_LONGOFF_MM > 0
if ((matchLength >= LIZ_FAST_LONGOFF_MM) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
#endif
{
ip += back;
@@ -109,20 +109,20 @@ FORCE_INLINE int LZ5_compress_fastSmall(
}
} else {
match = dictBase + matchIndex;
#if LZ5_FAST_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LZ5_FAST_MIN_OFFSET)
#if LIZ_FAST_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LIZ_FAST_MIN_OFFSET)
#endif
if ((U32)((dictLimit-1) - matchIndex) >= 3) /* intentional overflow */
if (MEM_read32(match) == MEM_read32(ip)) {
const U32 newLowLimit = (lowLimit + maxDistance >= (U32)(ip-base)) ? lowLimit : (U32)(ip - base) - maxDistance;
int back = 0;
matchLength = LZ5_count_2segments(ip+MINMATCH, match+MINMATCH, matchlimit, dictEnd, lowPrefixPtr);
matchLength = LIZ_count_2segments(ip+MINMATCH, match+MINMATCH, matchlimit, dictEnd, lowPrefixPtr);
while ((ip+back > anchor) && (matchIndex+back > newLowLimit) && (ip[back-1] == match[back-1])) back--;
matchLength -= back;
match = base + matchIndex + back;
#if LZ5_FAST_LONGOFF_MM > 0
if ((matchLength >= LZ5_FAST_LONGOFF_MM) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
#if LIZ_FAST_LONGOFF_MM > 0
if ((matchLength >= LIZ_FAST_LONGOFF_MM) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
#endif
{
ip += back;
@@ -134,43 +134,43 @@ FORCE_INLINE int LZ5_compress_fastSmall(
}
_next_match:
if (LZ5_encodeSequence_LZ4(ctx, &ip, &anchor, matchLength+MINMATCH, match)) goto _output_error;
if (LIZ_encodeSequence_LZ4(ctx, &ip, &anchor, matchLength+MINMATCH, match)) goto _output_error;
/* Test end of chunk */
if (ip > mflimit) break;
/* Fill table */
LZ5_putPositionSmall(ip-2, ctx->hashTable, base);
LIZ_putPositionSmall(ip-2, ctx->hashTable, base);
/* Test next position */
matchIndex = LZ5_getPositionSmall(ip, ctx->hashTable);
LZ5_putPositionSmall(ip, ctx->hashTable, base);
matchIndex = LIZ_getPositionSmall(ip, ctx->hashTable);
LIZ_putPositionSmall(ip, ctx->hashTable, base);
if (matchIndex >= lowLimit && (base + matchIndex + maxDistance >= ip))
{
if (matchIndex >= dictLimit) {
match = base + matchIndex;
#if LZ5_FAST_MIN_OFFSET > 0
if ((U32)(ip - match) >= LZ5_FAST_MIN_OFFSET)
#if LIZ_FAST_MIN_OFFSET > 0
if ((U32)(ip - match) >= LIZ_FAST_MIN_OFFSET)
#endif
if (MEM_read32(match) == MEM_read32(ip))
{
matchLength = LZ5_count(ip+MINMATCH, match+MINMATCH, matchlimit);
#if LZ5_FAST_LONGOFF_MM > 0
if ((matchLength >= LZ5_FAST_LONGOFF_MM) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
matchLength = LIZ_count(ip+MINMATCH, match+MINMATCH, matchlimit);
#if LIZ_FAST_LONGOFF_MM > 0
if ((matchLength >= LIZ_FAST_LONGOFF_MM) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
#endif
goto _next_match;
}
} else {
match = dictBase + matchIndex;
#if LZ5_FAST_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LZ5_FAST_MIN_OFFSET)
#if LIZ_FAST_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LIZ_FAST_MIN_OFFSET)
#endif
if ((U32)((dictLimit-1) - matchIndex) >= 3) /* intentional overflow */
if (MEM_read32(match) == MEM_read32(ip)) {
matchLength = LZ5_count_2segments(ip+MINMATCH, match+MINMATCH, matchlimit, dictEnd, lowPrefixPtr);
matchLength = LIZ_count_2segments(ip+MINMATCH, match+MINMATCH, matchlimit, dictEnd, lowPrefixPtr);
match = base + matchIndex;
#if LZ5_FAST_LONGOFF_MM > 0
if ((matchLength >= LZ5_FAST_LONGOFF_MM) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
#if LIZ_FAST_LONGOFF_MM > 0
if ((matchLength >= LIZ_FAST_LONGOFF_MM) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
#endif
goto _next_match;
}
@@ -178,13 +178,13 @@ _next_match:
}
/* Prepare next loop */
forwardH = LZ5_hashPositionSmall(++ip);
forwardH = LIZ_hashPositionSmall(++ip);
}
_last_literals:
/* Encode Last Literals */
ip = iend;
if (LZ5_encodeLastLiterals_LZ4(ctx, &ip, &anchor)) goto _output_error;
if (LIZ_encodeLastLiterals_LZ4(ctx, &ip, &anchor)) goto _output_error;
/* End */
return 1;
View File
@@ -1,16 +1,16 @@
#define LZ5_HC_MIN_OFFSET 8
#define LZ5_HC_LONGOFF_MM 0 /* not used with offsets > 1<<16 */
#define LIZ_HC_MIN_OFFSET 8
#define LIZ_HC_LONGOFF_MM 0 /* not used with offsets > 1<<16 */
#define OPTIMAL_ML (int)((ML_MASK_LZ4-1)+MINMATCH)
#define GET_MINMATCH(offset) (MINMATCH)
#if 1
#define LZ5_HC_HASH_FUNCTION(ip, hashLog) LZ5_hashPtr(ip, hashLog, ctx->params.searchLength)
#define LIZ_HC_HASH_FUNCTION(ip, hashLog) LIZ_hashPtr(ip, hashLog, ctx->params.searchLength)
#else
#define LZ5_HC_HASH_FUNCTION(ip, hashLog) LZ5_hash5Ptr(ip, hashLog)
#define LIZ_HC_HASH_FUNCTION(ip, hashLog) LIZ_hash5Ptr(ip, hashLog)
#endif
/* Update chains up to ip (excluded) */
FORCE_INLINE void LZ5_Insert (LZ5_stream_t* ctx, const BYTE* ip)
FORCE_INLINE void LIZ_Insert (LIZ_stream_t* ctx, const BYTE* ip)
{
U32* const chainTable = ctx->chainTable;
U32* const hashTable = ctx->hashTable;
@@ -25,14 +25,14 @@ FORCE_INLINE void LZ5_Insert (LZ5_stream_t* ctx, const BYTE* ip)
const U32 maxDistance = (1 << ctx->params.windowLog) - 1;
while (idx < target) {
size_t const h = LZ5_hashPtr(base+idx, hashLog, ctx->params.searchLength);
size_t const h = LIZ_hashPtr(base+idx, hashLog, ctx->params.searchLength);
size_t delta = idx - hashTable[h];
if (delta>maxDistance) delta = maxDistance;
DELTANEXT(idx) = (U32)delta;
if (idx >= hashTable[h] + LZ5_HC_MIN_OFFSET)
if (idx >= hashTable[h] + LIZ_HC_MIN_OFFSET)
hashTable[h] = idx;
#if MINMATCH == 3
HashTable3[LZ5_hash3Ptr(base+idx, ctx->params.hashLog3)] = idx;
HashTable3[LIZ_hash3Ptr(base+idx, ctx->params.hashLog3)] = idx;
#endif
idx++;
}
@@ -42,7 +42,7 @@ FORCE_INLINE void LZ5_Insert (LZ5_stream_t* ctx, const BYTE* ip)
FORCE_INLINE int LZ5_InsertAndFindBestMatch (LZ5_stream_t* ctx, /* Index table will be updated */
FORCE_INLINE int LIZ_InsertAndFindBestMatch (LIZ_stream_t* ctx, /* Index table will be updated */
const BYTE* ip, const BYTE* const iLimit,
const BYTE** matchpos)
{
@@ -63,36 +63,36 @@ FORCE_INLINE int LZ5_InsertAndFindBestMatch (LZ5_stream_t* ctx, /* Index table
const U32 lowLimit = (ctx->lowLimit + maxDistance >= (U32)(ip-base)) ? ctx->lowLimit : (U32)(ip - base) - maxDistance;
/* HC4 match finder */
LZ5_Insert(ctx, ip);
matchIndex = HashTable[LZ5_HC_HASH_FUNCTION(ip, hashLog)];
LIZ_Insert(ctx, ip);
matchIndex = HashTable[LIZ_HC_HASH_FUNCTION(ip, hashLog)];
while ((matchIndex>=lowLimit) && (nbAttempts)) {
nbAttempts--;
if (matchIndex >= dictLimit) {
match = base + matchIndex;
#if LZ5_HC_MIN_OFFSET > 0
if ((U32)(ip - match) >= LZ5_HC_MIN_OFFSET)
#if LIZ_HC_MIN_OFFSET > 0
if ((U32)(ip - match) >= LIZ_HC_MIN_OFFSET)
#endif
if (*(match+ml) == *(ip+ml)
&& (MEM_read32(match) == MEM_read32(ip)))
{
size_t const mlt = LZ5_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
#if LZ5_HC_LONGOFF_MM > 0
if ((mlt >= LZ5_HC_LONGOFF_MM) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
size_t const mlt = LIZ_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
#if LIZ_HC_LONGOFF_MM > 0
if ((mlt >= LIZ_HC_LONGOFF_MM) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
#endif
if (mlt > ml) { ml = mlt; *matchpos = match; }
}
} else {
match = dictBase + matchIndex;
// fprintf(stderr, "dictBase[%p]+matchIndex[%d]=match[%p] dictLimit=%d base=%p ip=%p iLimit=%p off=%d\n", dictBase, matchIndex, match, dictLimit, base, ip, iLimit, (U32)(ip-match));
#if LZ5_HC_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LZ5_HC_MIN_OFFSET)
#if LIZ_HC_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LIZ_HC_MIN_OFFSET)
#endif
if ((U32)((dictLimit-1) - matchIndex) >= 3) /* intentional overflow */
if (MEM_read32(match) == MEM_read32(ip)) {
size_t mlt = LZ5_count_2segments(ip+MINMATCH, match+MINMATCH, iLimit, dictEnd, lowPrefixPtr) + MINMATCH;
#if LZ5_HC_LONGOFF_MM > 0
if ((mlt >= LZ5_HC_LONGOFF_MM) || ((U32)(ip - (base + matchIndex)) < LZ5_MAX_16BIT_OFFSET))
size_t mlt = LIZ_count_2segments(ip+MINMATCH, match+MINMATCH, iLimit, dictEnd, lowPrefixPtr) + MINMATCH;
#if LIZ_HC_LONGOFF_MM > 0
if ((mlt >= LIZ_HC_LONGOFF_MM) || ((U32)(ip - (base + matchIndex)) < LIZ_MAX_16BIT_OFFSET))
#endif
if (mlt > ml) { ml = mlt; *matchpos = base + matchIndex; } /* virtual matchpos */
}
@@ -106,8 +106,8 @@ FORCE_INLINE int LZ5_InsertAndFindBestMatch (LZ5_stream_t* ctx, /* Index table
}
FORCE_INLINE int LZ5_InsertAndGetWiderMatch (
LZ5_stream_t* ctx,
FORCE_INLINE int LIZ_InsertAndGetWiderMatch (
LIZ_stream_t* ctx,
const BYTE* const ip,
const BYTE* const iLowLimit,
const BYTE* const iHighLimit,
@@ -131,25 +131,25 @@ FORCE_INLINE int LZ5_InsertAndGetWiderMatch (
const U32 lowLimit = (ctx->lowLimit + maxDistance >= (U32)(ip-base)) ? ctx->lowLimit : (U32)(ip - base) - maxDistance;
/* First Match */
LZ5_Insert(ctx, ip);
matchIndex = HashTable[LZ5_HC_HASH_FUNCTION(ip, hashLog)];
LIZ_Insert(ctx, ip);
matchIndex = HashTable[LIZ_HC_HASH_FUNCTION(ip, hashLog)];
while ((matchIndex>=lowLimit) && (nbAttempts)) {
nbAttempts--;
if (matchIndex >= dictLimit) {
const BYTE* match = base + matchIndex;
#if LZ5_HC_MIN_OFFSET > 0
if ((U32)(ip - match) >= LZ5_HC_MIN_OFFSET)
#if LIZ_HC_MIN_OFFSET > 0
if ((U32)(ip - match) >= LIZ_HC_MIN_OFFSET)
#endif
if (*(iLowLimit + longest) == *(match - LLdelta + longest)) {
if (MEM_read32(match) == MEM_read32(ip)) {
int mlt = MINMATCH + LZ5_count(ip+MINMATCH, match+MINMATCH, iHighLimit);
int mlt = MINMATCH + LIZ_count(ip+MINMATCH, match+MINMATCH, iHighLimit);
int back = 0;
while ((ip+back > iLowLimit) && (match+back > lowPrefixPtr) && (ip[back-1] == match[back-1])) back--;
mlt -= back;
#if LZ5_HC_LONGOFF_MM > 0
if ((mlt >= LZ5_HC_LONGOFF_MM) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
#if LIZ_HC_LONGOFF_MM > 0
if ((mlt >= LIZ_HC_LONGOFF_MM) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
#endif
if (mlt > longest) {
longest = (int)mlt;
@@ -160,17 +160,17 @@ FORCE_INLINE int LZ5_InsertAndGetWiderMatch (
}
} else {
const BYTE* match = dictBase + matchIndex;
#if LZ5_HC_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LZ5_HC_MIN_OFFSET)
#if LIZ_HC_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LIZ_HC_MIN_OFFSET)
#endif
if ((U32)((dictLimit-1) - matchIndex) >= 3) /* intentional overflow */
if (MEM_read32(match) == MEM_read32(ip)) {
int back=0;
size_t mlt = LZ5_count_2segments(ip+MINMATCH, match+MINMATCH, iHighLimit, dictEnd, lowPrefixPtr) + MINMATCH;
size_t mlt = LIZ_count_2segments(ip+MINMATCH, match+MINMATCH, iHighLimit, dictEnd, lowPrefixPtr) + MINMATCH;
while ((ip+back > iLowLimit) && (matchIndex+back > lowLimit) && (ip[back-1] == match[back-1])) back--;
mlt -= back;
#if LZ5_HC_LONGOFF_MM > 0
if ((mlt >= LZ5_HC_LONGOFF_MM) || ((U32)(ip - (base + matchIndex)) < LZ5_MAX_16BIT_OFFSET))
#if LIZ_HC_LONGOFF_MM > 0
if ((mlt >= LIZ_HC_LONGOFF_MM) || ((U32)(ip - (base + matchIndex)) < LIZ_MAX_16BIT_OFFSET))
#endif
if ((int)mlt > longest) { longest = (int)mlt; *matchpos = base + matchIndex + back; *startpos = ip+back; }
}
@@ -184,8 +184,8 @@ FORCE_INLINE int LZ5_InsertAndGetWiderMatch (
}
FORCE_INLINE int LZ5_compress_hashChain (
LZ5_stream_t* const ctx,
FORCE_INLINE int LIZ_compress_hashChain (
LIZ_stream_t* const ctx,
const BYTE* ip,
const BYTE* const iend)
{
@@ -207,7 +207,7 @@ FORCE_INLINE int LZ5_compress_hashChain (
/* Main Loop */
while (ip < mflimit) {
ml = LZ5_InsertAndFindBestMatch (ctx, ip, matchlimit, (&ref));
ml = LIZ_InsertAndFindBestMatch (ctx, ip, matchlimit, (&ref));
if (!ml) { ip++; continue; }
/* saved, in case we would skip too much */
@@ -217,11 +217,11 @@ FORCE_INLINE int LZ5_compress_hashChain (
_Search2:
if (ip+ml < mflimit)
ml2 = LZ5_InsertAndGetWiderMatch(ctx, ip + ml - 2, ip + 1, matchlimit, ml, &ref2, &start2);
ml2 = LIZ_InsertAndGetWiderMatch(ctx, ip + ml - 2, ip + 1, matchlimit, ml, &ref2, &start2);
else ml2 = ml;
if (ml2 == ml) { /* No better match */
if (LZ5_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
if (LIZ_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
continue;
}
@@ -254,7 +254,7 @@ _Search3:
if (ip+new_ml > start2 + ml2 - GET_MINMATCH((U32)(start2 - ref2))) {
new_ml = (int)(start2 - ip) + ml2 - GET_MINMATCH((U32)(start2 - ref2));
if (new_ml < GET_MINMATCH((U32)(ip - ref))) { // match2 doesn't fit
if (LZ5_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
if (LIZ_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
continue;
}
}
@@ -268,16 +268,16 @@ _Search3:
/* Now, we have start2 = ip+new_ml, with new_ml = min(ml, OPTIMAL_ML=18) */
if (start2 + ml2 < mflimit)
ml3 = LZ5_InsertAndGetWiderMatch(ctx, start2 + ml2 - 3, start2, matchlimit, ml2, &ref3, &start3);
ml3 = LIZ_InsertAndGetWiderMatch(ctx, start2 + ml2 - 3, start2, matchlimit, ml2, &ref3, &start3);
else ml3 = ml2;
if (ml3 == ml2) { /* No better match : 2 sequences to encode */
/* ip & ref are known; Now for ml */
if (start2 < ip+ml) ml = (int)(start2 - ip);
/* Now, encode 2 sequences */
if (LZ5_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
if (LIZ_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
ip = start2;
if (LZ5_encodeSequence_LZ4(ctx, &ip, &anchor, ml2, ref2)) return 0;
if (LIZ_encodeSequence_LZ4(ctx, &ip, &anchor, ml2, ref2)) return 0;
continue;
}
@@ -295,7 +295,7 @@ _Search3:
}
}
if (LZ5_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
if (LIZ_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
ip = start3;
ref = ref3;
ml = ml3;
@@ -323,7 +323,7 @@ _Search3:
if (ip + ml > start2 + ml2 - GET_MINMATCH((U32)(start2 - ref2))) {
ml = (int)(start2 - ip) + ml2 - GET_MINMATCH((U32)(start2 - ref2));
if (ml < GET_MINMATCH((U32)(ip - ref))) { // match2 doesn't fit, remove it
if (LZ5_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
if (LIZ_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
ip = start3;
ref = ref3;
ml = ml3;
@@ -344,7 +344,7 @@ _Search3:
ml = (int)(start2 - ip);
}
}
if (LZ5_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
if (LIZ_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
ip = start2;
ref = ref2;
@@ -359,7 +359,7 @@ _Search3:
/* Encode Last Literals */
ip = iend;
if (LZ5_encodeLastLiterals_LZ4(ctx, &ip, &anchor)) goto _output_error;
if (LIZ_encodeLastLiterals_LZ4(ctx, &ip, &anchor)) goto _output_error;
/* End */
return 1;
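The hash-chain search above (insert, then a bounded walk over the delta chain) can be summarized with a small self-contained sketch: a head table maps each hash to the most recent position, and a chain table stores the clamped backward delta to the previous position with the same hash. Table sizes, names and the zero-initialization assumption below are all illustrative, not the codec's real structures.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SK_HASH_LOG   12
#define SK_CHAIN_MASK 0xFFFFu                   /* 64K-entry chain, illustrative */

static uint32_t sk_hash(const uint8_t *p)
{
    uint32_t v; memcpy(&v, p, sizeof v);
    return (v * 2654435761u) >> (32 - SK_HASH_LOG);
}

/* record position idx: store the clamped delta to the previous same-hash
 * position, then make idx the new head of that bucket (tables assumed
 * zero-initialized by the caller) */
static void sk_insert(uint32_t *head, uint32_t *chain,
                      const uint8_t *base, uint32_t idx, uint32_t max_distance)
{
    uint32_t const h = sk_hash(base + idx);
    uint32_t delta = idx - head[h];
    if (delta > max_distance) delta = max_distance;  /* cap so the walk stops at stale data */
    chain[idx & SK_CHAIN_MASK] = delta;
    head[h] = idx;
}

/* walk at most nb_attempts candidates backwards and return the longest match */
static size_t sk_best_match(const uint32_t *head, const uint32_t *chain,
                            const uint8_t *base, uint32_t pos, uint32_t low_limit,
                            const uint8_t *iend, int nb_attempts)
{
    size_t best = 0;
    uint32_t cand;
    if (base + pos + 4 > iend) return 0;        /* need at least 4 readable bytes */
    cand = head[sk_hash(base + pos)];
    while (cand >= low_limit && cand < pos && nb_attempts--) {
        if (memcmp(base + cand, base + pos, 4) == 0) {
            size_t len = 4;
            while (base + pos + len < iend && base[cand + len] == base[pos + len]) len++;
            if (len > best) best = len;
        }
        {   uint32_t const d = chain[cand & SK_CHAIN_MASK];
            if (d == 0) break;                  /* end of this chain */
            cand -= d;
        }
    }
    return best;
}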

View File

@@ -1,7 +1,7 @@
#define LZ5_LOWESTPRICE_MIN_OFFSET 8
#define LIZ_LOWESTPRICE_MIN_OFFSET 8
FORCE_INLINE size_t LZ5_more_profitable(LZ5_stream_t* const ctx, const BYTE *best_ip, size_t best_off, size_t best_common, const BYTE *ip, size_t off, size_t common, size_t literals, int last_off)
FORCE_INLINE size_t LIZ_more_profitable(LIZ_stream_t* const ctx, const BYTE *best_ip, size_t best_off, size_t best_common, const BYTE *ip, size_t off, size_t common, size_t literals, int last_off)
{
size_t sum;
@@ -13,20 +13,20 @@ FORCE_INLINE size_t LZ5_more_profitable(LZ5_stream_t* const ctx, const BYTE *bes
if ((int)off == last_off) off = 0; // rep code
if ((int)best_off == last_off) best_off = 0;
return LZ5_get_price_LZ5v2(ctx, last_off, ip, ctx->off24pos, sum - common, (U32)off, common) <= LZ5_get_price_LZ5v2(ctx, last_off, best_ip, ctx->off24pos, sum - best_common, (U32)best_off, best_common);
return LIZ_get_price_LZ5v2(ctx, last_off, ip, ctx->off24pos, sum - common, (U32)off, common) <= LIZ_get_price_LZ5v2(ctx, last_off, best_ip, ctx->off24pos, sum - best_common, (U32)best_off, best_common);
}
FORCE_INLINE size_t LZ5_better_price(LZ5_stream_t* const ctx, const BYTE *best_ip, size_t best_off, size_t best_common, const BYTE *ip, size_t off, size_t common, int last_off)
FORCE_INLINE size_t LIZ_better_price(LIZ_stream_t* const ctx, const BYTE *best_ip, size_t best_off, size_t best_common, const BYTE *ip, size_t off, size_t common, int last_off)
{
if ((int)off == last_off) off = 0; // rep code
if ((int)best_off == last_off) best_off = 0;
return LZ5_get_price_LZ5v2(ctx, last_off, ip, ctx->off24pos, 0, (U32)off, common) < LZ5_get_price_LZ5v2(ctx, last_off, best_ip, ctx->off24pos, common - best_common, (U32)best_off, best_common);
return LIZ_get_price_LZ5v2(ctx, last_off, ip, ctx->off24pos, 0, (U32)off, common) < LIZ_get_price_LZ5v2(ctx, last_off, best_ip, ctx->off24pos, common - best_common, (U32)best_off, best_common);
}
FORCE_INLINE int LZ5_FindMatchLowestPrice (LZ5_stream_t* ctx, /* Index table will be updated */
FORCE_INLINE int LIZ_FindMatchLowestPrice (LIZ_stream_t* ctx, /* Index table will be updated */
const BYTE* ip, const BYTE* const iLimit,
const BYTE** matchpos)
{
@@ -47,15 +47,15 @@ FORCE_INLINE int LZ5_FindMatchLowestPrice (LZ5_stream_t* ctx, /* Index table w
int nbAttempts=ctx->params.searchNum;
size_t ml=0, mlt;
matchIndex = HashTable[LZ5_hashPtr(ip, ctx->params.hashLog, ctx->params.searchLength)];
matchIndex = HashTable[LIZ_hashPtr(ip, ctx->params.hashLog, ctx->params.searchLength)];
if (ctx->last_off >= LZ5_LOWESTPRICE_MIN_OFFSET) {
if (ctx->last_off >= LIZ_LOWESTPRICE_MIN_OFFSET) {
intptr_t matchIndexLO = (ip - ctx->last_off) - base;
if (matchIndexLO >= lowLimit) {
if (matchIndexLO >= dictLimit) {
match = base + matchIndexLO;
mlt = LZ5_count(ip, match, iLimit);// + MINMATCH;
// if ((mlt >= minMatchLongOff) || (ctx->last_off < LZ5_MAX_16BIT_OFFSET))
mlt = LIZ_count(ip, match, iLimit);// + MINMATCH;
// if ((mlt >= minMatchLongOff) || (ctx->last_off < LIZ_MAX_16BIT_OFFSET))
if (mlt > REPMINMATCH) {
*matchpos = match;
return (int)mlt;
@@ -63,8 +63,8 @@ FORCE_INLINE int LZ5_FindMatchLowestPrice (LZ5_stream_t* ctx, /* Index table w
} else {
match = dictBase + matchIndexLO;
if ((U32)((dictLimit-1) - matchIndexLO) >= 3) { /* intentional overflow */
mlt = LZ5_count_2segments(ip, match, iLimit, dictEnd, lowPrefixPtr);
// if ((mlt >= minMatchLongOff) || (ctx->last_off < LZ5_MAX_16BIT_OFFSET))
mlt = LIZ_count_2segments(ip, match, iLimit, dictEnd, lowPrefixPtr);
// if ((mlt >= minMatchLongOff) || (ctx->last_off < LIZ_MAX_16BIT_OFFSET))
if (mlt > REPMINMATCH) {
*matchpos = base + matchIndexLO; /* virtual matchpos */
return (int)mlt;
@@ -77,16 +77,16 @@ FORCE_INLINE int LZ5_FindMatchLowestPrice (LZ5_stream_t* ctx, /* Index table w
#if MINMATCH == 3
{
U32 matchIndex3 = ctx->hashTable3[LZ5_hash3Ptr(ip, ctx->params.hashLog3)];
U32 matchIndex3 = ctx->hashTable3[LIZ_hash3Ptr(ip, ctx->params.hashLog3)];
if (matchIndex3 < current && matchIndex3 >= lowLimit)
{
size_t offset = (size_t)current - matchIndex3;
if (offset < LZ5_MAX_8BIT_OFFSET)
if (offset < LIZ_MAX_8BIT_OFFSET)
{
match = ip - offset;
if (match > base && MEM_readMINMATCH(ip) == MEM_readMINMATCH(match))
{
ml = 3;//LZ5_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
ml = 3;//LIZ_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
*matchpos = match;
}
}
@@ -96,12 +96,12 @@ FORCE_INLINE int LZ5_FindMatchLowestPrice (LZ5_stream_t* ctx, /* Index table w
while ((matchIndex < current) && (matchIndex >= lowLimit) && (nbAttempts)) {
nbAttempts--;
match = base + matchIndex;
if ((U32)(ip - match) >= LZ5_LOWESTPRICE_MIN_OFFSET) {
if ((U32)(ip - match) >= LIZ_LOWESTPRICE_MIN_OFFSET) {
if (matchIndex >= dictLimit) {
if (*(match+ml) == *(ip+ml) && (MEM_read32(match) == MEM_read32(ip))) {
mlt = LZ5_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
if (!ml || (mlt > ml && LZ5_better_price(ctx, ip, (ip - *matchpos), ml, ip, (ip - match), mlt, ctx->last_off)))
mlt = LIZ_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
if (!ml || (mlt > ml && LIZ_better_price(ctx, ip, (ip - *matchpos), ml, ip, (ip - match), mlt, ctx->last_off)))
{ ml = mlt; *matchpos = match; }
}
} else {
@@ -109,9 +109,9 @@ FORCE_INLINE int LZ5_FindMatchLowestPrice (LZ5_stream_t* ctx, /* Index table w
// fprintf(stderr, "dictBase[%p]+matchIndex[%d]=match[%p] dictLimit=%d base=%p ip=%p iLimit=%p off=%d\n", dictBase, matchIndex, match, dictLimit, base, ip, iLimit, (U32)(ip-match));
if ((U32)((dictLimit-1) - matchIndex) >= 3) /* intentional overflow */
if (MEM_read32(matchDict) == MEM_read32(ip)) {
mlt = LZ5_count_2segments(ip+MINMATCH, matchDict+MINMATCH, iLimit, dictEnd, lowPrefixPtr) + MINMATCH;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
if (!ml || (mlt > ml && LZ5_better_price(ctx, ip, (ip - *matchpos), ml, ip, (U32)(ip - match), mlt, ctx->last_off)))
mlt = LIZ_count_2segments(ip+MINMATCH, matchDict+MINMATCH, iLimit, dictEnd, lowPrefixPtr) + MINMATCH;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
if (!ml || (mlt > ml && LIZ_better_price(ctx, ip, (ip - *matchpos), ml, ip, (U32)(ip - match), mlt, ctx->last_off)))
{ ml = mlt; *matchpos = match; } /* virtual matchpos */
}
}
@@ -123,8 +123,8 @@ FORCE_INLINE int LZ5_FindMatchLowestPrice (LZ5_stream_t* ctx, /* Index table w
}
FORCE_INLINE size_t LZ5_GetWiderMatch (
LZ5_stream_t* ctx,
FORCE_INLINE size_t LIZ_GetWiderMatch (
LIZ_stream_t* ctx,
const BYTE* const ip,
const BYTE* const iLowLimit,
const BYTE* const iHighLimit,
@@ -150,21 +150,21 @@ FORCE_INLINE size_t LZ5_GetWiderMatch (
size_t mlt;
/* First Match */
matchIndex = HashTable[LZ5_hashPtr(ip, ctx->params.hashLog, ctx->params.searchLength)];
matchIndex = HashTable[LIZ_hashPtr(ip, ctx->params.hashLog, ctx->params.searchLength)];
if (ctx->last_off >= LZ5_LOWESTPRICE_MIN_OFFSET) {
if (ctx->last_off >= LIZ_LOWESTPRICE_MIN_OFFSET) {
intptr_t matchIndexLO = (ip - ctx->last_off) - base;
if (matchIndexLO >= lowLimit) {
if (matchIndexLO >= dictLimit) {
match = base + matchIndexLO;
if (MEM_readMINMATCH(match) == MEM_readMINMATCH(ip)) {
int back = 0;
mlt = LZ5_count(ip+MINMATCH, match+MINMATCH, iHighLimit) + MINMATCH;
mlt = LIZ_count(ip+MINMATCH, match+MINMATCH, iHighLimit) + MINMATCH;
while ((ip+back > iLowLimit) && (match+back > lowPrefixPtr) && (ip[back-1] == match[back-1])) back--;
mlt -= back;
if (mlt > longest)
if ((mlt >= minMatchLongOff) || (ctx->last_off < LZ5_MAX_16BIT_OFFSET)) {
if ((mlt >= minMatchLongOff) || (ctx->last_off < LIZ_MAX_16BIT_OFFSET)) {
*matchpos = match+back;
*startpos = ip+back;
longest = mlt;
@@ -175,12 +175,12 @@ FORCE_INLINE size_t LZ5_GetWiderMatch (
if ((U32)((dictLimit-1) - matchIndexLO) >= 3) /* intentional overflow */
if (MEM_readMINMATCH(match) == MEM_readMINMATCH(ip)) {
int back=0;
mlt = LZ5_count_2segments(ip+MINMATCH, match+MINMATCH, iHighLimit, dictEnd, lowPrefixPtr) + MINMATCH;
mlt = LIZ_count_2segments(ip+MINMATCH, match+MINMATCH, iHighLimit, dictEnd, lowPrefixPtr) + MINMATCH;
while ((ip+back > iLowLimit) && (matchIndexLO+back > lowLimit) && (ip[back-1] == match[back-1])) back--;
mlt -= back;
if (mlt > longest)
if ((mlt >= minMatchLongOff) || (ctx->last_off < LZ5_MAX_16BIT_OFFSET)) {
if ((mlt >= minMatchLongOff) || (ctx->last_off < LIZ_MAX_16BIT_OFFSET)) {
*matchpos = base + matchIndexLO + back; /* virtual matchpos */
*startpos = ip+back;
longest = mlt;
@@ -192,19 +192,19 @@ FORCE_INLINE size_t LZ5_GetWiderMatch (
#if MINMATCH == 3
{
U32 matchIndex3 = ctx->hashTable3[LZ5_hash3Ptr(ip, ctx->params.hashLog3)];
U32 matchIndex3 = ctx->hashTable3[LIZ_hash3Ptr(ip, ctx->params.hashLog3)];
if (matchIndex3 < current && matchIndex3 >= lowLimit) {
size_t offset = (size_t)current - matchIndex3;
if (offset < LZ5_MAX_8BIT_OFFSET) {
if (offset < LIZ_MAX_8BIT_OFFSET) {
match = ip - offset;
if (match > base && MEM_readMINMATCH(ip) == MEM_readMINMATCH(match)) {
mlt = LZ5_count(ip + MINMATCH, match + MINMATCH, iHighLimit) + MINMATCH;
mlt = LIZ_count(ip + MINMATCH, match + MINMATCH, iHighLimit) + MINMATCH;
int back = 0;
while ((ip + back > iLowLimit) && (match + back > lowPrefixPtr) && (ip[back - 1] == match[back - 1])) back--;
mlt -= back;
if (!longest || (mlt > longest && LZ5_better_price(ctx, *startpos, (*startpos - *matchpos), longest, ip, (ip - match), mlt, ctx->last_off))) {
if (!longest || (mlt > longest && LIZ_better_price(ctx, *startpos, (*startpos - *matchpos), longest, ip, (ip - match), mlt, ctx->last_off))) {
*matchpos = match + back;
*startpos = ip + back;
longest = mlt;
@@ -218,16 +218,16 @@ FORCE_INLINE size_t LZ5_GetWiderMatch (
while ((matchIndex < current) && (matchIndex >= lowLimit) && (nbAttempts)) {
nbAttempts--;
match = base + matchIndex;
if ((U32)(ip - match) >= LZ5_LOWESTPRICE_MIN_OFFSET) {
if ((U32)(ip - match) >= LIZ_LOWESTPRICE_MIN_OFFSET) {
if (matchIndex >= dictLimit) {
if (MEM_read32(match) == MEM_read32(ip)) {
int back = 0;
mlt = LZ5_count(ip+MINMATCH, match+MINMATCH, iHighLimit) + MINMATCH;
mlt = LIZ_count(ip+MINMATCH, match+MINMATCH, iHighLimit) + MINMATCH;
while ((ip+back > iLowLimit) && (match+back > lowPrefixPtr) && (ip[back-1] == match[back-1])) back--;
mlt -= back;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
if (!longest || (mlt > longest && LZ5_better_price(ctx, *startpos, (*startpos - *matchpos), longest, ip, (ip - match), mlt, ctx->last_off)))
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
if (!longest || (mlt > longest && LIZ_better_price(ctx, *startpos, (*startpos - *matchpos), longest, ip, (ip - match), mlt, ctx->last_off)))
{ longest = mlt; *startpos = ip+back; *matchpos = match+back; }
}
} else {
@@ -236,12 +236,12 @@ FORCE_INLINE size_t LZ5_GetWiderMatch (
if ((U32)((dictLimit-1) - matchIndex) >= 3) /* intentional overflow */
if (MEM_read32(matchDict) == MEM_read32(ip)) {
int back=0;
mlt = LZ5_count_2segments(ip+MINMATCH, matchDict+MINMATCH, iHighLimit, dictEnd, lowPrefixPtr) + MINMATCH;
mlt = LIZ_count_2segments(ip+MINMATCH, matchDict+MINMATCH, iHighLimit, dictEnd, lowPrefixPtr) + MINMATCH;
while ((ip+back > iLowLimit) && (matchIndex+back > lowLimit) && (ip[back-1] == matchDict[back-1])) back--;
mlt -= back;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
if (!longest || (mlt > longest && LZ5_better_price(ctx, *startpos, (*startpos - *matchpos), longest, ip, (U32)(ip - match), mlt, ctx->last_off)))
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
if (!longest || (mlt > longest && LIZ_better_price(ctx, *startpos, (*startpos - *matchpos), longest, ip, (U32)(ip - match), mlt, ctx->last_off)))
{ longest = mlt; *startpos = ip+back; *matchpos = match+back; } /* virtual matchpos */
}
}
@@ -255,8 +255,8 @@ FORCE_INLINE size_t LZ5_GetWiderMatch (
FORCE_INLINE int LZ5_compress_lowestPrice(
LZ5_stream_t* const ctx,
FORCE_INLINE int LIZ_compress_lowestPrice(
LIZ_stream_t* const ctx,
const BYTE* ip,
const BYTE* const iend)
{
@@ -277,8 +277,8 @@ FORCE_INLINE int LZ5_compress_lowestPrice(
/* Main Loop */
while (ip < mflimit)
{
LZ5_Insert(ctx, ip);
ml = LZ5_FindMatchLowestPrice (ctx, ip, matchlimit, (&ref));
LIZ_Insert(ctx, ip);
ml = LIZ_FindMatchLowestPrice (ctx, ip, matchlimit, (&ref));
if (!ml) { ip++; continue; }
{
@@ -299,8 +299,8 @@ _Search:
if (ip+ml >= mflimit) { goto _Encode; }
if (ml >= sufficient_len) { goto _Encode; }
LZ5_Insert(ctx, ip);
ml2 = (int)LZ5_GetWiderMatch(ctx, ip + ml - 2, anchor, matchlimit, 0, &ref2, &start2);
LIZ_Insert(ctx, ip);
ml2 = (int)LIZ_GetWiderMatch(ctx, ip + ml - 2, anchor, matchlimit, 0, &ref2, &start2);
if (!ml2) goto _Encode;
{
@@ -310,7 +310,7 @@ _Search:
// find the lowest price for encoding ml bytes
best_pos = ip;
best_price = LZ5_MAX_PRICE;
best_price = LIZ_MAX_PRICE;
off0 = (int)(ip - ref);
off1 = (int)(start2 - ref2);
@@ -318,14 +318,14 @@ _Search:
{
int common0 = (int)(pos - ip);
if (common0 >= MINMATCH) {
price = (int)LZ5_get_price_LZ5v2(ctx, ctx->last_off, ip, ctx->off24pos, ip - anchor, (off0 == ctx->last_off) ? 0 : off0, common0);
price = (int)LIZ_get_price_LZ5v2(ctx, ctx->last_off, ip, ctx->off24pos, ip - anchor, (off0 == ctx->last_off) ? 0 : off0, common0);
{
int common1 = (int)(start2 + ml2 - pos);
if (common1 >= MINMATCH)
price += LZ5_get_price_LZ5v2(ctx, ctx->last_off, pos, ctx->off24pos, 0, (off1 == off0) ? 0 : (off1), common1);
price += LIZ_get_price_LZ5v2(ctx, ctx->last_off, pos, ctx->off24pos, 0, (off1 == off0) ? 0 : (off1), common1);
else
price += LZ5_get_price_LZ5v2(ctx, ctx->last_off, pos, ctx->off24pos, common1, 0, 0);
price += LIZ_get_price_LZ5v2(ctx, ctx->last_off, pos, ctx->off24pos, common1, 0, 0);
}
if (price < best_price) {
@@ -333,19 +333,19 @@ _Search:
best_pos = pos;
}
} else {
price = LZ5_get_price_LZ5v2(ctx, ctx->last_off, ip, ctx->off24pos, start2 - anchor, (off1 == ctx->last_off) ? 0 : off1, ml2);
price = LIZ_get_price_LZ5v2(ctx, ctx->last_off, ip, ctx->off24pos, start2 - anchor, (off1 == ctx->last_off) ? 0 : off1, ml2);
if (price < best_price)
best_pos = pos;
break;
}
}
// LZ5_DEBUG("%u: TRY last_off=%d literals=%u off=%u mlen=%u literals2=%u off2=%u mlen2=%u best=%d\n", (U32)(ip - ctx->inputBuffer), ctx->last_off, (U32)(ip - anchor), off0, (U32)ml, (U32)(start2 - anchor), off1, ml2, (U32)(best_pos - ip));
// LIZ_DEBUG("%u: TRY last_off=%d literals=%u off=%u mlen=%u literals2=%u off2=%u mlen2=%u best=%d\n", (U32)(ip - ctx->inputBuffer), ctx->last_off, (U32)(ip - anchor), off0, (U32)ml, (U32)(start2 - anchor), off1, ml2, (U32)(best_pos - ip));
ml = (int)(best_pos - ip);
}
if ((ml < MINMATCH) || ((ml < minMatchLongOff) && ((U32)(ip-ref) >= LZ5_MAX_16BIT_OFFSET)))
if ((ml < MINMATCH) || ((ml < minMatchLongOff) && ((U32)(ip-ref) >= LIZ_MAX_16BIT_OFFSET)))
{
ip = start2;
ref = ref2;
@@ -356,7 +356,7 @@ _Search:
_Encode:
if (start0 < ip)
{
if (LZ5_more_profitable(ctx, ip, (ip - ref), ml, start0, (start0 - ref0), ml0, (ref0 - ref), ctx->last_off))
if (LIZ_more_profitable(ctx, ip, (ip - ref), ml, start0, (start0 - ref0), ml0, (ref0 - ref), ctx->last_off))
{
ip = start0;
ref = ref0;
@@ -364,13 +364,13 @@ _Encode:
}
}
// if ((ml < minMatchLongOff) && ((U32)(ip-ref) >= LZ5_MAX_16BIT_OFFSET)) { printf("LZ5_encodeSequence ml=%d off=%d\n", ml, (U32)(ip-ref)); exit(0); }
if (LZ5_encodeSequence_LZ5v2(ctx, &ip, &anchor, ml, ((ip - ref == ctx->last_off) ? ip : ref))) return 0;
// if ((ml < minMatchLongOff) && ((U32)(ip-ref) >= LIZ_MAX_16BIT_OFFSET)) { printf("LIZ_encodeSequence ml=%d off=%d\n", ml, (U32)(ip-ref)); exit(0); }
if (LIZ_encodeSequence_LZ5v2(ctx, &ip, &anchor, ml, ((ip - ref == ctx->last_off) ? ip : ref))) return 0;
}
/* Encode Last Literals */
ip = iend;
if (LZ5_encodeLastLiterals_LZ5v2(ctx, &ip, &anchor)) goto _output_error;
if (LIZ_encodeLastLiterals_LZ5v2(ctx, &ip, &anchor)) goto _output_error;
/* End */
return 1;
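The lowestPrice strategy above does not simply keep the longest candidate; it keeps the one whose estimated encoded size is smaller, with repeat-offset matches priced cheaper. The real pricing lives in LIZ_get_price_LZ5v2; the toy cost model below only illustrates the shape of that comparison, and every number in it is made up.

#include <stddef.h>

/* very rough cost in bits: each literal ~8 bits, a match header ~24 bits,
 * a repeat-offset match (offset equals the last offset used) ~10 bits */
static size_t sketch_price(size_t literals, size_t offset, size_t match_len, size_t last_off)
{
    size_t bits = 8 * literals;
    if (match_len)
        bits += (offset == last_off) ? 10 : 24;
    return bits;
}

/* keep candidate B only if it prices lower than candidate A for the same position */
static int sketch_better_price(size_t lit_a, size_t off_a, size_t len_a,
                               size_t lit_b, size_t off_b, size_t len_b,
                               size_t last_off)
{
    return sketch_price(lit_b, off_b, len_b, last_off)
         < sketch_price(lit_a, off_a, len_a, last_off);
}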

View File

@@ -1,11 +1,11 @@
#define OPTIMAL_ML (int)((ML_MASK_LZ4-1)+MINMATCH)
//#define LZ5_NOCHAIN_HASH_FUNCTION(ip, hashLog) LZ5_hashPtr(ip, hashLog, ctx->params.searchLength)
#define LZ5_NOCHAIN_HASH_FUNCTION(ip, hashLog) LZ5_hash5Ptr(ip, hashLog)
#define LZ5_NOCHAIN_MIN_OFFSET 8
//#define LIZ_NOCHAIN_HASH_FUNCTION(ip, hashLog) LIZ_hashPtr(ip, hashLog, ctx->params.searchLength)
#define LIZ_NOCHAIN_HASH_FUNCTION(ip, hashLog) LIZ_hash5Ptr(ip, hashLog)
#define LIZ_NOCHAIN_MIN_OFFSET 8
/* Update chains up to ip (excluded) */
FORCE_INLINE void LZ5_InsertNoChain (LZ5_stream_t* ctx, const BYTE* ip)
FORCE_INLINE void LIZ_InsertNoChain (LIZ_stream_t* ctx, const BYTE* ip)
{
U32* const hashTable = ctx->hashTable;
const BYTE* const base = ctx->base;
@@ -14,8 +14,8 @@ FORCE_INLINE void LZ5_InsertNoChain (LZ5_stream_t* ctx, const BYTE* ip)
const int hashLog = ctx->params.hashLog;
while (idx < target) {
size_t const h = LZ5_NOCHAIN_HASH_FUNCTION(base+idx, hashLog);
if (idx >= hashTable[h] + LZ5_NOCHAIN_MIN_OFFSET)
size_t const h = LIZ_NOCHAIN_HASH_FUNCTION(base+idx, hashLog);
if (idx >= hashTable[h] + LIZ_NOCHAIN_MIN_OFFSET)
hashTable[h] = idx;
idx++;
}
@@ -25,7 +25,7 @@ FORCE_INLINE void LZ5_InsertNoChain (LZ5_stream_t* ctx, const BYTE* ip)
FORCE_INLINE int LZ5_InsertAndFindBestMatchNoChain (LZ5_stream_t* ctx, /* Index table will be updated */
FORCE_INLINE int LIZ_InsertAndFindBestMatchNoChain (LIZ_stream_t* ctx, /* Index table will be updated */
const BYTE* ip, const BYTE* const iLimit,
const BYTE** matchpos)
{
@@ -43,29 +43,29 @@ FORCE_INLINE int LZ5_InsertAndFindBestMatchNoChain (LZ5_stream_t* ctx, /* Inde
const U32 lowLimit = (ctx->lowLimit + maxDistance >= (U32)(ip-base)) ? ctx->lowLimit : (U32)(ip - base) - maxDistance;
/* HC4 match finder */
LZ5_InsertNoChain(ctx, ip);
matchIndex = HashTable[LZ5_NOCHAIN_HASH_FUNCTION(ip, hashLog)];
LIZ_InsertNoChain(ctx, ip);
matchIndex = HashTable[LIZ_NOCHAIN_HASH_FUNCTION(ip, hashLog)];
if (matchIndex >= lowLimit) {
if (matchIndex >= dictLimit) {
match = base + matchIndex;
#if LZ5_NOCHAIN_MIN_OFFSET > 0
if ((U32)(ip - match) >= LZ5_NOCHAIN_MIN_OFFSET)
#if LIZ_NOCHAIN_MIN_OFFSET > 0
if ((U32)(ip - match) >= LIZ_NOCHAIN_MIN_OFFSET)
#endif
if (*(match+ml) == *(ip+ml) && (MEM_read32(match) == MEM_read32(ip)))
{
size_t const mlt = LZ5_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
size_t const mlt = LIZ_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
if (mlt > ml) { ml = mlt; *matchpos = match; }
}
} else {
match = dictBase + matchIndex;
// fprintf(stderr, "dictBase[%p]+matchIndex[%d]=match[%p] dictLimit=%d base=%p ip=%p iLimit=%p off=%d\n", dictBase, matchIndex, match, dictLimit, base, ip, iLimit, (U32)(ip-match));
#if LZ5_NOCHAIN_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LZ5_NOCHAIN_MIN_OFFSET)
#if LIZ_NOCHAIN_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LIZ_NOCHAIN_MIN_OFFSET)
#endif
if ((U32)((dictLimit-1) - matchIndex) >= 3) /* intentional overflow */
if (MEM_read32(match) == MEM_read32(ip)) {
size_t mlt = LZ5_count_2segments(ip+MINMATCH, match+MINMATCH, iLimit, dictEnd, lowPrefixPtr) + MINMATCH;
size_t mlt = LIZ_count_2segments(ip+MINMATCH, match+MINMATCH, iLimit, dictEnd, lowPrefixPtr) + MINMATCH;
if (mlt > ml) { ml = mlt; *matchpos = base + matchIndex; } /* virtual matchpos */
}
}
@@ -75,8 +75,8 @@ FORCE_INLINE int LZ5_InsertAndFindBestMatchNoChain (LZ5_stream_t* ctx, /* Inde
}
FORCE_INLINE int LZ5_InsertAndGetWiderMatchNoChain (
LZ5_stream_t* ctx,
FORCE_INLINE int LIZ_InsertAndGetWiderMatchNoChain (
LIZ_stream_t* ctx,
const BYTE* const ip,
const BYTE* const iLowLimit,
const BYTE* const iHighLimit,
@@ -97,18 +97,18 @@ FORCE_INLINE int LZ5_InsertAndGetWiderMatchNoChain (
const U32 lowLimit = (ctx->lowLimit + maxDistance >= (U32)(ip-base)) ? ctx->lowLimit : (U32)(ip - base) - maxDistance;
/* First Match */
LZ5_InsertNoChain(ctx, ip);
matchIndex = HashTable[LZ5_NOCHAIN_HASH_FUNCTION(ip, hashLog)];
LIZ_InsertNoChain(ctx, ip);
matchIndex = HashTable[LIZ_NOCHAIN_HASH_FUNCTION(ip, hashLog)];
if (matchIndex>=lowLimit) {
if (matchIndex >= dictLimit) {
const BYTE* match = base + matchIndex;
#if LZ5_NOCHAIN_MIN_OFFSET > 0
if ((U32)(ip - match) >= LZ5_NOCHAIN_MIN_OFFSET)
#if LIZ_NOCHAIN_MIN_OFFSET > 0
if ((U32)(ip - match) >= LIZ_NOCHAIN_MIN_OFFSET)
#endif
if (*(iLowLimit + longest) == *(match - LLdelta + longest)) {
if (MEM_read32(match) == MEM_read32(ip)) {
int mlt = MINMATCH + LZ5_count(ip+MINMATCH, match+MINMATCH, iHighLimit);
int mlt = MINMATCH + LIZ_count(ip+MINMATCH, match+MINMATCH, iHighLimit);
int back = 0;
while ((ip+back > iLowLimit) && (match+back > lowPrefixPtr) && (ip[back-1] == match[back-1])) back--;
mlt -= back;
@@ -122,13 +122,13 @@ FORCE_INLINE int LZ5_InsertAndGetWiderMatchNoChain (
}
} else {
const BYTE* match = dictBase + matchIndex;
#if LZ5_NOCHAIN_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LZ5_NOCHAIN_MIN_OFFSET)
#if LIZ_NOCHAIN_MIN_OFFSET > 0
if ((U32)(ip - (base + matchIndex)) >= LIZ_NOCHAIN_MIN_OFFSET)
#endif
if ((U32)((dictLimit-1) - matchIndex) >= 3) /* intentional overflow */
if (MEM_read32(match) == MEM_read32(ip)) {
int back=0;
size_t mlt = LZ5_count_2segments(ip+MINMATCH, match+MINMATCH, iHighLimit, dictEnd, lowPrefixPtr) + MINMATCH;
size_t mlt = LIZ_count_2segments(ip+MINMATCH, match+MINMATCH, iHighLimit, dictEnd, lowPrefixPtr) + MINMATCH;
while ((ip+back > iLowLimit) && (matchIndex+back > lowLimit) && (ip[back-1] == match[back-1])) back--;
mlt -= back;
if ((int)mlt > longest) { longest = (int)mlt; *matchpos = base + matchIndex + back; *startpos = ip+back; }
@@ -140,8 +140,8 @@ FORCE_INLINE int LZ5_InsertAndGetWiderMatchNoChain (
}
FORCE_INLINE int LZ5_compress_noChain (
LZ5_stream_t* const ctx,
FORCE_INLINE int LIZ_compress_noChain (
LIZ_stream_t* const ctx,
const BYTE* ip,
const BYTE* const iend)
{
@@ -163,7 +163,7 @@ FORCE_INLINE int LZ5_compress_noChain (
/* Main Loop */
while (ip < mflimit) {
ml = LZ5_InsertAndFindBestMatchNoChain (ctx, ip, matchlimit, (&ref));
ml = LIZ_InsertAndFindBestMatchNoChain (ctx, ip, matchlimit, (&ref));
if (!ml) { ip++; continue; }
/* saved, in case we would skip too much */
@@ -173,11 +173,11 @@ FORCE_INLINE int LZ5_compress_noChain (
_Search2:
if (ip+ml < mflimit)
ml2 = LZ5_InsertAndGetWiderMatchNoChain(ctx, ip + ml - 2, ip + 1, matchlimit, ml, &ref2, &start2);
ml2 = LIZ_InsertAndGetWiderMatchNoChain(ctx, ip + ml - 2, ip + 1, matchlimit, ml, &ref2, &start2);
else ml2 = ml;
if (ml2 == ml) { /* No better match */
if (LZ5_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
if (LIZ_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
continue;
}
@@ -218,16 +218,16 @@ _Search3:
/* Now, we have start2 = ip+new_ml, with new_ml = min(ml, OPTIMAL_ML=18) */
if (start2 + ml2 < mflimit)
ml3 = LZ5_InsertAndGetWiderMatchNoChain(ctx, start2 + ml2 - 3, start2, matchlimit, ml2, &ref3, &start3);
ml3 = LIZ_InsertAndGetWiderMatchNoChain(ctx, start2 + ml2 - 3, start2, matchlimit, ml2, &ref3, &start3);
else ml3 = ml2;
if (ml3 == ml2) { /* No better match : 2 sequences to encode */
/* ip & ref are known; Now for ml */
if (start2 < ip+ml) ml = (int)(start2 - ip);
/* Now, encode 2 sequences */
if (LZ5_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
if (LIZ_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
ip = start2;
if (LZ5_encodeSequence_LZ4(ctx, &ip, &anchor, ml2, ref2)) return 0;
if (LIZ_encodeSequence_LZ4(ctx, &ip, &anchor, ml2, ref2)) return 0;
continue;
}
@@ -245,7 +245,7 @@ _Search3:
}
}
if (LZ5_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
if (LIZ_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
ip = start3;
ref = ref3;
ml = ml3;
@@ -273,7 +273,7 @@ _Search3:
if (ip + ml > start2 + ml2 - MINMATCH) {
ml = (int)(start2 - ip) + ml2 - MINMATCH;
if (ml < MINMATCH) { // match2 doesn't fit, remove it
if (LZ5_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
if (LIZ_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
ip = start3;
ref = ref3;
ml = ml3;
@@ -294,7 +294,7 @@ _Search3:
ml = (int)(start2 - ip);
}
}
if (LZ5_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
if (LIZ_encodeSequence_LZ4(ctx, &ip, &anchor, ml, ref)) return 0;
ip = start2;
ref = ref2;
@@ -309,7 +309,7 @@ _Search3:
/* Encode Last Literals */
ip = iend;
if (LZ5_encodeLastLiterals_LZ4(ctx, &ip, &anchor)) goto _output_error;
if (LIZ_encodeLastLiterals_LZ4(ctx, &ip, &anchor)) goto _output_error;
/* End */
return 1;
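Several helpers above (the *_count_2segments calls) measure a match that starts in an external dictionary buffer and may continue into the current prefix. A stand-alone sketch of that counting rule follows, with hypothetical names; it assumes, as the parsers above do, that the prefix logically follows the dictionary segment.

#include <stddef.h>
#include <stdint.h>

/* count matching bytes while the candidate still lies inside the dictionary
 * segment; if the dictionary end is reached, keep comparing against the
 * start of the current prefix */
static size_t sketch_count_2segments(const uint8_t *ip, const uint8_t *match,
                                     const uint8_t *ip_limit,
                                     const uint8_t *dict_end,
                                     const uint8_t *prefix_start)
{
    size_t n = 0;
    while (ip + n < ip_limit && match + n < dict_end && ip[n] == match[n])
        n++;
    if (match + n < dict_end)
        return n;                      /* mismatch before the segment boundary */
    {
        size_t k = 0;                  /* continue in the prefix segment */
        while (ip + n < ip_limit && ip[n] == prefix_start[k]) { n++; k++; }
    }
    return n;
}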

View File

@@ -1,18 +1,18 @@
#define LZ5_LOG_PARSER(fmt, ...) //printf(fmt, __VA_ARGS__)
#define LZ5_LOG_PRICE(fmt, ...) //printf(fmt, __VA_ARGS__)
#define LZ5_LOG_ENCODE(fmt, ...) //printf(fmt, __VA_ARGS__)
#define LIZ_LOG_PARSER(fmt, ...) //printf(fmt, __VA_ARGS__)
#define LIZ_LOG_PRICE(fmt, ...) //printf(fmt, __VA_ARGS__)
#define LIZ_LOG_ENCODE(fmt, ...) //printf(fmt, __VA_ARGS__)
#define LZ5_OPTIMAL_MIN_OFFSET 8
#define LZ5_OPT_NUM (1<<12)
#define LIZ_OPTIMAL_MIN_OFFSET 8
#define LIZ_OPT_NUM (1<<12)
#define REPMINMATCH 1
FORCE_INLINE size_t LZ5_get_price(LZ5_stream_t* const ctx, int rep, const BYTE *ip, const BYTE *off24pos, size_t litLength, U32 offset, size_t matchLength)
FORCE_INLINE size_t LIZ_get_price(LIZ_stream_t* const ctx, int rep, const BYTE *ip, const BYTE *off24pos, size_t litLength, U32 offset, size_t matchLength)
{
if (ctx->params.decompressType == LZ5_coderwords_LZ4)
return LZ5_get_price_LZ4(ctx, ip, litLength, offset, matchLength);
if (ctx->params.decompressType == LIZ_coderwords_LZ4)
return LIZ_get_price_LZ4(ctx, ip, litLength, offset, matchLength);
return LZ5_get_price_LZ5v2(ctx, rep, ip, off24pos, litLength, offset, matchLength);
return LIZ_get_price_LZ5v2(ctx, rep, ip, off24pos, litLength, offset, matchLength);
}
@@ -22,7 +22,7 @@ typedef struct
int off;
int len;
int back;
} LZ5_match_t;
} LIZ_match_t;
typedef struct
{
@@ -32,11 +32,11 @@ typedef struct
int litlen;
int rep;
const BYTE* off24pos;
} LZ5_optimal_t;
} LIZ_optimal_t;
/* Update chains up to ip (excluded) */
FORCE_INLINE void LZ5_BinTree_Insert(LZ5_stream_t* ctx, const BYTE* ip)
FORCE_INLINE void LIZ_BinTree_Insert(LIZ_stream_t* ctx, const BYTE* ip)
{
#if MINMATCH == 3
U32* HashTable3 = ctx->hashTable3;
@@ -45,7 +45,7 @@ FORCE_INLINE void LZ5_BinTree_Insert(LZ5_stream_t* ctx, const BYTE* ip)
U32 idx = ctx->nextToUpdate;
while(idx < target) {
HashTable3[LZ5_hash3Ptr(base+idx, ctx->params.hashLog3)] = idx;
HashTable3[LIZ_hash3Ptr(base+idx, ctx->params.hashLog3)] = idx;
idx++;
}
@@ -57,13 +57,13 @@ FORCE_INLINE void LZ5_BinTree_Insert(LZ5_stream_t* ctx, const BYTE* ip)
FORCE_INLINE int LZ5_GetAllMatches (
LZ5_stream_t* ctx,
FORCE_INLINE int LIZ_GetAllMatches (
LIZ_stream_t* ctx,
const BYTE* const ip,
const BYTE* const iLowLimit,
const BYTE* const iHighLimit,
size_t best_mlen,
LZ5_match_t* matches)
LIZ_match_t* matches)
{
U32* const chainTable = ctx->chainTable;
U32* const HashTable = ctx->hashTable;
@@ -88,19 +88,19 @@ FORCE_INLINE int LZ5_GetAllMatches (
if (ip + MINMATCH > iHighLimit) return 0;
/* First Match */
HashPos = &HashTable[LZ5_hashPtr(ip, ctx->params.hashLog, ctx->params.searchLength)];
HashPos = &HashTable[LIZ_hashPtr(ip, ctx->params.hashLog, ctx->params.searchLength)];
matchIndex = *HashPos;
#if MINMATCH == 3
{
U32* const HashTable3 = ctx->hashTable3;
U32* HashPos3 = &HashTable3[LZ5_hash3Ptr(ip, ctx->params.hashLog3)];
U32* HashPos3 = &HashTable3[LIZ_hash3Ptr(ip, ctx->params.hashLog3)];
if ((*HashPos3 < current) && (*HashPos3 >= lowLimit)) {
size_t offset = current - *HashPos3;
if (offset < LZ5_MAX_8BIT_OFFSET) {
if (offset < LIZ_MAX_8BIT_OFFSET) {
match = ip - offset;
if (match > base && MEM_readMINMATCH(ip) == MEM_readMINMATCH(match)) {
size_t mlt = LZ5_count(ip + MINMATCH, match + MINMATCH, iHighLimit) + MINMATCH;
size_t mlt = LIZ_count(ip + MINMATCH, match + MINMATCH, iHighLimit) + MINMATCH;
int back = 0;
while ((ip + back > iLowLimit) && (match + back > lowPrefixPtr) && (ip[back - 1] == match[back - 1])) back--;
@@ -127,15 +127,15 @@ FORCE_INLINE int LZ5_GetAllMatches (
while ((matchIndex < current) && (matchIndex >= lowLimit) && (nbAttempts)) {
nbAttempts--;
match = base + matchIndex;
if ((U32)(ip - match) >= LZ5_OPTIMAL_MIN_OFFSET) {
if ((U32)(ip - match) >= LIZ_OPTIMAL_MIN_OFFSET) {
if (matchIndex >= dictLimit) {
if ((/*fullSearch ||*/ ip[best_mlen] == match[best_mlen]) && (MEM_readMINMATCH(match) == MEM_readMINMATCH(ip))) {
int back = 0;
mlt = LZ5_count(ip+MINMATCH, match+MINMATCH, iHighLimit) + MINMATCH;
mlt = LIZ_count(ip+MINMATCH, match+MINMATCH, iHighLimit) + MINMATCH;
while ((ip+back > iLowLimit) && (match+back > lowPrefixPtr) && (ip[back-1] == match[back-1])) back--;
mlt -= back;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
if (mlt > best_mlen) {
best_mlen = mlt;
matches[mnum].off = (int)(ip - match);
@@ -143,7 +143,7 @@ FORCE_INLINE int LZ5_GetAllMatches (
matches[mnum].back = -back;
mnum++;
if (best_mlen > LZ5_OPT_NUM) break;
if (best_mlen > LIZ_OPT_NUM) break;
}
}
} else {
@@ -152,11 +152,11 @@ FORCE_INLINE int LZ5_GetAllMatches (
if ((U32)((dictLimit-1) - matchIndex) >= 3) /* intentional overflow */
if (MEM_readMINMATCH(matchDict) == MEM_readMINMATCH(ip)) {
int back=0;
mlt = LZ5_count_2segments(ip+MINMATCH, matchDict+MINMATCH, iHighLimit, dictEnd, lowPrefixPtr) + MINMATCH;
mlt = LIZ_count_2segments(ip+MINMATCH, matchDict+MINMATCH, iHighLimit, dictEnd, lowPrefixPtr) + MINMATCH;
while ((ip+back > iLowLimit) && (matchIndex+back > lowLimit) && (ip[back-1] == matchDict[back-1])) back--;
mlt -= back;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
if (mlt > best_mlen) {
best_mlen = mlt;
matches[mnum].off = (int)(ip - match);
@@ -164,7 +164,7 @@ FORCE_INLINE int LZ5_GetAllMatches (
matches[mnum].back = -back;
mnum++;
if (best_mlen > LZ5_OPT_NUM) break;
if (best_mlen > LIZ_OPT_NUM) break;
}
}
}
@@ -178,12 +178,12 @@ FORCE_INLINE int LZ5_GetAllMatches (
FORCE_INLINE int LZ5_BinTree_GetAllMatches (
LZ5_stream_t* ctx,
FORCE_INLINE int LIZ_BinTree_GetAllMatches (
LIZ_stream_t* ctx,
const BYTE* const ip,
const BYTE* const iHighLimit,
size_t best_mlen,
LZ5_match_t* matches)
LIZ_match_t* matches)
{
U32* const chainTable = ctx->chainTable;
U32* const HashTable = ctx->hashTable;
@@ -208,21 +208,21 @@ FORCE_INLINE int LZ5_BinTree_GetAllMatches (
if (ip + MINMATCH > iHighLimit) return 0;
/* First Match */
HashPos = &HashTable[LZ5_hashPtr(ip, ctx->params.hashLog, ctx->params.searchLength)];
HashPos = &HashTable[LIZ_hashPtr(ip, ctx->params.hashLog, ctx->params.searchLength)];
matchIndex = *HashPos;
#if MINMATCH == 3
{
U32* HashPos3 = &ctx->hashTable3[LZ5_hash3Ptr(ip, ctx->params.hashLog3)];
U32* HashPos3 = &ctx->hashTable3[LIZ_hash3Ptr(ip, ctx->params.hashLog3)];
if ((*HashPos3 < current) && (*HashPos3 >= lowLimit)) {
size_t offset = current - *HashPos3;
if (offset < LZ5_MAX_8BIT_OFFSET) {
if (offset < LIZ_MAX_8BIT_OFFSET) {
match = ip - offset;
if (match > base && MEM_readMINMATCH(ip) == MEM_readMINMATCH(match))
{
mlt = LZ5_count(ip + MINMATCH, match + MINMATCH, iHighLimit) + MINMATCH;
mlt = LIZ_count(ip + MINMATCH, match + MINMATCH, iHighLimit) + MINMATCH;
matches[mnum].off = (int)offset;
matches[mnum].len = (int)mlt;
@@ -250,16 +250,16 @@ FORCE_INLINE int LZ5_BinTree_GetAllMatches (
if (matchIndex >= dictLimit) {
match = base + matchIndex;
// if (ip[mlt] == match[mlt])
mlt = LZ5_count(ip, match, iHighLimit);
mlt = LIZ_count(ip, match, iHighLimit);
} else {
match = dictBase + matchIndex;
mlt = LZ5_count_2segments(ip, match, iHighLimit, dictEnd, lowPrefixPtr);
mlt = LIZ_count_2segments(ip, match, iHighLimit, dictEnd, lowPrefixPtr);
if (matchIndex + (int)mlt >= dictLimit)
match = base + matchIndex; /* to prepare for next usage of match[mlt] */
}
if ((U32)(current - matchIndex) >= LZ5_OPTIMAL_MIN_OFFSET) {
if ((mlt >= minMatchLongOff) || ((U32)(current - matchIndex) < LZ5_MAX_16BIT_OFFSET))
if ((U32)(current - matchIndex) >= LIZ_OPTIMAL_MIN_OFFSET) {
if ((mlt >= minMatchLongOff) || ((U32)(current - matchIndex) < LIZ_MAX_16BIT_OFFSET))
if (mlt > best_mlen) {
best_mlen = mlt;
matches[mnum].off = (int)(current - matchIndex);
@@ -267,7 +267,7 @@ FORCE_INLINE int LZ5_BinTree_GetAllMatches (
matches[mnum].back = 0;
mnum++;
if (mlt > LZ5_OPT_NUM) break;
if (mlt > LIZ_OPT_NUM) break;
if (ip + mlt >= iHighLimit) break;
}
} else {
@@ -276,9 +276,9 @@ FORCE_INLINE int LZ5_BinTree_GetAllMatches (
size_t newml = 0, newoff = 0;
do {
newoff += (int)(current - matchIndex);
} while (newoff < LZ5_OPTIMAL_MIN_OFFSET);
} while (newoff < LIZ_OPTIMAL_MIN_OFFSET);
newMatchIndex = current - newoff;
if (newMatchIndex >= dictLimit) newml = LZ5_count(ip, base + newMatchIndex, iHighLimit);
if (newMatchIndex >= dictLimit) newml = LIZ_count(ip, base + newMatchIndex, iHighLimit);
// printf("%d: off=%d mlt=%d\n", (U32)current, (U32)(current - matchIndex), (int)mlt);
// printf("%d: newoff=%d newml=%d\n", (U32)current, (int)newoff, (int)newml);
@@ -290,7 +290,7 @@ FORCE_INLINE int LZ5_BinTree_GetAllMatches (
matches[mnum].back = 0;
mnum++;
if (newml > LZ5_OPT_NUM) break;
if (newml > LIZ_OPT_NUM) break;
if (ip + newml >= iHighLimit) break;
}
#endif
@@ -322,22 +322,22 @@ FORCE_INLINE int LZ5_BinTree_GetAllMatches (
#define SET_PRICE(pos, mlen, offset, litlen, price) \
{ \
while (last_pos < pos) { opt[last_pos+1].price = LZ5_MAX_PRICE; last_pos++; } \
while (last_pos < pos) { opt[last_pos+1].price = LIZ_MAX_PRICE; last_pos++; } \
opt[pos].mlen = (int)mlen; \
opt[pos].off = (int)offset; \
opt[pos].litlen = (int)litlen; \
opt[pos].price = (int)price; \
LZ5_LOG_PARSER("%d: SET price[%d/%d]=%d litlen=%d len=%d off=%d\n", (int)(inr-source), pos, last_pos, opt[pos].price, opt[pos].litlen, opt[pos].mlen, opt[pos].off); \
LIZ_LOG_PARSER("%d: SET price[%d/%d]=%d litlen=%d len=%d off=%d\n", (int)(inr-source), pos, last_pos, opt[pos].price, opt[pos].litlen, opt[pos].mlen, opt[pos].off); \
}
FORCE_INLINE int LZ5_compress_optimalPrice(
LZ5_stream_t* const ctx,
FORCE_INLINE int LIZ_compress_optimalPrice(
LIZ_stream_t* const ctx,
const BYTE* ip,
const BYTE* const iend)
{
LZ5_optimal_t opt[LZ5_OPT_NUM + 4];
LZ5_match_t matches[LZ5_OPT_NUM + 1];
LIZ_optimal_t opt[LIZ_OPT_NUM + 4];
LIZ_match_t matches[LIZ_OPT_NUM + 1];
const BYTE *inr;
size_t res, cur, cur2, skip_num = 0;
size_t i, llen, litlen, mlen, best_mlen, price, offset, best_off, match_num, last_pos;
@@ -356,12 +356,12 @@ FORCE_INLINE int LZ5_compress_optimalPrice(
const size_t sufficient_len = ctx->params.sufficientLength;
const int faster_get_matches = (ctx->params.fullSearch == 0);
const size_t minMatchLongOff = ctx->params.minMatchLongOff;
const int lz5OptimalMinOffset = (ctx->params.decompressType == LZ5_coderwords_LZ4) ? (1<<30) : LZ5_OPTIMAL_MIN_OFFSET;
const size_t repMinMatch = (ctx->params.decompressType == LZ5_coderwords_LZ4) ? MINMATCH : REPMINMATCH;
const int lz5OptimalMinOffset = (ctx->params.decompressType == LIZ_coderwords_LZ4) ? (1<<30) : LIZ_OPTIMAL_MIN_OFFSET;
const size_t repMinMatch = (ctx->params.decompressType == LIZ_coderwords_LZ4) ? MINMATCH : REPMINMATCH;
/* Main Loop */
while (ip < mflimit) {
memset(opt, 0, sizeof(LZ5_optimal_t));
memset(opt, 0, sizeof(LIZ_optimal_t));
last_pos = 0;
llen = ip - anchor;
@@ -372,13 +372,13 @@ FORCE_INLINE int LZ5_compress_optimalPrice(
mlen = 0;
if ((matchIndexLO >= lowLimit) && (base + matchIndexLO + maxDistance >= ip)) {
if (matchIndexLO >= dictLimit) {
mlen = LZ5_count(ip, base + matchIndexLO, matchlimit);
mlen = LIZ_count(ip, base + matchIndexLO, matchlimit);
} else {
mlen = LZ5_count_2segments(ip, dictBase + matchIndexLO, matchlimit, dictEnd, lowPrefixPtr);
mlen = LIZ_count_2segments(ip, dictBase + matchIndexLO, matchlimit, dictEnd, lowPrefixPtr);
}
}
if (mlen >= REPMINMATCH) {
if (mlen > sufficient_len || mlen >= LZ5_OPT_NUM) {
if (mlen > sufficient_len || mlen >= LIZ_OPT_NUM) {
best_mlen = mlen; best_off = 0; cur = 0; last_pos = 1;
goto encode;
}
@@ -386,7 +386,7 @@ FORCE_INLINE int LZ5_compress_optimalPrice(
do
{
litlen = 0;
price = LZ5_get_price(ctx, ctx->last_off, ip, ctx->off24pos, llen, 0, mlen);
price = LIZ_get_price(ctx, ctx->last_off, ip, ctx->off24pos, llen, 0, mlen);
if (mlen > last_pos || price < (size_t)opt[mlen].price)
SET_PRICE(mlen, mlen, 0, litlen, price);
mlen--;
@@ -399,16 +399,16 @@ FORCE_INLINE int LZ5_compress_optimalPrice(
match_num = 0;
else
{
if (ctx->params.parserType == LZ5_parser_optimalPrice) {
LZ5_Insert(ctx, ip);
match_num = LZ5_GetAllMatches(ctx, ip, ip, matchlimit, last_pos, matches);
if (ctx->params.parserType == LIZ_parser_optimalPrice) {
LIZ_Insert(ctx, ip);
match_num = LIZ_GetAllMatches(ctx, ip, ip, matchlimit, last_pos, matches);
} else {
LZ5_BinTree_Insert(ctx, ip);
match_num = LZ5_BinTree_GetAllMatches(ctx, ip, matchlimit, last_pos, matches);
LIZ_BinTree_Insert(ctx, ip);
match_num = LIZ_BinTree_GetAllMatches(ctx, ip, matchlimit, last_pos, matches);
}
}
LZ5_LOG_PARSER("%d: match_num=%d last_pos=%d\n", (int)(ip-source), match_num, last_pos);
LIZ_LOG_PARSER("%d: match_num=%d last_pos=%d\n", (int)(ip-source), match_num, last_pos);
if (!last_pos && !match_num) { ip++; continue; }
if (match_num && (size_t)matches[match_num-1].len > sufficient_len) {
@@ -424,13 +424,13 @@ FORCE_INLINE int LZ5_compress_optimalPrice(
for (i = 0; i < match_num; i++) {
mlen = (i>0) ? (size_t)matches[i-1].len+1 : best_mlen;
best_mlen = (matches[i].len < LZ5_OPT_NUM) ? matches[i].len : LZ5_OPT_NUM;
LZ5_LOG_PARSER("%d: start Found mlen=%d off=%d best_mlen=%d last_pos=%d\n", (int)(ip-source), matches[i].len, matches[i].off, best_mlen, last_pos);
best_mlen = (matches[i].len < LIZ_OPT_NUM) ? matches[i].len : LIZ_OPT_NUM;
LIZ_LOG_PARSER("%d: start Found mlen=%d off=%d best_mlen=%d last_pos=%d\n", (int)(ip-source), matches[i].len, matches[i].off, best_mlen, last_pos);
while (mlen <= best_mlen){
litlen = 0;
price = LZ5_get_price(ctx, ctx->last_off, ip, ctx->off24pos, llen + litlen, matches[i].off, mlen);
price = LIZ_get_price(ctx, ctx->last_off, ip, ctx->off24pos, llen + litlen, matches[i].off, mlen);
if ((mlen >= minMatchLongOff) || (matches[i].off < LZ5_MAX_16BIT_OFFSET))
if ((mlen >= minMatchLongOff) || (matches[i].off < LIZ_MAX_16BIT_OFFSET))
if (mlen > last_pos || price < (size_t)opt[mlen].price)
SET_PRICE(mlen, mlen, matches[i].off, litlen, price);
mlen++;
@@ -453,21 +453,21 @@ FORCE_INLINE int LZ5_compress_optimalPrice(
litlen = opt[cur-1].litlen + 1;
if (cur != litlen) {
price = opt[cur - litlen].price + LZ5_get_price(ctx, opt[cur-litlen].rep, inr, ctx->off24pos, litlen, 0, 0);
LZ5_LOG_PRICE("%d: TRY1 opt[%d].price=%d price=%d cur=%d litlen=%d\n", (int)(inr-source), cur - litlen, opt[cur - litlen].price, price, cur, litlen);
price = opt[cur - litlen].price + LIZ_get_price(ctx, opt[cur-litlen].rep, inr, ctx->off24pos, litlen, 0, 0);
LIZ_LOG_PRICE("%d: TRY1 opt[%d].price=%d price=%d cur=%d litlen=%d\n", (int)(inr-source), cur - litlen, opt[cur - litlen].price, price, cur, litlen);
} else {
price = LZ5_get_price(ctx, ctx->last_off, inr, ctx->off24pos, llen + litlen, 0, 0);
LZ5_LOG_PRICE("%d: TRY2 price=%d cur=%d litlen=%d llen=%d\n", (int)(inr-source), price, cur, litlen, llen);
price = LIZ_get_price(ctx, ctx->last_off, inr, ctx->off24pos, llen + litlen, 0, 0);
LIZ_LOG_PRICE("%d: TRY2 price=%d cur=%d litlen=%d llen=%d\n", (int)(inr-source), price, cur, litlen, llen);
}
} else {
litlen = 1;
price = opt[cur - 1].price + LZ5_get_price(ctx, opt[cur-1].rep, inr, ctx->off24pos, litlen, 0, 0);
LZ5_LOG_PRICE("%d: TRY3 price=%d cur=%d litlen=%d litonly=%d\n", (int)(inr-source), price, cur, litlen, LZ5_get_price(ctx, rep, inr, ctx->off24pos, litlen, 0, 0));
price = opt[cur - 1].price + LIZ_get_price(ctx, opt[cur-1].rep, inr, ctx->off24pos, litlen, 0, 0);
LIZ_LOG_PRICE("%d: TRY3 price=%d cur=%d litlen=%d litonly=%d\n", (int)(inr-source), price, cur, litlen, LIZ_get_price(ctx, rep, inr, ctx->off24pos, litlen, 0, 0));
}
mlen = 1;
best_mlen = 0;
LZ5_LOG_PARSER("%d: TRY price=%d opt[%d].price=%d\n", (int)(inr-source), price, cur, opt[cur].price);
LIZ_LOG_PARSER("%d: TRY price=%d opt[%d].price=%d\n", (int)(inr-source), price, cur, opt[cur].price);
if (cur > last_pos || price <= (size_t)opt[cur].price) // || ((price == opt[cur].price) && (opt[cur-1].mlen == 1) && (cur != litlen)))
SET_PRICE(cur, mlen, -1, litlen, price);
@@ -483,11 +483,11 @@ FORCE_INLINE int LZ5_compress_optimalPrice(
if (offset < 1) {
opt[cur].rep = opt[cur-mlen].rep;
opt[cur].off24pos = opt[cur-mlen].off24pos;
LZ5_LOG_PARSER("%d: COPYREP1 cur=%d mlen=%d rep=%d\n", (int)(inr-source), cur, mlen, opt[cur-mlen].rep);
LIZ_LOG_PARSER("%d: COPYREP1 cur=%d mlen=%d rep=%d\n", (int)(inr-source), cur, mlen, opt[cur-mlen].rep);
} else {
opt[cur].rep = (int)offset;
opt[cur].off24pos = (offset >= LZ5_MAX_16BIT_OFFSET) ? inr : opt[cur-mlen].off24pos;
LZ5_LOG_PARSER("%d: COPYREP2 cur=%d offset=%d rep=%d\n", (int)(inr-source), cur, offset, opt[cur].rep);
opt[cur].off24pos = (offset >= LIZ_MAX_16BIT_OFFSET) ? inr : opt[cur-mlen].off24pos;
LIZ_LOG_PARSER("%d: COPYREP2 cur=%d offset=%d rep=%d\n", (int)(inr-source), cur, offset, opt[cur].rep);
}
} else {
opt[cur].rep = opt[cur-1].rep; // copy rep
@@ -495,7 +495,7 @@ FORCE_INLINE int LZ5_compress_optimalPrice(
}
rep = opt[cur].rep;
LZ5_LOG_PARSER("%d: CURRENT price[%d/%d]=%d off=%d mlen=%d litlen=%d rep=%d\n", (int)(inr-source), cur, last_pos, opt[cur].price, opt[cur].off, opt[cur].mlen, opt[cur].litlen, opt[cur].rep);
LIZ_LOG_PARSER("%d: CURRENT price[%d/%d]=%d off=%d mlen=%d litlen=%d rep=%d\n", (int)(inr-source), cur, last_pos, opt[cur].price, opt[cur].off, opt[cur].mlen, opt[cur].litlen, opt[cur].rep);
/* check rep code */
@@ -504,19 +504,19 @@ FORCE_INLINE int LZ5_compress_optimalPrice(
mlen = 0;
if ((matchIndexLO >= lowLimit) && (base + matchIndexLO + maxDistance >= inr)) {
if (matchIndexLO >= dictLimit) {
mlen = LZ5_count(inr, base + matchIndexLO, matchlimit);
mlen = LIZ_count(inr, base + matchIndexLO, matchlimit);
} else {
mlen = LZ5_count_2segments(inr, dictBase + matchIndexLO, matchlimit, dictEnd, lowPrefixPtr);
mlen = LIZ_count_2segments(inr, dictBase + matchIndexLO, matchlimit, dictEnd, lowPrefixPtr);
}
}
if (mlen >= REPMINMATCH/* && mlen > best_mlen*/) {
LZ5_LOG_PARSER("%d: try REP rep=%d mlen=%d\n", (int)(inr-source), opt[cur].rep, mlen);
LZ5_LOG_PARSER("%d: Found REP mlen=%d off=%d rep=%d opt[%d].off=%d\n", (int)(inr-source), mlen, 0, opt[cur].rep, cur, opt[cur].off);
LIZ_LOG_PARSER("%d: try REP rep=%d mlen=%d\n", (int)(inr-source), opt[cur].rep, mlen);
LIZ_LOG_PARSER("%d: Found REP mlen=%d off=%d rep=%d opt[%d].off=%d\n", (int)(inr-source), mlen, 0, opt[cur].rep, cur, opt[cur].off);
if (mlen > sufficient_len || cur + mlen >= LZ5_OPT_NUM) {
if (mlen > sufficient_len || cur + mlen >= LIZ_OPT_NUM) {
best_mlen = mlen;
best_off = 0;
LZ5_LOG_PARSER("%d: REP sufficient_len=%d best_mlen=%d best_off=%d last_pos=%d\n", (int)(inr-source), sufficient_len, best_mlen, best_off, last_pos);
LIZ_LOG_PARSER("%d: REP sufficient_len=%d best_mlen=%d best_off=%d last_pos=%d\n", (int)(inr-source), sufficient_len, best_mlen, best_off, last_pos);
last_pos = cur + 1;
goto encode;
}
@@ -532,19 +532,19 @@ FORCE_INLINE int LZ5_compress_optimalPrice(
litlen = opt[cur].litlen;
if (cur != litlen) {
price = opt[cur - litlen].price + LZ5_get_price(ctx, rep, inr, opt[cur].off24pos, litlen, 0, mlen);
LZ5_LOG_PRICE("%d: TRY1 opt[%d].price=%d price=%d cur=%d litlen=%d\n", (int)(inr-source), cur - litlen, opt[cur - litlen].price, price, cur, litlen);
price = opt[cur - litlen].price + LIZ_get_price(ctx, rep, inr, opt[cur].off24pos, litlen, 0, mlen);
LIZ_LOG_PRICE("%d: TRY1 opt[%d].price=%d price=%d cur=%d litlen=%d\n", (int)(inr-source), cur - litlen, opt[cur - litlen].price, price, cur, litlen);
} else {
price = LZ5_get_price(ctx, rep, inr, ctx->off24pos, llen + litlen, 0, mlen);
LZ5_LOG_PRICE("%d: TRY2 price=%d cur=%d litlen=%d llen=%d\n", (int)(inr-source), price, cur, litlen, llen);
price = LIZ_get_price(ctx, rep, inr, ctx->off24pos, llen + litlen, 0, mlen);
LIZ_LOG_PRICE("%d: TRY2 price=%d cur=%d litlen=%d llen=%d\n", (int)(inr-source), price, cur, litlen, llen);
}
} else {
litlen = 0;
price = opt[cur].price + LZ5_get_price(ctx, rep, inr, opt[cur].off24pos, litlen, 0, mlen);
LZ5_LOG_PRICE("%d: TRY3 price=%d cur=%d litlen=%d getprice=%d\n", (int)(inr-source), price, cur, litlen, LZ5_get_price(ctx, rep, inr, opt[cur].off24pos, litlen, 0, mlen - MINMATCH));
price = opt[cur].price + LIZ_get_price(ctx, rep, inr, opt[cur].off24pos, litlen, 0, mlen);
LIZ_LOG_PRICE("%d: TRY3 price=%d cur=%d litlen=%d getprice=%d\n", (int)(inr-source), price, cur, litlen, LIZ_get_price(ctx, rep, inr, opt[cur].off24pos, litlen, 0, mlen - MINMATCH));
}
LZ5_LOG_PARSER("%d: Found REP mlen=%d off=%d price=%d litlen=%d price[%d]=%d\n", (int)(inr-source), mlen, 0, price, litlen, cur - litlen, opt[cur - litlen].price);
LIZ_LOG_PARSER("%d: Found REP mlen=%d off=%d price=%d litlen=%d price[%d]=%d\n", (int)(inr-source), mlen, 0, price, litlen, cur - litlen, opt[cur - litlen].price);
if (cur + mlen > last_pos || price <= (size_t)opt[cur + mlen].price) // || ((price == opt[cur + mlen].price) && (opt[cur].mlen == 1) && (cur != litlen))) // at equal price prefer REP instead of MATCH
SET_PRICE(cur + mlen, mlen, 0, litlen, price);
@@ -559,14 +559,14 @@ FORCE_INLINE int LZ5_compress_optimalPrice(
continue;
}
if (ctx->params.parserType == LZ5_parser_optimalPrice) {
LZ5_Insert(ctx, inr);
match_num = LZ5_GetAllMatches(ctx, inr, ip, matchlimit, best_mlen, matches);
LZ5_LOG_PARSER("%d: LZ5_GetAllMatches match_num=%d\n", (int)(inr-source), match_num);
if (ctx->params.parserType == LIZ_parser_optimalPrice) {
LIZ_Insert(ctx, inr);
match_num = LIZ_GetAllMatches(ctx, inr, ip, matchlimit, best_mlen, matches);
LIZ_LOG_PARSER("%d: LIZ_GetAllMatches match_num=%d\n", (int)(inr-source), match_num);
} else {
LZ5_BinTree_Insert(ctx, inr);
match_num = LZ5_BinTree_GetAllMatches(ctx, inr, matchlimit, best_mlen, matches);
LZ5_LOG_PARSER("%d: LZ5_BinTree_GetAllMatches match_num=%d\n", (int)(inr-source), match_num);
LIZ_BinTree_Insert(ctx, inr);
match_num = LIZ_BinTree_GetAllMatches(ctx, inr, matchlimit, best_mlen, matches);
LIZ_LOG_PARSER("%d: LIZ_BinTree_GetAllMatches match_num=%d\n", (int)(inr-source), match_num);
}
@@ -584,8 +584,8 @@ FORCE_INLINE int LZ5_compress_optimalPrice(
for (i = 0; i < match_num; i++) {
mlen = (i>0) ? (size_t)matches[i-1].len+1 : best_mlen;
cur2 = cur - matches[i].back;
best_mlen = (cur2 + matches[i].len < LZ5_OPT_NUM) ? (size_t)matches[i].len : LZ5_OPT_NUM - cur2;
LZ5_LOG_PARSER("%d: Found1 cur=%d cur2=%d mlen=%d off=%d best_mlen=%d last_pos=%d\n", (int)(inr-source), cur, cur2, matches[i].len, matches[i].off, best_mlen, last_pos);
best_mlen = (cur2 + matches[i].len < LIZ_OPT_NUM) ? (size_t)matches[i].len : LIZ_OPT_NUM - cur2;
LIZ_LOG_PARSER("%d: Found1 cur=%d cur2=%d mlen=%d off=%d best_mlen=%d last_pos=%d\n", (int)(inr-source), cur, cur2, matches[i].len, matches[i].off, best_mlen, last_pos);
if (mlen < (size_t)matches[i].back + 1)
mlen = matches[i].back + 1;
@@ -597,18 +597,18 @@ FORCE_INLINE int LZ5_compress_optimalPrice(
litlen = opt[cur2].litlen;
if (cur2 != litlen)
price = opt[cur2 - litlen].price + LZ5_get_price(ctx, rep, inr, opt[cur2].off24pos, litlen, matches[i].off, mlen);
price = opt[cur2 - litlen].price + LIZ_get_price(ctx, rep, inr, opt[cur2].off24pos, litlen, matches[i].off, mlen);
else
price = LZ5_get_price(ctx, rep, inr, ctx->off24pos, llen + litlen, matches[i].off, mlen);
price = LIZ_get_price(ctx, rep, inr, ctx->off24pos, llen + litlen, matches[i].off, mlen);
} else {
litlen = 0;
price = opt[cur2].price + LZ5_get_price(ctx, rep, inr, opt[cur2].off24pos, litlen, matches[i].off, mlen);
price = opt[cur2].price + LIZ_get_price(ctx, rep, inr, opt[cur2].off24pos, litlen, matches[i].off, mlen);
}
LZ5_LOG_PARSER("%d: Found2 pred=%d mlen=%d best_mlen=%d off=%d price=%d litlen=%d price[%d]=%d\n", (int)(inr-source), matches[i].back, mlen, best_mlen, matches[i].off, price, litlen, cur - litlen, opt[cur - litlen].price);
LIZ_LOG_PARSER("%d: Found2 pred=%d mlen=%d best_mlen=%d off=%d price=%d litlen=%d price[%d]=%d\n", (int)(inr-source), matches[i].back, mlen, best_mlen, matches[i].off, price, litlen, cur - litlen, opt[cur - litlen].price);
// if (cur2 + mlen > last_pos || ((matches[i].off != opt[cur2 + mlen].off) && (price < opt[cur2 + mlen].price)))
if ((mlen >= minMatchLongOff) || (matches[i].off < LZ5_MAX_16BIT_OFFSET))
if ((mlen >= minMatchLongOff) || (matches[i].off < LIZ_MAX_16BIT_OFFSET))
if (cur2 + mlen > last_pos || price < (size_t)opt[cur2 + mlen].price)
{
SET_PRICE(cur2 + mlen, mlen, matches[i].off, litlen, price);
@@ -626,10 +626,10 @@ FORCE_INLINE int LZ5_compress_optimalPrice(
encode: // cur, last_pos, best_mlen, best_off have to be set
for (i = 1; i <= last_pos; i++) {
LZ5_LOG_PARSER("%d: price[%d/%d]=%d off=%d mlen=%d litlen=%d rep=%d\n", (int)(ip-source+i), i, last_pos, opt[i].price, opt[i].off, opt[i].mlen, opt[i].litlen, opt[i].rep);
LIZ_LOG_PARSER("%d: price[%d/%d]=%d off=%d mlen=%d litlen=%d rep=%d\n", (int)(ip-source+i), i, last_pos, opt[i].price, opt[i].off, opt[i].mlen, opt[i].litlen, opt[i].rep);
}
LZ5_LOG_PARSER("%d: cur=%d/%d best_mlen=%d best_off=%d rep=%d\n", (int)(ip-source+cur), cur, last_pos, best_mlen, best_off, opt[cur].rep);
LIZ_LOG_PARSER("%d: cur=%d/%d best_mlen=%d best_off=%d rep=%d\n", (int)(ip-source+cur), cur, last_pos, best_mlen, best_off, opt[cur].rep);
opt[0].mlen = 1;
@@ -645,31 +645,31 @@ FORCE_INLINE int LZ5_compress_optimalPrice(
}
for (i = 0; i <= last_pos;) {
LZ5_LOG_PARSER("%d: price2[%d/%d]=%d off=%d mlen=%d litlen=%d rep=%d\n", (int)(ip-source+i), i, last_pos, opt[i].price, opt[i].off, opt[i].mlen, opt[i].litlen, opt[i].rep);
LIZ_LOG_PARSER("%d: price2[%d/%d]=%d off=%d mlen=%d litlen=%d rep=%d\n", (int)(ip-source+i), i, last_pos, opt[i].price, opt[i].off, opt[i].mlen, opt[i].litlen, opt[i].rep);
i += opt[i].mlen;
}
cur = 0;
while (cur < last_pos) {
LZ5_LOG_PARSER("%d: price3[%d/%d]=%d off=%d mlen=%d litlen=%d rep=%d\n", (int)(ip-source+cur), cur, last_pos, opt[cur].price, opt[cur].off, opt[cur].mlen, opt[cur].litlen, opt[cur].rep);
LIZ_LOG_PARSER("%d: price3[%d/%d]=%d off=%d mlen=%d litlen=%d rep=%d\n", (int)(ip-source+cur), cur, last_pos, opt[cur].price, opt[cur].off, opt[cur].mlen, opt[cur].litlen, opt[cur].rep);
mlen = opt[cur].mlen;
// if (mlen == 1) { ip++; cur++; continue; }
if (opt[cur].off == -1) { ip++; cur++; continue; }
offset = opt[cur].off;
cur += mlen;
LZ5_LOG_ENCODE("%d: ENCODE literals=%d off=%d mlen=%d ", (int)(ip-source), (int)(ip-anchor), (int)(offset), mlen);
res = LZ5_encodeSequence(ctx, &ip, &anchor, mlen, ip - offset);
LIZ_LOG_ENCODE("%d: ENCODE literals=%d off=%d mlen=%d ", (int)(ip-source), (int)(ip-anchor), (int)(offset), mlen);
res = LIZ_encodeSequence(ctx, &ip, &anchor, mlen, ip - offset);
if (res) return 0;
LZ5_LOG_PARSER("%d: offset=%d rep=%d\n", (int)(ip-source), offset, ctx->last_off);
LIZ_LOG_PARSER("%d: offset=%d rep=%d\n", (int)(ip-source), offset, ctx->last_off);
}
}
/* Encode Last Literals */
ip = iend;
if (LZ5_encodeLastLiterals(ctx, &ip, &anchor)) goto _output_error;
if (LIZ_encodeLastLiterals(ctx, &ip, &anchor)) goto _output_error;
/* End */
return 1;

View File

@@ -1,6 +1,6 @@
#define LZ5_PRICEFAST_MIN_OFFSET 8
#define LIZ_PRICEFAST_MIN_OFFSET 8
FORCE_INLINE int LZ5_FindMatchFast(LZ5_stream_t* ctx, intptr_t matchIndex, intptr_t matchIndex3, /* Index table will be updated */
FORCE_INLINE int LIZ_FindMatchFast(LIZ_stream_t* ctx, intptr_t matchIndex, intptr_t matchIndex3, /* Index table will be updated */
const BYTE* ip, const BYTE* const iLimit,
const BYTE** matchpos)
{
@@ -16,14 +16,14 @@ FORCE_INLINE int LZ5_FindMatchFast(LZ5_stream_t* ctx, intptr_t matchIndex, intpt
const BYTE* match, *matchDict;
size_t ml=0, mlt;
if (ctx->last_off >= LZ5_PRICEFAST_MIN_OFFSET) {
if (ctx->last_off >= LIZ_PRICEFAST_MIN_OFFSET) {
intptr_t matchIndexLO = (ip - ctx->last_off) - base;
if (matchIndexLO >= lowLimit) {
if (matchIndexLO >= dictLimit) {
match = base + matchIndexLO;
if (MEM_readMINMATCH(match) == MEM_readMINMATCH(ip)) {
mlt = LZ5_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
// if ((mlt >= minMatchLongOff) || (ctx->last_off < LZ5_MAX_16BIT_OFFSET))
mlt = LIZ_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
// if ((mlt >= minMatchLongOff) || (ctx->last_off < LIZ_MAX_16BIT_OFFSET))
{
*matchpos = match;
return (int)mlt;
@@ -33,8 +33,8 @@ FORCE_INLINE int LZ5_FindMatchFast(LZ5_stream_t* ctx, intptr_t matchIndex, intpt
match = dictBase + matchIndexLO;
if ((U32)((dictLimit-1) - matchIndexLO) >= 3) /* intentional overflow */
if (MEM_readMINMATCH(match) == MEM_readMINMATCH(ip)) {
mlt = LZ5_count_2segments(ip+MINMATCH, match+MINMATCH, iLimit, dictEnd, lowPrefixPtr) + MINMATCH;
// if ((mlt >= minMatchLongOff) || (ctx->last_off < LZ5_MAX_16BIT_OFFSET))
mlt = LIZ_count_2segments(ip+MINMATCH, match+MINMATCH, iLimit, dictEnd, lowPrefixPtr) + MINMATCH;
// if ((mlt >= minMatchLongOff) || (ctx->last_off < LIZ_MAX_16BIT_OFFSET))
{
*matchpos = base + matchIndexLO; /* virtual matchpos */
return (int)mlt;
@@ -48,10 +48,10 @@ FORCE_INLINE int LZ5_FindMatchFast(LZ5_stream_t* ctx, intptr_t matchIndex, intpt
#if MINMATCH == 3
if (matchIndex3 < current && matchIndex3 >= lowLimit) {
intptr_t offset = current - matchIndex3;
if (offset < LZ5_MAX_8BIT_OFFSET) {
if (offset < LIZ_MAX_8BIT_OFFSET) {
match = ip - offset;
if (match > base && MEM_readMINMATCH(ip) == MEM_readMINMATCH(match)) {
ml = 3;//LZ5_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
ml = 3;//LIZ_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
*matchpos = match;
}
}
@@ -62,12 +62,12 @@ FORCE_INLINE int LZ5_FindMatchFast(LZ5_stream_t* ctx, intptr_t matchIndex, intpt
if ((matchIndex < current) && (matchIndex >= lowLimit)) {
match = base + matchIndex;
if ((U32)(ip - match) >= LZ5_PRICEFAST_MIN_OFFSET) {
if ((U32)(ip - match) >= LIZ_PRICEFAST_MIN_OFFSET) {
if (matchIndex >= dictLimit) {
if (*(match+ml) == *(ip+ml) && (MEM_read32(match) == MEM_read32(ip))) {
mlt = LZ5_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
if (!ml || (mlt > ml)) // && LZ5_better_price((ip - *matchpos), ml, (ip - match), mlt, ctx->last_off)))
mlt = LIZ_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
if (!ml || (mlt > ml)) // && LIZ_better_price((ip - *matchpos), ml, (ip - match), mlt, ctx->last_off)))
{ ml = mlt; *matchpos = match; }
}
} else {
@@ -75,9 +75,9 @@ FORCE_INLINE int LZ5_FindMatchFast(LZ5_stream_t* ctx, intptr_t matchIndex, intpt
// fprintf(stderr, "dictBase[%p]+matchIndex[%d]=match[%p] dictLimit=%d base=%p ip=%p iLimit=%p off=%d\n", dictBase, matchIndex, match, dictLimit, base, ip, iLimit, (U32)(ip-match));
if ((U32)((dictLimit-1) - matchIndex) >= 3) /* intentional overflow */
if (MEM_read32(matchDict) == MEM_read32(ip)) {
mlt = LZ5_count_2segments(ip+MINMATCH, matchDict+MINMATCH, iLimit, dictEnd, lowPrefixPtr) + MINMATCH;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
if (!ml || (mlt > ml)) // && LZ5_better_price((ip - *matchpos), ml, (U32)(ip - match), mlt, ctx->last_off)))
mlt = LIZ_count_2segments(ip+MINMATCH, matchDict+MINMATCH, iLimit, dictEnd, lowPrefixPtr) + MINMATCH;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
if (!ml || (mlt > ml)) // && LIZ_better_price((ip - *matchpos), ml, (U32)(ip - match), mlt, ctx->last_off)))
{ ml = mlt; *matchpos = match; } /* virtual matchpos */
}
}
@@ -88,7 +88,7 @@ FORCE_INLINE int LZ5_FindMatchFast(LZ5_stream_t* ctx, intptr_t matchIndex, intpt
}
FORCE_INLINE int LZ5_FindMatchFaster (LZ5_stream_t* ctx, U32 matchIndex, /* Index table will be updated */
FORCE_INLINE int LIZ_FindMatchFaster (LIZ_stream_t* ctx, U32 matchIndex, /* Index table will be updated */
const BYTE* ip, const BYTE* const iLimit,
const BYTE** matchpos)
{
@@ -106,19 +106,19 @@ FORCE_INLINE int LZ5_FindMatchFaster (LZ5_stream_t* ctx, U32 matchIndex, /* Ind
if (matchIndex < current && matchIndex >= lowLimit) {
match = base + matchIndex;
if ((U32)(ip - match) >= LZ5_PRICEFAST_MIN_OFFSET) {
if ((U32)(ip - match) >= LIZ_PRICEFAST_MIN_OFFSET) {
if (matchIndex >= dictLimit) {
if (MEM_read32(match) == MEM_read32(ip)) {
mlt = LZ5_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
mlt = LIZ_count(ip+MINMATCH, match+MINMATCH, iLimit) + MINMATCH;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
{ ml = mlt; *matchpos = match; }
}
} else {
matchDict = dictBase + matchIndex;
if ((U32)((dictLimit-1) - matchIndex) >= 3) /* intentional overflow */
if (MEM_read32(matchDict) == MEM_read32(ip)) {
mlt = LZ5_count_2segments(ip+MINMATCH, matchDict+MINMATCH, iLimit, dictEnd, lowPrefixPtr) + MINMATCH;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LZ5_MAX_16BIT_OFFSET))
mlt = LIZ_count_2segments(ip+MINMATCH, matchDict+MINMATCH, iLimit, dictEnd, lowPrefixPtr) + MINMATCH;
if ((mlt >= minMatchLongOff) || ((U32)(ip - match) < LIZ_MAX_16BIT_OFFSET))
{ ml = mlt; *matchpos = match; } /* virtual matchpos */
}
}
@@ -130,8 +130,8 @@ FORCE_INLINE int LZ5_FindMatchFaster (LZ5_stream_t* ctx, U32 matchIndex, /* Ind
FORCE_INLINE int LZ5_compress_priceFast(
LZ5_stream_t* const ctx,
FORCE_INLINE int LIZ_compress_priceFast(
LIZ_stream_t* const ctx,
const BYTE* ip,
const BYTE* const iend)
{
@@ -158,17 +158,17 @@ FORCE_INLINE int LZ5_compress_priceFast(
/* Main Loop */
while (ip < mflimit)
{
HashPos = &HashTable[LZ5_hashPtr(ip, ctx->params.hashLog, ctx->params.searchLength)];
HashPos = &HashTable[LIZ_hashPtr(ip, ctx->params.hashLog, ctx->params.searchLength)];
#if MINMATCH == 3
{
U32* HashPos3 = &HashTable3[LZ5_hash3Ptr(ip, ctx->params.hashLog3)];
ml = LZ5_FindMatchFast (ctx, *HashPos, *HashPos3, ip, matchlimit, (&ref));
U32* HashPos3 = &HashTable3[LIZ_hash3Ptr(ip, ctx->params.hashLog3)];
ml = LIZ_FindMatchFast (ctx, *HashPos, *HashPos3, ip, matchlimit, (&ref));
*HashPos3 = (U32)(ip - base);
}
#else
ml = LZ5_FindMatchFast (ctx, *HashPos, 0, ip, matchlimit, (&ref));
ml = LIZ_FindMatchFast (ctx, *HashPos, 0, ip, matchlimit, (&ref));
#endif
if ((U32)(ip - base) >= *HashPos + LZ5_PRICEFAST_MIN_OFFSET)
if ((U32)(ip - base) >= *HashPos + LIZ_PRICEFAST_MIN_OFFSET)
*HashPos = (U32)(ip - base);
if (!ml) { ip++; continue; }
@@ -186,9 +186,9 @@ _Search:
if (ip+ml >= mflimit) goto _Encode;
start2 = ip + ml - 2;
HashPos = &HashTable[LZ5_hashPtr(start2, ctx->params.hashLog, ctx->params.searchLength)];
ml2 = LZ5_FindMatchFaster(ctx, *HashPos, start2, matchlimit, (&ref2));
if ((U32)(start2 - base) >= *HashPos + LZ5_PRICEFAST_MIN_OFFSET)
HashPos = &HashTable[LIZ_hashPtr(start2, ctx->params.hashLog, ctx->params.searchLength)];
ml2 = LIZ_FindMatchFaster(ctx, *HashPos, start2, matchlimit, (&ref2));
if ((U32)(start2 - base) >= *HashPos + LIZ_PRICEFAST_MIN_OFFSET)
*HashPos = (U32)(start2 - base);
if (!ml2) goto _Encode;
@@ -201,7 +201,7 @@ _Search:
ref2 += back;
}
// LZ5_DEBUG("%u: TRY last_off=%d literals=%u off=%u mlen=%u literals2=%u off2=%u mlen2=%u best=%d\n", (U32)(ip - ctx->inputBuffer), ctx->last_off, (U32)(ip - anchor), off0, (U32)ml, (U32)(start2 - anchor), off1, ml2, (U32)(best_pos - ip));
// LIZ_DEBUG("%u: TRY last_off=%d literals=%u off=%u mlen=%u literals2=%u off2=%u mlen2=%u best=%d\n", (U32)(ip - ctx->inputBuffer), ctx->last_off, (U32)(ip - anchor), off0, (U32)ml, (U32)(start2 - anchor), off1, ml2, (U32)(best_pos - ip));
if (ml2 <= ml) { ml2 = 0; goto _Encode; }
@@ -227,11 +227,11 @@ _Search:
ref2 += correction;
ml2 -= correction;
if (ml2 < 3) { ml2 = 0; }
if ((ml2 < minMatchLongOff) && ((U32)(start2 - ref2) >= LZ5_MAX_16BIT_OFFSET)) { ml2 = 0; }
if ((ml2 < minMatchLongOff) && ((U32)(start2 - ref2) >= LIZ_MAX_16BIT_OFFSET)) { ml2 = 0; }
}
_Encode:
if (LZ5_encodeSequence_LZ5v2(ctx, &ip, &anchor, ml, ref)) goto _output_error;
if (LIZ_encodeSequence_LZ5v2(ctx, &ip, &anchor, ml, ref)) goto _output_error;
if (ml2)
{
@@ -243,7 +243,7 @@ _Encode:
/* Encode Last Literals */
ip = iend;
if (LZ5_encodeLastLiterals_LZ5v2(ctx, &ip, &anchor)) goto _output_error;
if (LIZ_encodeLastLiterals_LZ5v2(ctx, &ip, &anchor)) goto _output_error;
/* End */
return 1;

View File

@@ -52,171 +52,171 @@ extern "C" {
/*-************************************
* Error management
**************************************/
typedef size_t LZ5F_errorCode_t;
typedef size_t LIZF_errorCode_t;
unsigned LZ5F_isError(LZ5F_errorCode_t code);
const char* LZ5F_getErrorName(LZ5F_errorCode_t code); /* return error code string; useful for debugging */
unsigned LIZF_isError(LIZF_errorCode_t code);
const char* LIZF_getErrorName(LIZF_errorCode_t code); /* return error code string; useful for debugging */
/*-************************************
* Frame compression types
**************************************/
//#define LZ5F_DISABLE_OBSOLETE_ENUMS
#ifndef LZ5F_DISABLE_OBSOLETE_ENUMS
# define LZ5F_OBSOLETE_ENUM(x) ,x
//#define LIZF_DISABLE_OBSOLETE_ENUMS
#ifndef LIZF_DISABLE_OBSOLETE_ENUMS
# define LIZF_OBSOLETE_ENUM(x) ,x
#else
# define LZ5F_OBSOLETE_ENUM(x)
# define LIZF_OBSOLETE_ENUM(x)
#endif
typedef enum {
LZ5F_default=0,
LZ5F_max128KB=1,
LZ5F_max256KB=2,
LZ5F_max1MB=3,
LZ5F_max4MB=4,
LZ5F_max16MB=5,
LZ5F_max64MB=6,
LZ5F_max256MB=7
} LZ5F_blockSizeID_t;
LIZF_default=0,
LIZF_max128KB=1,
LIZF_max256KB=2,
LIZF_max1MB=3,
LIZF_max4MB=4,
LIZF_max16MB=5,
LIZF_max64MB=6,
LIZF_max256MB=7
} LIZF_blockSizeID_t;
typedef enum {
LZ5F_blockLinked=0,
LZ5F_blockIndependent
LZ5F_OBSOLETE_ENUM(blockLinked = LZ5F_blockLinked)
LZ5F_OBSOLETE_ENUM(blockIndependent = LZ5F_blockIndependent)
} LZ5F_blockMode_t;
LIZF_blockLinked=0,
LIZF_blockIndependent
LIZF_OBSOLETE_ENUM(blockLinked = LIZF_blockLinked)
LIZF_OBSOLETE_ENUM(blockIndependent = LIZF_blockIndependent)
} LIZF_blockMode_t;
typedef enum {
LZ5F_noContentChecksum=0,
LZ5F_contentChecksumEnabled
LZ5F_OBSOLETE_ENUM(noContentChecksum = LZ5F_noContentChecksum)
LZ5F_OBSOLETE_ENUM(contentChecksumEnabled = LZ5F_contentChecksumEnabled)
} LZ5F_contentChecksum_t;
LIZF_noContentChecksum=0,
LIZF_contentChecksumEnabled
LIZF_OBSOLETE_ENUM(noContentChecksum = LIZF_noContentChecksum)
LIZF_OBSOLETE_ENUM(contentChecksumEnabled = LIZF_contentChecksumEnabled)
} LIZF_contentChecksum_t;
typedef enum {
LZ5F_frame=0,
LZ5F_skippableFrame
LZ5F_OBSOLETE_ENUM(skippableFrame = LZ5F_skippableFrame)
} LZ5F_frameType_t;
LIZF_frame=0,
LIZF_skippableFrame
LIZF_OBSOLETE_ENUM(skippableFrame = LIZF_skippableFrame)
} LIZF_frameType_t;
#ifndef LZ5F_DISABLE_OBSOLETE_ENUMS
typedef LZ5F_blockSizeID_t blockSizeID_t;
typedef LZ5F_blockMode_t blockMode_t;
typedef LZ5F_frameType_t frameType_t;
typedef LZ5F_contentChecksum_t contentChecksum_t;
#ifndef LIZF_DISABLE_OBSOLETE_ENUMS
typedef LIZF_blockSizeID_t blockSizeID_t;
typedef LIZF_blockMode_t blockMode_t;
typedef LIZF_frameType_t frameType_t;
typedef LIZF_contentChecksum_t contentChecksum_t;
#endif
typedef struct {
LZ5F_blockSizeID_t blockSizeID; /* max64KB, max256KB, max1MB, max4MB ; 0 == default */
LZ5F_blockMode_t blockMode; /* blockLinked, blockIndependent ; 0 == default */
LZ5F_contentChecksum_t contentChecksumFlag; /* noContentChecksum, contentChecksumEnabled ; 0 == default */
LZ5F_frameType_t frameType; /* LZ5F_frame, skippableFrame ; 0 == default */
LIZF_blockSizeID_t blockSizeID; /* max128KB, max256KB, max1MB, max4MB, max16MB, max64MB, max256MB ; 0 == default */
LIZF_blockMode_t blockMode; /* blockLinked, blockIndependent ; 0 == default */
LIZF_contentChecksum_t contentChecksumFlag; /* noContentChecksum, contentChecksumEnabled ; 0 == default */
LIZF_frameType_t frameType; /* LIZF_frame, skippableFrame ; 0 == default */
unsigned long long contentSize; /* Size of uncompressed (original) content ; 0 == unknown */
unsigned reserved[2]; /* must be zero for forward compatibility */
} LZ5F_frameInfo_t;
} LIZF_frameInfo_t;
typedef struct {
LZ5F_frameInfo_t frameInfo;
LIZF_frameInfo_t frameInfo;
int compressionLevel; /* 0 == default (fast mode); values above 16 count as 16; values below 0 count as 0 */
unsigned autoFlush; /* 1 == always flush (reduce need for tmp buffer) */
unsigned reserved[4]; /* must be zero for forward compatibility */
} LZ5F_preferences_t;
} LIZF_preferences_t;
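A hedged sketch of how the preferences structure above might be filled; the particular values chosen here are arbitrary examples, not defaults taken from this commit:

#include <string.h>
#include "lizframe.h"

static LIZF_preferences_t example_prefs(void)
{
    LIZF_preferences_t prefs;
    memset(&prefs, 0, sizeof(prefs));                       /* zero == defaults, incl. reserved[] */
    prefs.frameInfo.blockSizeID = LIZF_max1MB;              /* illustrative choice */
    prefs.frameInfo.blockMode = LIZF_blockIndependent;
    prefs.frameInfo.contentChecksumFlag = LIZF_contentChecksumEnabled;
    prefs.compressionLevel = 9;                             /* arbitrary; 0 == default fast mode */
    prefs.autoFlush = 1;                                    /* flush each block immediately */
    return prefs;
}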
/*-*********************************
* Simple compression function
***********************************/
size_t LZ5F_compressFrameBound(size_t srcSize, const LZ5F_preferences_t* preferencesPtr);
size_t LIZF_compressFrameBound(size_t srcSize, const LIZF_preferences_t* preferencesPtr);
/*!LZ5F_compressFrame() :
/*!LIZF_compressFrame() :
* Compress an entire srcBuffer into a valid LZ5 frame, as defined by specification v1.5.1
* The most important rule is that dstBuffer MUST be large enough (dstMaxSize) to ensure compression completion even in worst case.
* You can get the minimum value of dstMaxSize by using LZ5F_compressFrameBound()
* If this condition is not respected, LZ5F_compressFrame() will fail (result is an errorCode)
* The LZ5F_preferences_t structure is optional : you can provide NULL as argument. All preferences will be set to default.
* You can get the minimum value of dstMaxSize by using LIZF_compressFrameBound()
* If this condition is not respected, LIZF_compressFrame() will fail (result is an errorCode)
* The LIZF_preferences_t structure is optional : you can provide NULL as argument. All preferences will be set to default.
* The result of the function is the number of bytes written into dstBuffer.
* The function outputs an error code if it fails (can be tested using LZ5F_isError())
* The function outputs an error code if it fails (can be tested using LIZF_isError())
*/
size_t LZ5F_compressFrame(void* dstBuffer, size_t dstMaxSize, const void* srcBuffer, size_t srcSize, const LZ5F_preferences_t* preferencesPtr);
size_t LIZF_compressFrame(void* dstBuffer, size_t dstMaxSize, const void* srcBuffer, size_t srcSize, const LIZF_preferences_t* preferencesPtr);
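To make the one-shot path above concrete, a minimal sketch follows. It assumes the renamed "lizframe.h" header used elsewhere in this commit; the compress_once helper, allocation strategy and error reporting are illustrative only:

#include <stdio.h>
#include <stdlib.h>
#include "lizframe.h"

/* compresses one buffer into a single frame; returns 0 on success */
static int compress_once(const void* src, size_t srcSize)
{
    size_t const bound = LIZF_compressFrameBound(srcSize, NULL);  /* NULL == default preferences */
    void* const dst = malloc(bound);
    size_t csize;

    if (dst == NULL) return -1;
    csize = LIZF_compressFrame(dst, bound, src, srcSize, NULL);
    if (LIZF_isError(csize)) {
        fprintf(stderr, "compression failed: %s\n", LIZF_getErrorName(csize));
        free(dst);
        return -1;
    }
    printf("%u -> %u bytes\n", (unsigned)srcSize, (unsigned)csize);
    free(dst);
    return 0;
}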
/*-***********************************
* Advanced compression functions
*************************************/
typedef struct LZ5F_cctx_s* LZ5F_compressionContext_t; /* must be aligned on 8-bytes */
typedef struct LIZF_cctx_s* LIZF_compressionContext_t; /* must be aligned on 8-bytes */
typedef struct {
unsigned stableSrc; /* 1 == src content will remain available on future calls to LZ5F_compress(); avoid saving src content within tmp buffer as future dictionary */
unsigned stableSrc; /* 1 == src content will remain available on future calls to LIZF_compress(); avoid saving src content within tmp buffer as future dictionary */
unsigned reserved[3];
} LZ5F_compressOptions_t;
} LIZF_compressOptions_t;
/* Resource Management */
#define LZ5F_VERSION 100
LZ5F_errorCode_t LZ5F_createCompressionContext(LZ5F_compressionContext_t* cctxPtr, unsigned version);
LZ5F_errorCode_t LZ5F_freeCompressionContext(LZ5F_compressionContext_t cctx);
/* LZ5F_createCompressionContext() :
#define LIZF_VERSION 100
LIZF_errorCode_t LIZF_createCompressionContext(LIZF_compressionContext_t* cctxPtr, unsigned version);
LIZF_errorCode_t LIZF_freeCompressionContext(LIZF_compressionContext_t cctx);
/* LIZF_createCompressionContext() :
* The first thing to do is to create a compressionContext object, which will be used in all compression operations.
* This is achieved using LZ5F_createCompressionContext(), which takes as argument a version and an LZ5F_preferences_t structure.
* The version provided MUST be LZ5F_VERSION. It is intended to track potential version differences between different binaries.
* The function will provide a pointer to a fully allocated LZ5F_compressionContext_t object.
* If the result LZ5F_errorCode_t is not zero, there was an error during context creation.
* Object can release its memory using LZ5F_freeCompressionContext();
* This is achieved using LIZF_createCompressionContext(), which takes a version number as argument.
* The version provided MUST be LIZF_VERSION. It is intended to track potential version differences between different binaries.
* The function will provide a pointer to a fully allocated LIZF_compressionContext_t object.
* If the result LIZF_errorCode_t is not zero, there was an error during context creation.
* Object can release its memory using LIZF_freeCompressionContext();
*/
/* Compression */
size_t LZ5F_compressBegin(LZ5F_compressionContext_t cctx, void* dstBuffer, size_t dstMaxSize, const LZ5F_preferences_t* prefsPtr);
/* LZ5F_compressBegin() :
size_t LIZF_compressBegin(LIZF_compressionContext_t cctx, void* dstBuffer, size_t dstMaxSize, const LIZF_preferences_t* prefsPtr);
/* LIZF_compressBegin() :
* will write the frame header into dstBuffer.
* dstBuffer must be large enough to accommodate a header (dstMaxSize). Maximum header size is 15 bytes.
* The LZ5F_preferences_t structure is optional : you can provide NULL as argument, all preferences will then be set to default.
* The LIZF_preferences_t structure is optional : you can provide NULL as argument, all preferences will then be set to default.
* The result of the function is the number of bytes written into dstBuffer for the header
* or an error code (can be tested using LZ5F_isError())
* or an error code (can be tested using LIZF_isError())
*/
size_t LZ5F_compressBound(size_t srcSize, const LZ5F_preferences_t* prefsPtr);
/* LZ5F_compressBound() :
size_t LIZF_compressBound(size_t srcSize, const LIZF_preferences_t* prefsPtr);
/* LIZF_compressBound() :
* Provides the minimum size of Dst buffer given srcSize to handle worst case situations.
* Different preferences can produce different results.
* prefsPtr is optional : you can provide NULL as argument, all preferences will then be set to cover worst case.
* This function includes frame termination cost (4 bytes, or 8 if frame checksum is enabled)
*/
size_t LZ5F_compressUpdate(LZ5F_compressionContext_t cctx, void* dstBuffer, size_t dstMaxSize, const void* srcBuffer, size_t srcSize, const LZ5F_compressOptions_t* cOptPtr);
/* LZ5F_compressUpdate()
* LZ5F_compressUpdate() can be called repetitively to compress as much data as necessary.
size_t LIZF_compressUpdate(LIZF_compressionContext_t cctx, void* dstBuffer, size_t dstMaxSize, const void* srcBuffer, size_t srcSize, const LIZF_compressOptions_t* cOptPtr);
/* LIZF_compressUpdate()
* LIZF_compressUpdate() can be called repetitively to compress as much data as necessary.
* The most important rule is that dstBuffer MUST be large enough (dstMaxSize) to ensure compression completion even in worst case.
* You can get the minimum value of dstMaxSize by using LZ5F_compressBound().
* If this condition is not respected, LZ5F_compress() will fail (result is an errorCode).
* LZ5F_compressUpdate() doesn't guarantee error recovery, so you have to reset compression context when an error occurs.
* The LZ5F_compressOptions_t structure is optional : you can provide NULL as argument.
* You can get the minimum value of dstMaxSize by using LIZF_compressBound().
* If this condition is not respected, LIZF_compress() will fail (result is an errorCode).
* LIZF_compressUpdate() doesn't guarantee error recovery, so you have to reset compression context when an error occurs.
* The LIZF_compressOptions_t structure is optional : you can provide NULL as argument.
* The result of the function is the number of bytes written into dstBuffer : it can be zero, meaning input data was just buffered.
* The function outputs an error code if it fails (can be tested using LZ5F_isError())
* The function outputs an error code if it fails (can be tested using LIZF_isError())
*/
size_t LZ5F_flush(LZ5F_compressionContext_t cctx, void* dstBuffer, size_t dstMaxSize, const LZ5F_compressOptions_t* cOptPtr);
/* LZ5F_flush()
size_t LIZF_flush(LIZF_compressionContext_t cctx, void* dstBuffer, size_t dstMaxSize, const LIZF_compressOptions_t* cOptPtr);
/* LIZF_flush()
* Should you need to generate compressed data immediately, without waiting for the current block to be filled,
* you can call LZ5_flush(), which will immediately compress any remaining data buffered within cctx.
* you can call LIZF_flush(), which will immediately compress any remaining data buffered within cctx.
* Note that dstMaxSize must be large enough to ensure the operation will be successful.
* LZ5F_compressOptions_t structure is optional : you can provide NULL as argument.
* LIZF_compressOptions_t structure is optional : you can provide NULL as argument.
* The result of the function is the number of bytes written into dstBuffer
* (it can be zero, this means there was no data left within cctx)
* The function outputs an error code if it fails (can be tested using LZ5F_isError())
* The function outputs an error code if it fails (can be tested using LIZF_isError())
*/
size_t LZ5F_compressEnd(LZ5F_compressionContext_t cctx, void* dstBuffer, size_t dstMaxSize, const LZ5F_compressOptions_t* cOptPtr);
/* LZ5F_compressEnd()
* When you want to properly finish the compressed frame, just call LZ5F_compressEnd().
* It will flush whatever data remained within compressionContext (like LZ5_flush())
size_t LIZF_compressEnd(LIZF_compressionContext_t cctx, void* dstBuffer, size_t dstMaxSize, const LIZF_compressOptions_t* cOptPtr);
/* LIZF_compressEnd()
* When you want to properly finish the compressed frame, just call LIZF_compressEnd().
* It will flush whatever data remained within compressionContext (like LIZF_flush())
* but also properly finalize the frame, with an endMark and a checksum.
* The result of the function is the number of bytes written into dstBuffer (necessarily >= 4 (endMark), or 8 if optional frame checksum is enabled)
* The function outputs an error code if it fails (can be tested using LZ5F_isError())
* The LZ5F_compressOptions_t structure is optional : you can provide NULL as argument.
* A successful call to LZ5F_compressEnd() makes cctx available again for next compression task.
* The function outputs an error code if it fails (can be tested using LIZF_isError())
* The LIZF_compressOptions_t structure is optional : you can provide NULL as argument.
* A successful call to LIZF_compressEnd() makes cctx available again for next compression task.
*/
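A hedged sketch of the streaming path documented above (createCompressionContext, compressBegin, compressUpdate, compressEnd). The chunk size, stdio usage and the stream_compress helper are assumptions made for illustration, not code from this commit:

#include <stdio.h>
#include <stdlib.h>
#include "lizframe.h"

/* compresses fin into fout as one frame; returns 0 on success */
static int stream_compress(FILE* fin, FILE* fout)
{
    LIZF_compressionContext_t cctx;
    char in[64 * 1024];                                           /* illustrative chunk size */
    size_t const outCap = LIZF_compressBound(sizeof(in), NULL);   /* worst case for one chunk */
    char* const out = (char*)malloc(outCap);
    size_t n, written;

    if (out == NULL) return -1;
    if (LIZF_isError(LIZF_createCompressionContext(&cctx, LIZF_VERSION))) { free(out); return -1; }

    written = LIZF_compressBegin(cctx, out, outCap, NULL);        /* NULL == default preferences */
    if (LIZF_isError(written)) goto error;
    fwrite(out, 1, written, fout);                                /* frame header (<= 15 bytes) */

    while ((n = fread(in, 1, sizeof(in), fin)) > 0) {
        written = LIZF_compressUpdate(cctx, out, outCap, in, n, NULL);
        if (LIZF_isError(written)) goto error;
        fwrite(out, 1, written, fout);                            /* may be 0: data only buffered */
    }

    written = LIZF_compressEnd(cctx, out, outCap, NULL);          /* endMark + optional checksum */
    if (LIZF_isError(written)) goto error;
    fwrite(out, 1, written, fout);

    LIZF_freeCompressionContext(cctx);
    free(out);
    return 0;

error:
    LIZF_freeCompressionContext(cctx);
    free(out);
    return -1;
}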
@@ -224,48 +224,48 @@ size_t LZ5F_compressEnd(LZ5F_compressionContext_t cctx, void* dstBuffer, size_t
* Decompression functions
***********************************/
typedef struct LZ5F_dctx_s* LZ5F_decompressionContext_t; /* must be aligned on 8-bytes */
typedef struct LIZF_dctx_s* LIZF_decompressionContext_t; /* must be aligned on 8-bytes */
typedef struct {
unsigned stableDst; /* guarantee that decompressed data will still be there on next function calls (avoid storage into tmp buffers) */
unsigned reserved[3];
} LZ5F_decompressOptions_t;
} LIZF_decompressOptions_t;
/* Resource management */
/*!LZ5F_createDecompressionContext() :
* Create an LZ5F_decompressionContext_t object, which will be used to track all decompression operations.
* The version provided MUST be LZ5F_VERSION. It is intended to track potential breaking differences between different versions.
* The function will provide a pointer to a fully allocated and initialized LZ5F_decompressionContext_t object.
* The result is an errorCode, which can be tested using LZ5F_isError().
* dctx memory can be released using LZ5F_freeDecompressionContext();
* The result of LZ5F_freeDecompressionContext() is indicative of the current state of decompressionContext when being released.
/*!LIZF_createDecompressionContext() :
* Create an LIZF_decompressionContext_t object, which will be used to track all decompression operations.
* The version provided MUST be LIZF_VERSION. It is intended to track potential breaking differences between different versions.
* The function will provide a pointer to a fully allocated and initialized LIZF_decompressionContext_t object.
* The result is an errorCode, which can be tested using LIZF_isError().
* dctx memory can be released using LIZF_freeDecompressionContext();
* The result of LIZF_freeDecompressionContext() is indicative of the current state of decompressionContext when being released.
* That is, it should be == 0 if decompression has been completed fully and correctly.
*/
LZ5F_errorCode_t LZ5F_createDecompressionContext(LZ5F_decompressionContext_t* dctxPtr, unsigned version);
LZ5F_errorCode_t LZ5F_freeDecompressionContext(LZ5F_decompressionContext_t dctx);
LIZF_errorCode_t LIZF_createDecompressionContext(LIZF_decompressionContext_t* dctxPtr, unsigned version);
LIZF_errorCode_t LIZF_freeDecompressionContext(LIZF_decompressionContext_t dctx);
/*====== Decompression ======*/
/*!LZ5F_getFrameInfo() :
/*!LIZF_getFrameInfo() :
* This function decodes frame header information (such as max blockSize, frame checksum, etc.).
* Its usage is optional. The objective is to extract frame header information, typically for allocation purposes.
* A header size is variable and can be from 7 to 15 bytes. It's also possible to input more bytes than that.
* The number of bytes read from srcBuffer will be updated within *srcSizePtr (necessarily <= original value).
* (note that LZ5F_getFrameInfo() can also be used anytime *after* starting decompression, in this case 0 input byte is enough)
* Frame header info is *copied into* an already allocated LZ5F_frameInfo_t structure.
* The function result is a hint about how many srcSize bytes LZ5F_decompress() expects for the next call,
* or an error code which can be tested using LZ5F_isError()
* (note that LIZF_getFrameInfo() can also be used anytime *after* starting decompression, in this case 0 input byte is enough)
* Frame header info is *copied into* an already allocated LIZF_frameInfo_t structure.
* The function result is a hint about how many srcSize bytes LIZF_decompress() expects for the next call,
* or an error code which can be tested using LIZF_isError()
* (typically, when there is not enough src bytes to fully decode the frame header)
* Decompression is expected to resume from where it stopped (srcBuffer + *srcSizePtr)
*/
size_t LZ5F_getFrameInfo(LZ5F_decompressionContext_t dctx,
LZ5F_frameInfo_t* frameInfoPtr,
size_t LIZF_getFrameInfo(LIZF_decompressionContext_t dctx,
LIZF_frameInfo_t* frameInfoPtr,
const void* srcBuffer, size_t* srcSizePtr);
/*!LZ5F_decompress() :
/*!LIZF_decompress() :
* Call this function repetitively to regenerate data compressed within srcBuffer.
* The function will attempt to decode *srcSizePtr bytes from srcBuffer, into dstBuffer of maximum size *dstSizePtr.
*
@@ -274,25 +274,25 @@ size_t LZ5F_getFrameInfo(LZ5F_decompressionContext_t dctx,
* The number of bytes read from srcBuffer will be provided within *srcSizePtr (necessarily <= original value).
* If number of bytes read is < number of bytes provided, then decompression operation is not completed.
* It typically happens when dstBuffer is not large enough to contain all decoded data.
* LZ5F_decompress() must be called again, starting from where it stopped (srcBuffer + *srcSizePtr)
* LIZF_decompress() must be called again, starting from where it stopped (srcBuffer + *srcSizePtr)
* The function will check this condition, and refuse to continue if it is not respected.
*
* `dstBuffer` is expected to be flushed between each call to the function, its content will be overwritten.
* `dst` arguments can be changed at will at each consecutive call to the function.
*
* The function result is a hint of how many `srcSize` bytes LZ5F_decompress() expects for the next call.
* The function result is a hint of how many `srcSize` bytes LIZF_decompress() expects for the next call.
* Schematically, it's the size of the current (or remaining) compressed block + header of next block.
* Respecting the hint provides some boost to performance, since it does skip intermediate buffers.
* This is just a hint though, it's always possible to provide any srcSize.
* When a frame is fully decoded, the function result will be 0 (no more data expected).
* If decompression failed, function result is an error code, which can be tested using LZ5F_isError().
* If decompression failed, function result is an error code, which can be tested using LIZF_isError().
*
* After a frame is fully decoded, dctx can be used again to decompress another frame.
*/
size_t LZ5F_decompress(LZ5F_decompressionContext_t dctx,
size_t LIZF_decompress(LIZF_decompressionContext_t dctx,
void* dstBuffer, size_t* dstSizePtr,
const void* srcBuffer, size_t* srcSizePtr,
const LZ5F_decompressOptions_t* dOptPtr);
const LIZF_decompressOptions_t* dOptPtr);
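A corresponding decompression sketch using the functions declared above; the buffer sizes, stdio plumbing and the stream_decompress name are illustrative assumptions. LIZF_getFrameInfo() could optionally be called on the first bytes of input to size buffers from the frame header:

#include <stdio.h>
#include "lizframe.h"

/* decompresses one frame from fin into fout; returns 0 on success */
static int stream_decompress(FILE* fin, FILE* fout)
{
    LIZF_decompressionContext_t dctx;
    char in[16 * 1024];
    char out[64 * 1024];
    size_t hint = 1;   /* next-call size hint returned by LIZF_decompress(); 0 == frame done */

    if (LIZF_isError(LIZF_createDecompressionContext(&dctx, LIZF_VERSION))) return -1;

    while (hint != 0) {
        size_t inSize = fread(in, 1, sizeof(in), fin);
        const char* inPtr = in;
        if (inSize == 0) break;                        /* input ended before the frame did */
        while (inSize > 0 && hint != 0) {
            size_t srcSize = inSize;
            size_t dstSize = sizeof(out);
            hint = LIZF_decompress(dctx, out, &dstSize, inPtr, &srcSize, NULL);
            if (LIZF_isError(hint)) { LIZF_freeDecompressionContext(dctx); return -1; }
            fwrite(out, 1, dstSize, fout);
            inPtr  += srcSize;                         /* resume where the previous call stopped */
            inSize -= srcSize;
        }
    }
    LIZF_freeDecompressionContext(dctx);
    return (hint == 0) ? 0 : -1;
}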

View File

@@ -47,13 +47,13 @@ extern "C" {
/**************************************
* Includes
**************************************/
#include "lz5frame.h"
#include "lizframe.h"
/**************************************
* Error management
* ************************************/
#define LZ5F_LIST_ERRORS(ITEM) \
#define LIZF_LIST_ERRORS(ITEM) \
ITEM(OK_NoError) ITEM(ERROR_GENERIC) \
ITEM(ERROR_maxBlockSize_invalid) ITEM(ERROR_blockMode_invalid) ITEM(ERROR_contentChecksumFlag_invalid) \
ITEM(ERROR_compressionLevel_invalid) \
@@ -66,13 +66,13 @@ extern "C" {
ITEM(ERROR_headerChecksum_invalid) ITEM(ERROR_contentChecksum_invalid) \
ITEM(ERROR_maxCode)
//#define LZ5F_DISABLE_OLD_ENUMS
#ifndef LZ5F_DISABLE_OLD_ENUMS
#define LZ5F_GENERATE_ENUM(ENUM) LZ5F_##ENUM, ENUM = LZ5F_##ENUM,
//#define LIZF_DISABLE_OLD_ENUMS
#ifndef LIZF_DISABLE_OLD_ENUMS
#define LIZF_GENERATE_ENUM(ENUM) LIZF_##ENUM, ENUM = LIZF_##ENUM,
#else
#define LZ5F_GENERATE_ENUM(ENUM) LZ5F_##ENUM,
#define LIZF_GENERATE_ENUM(ENUM) LIZF_##ENUM,
#endif
typedef enum { LZ5F_LIST_ERRORS(LZ5F_GENERATE_ENUM) } LZ5F_errorCodes; /* enum is exposed, to handle specific errors; compare function result to -enum value */
typedef enum { LIZF_LIST_ERRORS(LIZF_GENERATE_ENUM) } LIZF_errorCodes; /* enum is exposed, to handle specific errors; compare function result to -enum value */
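A small hedged example of the "compare function result to -enum value" convention noted above. The dst/dstCap/src/srcSize variables are assumed to be prepared by the caller, and the renamed filename of this static header is not shown in this hunk:

#include <stdio.h>
#include "lizframe.h"   /* plus the static header above, which defines LIZF_errorCodes */

static int try_compress(void* dst, size_t dstCap, const void* src, size_t srcSize)
{
    size_t const r = LIZF_compressFrame(dst, dstCap, src, srcSize, NULL);
    if (LIZF_isError(r)) {
        if (r == (size_t)-LIZF_ERROR_GENERIC)          /* compare the result to -enum value */
            fprintf(stderr, "generic failure\n");
        else
            fprintf(stderr, "failed: %s\n", LIZF_getErrorName(r));
        return -1;
    }
    return 0;
}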
#if defined (__cplusplus)

View File

@@ -1,868 +0,0 @@
/*
* xxHash - Fast Hash algorithm
* Copyright (C) 2012-2016, Yann Collet
*
* BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
* met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following disclaimer
* in the documentation and/or other materials provided with the
* distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* You can contact the author at :
* - xxHash homepage: http://www.xxhash.com
* - xxHash source repository : https://github.com/Cyan4973/xxHash
*/
/* *************************************
* Tuning parameters
***************************************/
/*!XXH_FORCE_MEMORY_ACCESS :
* By default, access to unaligned memory is controlled by `memcpy()`, which is safe and portable.
* Unfortunately, on some target/compiler combinations, the generated assembly is sub-optimal.
* The switch below allows selecting a different access method for improved performance.
* Method 0 (default) : use `memcpy()`. Safe and portable.
* Method 1 : `__packed` statement. It depends on a compiler extension (i.e., not portable).
* This method is safe if your compiler supports it, and *generally* as fast or faster than `memcpy`.
* Method 2 : direct access. This method doesn't depend on the compiler but violates the C standard.
* It can generate buggy code on targets which do not support unaligned memory accesses.
* But in some circumstances, it's the only known way to get the most performance (i.e. GCC + ARMv6)
* See https://stackoverflow.com/a/32095106/646947 for details.
* Prefer these methods in priority order (0 > 1 > 2)
*/
#ifndef XXH_FORCE_MEMORY_ACCESS /* can be defined externally, on command line for example */
# if defined(__GNUC__) && ( defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) || defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6Z__) || defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__) )
# define XXH_FORCE_MEMORY_ACCESS 2
# elif defined(__INTEL_COMPILER) || \
(defined(__GNUC__) && ( defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_7A__) || defined(__ARM_ARCH_7R__) || defined(__ARM_ARCH_7M__) || defined(__ARM_ARCH_7S__) ))
# define XXH_FORCE_MEMORY_ACCESS 1
# endif
#endif
/*!XXH_ACCEPT_NULL_INPUT_POINTER :
* If the input pointer is a null pointer, xxHash's default behavior is to trigger a memory access error, since it is a bad pointer.
* When this option is enabled, xxHash output for null input pointers will be the same as for a zero-length input.
* By default, this option is disabled. To enable it, uncomment the define below :
*/
/* #define XXH_ACCEPT_NULL_INPUT_POINTER 1 */
/*!XXH_FORCE_NATIVE_FORMAT :
* By default, the xxHash library provides endian-independent hash values, based on the little-endian convention.
* Results are therefore identical for little-endian and big-endian CPUs.
* This comes at a performance cost for big-endian CPUs, since some swapping is required to emulate the little-endian format.
* Should endian-independence be of no importance for your application, you may set the #define below to 1,
* to improve speed for big-endian CPUs.
* This option has no impact on little-endian CPUs.
*/
#ifndef XXH_FORCE_NATIVE_FORMAT /* can be defined externally */
# define XXH_FORCE_NATIVE_FORMAT 0
#endif
/*!XXH_FORCE_ALIGN_CHECK :
* This is a minor performance trick, only useful with lots of very small keys.
* It means : check for aligned/unaligned input.
* The check costs one initial branch per hash; set to 0 when the input data
* is guaranteed to be aligned.
*/
#ifndef XXH_FORCE_ALIGN_CHECK /* can be defined externally */
# if defined(__i386) || defined(_M_IX86) || defined(__x86_64__) || defined(_M_X64)
# define XXH_FORCE_ALIGN_CHECK 0
# else
# define XXH_FORCE_ALIGN_CHECK 1
# endif
#endif
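For context, these switches have to be visible when xxhash.c itself is compiled, so they are normally injected from the build system rather than defined by including code; a purely hypothetical invocation (flags and values are illustrative, not taken from this project's makefiles):

/*
 *   cc -O2 -DXXH_FORCE_MEMORY_ACCESS=1 -DXXH_FORCE_ALIGN_CHECK=0 -c xxhash.c
 */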
/* *************************************
* Includes & Memory related functions
***************************************/
/* Modify the local functions below should you wish to use some other memory routines */
/* for malloc(), free() */
#include <stdlib.h>
static void* XXH_malloc(size_t s) { return malloc(s); }
static void XXH_free (void* p) { free(p); }
/* for memcpy() */
#include <string.h>
static void* XXH_memcpy(void* dest, const void* src, size_t size) { return memcpy(dest,src,size); }
#define XXH_STATIC_LINKING_ONLY
#include "xxhash.h"
/* *************************************
* Compiler Specific Options
***************************************/
#ifdef _MSC_VER /* Visual Studio */
# pragma warning(disable : 4127) /* disable: C4127: conditional expression is constant */
# define FORCE_INLINE static __forceinline
#else
# if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L /* C99 */
# ifdef __GNUC__
# define FORCE_INLINE static inline __attribute__((always_inline))
# else
# define FORCE_INLINE static inline
# endif
# else
# define FORCE_INLINE static
# endif /* __STDC_VERSION__ */
#endif
/* *************************************
* Basic Types
***************************************/
#ifndef MEM_MODULE
# define MEM_MODULE
# if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L /* C99 */
# include <stdint.h>
typedef uint8_t BYTE;
typedef uint16_t U16;
typedef uint32_t U32;
typedef int32_t S32;
typedef uint64_t U64;
# else
typedef unsigned char BYTE;
typedef unsigned short U16;
typedef unsigned int U32;
typedef signed int S32;
typedef unsigned long long U64;
# endif
#endif
#if (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==2))
/* Force direct memory access. Only works on CPU which support unaligned memory access in hardware */
static U32 XXH_read32(const void* memPtr) { return *(const U32*) memPtr; }
static U64 XXH_read64(const void* memPtr) { return *(const U64*) memPtr; }
#elif (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==1))
/* __pack instructions are safer, but compiler specific, hence potentially problematic for some compilers */
/* currently only defined for gcc and icc */
typedef union { U32 u32; U64 u64; } __attribute__((packed)) unalign;
static U32 XXH_read32(const void* ptr) { return ((const unalign*)ptr)->u32; }
static U64 XXH_read64(const void* ptr) { return ((const unalign*)ptr)->u64; }
#else
/* portable and safe solution. Generally efficient.
* see : https://stackoverflow.com/a/32095106/646947
*/
static U32 XXH_read32(const void* memPtr)
{
U32 val;
memcpy(&val, memPtr, sizeof(val));
return val;
}
static U64 XXH_read64(const void* memPtr)
{
U64 val;
memcpy(&val, memPtr, sizeof(val));
return val;
}
#endif /* XXH_FORCE_DIRECT_MEMORY_ACCESS */
/* ****************************************
* Compiler-specific Functions and Macros
******************************************/
#define GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
/* Note : although _rotl exists for minGW (GCC under windows), performance seems poor */
#if defined(_MSC_VER)
# define XXH_rotl32(x,r) _rotl(x,r)
# define XXH_rotl64(x,r) _rotl64(x,r)
#else
# define XXH_rotl32(x,r) ((x << r) | (x >> (32 - r)))
# define XXH_rotl64(x,r) ((x << r) | (x >> (64 - r)))
#endif
#if defined(_MSC_VER) /* Visual Studio */
# define XXH_swap32 _byteswap_ulong
# define XXH_swap64 _byteswap_uint64
#elif GCC_VERSION >= 403
# define XXH_swap32 __builtin_bswap32
# define XXH_swap64 __builtin_bswap64
#else
static U32 XXH_swap32 (U32 x)
{
return ((x << 24) & 0xff000000 ) |
((x << 8) & 0x00ff0000 ) |
((x >> 8) & 0x0000ff00 ) |
((x >> 24) & 0x000000ff );
}
static U64 XXH_swap64 (U64 x)
{
return ((x << 56) & 0xff00000000000000ULL) |
((x << 40) & 0x00ff000000000000ULL) |
((x << 24) & 0x0000ff0000000000ULL) |
((x << 8) & 0x000000ff00000000ULL) |
((x >> 8) & 0x00000000ff000000ULL) |
((x >> 24) & 0x0000000000ff0000ULL) |
((x >> 40) & 0x000000000000ff00ULL) |
((x >> 56) & 0x00000000000000ffULL);
}
#endif
/* *************************************
* Architecture Macros
***************************************/
typedef enum { XXH_bigEndian=0, XXH_littleEndian=1 } XXH_endianess;
/* XXH_CPU_LITTLE_ENDIAN can be defined externally, for example on the compiler command line */
#ifndef XXH_CPU_LITTLE_ENDIAN
static const int g_one = 1;
# define XXH_CPU_LITTLE_ENDIAN (*(const char*)(&g_one))
#endif
/* ***************************
* Memory reads
*****************************/
typedef enum { XXH_aligned, XXH_unaligned } XXH_alignment;
FORCE_INLINE U32 XXH_readLE32_align(const void* ptr, XXH_endianess endian, XXH_alignment align)
{
if (align==XXH_unaligned)
return endian==XXH_littleEndian ? XXH_read32(ptr) : XXH_swap32(XXH_read32(ptr));
else
return endian==XXH_littleEndian ? *(const U32*)ptr : XXH_swap32(*(const U32*)ptr);
}
FORCE_INLINE U32 XXH_readLE32(const void* ptr, XXH_endianess endian)
{
return XXH_readLE32_align(ptr, endian, XXH_unaligned);
}
static U32 XXH_readBE32(const void* ptr)
{
return XXH_CPU_LITTLE_ENDIAN ? XXH_swap32(XXH_read32(ptr)) : XXH_read32(ptr);
}
FORCE_INLINE U64 XXH_readLE64_align(const void* ptr, XXH_endianess endian, XXH_alignment align)
{
if (align==XXH_unaligned)
return endian==XXH_littleEndian ? XXH_read64(ptr) : XXH_swap64(XXH_read64(ptr));
else
return endian==XXH_littleEndian ? *(const U64*)ptr : XXH_swap64(*(const U64*)ptr);
}
FORCE_INLINE U64 XXH_readLE64(const void* ptr, XXH_endianess endian)
{
return XXH_readLE64_align(ptr, endian, XXH_unaligned);
}
static U64 XXH_readBE64(const void* ptr)
{
return XXH_CPU_LITTLE_ENDIAN ? XXH_swap64(XXH_read64(ptr)) : XXH_read64(ptr);
}
/* *************************************
* Macros
***************************************/
#define XXH_STATIC_ASSERT(c) { enum { XXH_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */
/* *************************************
* Constants
***************************************/
static const U32 PRIME32_1 = 2654435761U;
static const U32 PRIME32_2 = 2246822519U;
static const U32 PRIME32_3 = 3266489917U;
static const U32 PRIME32_4 = 668265263U;
static const U32 PRIME32_5 = 374761393U;
static const U64 PRIME64_1 = 11400714785074694791ULL;
static const U64 PRIME64_2 = 14029467366897019727ULL;
static const U64 PRIME64_3 = 1609587929392839161ULL;
static const U64 PRIME64_4 = 9650029242287828579ULL;
static const U64 PRIME64_5 = 2870177450012600261ULL;
XXH_PUBLIC_API unsigned XXH_versionNumber (void) { return XXH_VERSION_NUMBER; }
/* **************************
* Utils
****************************/
XXH_PUBLIC_API void XXH32_copyState(XXH32_state_t* restrict dstState, const XXH32_state_t* restrict srcState)
{
memcpy(dstState, srcState, sizeof(*dstState));
}
XXH_PUBLIC_API void XXH64_copyState(XXH64_state_t* restrict dstState, const XXH64_state_t* restrict srcState)
{
memcpy(dstState, srcState, sizeof(*dstState));
}
/* ***************************
* Simple Hash Functions
*****************************/
static U32 XXH32_round(U32 seed, U32 input)
{
seed += input * PRIME32_2;
seed = XXH_rotl32(seed, 13);
seed *= PRIME32_1;
return seed;
}
FORCE_INLINE U32 XXH32_endian_align(const void* input, size_t len, U32 seed, XXH_endianess endian, XXH_alignment align)
{
const BYTE* p = (const BYTE*)input;
const BYTE* bEnd = p + len;
U32 h32;
#define XXH_get32bits(p) XXH_readLE32_align(p, endian, align)
#ifdef XXH_ACCEPT_NULL_INPUT_POINTER
if (p==NULL) {
len=0;
bEnd=p=(const BYTE*)(size_t)16;
}
#endif
if (len>=16) {
const BYTE* const limit = bEnd - 16;
U32 v1 = seed + PRIME32_1 + PRIME32_2;
U32 v2 = seed + PRIME32_2;
U32 v3 = seed + 0;
U32 v4 = seed - PRIME32_1;
do {
v1 = XXH32_round(v1, XXH_get32bits(p)); p+=4;
v2 = XXH32_round(v2, XXH_get32bits(p)); p+=4;
v3 = XXH32_round(v3, XXH_get32bits(p)); p+=4;
v4 = XXH32_round(v4, XXH_get32bits(p)); p+=4;
} while (p<=limit);
h32 = XXH_rotl32(v1, 1) + XXH_rotl32(v2, 7) + XXH_rotl32(v3, 12) + XXH_rotl32(v4, 18);
} else {
h32 = seed + PRIME32_5;
}
h32 += (U32) len;
while (p+4<=bEnd) {
h32 += XXH_get32bits(p) * PRIME32_3;
h32 = XXH_rotl32(h32, 17) * PRIME32_4 ;
p+=4;
}
while (p<bEnd) {
h32 += (*p) * PRIME32_5;
h32 = XXH_rotl32(h32, 11) * PRIME32_1 ;
p++;
}
h32 ^= h32 >> 15;
h32 *= PRIME32_2;
h32 ^= h32 >> 13;
h32 *= PRIME32_3;
h32 ^= h32 >> 16;
return h32;
}
XXH_PUBLIC_API unsigned int XXH32 (const void* input, size_t len, unsigned int seed)
{
#if 0
/* Simple version, good for code maintenance, but unfortunately slow for small inputs */
XXH32_CREATESTATE_STATIC(state);
XXH32_reset(state, seed);
XXH32_update(state, input, len);
return XXH32_digest(state);
#else
XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
if (XXH_FORCE_ALIGN_CHECK) {
if ((((size_t)input) & 3) == 0) { /* Input is 4-bytes aligned, leverage the speed benefit */
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH32_endian_align(input, len, seed, XXH_littleEndian, XXH_aligned);
else
return XXH32_endian_align(input, len, seed, XXH_bigEndian, XXH_aligned);
} }
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH32_endian_align(input, len, seed, XXH_littleEndian, XXH_unaligned);
else
return XXH32_endian_align(input, len, seed, XXH_bigEndian, XXH_unaligned);
#endif
}
static U64 XXH64_round(U64 acc, U64 input)
{
acc += input * PRIME64_2;
acc = XXH_rotl64(acc, 31);
acc *= PRIME64_1;
return acc;
}
static U64 XXH64_mergeRound(U64 acc, U64 val)
{
val = XXH64_round(0, val);
acc ^= val;
acc = acc * PRIME64_1 + PRIME64_4;
return acc;
}
FORCE_INLINE U64 XXH64_endian_align(const void* input, size_t len, U64 seed, XXH_endianess endian, XXH_alignment align)
{
const BYTE* p = (const BYTE*)input;
const BYTE* const bEnd = p + len;
U64 h64;
#define XXH_get64bits(p) XXH_readLE64_align(p, endian, align)
#ifdef XXH_ACCEPT_NULL_INPUT_POINTER
if (p==NULL) {
len=0;
bEnd=p=(const BYTE*)(size_t)32;
}
#endif
if (len>=32) {
const BYTE* const limit = bEnd - 32;
U64 v1 = seed + PRIME64_1 + PRIME64_2;
U64 v2 = seed + PRIME64_2;
U64 v3 = seed + 0;
U64 v4 = seed - PRIME64_1;
do {
v1 = XXH64_round(v1, XXH_get64bits(p)); p+=8;
v2 = XXH64_round(v2, XXH_get64bits(p)); p+=8;
v3 = XXH64_round(v3, XXH_get64bits(p)); p+=8;
v4 = XXH64_round(v4, XXH_get64bits(p)); p+=8;
} while (p<=limit);
h64 = XXH_rotl64(v1, 1) + XXH_rotl64(v2, 7) + XXH_rotl64(v3, 12) + XXH_rotl64(v4, 18);
h64 = XXH64_mergeRound(h64, v1);
h64 = XXH64_mergeRound(h64, v2);
h64 = XXH64_mergeRound(h64, v3);
h64 = XXH64_mergeRound(h64, v4);
} else {
h64 = seed + PRIME64_5;
}
h64 += (U64) len;
while (p+8<=bEnd) {
U64 const k1 = XXH64_round(0, XXH_get64bits(p));
h64 ^= k1;
h64 = XXH_rotl64(h64,27) * PRIME64_1 + PRIME64_4;
p+=8;
}
if (p+4<=bEnd) {
h64 ^= (U64)(XXH_get32bits(p)) * PRIME64_1;
h64 = XXH_rotl64(h64, 23) * PRIME64_2 + PRIME64_3;
p+=4;
}
while (p<bEnd) {
h64 ^= (*p) * PRIME64_5;
h64 = XXH_rotl64(h64, 11) * PRIME64_1;
p++;
}
h64 ^= h64 >> 33;
h64 *= PRIME64_2;
h64 ^= h64 >> 29;
h64 *= PRIME64_3;
h64 ^= h64 >> 32;
return h64;
}
XXH_PUBLIC_API unsigned long long XXH64 (const void* input, size_t len, unsigned long long seed)
{
#if 0
/* Simple version, good for code maintenance, but unfortunately slow for small inputs */
XXH64_CREATESTATE_STATIC(state);
XXH64_reset(state, seed);
XXH64_update(state, input, len);
return XXH64_digest(state);
#else
XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
if (XXH_FORCE_ALIGN_CHECK) {
if ((((size_t)input) & 7)==0) { /* Input is aligned, let's leverage the speed advantage */
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH64_endian_align(input, len, seed, XXH_littleEndian, XXH_aligned);
else
return XXH64_endian_align(input, len, seed, XXH_bigEndian, XXH_aligned);
} }
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH64_endian_align(input, len, seed, XXH_littleEndian, XXH_unaligned);
else
return XXH64_endian_align(input, len, seed, XXH_bigEndian, XXH_unaligned);
#endif
}
/* **************************************************
* Advanced Hash Functions
****************************************************/
XXH_PUBLIC_API XXH32_state_t* XXH32_createState(void)
{
return (XXH32_state_t*)XXH_malloc(sizeof(XXH32_state_t));
}
XXH_PUBLIC_API XXH_errorcode XXH32_freeState(XXH32_state_t* statePtr)
{
XXH_free(statePtr);
return XXH_OK;
}
XXH_PUBLIC_API XXH64_state_t* XXH64_createState(void)
{
return (XXH64_state_t*)XXH_malloc(sizeof(XXH64_state_t));
}
XXH_PUBLIC_API XXH_errorcode XXH64_freeState(XXH64_state_t* statePtr)
{
XXH_free(statePtr);
return XXH_OK;
}
/*** Hash feed ***/
XXH_PUBLIC_API XXH_errorcode XXH32_reset(XXH32_state_t* statePtr, unsigned int seed)
{
XXH32_state_t state; /* using a local state to memcpy() in order to avoid strict-aliasing warnings */
memset(&state, 0, sizeof(state));
state.seed = seed;
state.v1 = seed + PRIME32_1 + PRIME32_2;
state.v2 = seed + PRIME32_2;
state.v3 = seed + 0;
state.v4 = seed - PRIME32_1;
memcpy(statePtr, &state, sizeof(state));
return XXH_OK;
}
XXH_PUBLIC_API XXH_errorcode XXH64_reset(XXH64_state_t* statePtr, unsigned long long seed)
{
XXH64_state_t state; /* using a local state to memcpy() in order to avoid strict-aliasing warnings */
memset(&state, 0, sizeof(state));
state.seed = seed;
state.v1 = seed + PRIME64_1 + PRIME64_2;
state.v2 = seed + PRIME64_2;
state.v3 = seed + 0;
state.v4 = seed - PRIME64_1;
memcpy(statePtr, &state, sizeof(state));
return XXH_OK;
}
FORCE_INLINE XXH_errorcode XXH32_update_endian (XXH32_state_t* state, const void* input, size_t len, XXH_endianess endian)
{
const BYTE* p = (const BYTE*)input;
const BYTE* const bEnd = p + len;
#ifdef XXH_ACCEPT_NULL_INPUT_POINTER
if (input==NULL) return XXH_ERROR;
#endif
state->total_len += len;
if (state->memsize + len < 16) { /* fill in tmp buffer */
XXH_memcpy((BYTE*)(state->mem32) + state->memsize, input, len);
state->memsize += (U32)len;
return XXH_OK;
}
if (state->memsize) { /* some data left from previous update */
XXH_memcpy((BYTE*)(state->mem32) + state->memsize, input, 16-state->memsize);
{ const U32* p32 = state->mem32;
state->v1 = XXH32_round(state->v1, XXH_readLE32(p32, endian)); p32++;
state->v2 = XXH32_round(state->v2, XXH_readLE32(p32, endian)); p32++;
state->v3 = XXH32_round(state->v3, XXH_readLE32(p32, endian)); p32++;
state->v4 = XXH32_round(state->v4, XXH_readLE32(p32, endian)); p32++;
}
p += 16-state->memsize;
state->memsize = 0;
}
if (p <= bEnd-16) {
const BYTE* const limit = bEnd - 16;
U32 v1 = state->v1;
U32 v2 = state->v2;
U32 v3 = state->v3;
U32 v4 = state->v4;
do {
v1 = XXH32_round(v1, XXH_readLE32(p, endian)); p+=4;
v2 = XXH32_round(v2, XXH_readLE32(p, endian)); p+=4;
v3 = XXH32_round(v3, XXH_readLE32(p, endian)); p+=4;
v4 = XXH32_round(v4, XXH_readLE32(p, endian)); p+=4;
} while (p<=limit);
state->v1 = v1;
state->v2 = v2;
state->v3 = v3;
state->v4 = v4;
}
if (p < bEnd) {
XXH_memcpy(state->mem32, p, bEnd-p);
state->memsize = (int)(bEnd-p);
}
return XXH_OK;
}
XXH_PUBLIC_API XXH_errorcode XXH32_update (XXH32_state_t* state_in, const void* input, size_t len)
{
XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH32_update_endian(state_in, input, len, XXH_littleEndian);
else
return XXH32_update_endian(state_in, input, len, XXH_bigEndian);
}
FORCE_INLINE U32 XXH32_digest_endian (const XXH32_state_t* state, XXH_endianess endian)
{
const BYTE * p = (const BYTE*)state->mem32;
const BYTE* const bEnd = (const BYTE*)(state->mem32) + state->memsize;
U32 h32;
if (state->total_len >= 16) {
h32 = XXH_rotl32(state->v1, 1) + XXH_rotl32(state->v2, 7) + XXH_rotl32(state->v3, 12) + XXH_rotl32(state->v4, 18);
} else {
h32 = state->seed + PRIME32_5;
}
h32 += (U32) state->total_len;
while (p+4<=bEnd) {
h32 += XXH_readLE32(p, endian) * PRIME32_3;
h32 = XXH_rotl32(h32, 17) * PRIME32_4;
p+=4;
}
while (p<bEnd) {
h32 += (*p) * PRIME32_5;
h32 = XXH_rotl32(h32, 11) * PRIME32_1;
p++;
}
h32 ^= h32 >> 15;
h32 *= PRIME32_2;
h32 ^= h32 >> 13;
h32 *= PRIME32_3;
h32 ^= h32 >> 16;
return h32;
}
XXH_PUBLIC_API unsigned int XXH32_digest (const XXH32_state_t* state_in)
{
XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH32_digest_endian(state_in, XXH_littleEndian);
else
return XXH32_digest_endian(state_in, XXH_bigEndian);
}
/* **** XXH64 **** */
FORCE_INLINE XXH_errorcode XXH64_update_endian (XXH64_state_t* state, const void* input, size_t len, XXH_endianess endian)
{
const BYTE* p = (const BYTE*)input;
const BYTE* const bEnd = p + len;
#ifdef XXH_ACCEPT_NULL_INPUT_POINTER
if (input==NULL) return XXH_ERROR;
#endif
state->total_len += len;
if (state->memsize + len < 32) { /* fill in tmp buffer */
XXH_memcpy(((BYTE*)state->mem64) + state->memsize, input, len);
state->memsize += (U32)len;
return XXH_OK;
}
if (state->memsize) { /* tmp buffer is full */
XXH_memcpy(((BYTE*)state->mem64) + state->memsize, input, 32-state->memsize);
state->v1 = XXH64_round(state->v1, XXH_readLE64(state->mem64+0, endian));
state->v2 = XXH64_round(state->v2, XXH_readLE64(state->mem64+1, endian));
state->v3 = XXH64_round(state->v3, XXH_readLE64(state->mem64+2, endian));
state->v4 = XXH64_round(state->v4, XXH_readLE64(state->mem64+3, endian));
p += 32-state->memsize;
state->memsize = 0;
}
if (p+32 <= bEnd) {
const BYTE* const limit = bEnd - 32;
U64 v1 = state->v1;
U64 v2 = state->v2;
U64 v3 = state->v3;
U64 v4 = state->v4;
do {
v1 = XXH64_round(v1, XXH_readLE64(p, endian)); p+=8;
v2 = XXH64_round(v2, XXH_readLE64(p, endian)); p+=8;
v3 = XXH64_round(v3, XXH_readLE64(p, endian)); p+=8;
v4 = XXH64_round(v4, XXH_readLE64(p, endian)); p+=8;
} while (p<=limit);
state->v1 = v1;
state->v2 = v2;
state->v3 = v3;
state->v4 = v4;
}
if (p < bEnd) {
XXH_memcpy(state->mem64, p, bEnd-p);
state->memsize = (int)(bEnd-p);
}
return XXH_OK;
}
XXH_PUBLIC_API XXH_errorcode XXH64_update (XXH64_state_t* state_in, const void* input, size_t len)
{
XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH64_update_endian(state_in, input, len, XXH_littleEndian);
else
return XXH64_update_endian(state_in, input, len, XXH_bigEndian);
}
FORCE_INLINE U64 XXH64_digest_endian (const XXH64_state_t* state, XXH_endianess endian)
{
const BYTE * p = (const BYTE*)state->mem64;
const BYTE* const bEnd = (const BYTE*)state->mem64 + state->memsize;
U64 h64;
if (state->total_len >= 32) {
U64 const v1 = state->v1;
U64 const v2 = state->v2;
U64 const v3 = state->v3;
U64 const v4 = state->v4;
h64 = XXH_rotl64(v1, 1) + XXH_rotl64(v2, 7) + XXH_rotl64(v3, 12) + XXH_rotl64(v4, 18);
h64 = XXH64_mergeRound(h64, v1);
h64 = XXH64_mergeRound(h64, v2);
h64 = XXH64_mergeRound(h64, v3);
h64 = XXH64_mergeRound(h64, v4);
} else {
h64 = state->seed + PRIME64_5;
}
h64 += (U64) state->total_len;
while (p+8<=bEnd) {
U64 const k1 = XXH64_round(0, XXH_readLE64(p, endian));
h64 ^= k1;
h64 = XXH_rotl64(h64,27) * PRIME64_1 + PRIME64_4;
p+=8;
}
if (p+4<=bEnd) {
h64 ^= (U64)(XXH_readLE32(p, endian)) * PRIME64_1;
h64 = XXH_rotl64(h64, 23) * PRIME64_2 + PRIME64_3;
p+=4;
}
while (p<bEnd) {
h64 ^= (*p) * PRIME64_5;
h64 = XXH_rotl64(h64, 11) * PRIME64_1;
p++;
}
h64 ^= h64 >> 33;
h64 *= PRIME64_2;
h64 ^= h64 >> 29;
h64 *= PRIME64_3;
h64 ^= h64 >> 32;
return h64;
}
XXH_PUBLIC_API unsigned long long XXH64_digest (const XXH64_state_t* state_in)
{
XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN;
if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT)
return XXH64_digest_endian(state_in, XXH_littleEndian);
else
return XXH64_digest_endian(state_in, XXH_bigEndian);
}
/* **************************
* Canonical representation
****************************/
/*! Default XXH result types are basic unsigned 32 and 64 bit integers.
 *  The canonical representation follows the human-readable write convention, aka big-endian (large digits first).
 *  These functions allow transformation of a hash result into and from its canonical format.
 *  This way, hash values can be written into a file or buffer, and remain comparable across different systems and programs.
 *  (A round-trip sketch follows the conversion functions below.)
 */
XXH_PUBLIC_API void XXH32_canonicalFromHash(XXH32_canonical_t* dst, XXH32_hash_t hash)
{
XXH_STATIC_ASSERT(sizeof(XXH32_canonical_t) == sizeof(XXH32_hash_t));
if (XXH_CPU_LITTLE_ENDIAN) hash = XXH_swap32(hash);
memcpy(dst, &hash, sizeof(*dst));
}
XXH_PUBLIC_API void XXH64_canonicalFromHash(XXH64_canonical_t* dst, XXH64_hash_t hash)
{
XXH_STATIC_ASSERT(sizeof(XXH64_canonical_t) == sizeof(XXH64_hash_t));
if (XXH_CPU_LITTLE_ENDIAN) hash = XXH_swap64(hash);
memcpy(dst, &hash, sizeof(*dst));
}
XXH_PUBLIC_API XXH32_hash_t XXH32_hashFromCanonical(const XXH32_canonical_t* src)
{
return XXH_readBE32(src);
}
XXH_PUBLIC_API XXH64_hash_t XXH64_hashFromCanonical(const XXH64_canonical_t* src)
{
return XXH_readBE64(src);
}
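/* Editor's illustration (not part of the original xxhash.c): round-tripping a hash
 * through its canonical big-endian form, e.g. before writing it to a file.
 * The helper name is hypothetical; it relies only on the public xxHash API above. */
static int XXH32_canonical_roundtrip_example(const void* buf, size_t len)
{
    XXH32_hash_t const h = XXH32(buf, len, 0);       /* one-shot hash, seed 0 */
    XXH32_canonical_t canon;
    XXH32_canonicalFromHash(&canon, h);              /* canon.digest[] is portable (big-endian) */
    return XXH32_hashFromCanonical(&canon) == h;     /* always 1: the conversion is lossless */
}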

View File

@@ -1,297 +0,0 @@
/*
xxHash - Extremely Fast Hash algorithm
Header File
Copyright (C) 2012-2016, Yann Collet.
BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You can contact the author at :
- xxHash source repository : https://github.com/Cyan4973/xxHash
*/
/* Notice extracted from xxHash homepage :
xxHash is an extremely fast Hash algorithm, running at RAM speed limits.
It also successfully passes all tests from the SMHasher suite.
Comparison (single thread, Windows Seven 32 bits, using SMHasher on a Core 2 Duo @3GHz)
Name            Speed       Q.Score   Author
xxHash          5.4 GB/s     10
CrapWow         3.2 GB/s      2       Andrew
MurmurHash 3a   2.7 GB/s     10       Austin Appleby
SpookyHash      2.0 GB/s     10       Bob Jenkins
SBox            1.4 GB/s      9       Bret Mulvey
Lookup3         1.2 GB/s      9       Bob Jenkins
SuperFastHash   1.2 GB/s      1       Paul Hsieh
CityHash64      1.05 GB/s    10       Pike & Alakuijala
FNV             0.55 GB/s     5       Fowler, Noll, Vo
CRC32           0.43 GB/s     9
MD5-32          0.33 GB/s    10       Ronald L. Rivest
SHA1-32         0.28 GB/s    10
Q.Score is a measure of quality of the hash function.
It depends on successfully passing the SMHasher test set.
10 is a perfect score.
A 64-bit version, named XXH64, is available since r35.
It offers much better speed, but for 64-bit applications only.
Name            Speed on 64 bits    Speed on 32 bits
XXH64           13.8 GB/s           1.9 GB/s
XXH32           6.8 GB/s            6.0 GB/s
*/
#ifndef XXHASH_H_5627135585666179
#define XXHASH_H_5627135585666179 1
#if defined (__cplusplus)
extern "C" {
#endif
/* ****************************
* Definitions
******************************/
#include <stddef.h> /* size_t */
typedef enum { XXH_OK=0, XXH_ERROR } XXH_errorcode;
/* ****************************
* API modifier
******************************/
/** XXH_PRIVATE_API
*  This is useful if you want to include the xxhash functions in `static` mode
*  in order to inline them, and to remove their symbols from the public list.
*  Methodology :
*     #define XXH_PRIVATE_API
*     #include "xxhash.h"
*  `xxhash.c` is automatically included, so the file is still needed,
*  but there is no need to compile and link it separately anymore.
*  (A usage sketch follows the #ifdef block below.)
*/
#ifdef XXH_PRIVATE_API
# ifndef XXH_STATIC_LINKING_ONLY
# define XXH_STATIC_LINKING_ONLY
# endif
# if defined(__GNUC__)
# define XXH_PUBLIC_API static __attribute__((unused))
# elif defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */)
# define XXH_PUBLIC_API static inline
# elif defined(_MSC_VER)
# define XXH_PUBLIC_API static __inline
# else
# define XXH_PUBLIC_API static /* this version may generate warnings for unused static functions; disable the relevant warning */
# endif
#else
# define XXH_PUBLIC_API /* do nothing */
#endif /* XXH_PRIVATE_API */
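/* Editor's sketch (not part of the original header, kept disabled): the typical
 * XXH_PRIVATE_API pattern inside a user translation unit. hash_block() is a
 * hypothetical name used only for illustration. */
#if 0
#define XXH_PRIVATE_API          /* must appear before the include */
#include "xxhash.h"              /* pulls in xxhash.c as static (inlinable) functions */

static unsigned hash_block(const void* buf, size_t len)
{
    return XXH32(buf, len, 0);   /* inlined; no public XXH32 symbol is exported from this unit */
}
#endif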
/*!XXH_NAMESPACE, aka Namespace Emulation :
If you want to include _and expose_ xxHash functions from within your own library,
but also want to avoid symbol collisions with another library which also includes xxHash,
you can use XXH_NAMESPACE to automatically prefix every public symbol of the xxhash library
with the value of XXH_NAMESPACE (so avoid leaving it empty and avoid numeric values).
Note that no change is required within the calling program as long as it includes `xxhash.h` :
regular symbol names will be automatically translated by this header.
(A build sketch follows the macro definitions below.)
*/
#ifdef XXH_NAMESPACE
# define XXH_CAT(A,B) A##B
# define XXH_NAME2(A,B) XXH_CAT(A,B)
# define XXH32 XXH_NAME2(XXH_NAMESPACE, XXH32)
# define XXH64 XXH_NAME2(XXH_NAMESPACE, XXH64)
# define XXH_versionNumber XXH_NAME2(XXH_NAMESPACE, XXH_versionNumber)
# define XXH32_createState XXH_NAME2(XXH_NAMESPACE, XXH32_createState)
# define XXH64_createState XXH_NAME2(XXH_NAMESPACE, XXH64_createState)
# define XXH32_freeState XXH_NAME2(XXH_NAMESPACE, XXH32_freeState)
# define XXH64_freeState XXH_NAME2(XXH_NAMESPACE, XXH64_freeState)
# define XXH32_reset XXH_NAME2(XXH_NAMESPACE, XXH32_reset)
# define XXH64_reset XXH_NAME2(XXH_NAMESPACE, XXH64_reset)
# define XXH32_update XXH_NAME2(XXH_NAMESPACE, XXH32_update)
# define XXH64_update XXH_NAME2(XXH_NAMESPACE, XXH64_update)
# define XXH32_digest XXH_NAME2(XXH_NAMESPACE, XXH32_digest)
# define XXH64_digest XXH_NAME2(XXH_NAMESPACE, XXH64_digest)
# define XXH32_copyState XXH_NAME2(XXH_NAMESPACE, XXH32_copyState)
# define XXH64_copyState XXH_NAME2(XXH_NAMESPACE, XXH64_copyState)
#endif
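/* Editor's sketch (not part of the original header, kept disabled): a library built with
 * -DXXH_NAMESPACE=MYLIB_ exports MYLIB_XXH64() etc., while its sources keep writing XXH64().
 * MYLIB_ and mylib_checksum() are arbitrary illustrative names. */
#if 0
#include "xxhash.h"                  /* this translation unit is compiled with -DXXH_NAMESPACE=MYLIB_ */

unsigned long long mylib_checksum(const void* buf, size_t len)
{
    return XXH64(buf, len, 0);       /* the header rewrites this call to MYLIB_XXH64 */
}
#endif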
/* *************************************
* Version
***************************************/
#define XXH_VERSION_MAJOR 0
#define XXH_VERSION_MINOR 6
#define XXH_VERSION_RELEASE 1
#define XXH_VERSION_NUMBER (XXH_VERSION_MAJOR *100*100 + XXH_VERSION_MINOR *100 + XXH_VERSION_RELEASE)
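/* Editor's note: with the 0.6.1 values above, XXH_VERSION_NUMBER evaluates to 0*10000 + 6*100 + 1 = 601. */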
XXH_PUBLIC_API unsigned XXH_versionNumber (void);
/* ****************************
* Simple Hash Functions
******************************/
typedef unsigned int XXH32_hash_t;
typedef unsigned long long XXH64_hash_t;
XXH_PUBLIC_API XXH32_hash_t XXH32 (const void* input, size_t length, unsigned int seed);
XXH_PUBLIC_API XXH64_hash_t XXH64 (const void* input, size_t length, unsigned long long seed);
/*!
XXH32() :
    Calculate the 32-bit hash of the sequence of "length" bytes stored at memory address "input".
    The memory between input & input+length must be valid (allocated and read-accessible).
    "seed" can be used to alter the result predictably.
    Speed on Core 2 Duo @ 3 GHz (single thread, SMHasher benchmark) : 5.4 GB/s
XXH64() :
    Calculate the 64-bit hash of the sequence of "length" bytes stored at memory address "input".
    "seed" can be used to alter the result predictably.
    This function runs 2x faster on 64-bit systems, but slower on 32-bit systems (see benchmark).
*/
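/* Editor's sketch (not part of the original header): one-shot hashing of a memory buffer.
 * The seed value 0 and the function name are arbitrary. */
static XXH64_hash_t XXH_oneshot_example(const void* buf, size_t len)
{
    XXH32_hash_t const h32 = XXH32(buf, len, 0);   /* 32-bit hash */
    XXH64_hash_t const h64 = XXH64(buf, len, 0);   /* 64-bit hash; preferred on 64-bit systems */
    (void)h32;                                     /* both calls shown; only the 64-bit value is returned */
    return h64;
}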
/* ****************************
* Streaming Hash Functions
******************************/
typedef struct XXH32_state_s XXH32_state_t; /* incomplete type */
typedef struct XXH64_state_s XXH64_state_t; /* incomplete type */
/*! State allocation, compatible with dynamic libraries */
XXH_PUBLIC_API XXH32_state_t* XXH32_createState(void);
XXH_PUBLIC_API XXH_errorcode XXH32_freeState(XXH32_state_t* statePtr);
XXH_PUBLIC_API XXH64_state_t* XXH64_createState(void);
XXH_PUBLIC_API XXH_errorcode XXH64_freeState(XXH64_state_t* statePtr);
/* hash streaming */
XXH_PUBLIC_API XXH_errorcode XXH32_reset (XXH32_state_t* statePtr, unsigned int seed);
XXH_PUBLIC_API XXH_errorcode XXH32_update (XXH32_state_t* statePtr, const void* input, size_t length);
XXH_PUBLIC_API XXH32_hash_t XXH32_digest (const XXH32_state_t* statePtr);
XXH_PUBLIC_API XXH_errorcode XXH64_reset (XXH64_state_t* statePtr, unsigned long long seed);
XXH_PUBLIC_API XXH_errorcode XXH64_update (XXH64_state_t* statePtr, const void* input, size_t length);
XXH_PUBLIC_API XXH64_hash_t XXH64_digest (const XXH64_state_t* statePtr);
/*
These functions generate the xxHash of an input provided in multiple segments.
Note that, for small inputs, they are slower than the single-call functions, due to state management.
For small inputs, prefer `XXH32()` and `XXH64()` .
XXH state must first be allocated, using XXH*_createState() .
Start a new hash by initializing the state with a seed, using XXH*_reset().
Then, feed the hash state by calling XXH*_update() as many times as necessary.
Obviously, the input must be allocated and read-accessible.
Each call returns an error code, with 0 (XXH_OK) meaning OK, and any other value meaning there is an error.
Finally, a hash value can be produced at any time, by using XXH*_digest().
This function returns the 32- or 64-bit hash as an unsigned int or unsigned long long.
It is still possible to continue inserting input into the hash state after a digest,
and to generate new hashes later on, by calling XXH*_digest() again.
When done, free the XXH state space if it was allocated dynamically.
*/
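/* Editor's sketch (not part of the original header): hashing a stream in chunks with the
 * streaming API declared above. read_fn and its context are hypothetical placeholders
 * for any data source (file, socket, ...). */
static XXH64_hash_t XXH64_stream_example(size_t (*read_fn)(void* ctx, void* buf, size_t cap), void* ctx)
{
    char buf[4096];
    size_t n;
    XXH64_hash_t h;
    XXH64_state_t* const state = XXH64_createState();   /* heap-allocated, library-compatible */
    if (state == NULL) return 0;
    XXH64_reset(state, 0);                               /* seed 0 */
    while ((n = read_fn(ctx, buf, sizeof(buf))) > 0)
        XXH64_update(state, buf, n);                     /* feed as many chunks as needed */
    h = XXH64_digest(state);                             /* may be called at any point */
    XXH64_freeState(state);
    return h;
}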
/* **************************
* Utils
****************************/
#if !(defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)) /* ! C99 */
# define restrict /* disable restrict */
#endif
XXH_PUBLIC_API void XXH32_copyState(XXH32_state_t* restrict dst_state, const XXH32_state_t* restrict src_state);
XXH_PUBLIC_API void XXH64_copyState(XXH64_state_t* restrict dst_state, const XXH64_state_t* restrict src_state);
/* **************************
* Canonical representation
****************************/
typedef struct { unsigned char digest[4]; } XXH32_canonical_t;
typedef struct { unsigned char digest[8]; } XXH64_canonical_t;
XXH_PUBLIC_API void XXH32_canonicalFromHash(XXH32_canonical_t* dst, XXH32_hash_t hash);
XXH_PUBLIC_API void XXH64_canonicalFromHash(XXH64_canonical_t* dst, XXH64_hash_t hash);
XXH_PUBLIC_API XXH32_hash_t XXH32_hashFromCanonical(const XXH32_canonical_t* src);
XXH_PUBLIC_API XXH64_hash_t XXH64_hashFromCanonical(const XXH64_canonical_t* src);
/* Default result types for XXH functions are primitive unsigned 32 and 64 bit integers.
 * The canonical representation uses the human-readable write convention, aka big-endian (large digits first).
 * These functions allow transformation of a hash result into and from its canonical format.
 * This way, hash values can be written into a file / memory, and remain comparable on different systems and programs.
 */
#ifdef XXH_STATIC_LINKING_ONLY
/* ================================================================================================
This section contains definitions which are not guaranteed to remain stable.
They could change in a future version, becoming incompatible with a different version of the library.
They shall only be used with static linking.
=================================================================================================== */
/* These definitions allow allocating XXH state statically (on stack) */
struct XXH32_state_s {
unsigned long long total_len;
unsigned seed;
unsigned v1;
unsigned v2;
unsigned v3;
unsigned v4;
unsigned mem32[4]; /* buffer defined as U32 for alignment */
unsigned memsize;
}; /* typedef'd to XXH32_state_t */
struct XXH64_state_s {
unsigned long long total_len;
unsigned long long seed;
unsigned long long v1;
unsigned long long v2;
unsigned long long v3;
unsigned long long v4;
unsigned long long mem64[4]; /* buffer defined as U64 for alignment */
unsigned memsize;
}; /* typedef'd to XXH64_state_t */
# ifdef XXH_PRIVATE_API
# include "xxhash.c" /* include xxhash functions as `static`, for inlining */
# endif
#endif /* XXH_STATIC_LINKING_ONLY */
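/* Editor's sketch (not part of the original header): with XXH_STATIC_LINKING_ONLY defined,
 * the state structure above is complete and can live on the stack, avoiding XXH*_createState().
 * The function name is hypothetical. */
#ifdef XXH_STATIC_LINKING_ONLY
static unsigned XXH32_stack_state_example(const void* buf, size_t len)
{
    XXH32_state_t state;            /* stack allocation, no heap involved */
    XXH32_reset(&state, 0);         /* seed 0 */
    XXH32_update(&state, buf, len);
    return XXH32_digest(&state);
}
#endif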
#if defined (__cplusplus)
}
#endif
#endif /* XXHASH_H_5627135585666179 */

Some files were not shown because too many files have changed in this diff.