@@ -5,33 +5,42 @@ TensorFlow Compression (TFC) contains data compression tools for TensorFlow.
 You can use this library to build your own ML models with end-to-end optimized
 data compression built in. It's useful to find storage-efficient representations
 of your data (images, features, examples, etc.) while only sacrificing a small
-fraction of model performance.
-
-Specifically, the entropy model classes in this library simplify the process of
-designing rate–distortion optimized codes. During training, they act like
-likelihood models. Once training is completed, they encode floating point
-tensors into optimized bit sequences by automating the design of probability
-tables and calling a range coder implementation behind the scenes.
-
-The library implements range coding (a.k.a. arithmetic coding) using a set of
-flexible TF ops written in C++. These include an optional "overflow"
-functionality that embeds an Elias gamma code into the range encoded bit
-sequence, making it possible to encode the entire set of signed integers rather
-than just a finite range.
-
-The main novelty of the learned approach over traditional transform coding is
-the stochastic minimization of the rate-distortion Lagrangian, and using
-nonlinear transforms implemented by neural networks. For an introduction to
-this from a data compression perspective, consider our [paper on nonlinear
-transform coding](https://arxiv.org/abs/2007.03034), or watch @jonycgn's [talk
-on learned image compression](https://www.youtube.com/watch?v=x_q7cZviXkY). For
-an introduction to lossy data compression from a machine learning perspective,
-take a look at @yiboyang's [review paper](https://arxiv.org/abs/2202.06533).
+fraction of model performance. Take a look at the [lossy data compression
+tutorial](https://www.tensorflow.org/tutorials/generative/data_compression) to
+get started.
+
+For a more in-depth introduction from a classical data compression perspective,
+consider our [paper on nonlinear transform
+coding](https://arxiv.org/abs/2007.03034), or watch @jonycgn's [talk on learned
+image compression](https://www.youtube.com/watch?v=x_q7cZviXkY). For an
+introduction to lossy data compression from a machine learning perspective, take
+a look at @yiboyang's [review paper](https://arxiv.org/abs/2202.06533).
+
+The library contains (see the [API
+docs](https://www.tensorflow.org/api_docs/python/tfc) for details):
+
+- Range coding (a.k.a. arithmetic coding) implementations in the form of
+  flexible TF ops written in C++. These include an optional "overflow"
+  functionality that embeds an Elias gamma code into the range encoded bit
+  sequence, making it possible to encode alphabets containing the entire set of
+  signed integers rather than just a finite range.
+
+- Entropy model classes which simplify the process of designing rate–distortion
+  optimized codes. During training, they act like likelihood models. Once
+  training is completed, they encode floating point tensors into optimized bit
+  sequences by automating the design of range coding tables and calling the
+  range coder implementation behind the scenes.
+
+- Additional TensorFlow functions and Keras layers that are useful in the
+  context of learned data compression, such as methods to numerically find
+  quantiles of density functions, take expectations with respect to dithering
+  noise, convolution layers with more flexible padding options, and an
+  implementation of generalized divisive normalization (GDN).
+
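As context for the "overflow" bullet above: an Elias gamma code is self-delimiting and defined for every positive integer, which is what lets the range coder fall back to it for symbols outside its finite table. A minimal pure-Python sketch of the idea (illustrative only; the zigzag mapping shown here is an assumption for covering signed integers, not necessarily the exact scheme the C++ ops use):

```python
def elias_gamma(n: int) -> str:
    """Encode a positive integer as an Elias gamma bit string.

    The code is floor(log2 n) zeros followed by the binary form of n,
    so a decoder can tell where each codeword ends.
    """
    if n < 1:
        raise ValueError("Elias gamma encodes positive integers only.")
    binary = bin(n)[2:]  # binary representation without the '0b' prefix
    return "0" * (len(binary) - 1) + binary

def zigzag(v: int) -> int:
    """Map signed integers to positive ones: 0, -1, 1, -2, 2 -> 1, 2, 3, 4, 5."""
    return 2 * v + 1 if v >= 0 else -2 * v

# Every signed integer now gets a finite, self-delimiting codeword.
codes = {v: elias_gamma(zigzag(v)) for v in (-2, -1, 0, 1, 2)}
```

Because the code length grows with the magnitude of the integer, rare out-of-range values stay encodable without enlarging the range coding tables.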
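The entropy-model bullet can be made concrete through the rate term being optimized: during training, the negative log-likelihood -log2 p(ŷ) is the idealized code length in bits that the range coder later approaches. A toy sketch with a hand-written probability table (illustrative only; this is not the library's API, whose models learn the probabilities):

```python
import math

# Toy probability table over quantized symbol values. In a learned entropy
# model these probabilities are fit to the data during training.
prob = {-1: 0.25, 0: 0.5, 1: 0.25}

def rate_bits(symbols):
    """Ideal code length in bits: sum of -log2 p(s) over the symbols."""
    return sum(-math.log2(prob[s]) for s in symbols)

# A well-designed range coder approaches this bound on average.
print(rate_bits([0, 0, 1, -1]))  # 1 + 1 + 2 + 2 = 6 bits
```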
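For the last bullet, generalized divisive normalization in its commonly used form divides each channel by a learned norm of all channels: y_i = x_i / sqrt(beta_i + sum_j gamma_ij * x_j^2). A dependency-free sketch of that formula (the library's GDN layer learns beta and gamma and supports further variants; this fixed-parameter version is only an illustration):

```python
import math

def gdn(x, beta, gamma):
    """Generalized divisive normalization over a list of channel values.

    y[i] = x[i] / sqrt(beta[i] + sum_j gamma[i][j] * x[j] ** 2)
    """
    n = len(x)
    return [
        x[i] / math.sqrt(beta[i] + sum(gamma[i][j] * x[j] ** 2 for j in range(n)))
        for i in range(n)
    ]

# With beta = 1 and gamma = 0, GDN reduces to the identity.
y = gdn([3.0, -4.0], beta=[1.0, 1.0], gamma=[[0.0, 0.0], [0.0, 0.0]])
```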
 
 ## Documentation & getting help
 
-Refer to [the API
-documentation](https://tensorflow.github.io/compression/docs/api_docs/python/tfc.html)
+Refer to [the API documentation](https://www.tensorflow.org/api_docs/python/tfc)
 for a complete description of the classes and functions this package implements.
 
 Please post all questions or comments on