This package contains data compression ops and layers for TensorFlow.

For usage questions and discussions, please head over to our
[Google group](https://groups.google.com/forum/#!forum/tensorflow-compression)!

## Compiling

**Please note**: You need TensorFlow 1.9 (or the master branch as of May 2018)
or later.

First, compile the custom ops needed by TensorFlow.
```shell
cd compression
chmod +x compile.sh
./compile.sh
cd ..
```

To make sure the compilation and library imports succeed, try running the two
tests.
```shell
python compression/python/ops/coder_ops_test.py
python compression/python/layers/entropybottleneck_test.py
```

## Entropy bottleneck layer

This layer exposes a high-level interface to model the entropy (the amount of
information conveyed) of the tensor passing through it.

The layer implements a flexible probability density model to estimate entropy,
which is described in the appendix of the paper (please cite the paper if you
use this code for scientific work):

> "Variational image compression with a scale hyperprior"
> Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, Nick Johnston
> https://arxiv.org/abs/1802.01436

The layer assumes that the input tensor is at least 2D, with a batch dimension
at the beginning and a channel dimension as specified by `data_format`. The
layer trains an independent probability density model for each channel.

To make sure the approximated tensor values are good enough for practical
purposes, the training phase must be used to balance the quality of the
approximation with the entropy, by adding an entropy term to the training loss,
as in the following example.

### Training

Here, we use the entropy bottleneck to compress the latent representation of
an autoencoder. The data vectors `x` in this case are 4D tensors in
`'channels_last'` format (for example, 16x16 pixel grayscale images).

Note that `forward_transform` and `backward_transform` are placeholders and can
be any appropriate artificial neural network. We've found that it generally
helps *not* to use batch normalization, and to sandwich the bottleneck between
two linear transforms or convolutions (i.e., to have no nonlinearities directly
before and after).
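
For illustration, here is one hedged sketch of what these placeholder
transforms could look like, written with TF 1.x `tf.layers`; the filter
counts, kernel sizes, and strides are arbitrary assumptions, chosen only so
that the layers touching the bottleneck are linear:

```python
import tensorflow as tf

def forward_transform(x):
  # Hypothetical analysis transform: 16x16x1 images to a 4x4x8 latent.
  y = tf.layers.conv2d(x, 32, 5, strides=2, padding="same",
                       activation=tf.nn.relu)
  # The layer feeding the bottleneck is linear (no activation).
  return tf.layers.conv2d(y, 8, 5, strides=2, padding="same")

def backward_transform(y):
  # Hypothetical synthesis transform mirroring `forward_transform`.
  # The layer directly after the bottleneck is linear, too.
  x = tf.layers.conv2d_transpose(y, 32, 5, strides=2, padding="same")
  x = tf.nn.relu(x)
  return tf.layers.conv2d_transpose(x, 1, 5, strides=2, padding="same")
```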

```python
# Build autoencoder.
x = tf.placeholder(tf.float32, shape=[None, 16, 16, 1])
```
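
A rough, hedged sketch of how the entropy term might then enter the training
loss (the `EntropyBottleneck` import path is inferred from the test paths
above; its return values and the rate weight `0.01` are assumptions for
illustration, not the original example):

```python
# Assumed import path, inferred from compression/python/layers/ above.
from compression.python.layers.entropybottleneck import EntropyBottleneck

y = forward_transform(x)
entropy_bottleneck = EntropyBottleneck()
# Assumption: in training mode, the layer returns a differentiable
# approximation of the quantized values and their likelihoods.
y_tilde, likelihoods = entropy_bottleneck(y, training=True)
x_tilde = backward_transform(y_tilde)

# Estimated information content in bits (natural log converted to base 2).
bits = tf.reduce_sum(tf.log(likelihoods)) / -tf.log(2.)

# Rate-distortion loss: the weight trades reconstruction error against the
# entropy of the latent representation.
distortion = tf.reduce_mean(tf.squared_difference(x, x_tilde))
loss = distortion + 0.01 * bits
```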