
Commit e41999f

Entropy coder for images: remove deprecated functions and update README.

1 parent c924488

4 files changed: +12 -5 lines changed
compression/README.md (+1)

@@ -8,6 +8,7 @@ code for the following papers:
 
 ## Organization
 [Image Encoder](image_encoder/): Encoding and decoding images into their binary representation.
+[Entropy Coder](entropy_coder/): Lossless compression of the binary representation.
 
 ## Contact Info
 Model repository maintained by Nick Johnston ([nickj-google](https://github.com/nickj-google)).

compression/entropy_coder/README.md (+8 -1)

@@ -14,6 +14,11 @@ the width of the binary codes,
 sliced into N groups of K, where each additional group is used by the image
 decoder to add more details to the reconstructed image.
 
+The code in this directory only contains the underlying code probability model
+but does not perform the actual compression using arithmetic coding.
+The code probability model is enough to compute the theoretical compression
+ratio.
+
 
 ## Prerequisites
 The only software requirements for running the encoder and decoder is having
@@ -22,7 +27,7 @@ Tensorflow installed.
 You will also need to add the top level source directory of the entropy coder
 to your `PYTHONPATH`, for example:
 
-`export PYTHONPATH=${PYTHONPATH}:/tmp/compression/entropy_coder`
+`export PYTHONPATH=${PYTHONPATH}:/tmp/models/compression`
 
 
 ## Training the entropy coder
@@ -38,6 +43,8 @@ less.
 
 To generate a synthetic dataset with 20000 samples:
 
+`mkdir -p /tmp/dataset`
+
 `python ./dataset/gen_synthetic_dataset.py --dataset_dir=/tmp/dataset/
 --count=20000`
 
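
The new README paragraph is worth unpacking: an ideal arithmetic coder spends about -log2 P(symbol) bits per binary symbol, so the code probability model by itself determines the achievable compressed size, and hence the theoretical compression ratio, without running a real coder. Below is a minimal sketch of that calculation; the helper function and the synthetic inputs are made up for illustration and are not part of the repository:

```python
import numpy as np

def theoretical_compression_ratio(bits, probs, eps=1e-12):
  """Raw size divided by the size an ideal arithmetic coder would reach.

  bits:  array of 0/1 symbols from the binary representation.
  probs: the model's predicted probability that each symbol equals 1.
  """
  p = np.clip(probs, eps, 1.0 - eps)
  # Ideal code length per symbol: -log2 P(observed symbol).
  coded_bits = -(bits * np.log2(p) + (1 - bits) * np.log2(1 - p))
  return bits.size / coded_bits.sum()

# Toy check: a model that puts probability 0.8 on the correct symbol
# yields about 1 / -log2(0.8), i.e. roughly 3.1x, over the raw bits.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=4096)
probs = np.where(bits == 1, 0.8, 0.2)
print(theoretical_compression_ratio(bits, probs))
```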

compression/entropy_coder/core/entropy_coder_train.py (+1 -1)

@@ -111,7 +111,7 @@ def train():
       decay_steps=decay_steps,
       decay_rate=decay_rate,
       staircase=True)
-  tf.contrib.deprecated.scalar_summary('Learning Rate', learning_rate)
+  tf.summary.scalar('Learning Rate', learning_rate)
   optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
                                      epsilon=1.0)
 
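
For context on this one-line change (and the identical substitution in progressive.py below): in the TF 1.x API, tf.summary.scalar is the replacement for the removed tf.contrib.deprecated.scalar_summary. The sketch below shows how such a summary would be evaluated and written out; the constant value and the log directory are invented for illustration:

```python
import tensorflow as tf  # TF 1.x API

learning_rate = tf.constant(0.001)
# Register a scalar summary. Note that TF sanitizes summary names,
# so the space in 'Learning Rate' typically becomes an underscore.
tf.summary.scalar('Learning Rate', learning_rate)

merged = tf.summary.merge_all()  # gather every registered summary op
with tf.Session() as sess:
  writer = tf.summary.FileWriter('/tmp/entropy_coder_logs', sess.graph)
  writer.add_summary(sess.run(merged), global_step=0)
  writer.close()
```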

compression/entropy_coder/progressive/progressive.py (+2 -3)

@@ -202,11 +202,10 @@ def BuildGraph(self, input_codes):
       code_length.append(code_length_block(
           blocks.ConvertSignCodeToZeroOneCode(x),
           blocks.ConvertSignCodeToZeroOneCode(predicted_x)))
-      tf.contrib.deprecated.scalar_summary('code_length_layer_{:02d}'.format(k),
-                                           code_length[-1])
+      tf.summary.scalar('code_length_layer_{:02d}'.format(k), code_length[-1])
       code_length = tf.stack(code_length)
       self.loss = tf.reduce_mean(code_length)
-      tf.contrib.deprecated.scalar_summary('loss', self.loss)
+      tf.summary.scalar('loss', self.loss)
 
       # Loop over all the remaining layers just to make sure they are
       # instantiated. Otherwise, loading model params could fail.
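
A small detail in the replacement line above: the tag is built with a zero-padded format string, so each refinement layer logs under its own name and appears as a separate curve in TensorBoard:

```python
>>> ['code_length_layer_{:02d}'.format(k) for k in range(3)]
['code_length_layer_00', 'code_length_layer_01', 'code_length_layer_02']
```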
