4 files changed: +12 -5 lines

@@ -8,6 +8,7 @@ code for the following papers:
 
 ## Organization
 [Image Encoder](image_encoder/): Encoding and decoding images into their binary representation.
+[Entropy Coder](entropy_coder/): Lossless compression of the binary representation.
 
 ## Contact Info
 Model repository maintained by Nick Johnston ([nickj-google](https://github.com/nickj-google)).

@@ -14,6 +14,11 @@ the width of the binary codes,
 sliced into N groups of K, where each additional group is used by the image
 decoder to add more details to the reconstructed image.
 
+The code in this directory only contains the underlying code probability model
+but does not perform the actual compression using arithmetic coding.
+The code probability model is enough to compute the theoretical compression
+ratio.
+
 
 ## Prerequisites
 The only software requirement for running the encoder and decoder is having
@@ -22,7 +27,7 @@ Tensorflow installed.
 You will also need to add the top level source directory of the entropy coder
 to your `PYTHONPATH`, for example:
 
-`export PYTHONPATH=${PYTHONPATH}:/tmp/compression/entropy_coder`
+`export PYTHONPATH=${PYTHONPATH}:/tmp/models/compression`
 
 
 ## Training the entropy coder
@@ -38,6 +43,8 @@
 
 To generate a synthetic dataset with 20000 samples:
 
+`mkdir -p /tmp/dataset`
+
 `python ./dataset/gen_synthetic_dataset.py --dataset_dir=/tmp/dataset/
 --count=20000`
 
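
Note on the README paragraph added above: since the entropy coder only provides per-bit probabilities, an ideal arithmetic coder would spend the cross-entropy of those predictions, in bits, on each symbol, which is all that is needed for the theoretical compression ratio. A minimal NumPy sketch of that computation (hypothetical function and variable names, not code from this repository):

```python
import numpy as np

def theoretical_compression_ratio(bits, probs, eps=1e-12):
  """Ratio achievable by an ideal arithmetic coder driven by `probs`.

  bits:  array of 0/1 symbols produced by the image encoder.
  probs: the model's predicted probability that each bit is 1.
  """
  probs = np.clip(probs, eps, 1.0 - eps)
  # Ideal code length per symbol (in bits) is the prediction's cross-entropy.
  code_length = -(bits * np.log2(probs) + (1 - bits) * np.log2(1 - probs))
  # Uncompressed, each binary symbol costs exactly 1 bit.
  return bits.size / code_length.sum()
```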

@@ -111,7 +111,7 @@ def train():
       decay_steps=decay_steps,
       decay_rate=decay_rate,
       staircase=True)
-  tf.contrib.deprecated.scalar_summary('Learning Rate', learning_rate)
+  tf.summary.scalar('Learning Rate', learning_rate)
   optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
                                      epsilon=1.0)
 
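
The change above swaps the removed `tf.contrib.deprecated.scalar_summary(tag, tensor)` op for its TF 1.x replacement, `tf.summary.scalar(name, tensor)`. A standalone sketch of the new API in graph mode (placeholder hyperparameter values, not the repository's):

```python
import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(
    learning_rate=0.001,   # placeholder initial rate
    global_step=global_step,
    decay_steps=10000,     # placeholder schedule
    decay_rate=0.96,
    staircase=True)

# Registers a scalar summary in the default graph's SUMMARIES collection.
tf.summary.scalar('Learning Rate', learning_rate)

# At session time, every summary registered this way is fetched via one op.
merged_summaries = tf.summary.merge_all()
```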

@@ -202,11 +202,10 @@ def BuildGraph(self, input_codes):
       code_length.append(code_length_block(
           blocks.ConvertSignCodeToZeroOneCode(x),
           blocks.ConvertSignCodeToZeroOneCode(predicted_x)))
-      tf.contrib.deprecated.scalar_summary('code_length_layer_{:02d}'.format(k),
-                                           code_length[-1])
+      tf.summary.scalar('code_length_layer_{:02d}'.format(k), code_length[-1])
     code_length = tf.stack(code_length)
     self.loss = tf.reduce_mean(code_length)
-    tf.contrib.deprecated.scalar_summary('loss', self.loss)
+    tf.summary.scalar('loss', self.loss)
 
     # Loop over all the remaining layers just to make sure they are
     # instantiated. Otherwise, loading model params could fail.
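
For readers unfamiliar with the pattern this hunk touches: a scalar summary is emitted per layer, then the per-layer code lengths are stacked and averaged into the training loss. A self-contained sketch with placeholder tensors standing in for `code_length_block` (not the repository's implementation):

```python
import tensorflow as tf

code_length = []
for k in range(3):
  # Placeholder scalar; in the real model this is the layer's expected
  # code length computed by code_length_block(...).
  layer_code_length = tf.constant(1.0 + 0.1 * k)
  code_length.append(layer_code_length)
  tf.summary.scalar('code_length_layer_{:02d}'.format(k), code_length[-1])

# Stack the per-layer scalars into one tensor; the loss is their mean.
code_length = tf.stack(code_length)
loss = tf.reduce_mean(code_length)
tf.summary.scalar('loss', loss)
```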