# Variational Auto-Encoder for MNIST
An implementation of the variational auto-encoder (VAE) for MNIST described in the paper:

* [Auto-Encoding Variational Bayes](https://arxiv.org/pdf/1312.6114) by Kingma and Welling
## Results
### Reproduce

A well-trained VAE should be able to reproduce its input image.
Figure 5 in the paper shows the reproduction performance of learned generative models for different latent dimensionalities.
The following results can be reproduced with the command:
```
python run_main.py --dim_z <each value> --num_epochs 60
```
<table align='center'>
<tr align='center'>
<td> Input image </td>
<td> 2-D latent space </td>
<td> 5-D latent space </td>
<td> 10-D latent space </td>
<td> 20-D latent space </td>
</tr>
<tr>
<td><img src = 'results/input.jpg' height = '150px'></td>
<td><img src = 'results/dim_z_2.jpg' height = '150px'></td>
<td><img src = 'results/dim_z_5.jpg' height = '150px'></td>
<td><img src = 'results/dim_z_10.jpg' height = '150px'></td>
<td><img src = 'results/dim_z_20.jpg' height = '150px'></td>
</tr>
</table>

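What these models optimize during training is the ELBO from the paper: a reconstruction term plus a closed-form Gaussian KL term, estimated with the reparameterization trick. A minimal NumPy sketch of the per-sample loss is shown below; `decode` is a placeholder for a trained decoder network and the Bernoulli output assumption follows the paper's MNIST setup. This is an illustration, not the repository's code.

```python
import numpy as np

def vae_loss(x, mu, log_var, decode):
    """Single-sample Monte Carlo estimate of the negative ELBO.

    x       : flattened input image, pixel values in [0, 1]
    mu      : mean of the approximate posterior q(z|x)
    log_var : log-variance of q(z|x)
    decode  : maps a latent vector z to Bernoulli pixel probabilities
    """
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
    eps = np.random.randn(*mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps

    # Bernoulli reconstruction loss (binary cross-entropy)
    y = np.clip(decode(z), 1e-8, 1 - 1e-8)
    recon = -np.sum(x * np.log(y) + (1 - x) * np.log(1 - y))

    # KL(q(z|x) || N(0, I)) in closed form (Appendix B of the paper)
    kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

    return recon + kl
```

Minimizing this loss over the encoder and decoder parameters is what drives the reconstruction quality shown above.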
### Denoising
When training, salt & pepper noise is added to the input image so that the VAE learns to reduce the noise and restore the original input image.
The following results can be reproduced with the command:
```
python run_main.py --dim_z 20 --add_noise True --num_epochs 40
```
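The salt & pepper corruption can be sketched as follows; the noise ratio and the 50/50 split between salt and pepper are assumptions for illustration, not values read from the repository's code.

```python
import numpy as np

def salt_and_pepper(images, ratio=0.1, rng=None):
    """Corrupt a fraction `ratio` of pixels: half set to 1 (salt), half to 0 (pepper)."""
    if rng is None:
        rng = np.random.default_rng()
    noisy = images.copy()
    r = rng.random(images.shape)
    noisy[r < ratio / 2] = 0.0                    # pepper: pixel forced to black
    noisy[(r >= ratio / 2) & (r < ratio)] = 1.0   # salt: pixel forced to white
    return noisy
```

During training, the noisy image is fed to the encoder while the clean image is used as the reconstruction target.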
<table align='center'>
<tr align='center'>
<td> Original input image </td>
<td> Input image with noise </td>
<td> Restored image via VAE </td>
</tr>
<tr>
<td><img src = 'results/input.jpg' height = '300px'></td>
<td><img src = 'results/input_noise.jpg' height = '300px'></td>
<td><img src = 'results/denoising.jpg' height = '300px'></td>
</tr>
</table>

### Learned MNIST manifold
Visualizations of the learned data manifold for generative models with a 2-dimensional latent space are given in Figure 4 of the paper.
The following results can be reproduced with the command:
```
python run_main.py --dim_z 2 --num_epochs 60 --PMLR True
```
<table align='center'>
<tr align='center'>
<td> Learned MNIST manifold </td>
<td> Distribution of labeled data </td>
</tr>
<tr>
<td><img src = 'results/PMLR.jpg' height = '400px'></td>
<td><img src = 'results/PMLR_map.jpg' height = '400px'></td>
</tr>
</table>

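The manifold plot is produced by decoding a regular grid of 2-D latent codes, one tile per grid point. A simple sketch of the grid construction is given below; the grid size mirrors the `--PMLR_n_img_x`/`--PMLR_n_img_y` defaults, linear spacing is used for simplicity (the paper maps linearly spaced quantiles through the inverse Gaussian CDF), and `decode` is a placeholder for the trained decoder.

```python
import numpy as np

def latent_grid(n_x=20, n_y=20, z_range=2.0):
    """Regular 2-D grid of latent codes covering [-z_range, z_range]^2."""
    xs = np.linspace(-z_range, z_range, n_x)
    ys = np.linspace(-z_range, z_range, n_y)
    # One (z1, z2) pair per tile of the n_y x n_x manifold image
    return np.array([(x, y) for y in ys for x in xs])

# Each row would be passed through the trained decoder to render one tile:
# tiles = decode(latent_grid())   # hypothetical, shape (400, 28*28)
```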
## Usage
```
python run_main.py --dim_z <latent vector dimension>
```

*Example*: `python run_main.py --dim_z 20`

### Arguments
*Required*:
* `--dim_z`: Dimension of latent vector. *Default*: `20`

*Optional*:
* `--results_path`: File path of output images. *Default*: `results`
* `--add_noise`: Boolean for adding salt & pepper noise to the input image. *Default*: `False`
* `--n_hidden`: Number of hidden units in the MLP. *Default*: `500`
* `--learn_rate`: Learning rate for the Adam optimizer. *Default*: `1e-3`
* `--num_epochs`: Number of epochs to run. *Default*: `20`
* `--batch_size`: Batch size. *Default*: `128`
* `--PRR`: Boolean for plot-reproduce-result. *Default*: `True`
* `--PRR_n_img_x`: Number of images along the x-axis. *Default*: `10`
* `--PRR_n_img_y`: Number of images along the y-axis. *Default*: `10`
* `--PRR_resize_factor`: Resize factor for each displayed image. *Default*: `1.0`
* `--PMLR`: Boolean for plot-manifold-learning-result. *Default*: `False`
* `--PMLR_n_img_x`: Number of images along the x-axis. *Default*: `20`
* `--PMLR_n_img_y`: Number of images along the y-axis. *Default*: `20`
* `--PMLR_resize_factor`: Resize factor for each displayed image. *Default*: `1.0`
* `--PMLR_n_samples`: Number of samples used to plot the distribution of labeled data. *Default*: `5000`
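Note that the boolean flags are passed as literal `True`/`False` strings (e.g. `--PMLR True`), which plain `type=bool` would not handle correctly. A parser for this kind of interface can be sketched as follows; this is a minimal illustration covering a few of the flags, not the repository's actual `run_main.py`.

```python
import argparse

def str2bool(v):
    """Interpret 'True'/'False' command-line strings as booleans."""
    if v.lower() in ('true', '1', 'yes'):
        return True
    if v.lower() in ('false', '0', 'no'):
        return False
    raise argparse.ArgumentTypeError('Boolean value expected.')

def build_parser():
    p = argparse.ArgumentParser(description='VAE for MNIST')
    p.add_argument('--dim_z', type=int, default=20,
                   help='Dimension of latent vector')
    p.add_argument('--num_epochs', type=int, default=20,
                   help='Number of epochs to run')
    p.add_argument('--add_noise', type=str2bool, default=False,
                   help='Add salt & pepper noise to input images')
    p.add_argument('--PMLR', type=str2bool, default=False,
                   help='Plot the learned manifold (requires --dim_z 2)')
    return p
```

The `str2bool` converter is what lets `--PMLR True` and `--PMLR False` both parse as real booleans rather than truthy strings.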
## References
The implementation is based on the following projects:

[1] https://github.com/oduerr/dl_tutorial/tree/master/tensorflow/vae

[2] https://github.com/fastforwardlabs/vae-tf/tree/master

[3] https://github.com/kvfrans/variational-autoencoder

[4] https://github.com/altosaar/vae

## Acknowledgements
This implementation has been tested with TensorFlow r0.12 on Windows 10 and Ubuntu 14.04.
