
Commit ba90f41

rearange files

1 parent a5d37c4 commit ba90f41

File tree

40 files changed
+1961 −797 lines changed

README.md (+1 −110)
@@ -1,110 +1 @@
# Deep Learning for Everyone, Season 2: Deep Learning for Everyone, Made by Everyone

Professor Sung Kim's Deep Learning for Everyone is back!

This course is a revised edition of, and successor to, 'Deep Learning for Everyone' (https://hunkim.github.io/ml/), created by Professor Sung Kim in 2016.

"Watching the matches between AlphaGo and Lee Sedol, I came to think that, given enough data, machine learning can now perform as well as or even better than we do at intuition and decision-making, abilities we used to consider human strengths. As Professor Andrew Ng has said, I believe that in such an era, understanding machine learning well and handling it skillfully truly gives you a 'Super Power'.

I prepared these video lectures so that more people can deepen their understanding of machine learning and deep learning and solve their own problems with these wonderful tools. Going beyond theory, the lectures also let you implement what you learn with open-source machine learning frameworks: TensorFlow, released by Google, and PyTorch, released by Facebook.

We tried to make the course easy to follow even without a background in mathematics or computer science."

- Sung Kim (김성훈), Professor of Computer Science and Engineering, HKUST

## TensorFlow

Deep Learning Zero to All - TensorFlow

This is the GitHub documentation for the TensorFlow version.

The code is currently written against TensorFlow 1.12 (stable); we plan to update it once TensorFlow 2.0 is released.

## Install Requirements

```bash
pip install -r requirements.txt
```
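
Since the labs target TensorFlow 1.12 and many of them use the Keras + eager style, it can help to confirm the installed version and turn on eager execution up front. A minimal sanity check (an illustrative sketch, assuming TensorFlow 1.x is installed):

```python
import tensorflow as tf

print(tf.__version__)          # the labs target 1.12 (stable)
tf.enable_eager_execution()    # eager mode is opt-in in TF 1.x
print(tf.executing_eagerly())  # True once eager execution is active
```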

## Contributions/Comments

We always welcome your participation. Please leave comments or pull requests.

------------------------------------

### Guide for Docker Users

See the [docker_user_guide.md](docker_user_guide.md) file! :)

### Table of Contents

* Lec 01: Basic Machine Learning terms and concepts
* Lab 01: (to be added)
* Lec 02: Simple Linear Regression
* Lab 02: Implementing Simple Linear Regression in TensorFlow
* Lec 03: Linear Regression and How to minimize cost
* Lab 03: Implementing Linear Regression and cost minimization in TensorFlow
* Lec 04: Multi-variable Linear Regression
* Lab 04: Implementing Multi-variable Linear Regression in TensorFlow
* Lec 05-1: Introduction to Logistic Regression/Classification
* Lec 05-2: Cost function and minimization for Logistic Regression/Classification
* Lab 05-3: Implementing Logistic Regression/Classification in TensorFlow
* Lec 06-1: Softmax Regression: basic concepts
* Lec 06-2: Cost function of the Softmax Classifier
* Lab 06-1: Implementing a Softmax classifier in TensorFlow
* Lab 06-2: Implementing a Fancy Softmax classifier in TensorFlow
* Lec 07-1: Application & Tips: Learning Rate and Data Preprocessing
* Lec 07-2: Application & Tips: Overfitting & Solutions
* Lab 07-1: Application & Tips: learning rate, preprocessing, and overfitting in TensorFlow
* Lec 07-3: Application & Tips: Data & Learning
* Lab 07-2: Application & Tips: hands-on practice with various Datasets
* Lec 08-1: Deep learning basics: origins and the XOR problem
* Lec 08-2: Deep learning basics 2: Back-propagation and the 2006/2007 emergence of 'deep'
* Lec 09-1: Solving the XOR problem with deep learning
* Lec 09-2: Training deep networks (backpropagation)
* Lab 09-1: Neural Net for XOR
* Lab 09-2: Tensorboard (Neural Net for XOR)
* Lab 10-1: ReLU works better than Sigmoid
* Lab 10-2: Initializing weights well
* Lab 10-3: Dropout
* Lab 10-4: Batch Normalization
* Lec 11-1: Building the Conv layer of a ConvNet
* Lec 11-2: ConvNet Max pooling and the Full Network
* Lec 11-3: ConvNet use cases
* Lab 11-0: CNN Basic: Convolution
* Lab 11-0: CNN Basic: Pooling
* Lab 11-1: mnist cnn keras sequential eager
* Lab 11-2: mnist cnn keras functional eager
* Lab 11-3: mnist cnn keras subclassing eager
* Lab 11-4: mnist cnn keras ensemble eager
* Lab 11-5: mnist cnn best keras eager
* Lec 12: RNN, the flower of NNs
* [Lab 12-0: rnn basics](https://nbviewer.jupyter.org/github/deeplearningzerotoall/TensorFlow/blob/master/lab-12-0-rnn-basics-keras-eager.ipynb)
* [Lab 12-1: many to one (word sentiment classification)](https://nbviewer.jupyter.org/github/deeplearningzerotoall/TensorFlow/blob/master/lab-12-1-many-to-one-keras-eager.ipynb)
* [Lab 12-2: many to one stacked (sentence classification, stacked)](https://nbviewer.jupyter.org/github/deeplearningzerotoall/TensorFlow/blob/master/lab-12-2-many-to-one-stacking-keras-eager.ipynb)
* [Lab 12-3: many to many (simple pos-tagger training)](https://nbviewer.jupyter.org/github/deeplearningzerotoall/TensorFlow/blob/master/lab-12-3-many-to-many-keras-eager.ipynb)
* [Lab 12-4: many to many bidirectional (simple pos-tagger training, bidirectional)](https://nbviewer.jupyter.org/github/deeplearningzerotoall/TensorFlow/blob/master/lab-12-4-many-to-many-bidirectional-keras-eager.ipynb)
* [Lab 12-5: seq to seq (simple neural machine translation)](https://github.com/deeplearningzerotoall/TensorFlow/blob/master/lab-12-5-seq-to-seq-keras-eager.ipynb)
* [Lab 12-6: seq to seq with attention (simple neural machine translation, attention)](https://github.com/deeplearningzerotoall/TensorFlow/blob/master/lab-12-6-seq-to-seq-with-attention-keras-eager.ipynb)

--------------------------

### Contributors

Main Instructor
* Prof. Kim (https://github.com/hunkim)

Main Creators
* 김보섭 (https://github.com/aisolab)
* 김수상 (https://github.com/healess)
* 김준호 (https://github.com/taki0112)
* 신성진 (https://github.com/aiscientist)
* 이승준 (https://github.com/FinanceData)
* 이진원 (https://github.com/jwlee-ml)

Docker Developer
* 오상준 (https://github.com/juneoh)

Support
* NAVER Connect Foundation: 이효은, 장지수, 임우담
## Tensorflow Keras + Eager version
@@ -0,0 +1,195 @@
```python
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist
from time import time
import os

def save(sess, saver, checkpoint_dir, model_name, step):
    # Create the checkpoint directory on first save.
    if not os.path.exists(checkpoint_dir):
        os.makedirs(checkpoint_dir)

    saver.save(sess, os.path.join(checkpoint_dir, model_name + '.model'), global_step=step)


def load(sess, saver, checkpoint_dir):
    print(" [*] Reading checkpoints...")

    ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
    if ckpt:
        ckpt_name = os.path.basename(ckpt.model_checkpoint_path)
        saver.restore(sess, os.path.join(checkpoint_dir, ckpt_name))
        counter = int(ckpt_name.split('-')[-1])  # global step is encoded in the checkpoint file name
        print(" [*] Success to read {}".format(ckpt_name))
        return True, counter
    else:
        print(" [*] Failed to find a checkpoint")
        return False, 0

def normalize(X_train, X_test):
    # Scale pixel values from [0, 255] to [0, 1].
    X_train = X_train / 255.0
    X_test = X_test / 255.0

    return X_train, X_test

def load_mnist():
    (train_data, train_labels), (test_data, test_labels) = mnist.load_data()
    train_data = np.expand_dims(train_data, axis=-1)  # [N, 28, 28] -> [N, 28, 28, 1]
    test_data = np.expand_dims(test_data, axis=-1)    # [N, 28, 28] -> [N, 28, 28, 1]

    train_data, test_data = normalize(train_data, test_data)

    train_labels = to_categorical(train_labels, 10)  # [N,] -> [N, 10]
    test_labels = to_categorical(test_labels, 10)    # [N,] -> [N, 10]

    return train_data, train_labels, test_data, test_labels

def classification_loss(logit, label):
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=label, logits=logit))
    prediction = tf.equal(tf.argmax(logit, -1), tf.argmax(label, -1))
    accuracy = tf.reduce_mean(tf.cast(prediction, tf.float32))

    return loss, accuracy

def network(x, reuse=False):
    with tf.variable_scope('network', reuse=reuse):
        x = tf.layers.flatten(x)  # [N, 28, 28, 1] -> [N, 784]

        weight_init = tf.random_normal_initializer()

        # [N, 784] -> [N, 10]
        hypothesis = tf.layers.dense(inputs=x, units=10, use_bias=True, kernel_initializer=weight_init, name='fully_connected_logit')

        return hypothesis  # hypothesis = logit


""" dataset """
train_x, train_y, test_x, test_y = load_mnist()

""" parameters """
learning_rate = 0.001
batch_size = 128

training_epochs = 1
training_iterations = len(train_x) // batch_size

img_size = 28
c_dim = 1
label_dim = 10

train_flag = True

""" Graph Input using Dataset API """
train_dataset = tf.data.Dataset.from_tensor_slices((train_x, train_y)).\
    shuffle(buffer_size=100000).\
    prefetch(buffer_size=batch_size).\
    batch(batch_size).\
    repeat()

test_dataset = tf.data.Dataset.from_tensor_slices((test_x, test_y)).\
    shuffle(buffer_size=100000).\
    prefetch(buffer_size=len(test_x)).\
    batch(len(test_x)).\
    repeat()

""" Model """
train_iterator = train_dataset.make_one_shot_iterator()
test_iterator = test_dataset.make_one_shot_iterator()

train_inputs, train_labels = train_iterator.get_next()
test_inputs, test_labels = test_iterator.get_next()

train_logits = network(train_inputs)
test_logits = network(test_inputs, reuse=True)  # share weights with the training network

train_loss, train_accuracy = classification_loss(logit=train_logits, label=train_labels)
_, test_accuracy = classification_loss(logit=test_logits, label=test_labels)

""" Training """
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(train_loss)

""" Summary """
summary_train_loss = tf.summary.scalar("train_loss", train_loss)
summary_train_accuracy = tf.summary.scalar("train_accuracy", train_accuracy)

summary_test_accuracy = tf.summary.scalar("test_accuracy", test_accuracy)

train_summary = tf.summary.merge([summary_train_loss, summary_train_accuracy])
test_summary = tf.summary.merge([summary_test_accuracy])


with tf.Session() as sess:
    tf.global_variables_initializer().run()
    start_time = time()

    saver = tf.train.Saver()
    checkpoint_dir = 'checkpoints'
    logs_dir = 'logs'

    model_dir = 'nn_softmax'
    model_name = 'dense'

    checkpoint_dir = os.path.join(checkpoint_dir, model_dir)
    logs_dir = os.path.join(logs_dir, model_dir)

    if train_flag:
        writer = tf.summary.FileWriter(logs_dir, sess.graph)
    else:
        writer = None

    # restore checkpoint if it exists
    could_load, checkpoint_counter = load(sess, saver, checkpoint_dir)

    if could_load:
        start_epoch = int(checkpoint_counter / training_iterations)
        start_batch_index = checkpoint_counter - start_epoch * training_iterations
        counter = checkpoint_counter
        print(" [*] Load SUCCESS")
    else:
        start_epoch = 0
        start_batch_index = 0
        counter = 1
        print(" [!] Load failed...")

    if train_flag:
        """ Training phase """
        for epoch in range(start_epoch, training_epochs):
            for idx in range(start_batch_index, training_iterations):

                # train
                _, summary_str, train_loss_val, train_accuracy_val = sess.run([optimizer, train_summary, train_loss, train_accuracy])
                writer.add_summary(summary_str, counter)

                # test
                summary_str, test_accuracy_val = sess.run([test_summary, test_accuracy])
                writer.add_summary(summary_str, counter)

                counter += 1
                print("Epoch: [%2d] [%5d/%5d] time: %4.4f, train_loss: %.8f, train_accuracy: %.2f, test_accuracy: %.2f" \
                      % (epoch, idx, training_iterations, time() - start_time, train_loss_val, train_accuracy_val, test_accuracy_val))

            start_batch_index = 0
            save(sess, saver, checkpoint_dir, model_name, counter)

        save(sess, saver, checkpoint_dir, model_name, counter)
        print('Learning Finished!')

        test_accuracy_val = sess.run(test_accuracy)
        print("Test accuracy: %.8f" % (test_accuracy_val))

    else:
        """ Test phase """
        test_accuracy_val = sess.run(test_accuracy)
        print("Test accuracy: %.8f" % (test_accuracy_val))

        """ Get test image """
        r = np.random.randint(low=0, high=len(test_x) - 1)
        print("Label: ", np.argmax(test_y[r: r+1], axis=-1))
        # Feed a single image through the shared test network; the iterator's
        # output tensor is overridden via feed_dict here.
        print("Prediction: ", sess.run(tf.argmax(test_logits, axis=-1), feed_dict={test_inputs: test_x[r: r+1]}))

        plt.imshow(test_x[r:r + 1].reshape(28, 28), cmap='Greys', interpolation='nearest')
        plt.show()
```
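
For comparison with the "Keras + Eager" style the heading above refers to, here is a minimal sketch of the same single-layer softmax classifier written that way, assuming TensorFlow 1.12 with eager execution enabled. This sketch is illustrative, not code from the commit; the graph/Session program above is what was actually committed.

```python
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

tf.enable_eager_execution()

# Same preprocessing as above: scale pixels to [0, 1], one-hot encode labels.
(train_x, train_y), (test_x, test_y) = mnist.load_data()
train_x, test_x = train_x / 255.0, test_x / 255.0
train_y, test_y = to_categorical(train_y, 10), to_categorical(test_y, 10)

# Flatten [28, 28] -> [784], then one dense softmax layer,
# mirroring network() in the graph version.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation='softmax'),
])

model.compile(optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(train_x, train_y, batch_size=128, epochs=1)
test_loss, test_accuracy = model.evaluate(test_x, test_y)
print("Test accuracy: %.8f" % test_accuracy)
```

The Keras version folds the explicit Dataset iterators, variable scopes, summaries, and checkpoint bookkeeping into `compile`/`fit`, which is the main simplification the eager labs in this repository rely on.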

0 commit comments