cum_loss4x300.txt
LR: 0.005
decay: 0.5
RMSprop
batch size: 2
1000 epochs
5 layers, 300 neurons per layer (363,300 parameters total; count checked in the sketch below).
1 input, 10 output.
Got it down to ~150. (worse, though)
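
Side check: the quoted totals line up with a weights-only count, i.e. ignoring any bias terms. Quick Python check of that arithmetic; the function name is mine, not from the project:

def weight_count(n_in, n_hidden, n_layers, n_out):
    # input -> first hidden, (n_layers - 1) hidden -> hidden,
    # last hidden -> output; biases assumed uncounted.
    return n_in * n_hidden + (n_layers - 1) * n_hidden ** 2 + n_hidden * n_out

print(weight_count(1, 300, 5, 10))  # 363300, matching the entry above
print(weight_count(1, 20, 5, 10))   # 1820, matching the 5x20 entries below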

cum_loss_largebatch_20.txt
LR: 0.005
decay: 0.5
RMSprop
batch size: 20
1000 epochs
5 layers, 300 neurons per layer.
1 input, 10 output.
Got it down to ~150. (worse)
This took 78.05 seconds.

cum_loss_largebatch_50.txt
LR: 0.005
decay: 0.5
RMSprop
batch size: 50
1000 epochs
5 layers, 300 neurons per layer.
1 input, 10 output.
Got it down to ~150.
This took 41.3 seconds. (Larger batches mean fewer updates per epoch; see the loop sketch below.)
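
The timings track the number of parameter updates: one epoch does roughly N / batch_size updates, so batch 50 does 25x fewer updates than batch 2. A sketch of that epoch loop, with hypothetical names (X, Y, and update_step are placeholders, not the project's actual code):

import numpy as np

def run_epoch(X, Y, batch_size, update_step, rng):
    # Shuffle once per epoch, then walk the data in minibatches.
    # Each minibatch costs one gradient pass plus one update.
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = order[start:start + batch_size]
        update_step(X[batch], Y[batch])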

cum_loss_lr_0.0005.txt
LR: 0.0005
decay: 0.9
RMSprop (update rule sketched below)
batch size: 50
1000 epochs
5 layers, 300 neurons per layer.
1 input, 10 output.
Got it down to ~25. (best result)
This took 38.7 seconds.
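
For reference, the RMSprop update, where "decay" above is the moving-average coefficient on the squared gradients. A minimal numpy sketch, not the project's implementation:

import numpy as np

def rmsprop_step(w, grad, cache, lr=0.0005, decay=0.9, eps=1e-8):
    # cache: running average of squared gradients, same shape as w.
    cache = decay * cache + (1 - decay) * grad ** 2
    # Scale the step by the RMS of recent gradients.
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache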

cum_loss_lr_0.00005.txt
LR: 0.00005
decay: 0.9
RMSprop
batch size: 50
1000 epochs
5 layers, 300 neurons per layer.
1 input, 10 output.
Got it down to ~25. (best result)
This took 38.7 seconds.

cum_loss_newinit.txt
init: stddev 2, whole thing scaled by 0.01.
LR: 0.0005
decay: 0.9
RMSprop
batch size: 50
2000 epochs
5 layers, 300 neurons per layer.
1 input, 10 output.
Got it down to ~200.
This now gives a normal-looking spectrum.
This took 39.7 seconds.

cum_loss_good.txt
init: stddev 0.1, no scaling (init scheme sketched below).
LR: 0.00005
decay: 0.9
RMSprop
batch size: 50
2000 epochs
5 layers, 300 neurons per layer.
1 input, 10 output.
Got it down to ~2.0.
This gives a good spectrum (taking residuals).
This took 40 seconds.
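
All the init variants here are zero-mean Gaussian draws with an optional global rescale. A sketch, assuming one weight matrix per layer and no biases; the names are mine:

import numpy as np

def init_weights(layer_sizes, stddev=0.1, scale=1.0, seed=0):
    # layer_sizes e.g. [1, 300, 300, 300, 300, 300, 10]
    # Each matrix ~ N(0, stddev^2), optionally scaled afterwards
    # (e.g. "stddev 2, scaled by 0.01" or "stddev 0.1, scaled by 0.5").
    rng = np.random.default_rng(seed)
    return [scale * rng.normal(0.0, stddev, size=(a, b))
            for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]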

cum_loss_dev0.5.txt
init: stddev 0.5, no scaling.
LR: 0.00005
decay: 0.9
RMSprop
batch size: 50
2000 epochs
5 layers, 300 neurons per layer.
1 input, 10 output.
Got it down to ~3.5.
This gives a good spectrum (taking residuals).
This took 40 seconds.

cum_loss_dev0.1scaled.txt
init: stddev 0.1, scaled by 0.5.
LR: 0.00005
decay: 0.9
RMSprop
batch size: 50
2000 epochs
5 layers, 300 neurons per layer.
1 input, 10 output.
Got it down to ~2.5.
This gives a good spectrum (taking residuals).
This took 40 seconds.

cum_loss_lr_0.000005.txt
init: stddev 0.1, no scaling.
LR: 0.000005
decay: 0.9
RMSprop
batch size: 50
2000 epochs
5 layers, 300 neurons per layer.
1 input, 10 output.
Got it down to ~1000.
This took 40 seconds.

cum_loss_longrun.txt
init: stddev 0.1.
LR: 0.00005
decay: 0.9
RMSprop
batch size: 50
3000 epochs
5 layers, 300 neurons per layer.
1 input, 10 output.
Got it down to ~1.6.
This took 40 seconds.

cum_loss_small.txt
init: stddev 0.1.
LR: 0.00005
decay: 0.9
RMSprop
batch size: 50
2000 epochs
5 layers, 100 neurons per layer.
1 input, 10 output.
Got it down to ~150.
This took 11 seconds.

cum_loss_small10000.txt
init: stddev 0.1.
LR: 0.00005
decay: 0.9
RMSprop
batch size: 50
10000 epochs
5 layers, 100 neurons per layer.
1 input, 10 output.
Got it down to ~0.276.
This took 56 seconds.

cum_loss_3x50.txt
init: stddev 0.1.
LR: 0.00005
decay: 0.9
RMSprop
batch size: 50
10000 epochs
3 layers, 50 neurons per layer.
1 input, 10 output.
Got it down to ~13.3.
This took 32 seconds.

cum_loss_3x50_long.txt
init: stddev 0.1.
LR: 0.00005
decay: 0.9
RMSprop
batch size: 50
50,000 epochs
3 layers, 50 neurons per layer (5,100).
1 input, 10 output.
Got it down to ~0.11.
This took 32 seconds.

cum_loss_5x20_long.txt (overwritten)
init: stddev 0.1.
LR: 0.00005
decay: 0.9
RMSprop
batch size: 50
50,000 epochs
5 layers, 20 neurons per layer (1,820).
1 input, 10 output.
Got it down to ~0.23.
This took 145 seconds.
We are going to use this network.

cum_loss_5x20_long.txt (rerun)
init: stddev 0.1.
LR: 0.00005
decay: 0.9
RMSprop
batch size: 50
50,000 epochs
5 layers, 20 neurons per layer (1,820).
1 input, 10 output.
Got it down to ~1.06.
This took 144 seconds.
We are going to use this network.
These weights are saved as the "copy" versions for backup (save/load sketched below).
Cool beans. It works!
Now I need to generate the fixed y data.
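
The backup step, sketched with numpy and a list-of-arrays weight layout; the filename and layout are assumptions, not the actual ones:

import numpy as np

def save_weights(weights, path="weights_5x20_copy.npz"):
    # One array per layer: w0, w1, ...
    np.savez(path, **{f"w{i}": w for i, w in enumerate(weights)})

def load_weights(path="weights_5x20_copy.npz"):
    with np.load(path) as data:
        return [data[f"w{i}"] for i in range(len(data.files))]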

large_complex_net_5x20.txt
Used a larger dataset.
init: stddev 0.1
LR: 0.00005
decay: 0.9
RMSprop
batch size: 200
100,000 epochs
5 layers, 20 neurons per layer.
1 input, 100 output.
Got it down to ~200.
This took 4538.98 seconds.
We are now going to test it.
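
Putting the final configuration together (5 hidden layers of 20 units, 1 input, 100 outputs, RMSprop at LR 0.00005 with decay 0.9, batch 200): a self-contained sketch of what a run like this could look like. Everything beyond those hyperparameters is an assumption (tanh hiddens, linear output, MSE loss, no biases). Note that with 100 outputs the weights-only count works out to 3,620 rather than the 1,820 of the 10-output net.

import numpy as np

def init_weights(sizes, stddev=0.1, seed=0):
    rng = np.random.default_rng(seed)
    return [rng.normal(0.0, stddev, size=(a, b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(ws, x):
    # tanh hidden layers, linear output layer (an assumption).
    acts = [x]
    for w in ws[:-1]:
        acts.append(np.tanh(acts[-1] @ w))
    acts.append(acts[-1] @ ws[-1])
    return acts

def backward(ws, acts, y):
    # Gradients of mean squared error w.r.t. each weight matrix.
    n = y.shape[0]
    delta = 2.0 * (acts[-1] - y) / n
    grads = [None] * len(ws)
    grads[-1] = acts[-2].T @ delta
    for i in range(len(ws) - 2, -1, -1):
        delta = (delta @ ws[i + 1].T) * (1.0 - acts[i + 1] ** 2)  # tanh'
        grads[i] = acts[i].T @ delta
    return grads

def train(X, Y, sizes, lr=5e-5, decay=0.9, batch=200, epochs=100_000, eps=1e-8):
    ws = init_weights(sizes)
    caches = [np.zeros_like(w) for w in ws]
    rng = np.random.default_rng(1)
    for _ in range(epochs):
        order = rng.permutation(len(X))
        for s in range(0, len(X), batch):
            b = order[s:s + batch]
            acts = forward(ws, X[b])
            for j, g in enumerate(backward(ws, acts, Y[b])):
                caches[j] = decay * caches[j] + (1 - decay) * g ** 2
                ws[j] -= lr * g / (np.sqrt(caches[j]) + eps)
    return ws

# e.g. ws = train(X, Y, sizes=[1, 20, 20, 20, 20, 20, 100])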