|
82 | 82 | 2, |
83 | 83 | None, |
84 | 84 | 'good-books-with-hands-on-material-and-codes'), |
| 85 | + ('Test yourself: Deep learning 1', |
| 86 | + 2, |
| 87 | + None, |
| 88 | + 'twst-yourself-deep-learning-1'), |
| 89 | + ('Test yourself: Deep learning 2', |
| 90 | + 2, |
| 91 | + None, |
| 92 | + 'test-yourself-deep-learning-2'), |
| 93 | + ('Test yourself: Optimization part', |
| 94 | + 2, |
| 95 | + None, |
| 96 | + 'test-yourself-optimization-part'), |
| 97 | + ('Test yourself: Analysis of results', |
| 98 | + 2, |
| 99 | + None, |
| 100 | + 'test-yourself-analysis-of-results'), |
85 | 101 | ('Types of machine learning', |
86 | 102 | 2, |
87 | 103 | None, |
|
358 | 374 | <!-- navigation toc: --> <li><a href="#gaussian-processes-and-bayesian-analysis" style="font-size: 80%;">Gaussian processes and Bayesian analysis</a></li> |
359 | 375 | <!-- navigation toc: --> <li><a href="#hpc-path" style="font-size: 80%;">HPC path</a></li> |
360 | 376 | <!-- navigation toc: --> <li><a href="#good-books-with-hands-on-material-and-codes" style="font-size: 80%;">Good books with hands-on material and codes</a></li> |
| 377 | + <!-- navigation toc: --> <li><a href="#twst-yourself-deep-learning-1" style="font-size: 80%;">Test yourself: Deep learning 1</a></li> |
| 378 | + <!-- navigation toc: --> <li><a href="#test-yourself-deep-learning-2" style="font-size: 80%;">Test yourself: Deep learning 2</a></li> |
| 379 | + <!-- navigation toc: --> <li><a href="#test-yourself-optimization-part" style="font-size: 80%;">Test yourself: Optimization part</a></li> |
| 380 | + <!-- navigation toc: --> <li><a href="#test-yourself-analysis-of-results" style="font-size: 80%;">Test yourself: Analysis of results</a></li> |
361 | 381 | <!-- navigation toc: --> <li><a href="#types-of-machine-learning" style="font-size: 80%;">Types of machine learning</a></li> |
362 | 382 | <!-- navigation toc: --> <li><a href="#main-categories" style="font-size: 80%;">Main categories</a></li> |
363 | 383 | <!-- navigation toc: --> <li><a href="#the-plethora-of-machine-learning-algorithms-methods" style="font-size: 80%;">The plethora of machine learning algorithms/methods</a></li> |
@@ -643,6 +663,44 @@ <h2 id="good-books-with-hands-on-material-and-codes" class="anchor">Good books w |
643 | 663 | from Goodfellow, Bengio and Courville's text <a href="https://www.deeplearningbook.org/" target="_self">Deep Learning</a> |
644 | 664 | </p> |
645 | 665 |
|
| 666 | +<!-- !split --> |
| 667 | +<h2 id="twst-yourself-deep-learning-1" class="anchor">Test yourself: Deep learning 1 </h2> |
| 668 | + |
| 669 | +<ol> |
| 670 | +<li> Describe the architecture of a typical feed-forward neural network (NN).</li> |
| 671 | +<li> What is an activation function, and why do we use one?</li> |
| 672 | +<li> Can you name and explain three different types of activation functions?</li> |
| 673 | +<li> You are using a deep neural network for a prediction task. After training your model, you notice that it is strongly overfitting the training set and that the performance on the test set is poor. What can you do to reduce overfitting?</li> |
| 674 | +<li> How would you know if your model is suffering from the problem of exploding gradients?</li> |
| 675 | +<li> Can you name and explain a few hyperparameters used for training a neural network?</li> |
| 676 | +</ol> |
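As a hint for questions 1–3 above, here is a minimal NumPy sketch of a feed-forward pass through one hidden layer, together with three common activation functions (the layer sizes and random weights are purely illustrative):

```python
import numpy as np

# Three common activation functions (question 3)
def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    return np.tanh(z)

rng = np.random.default_rng(0)

# A tiny feed-forward network: 4 inputs -> 3 hidden units -> 1 output
W1 = rng.normal(size=(3, 4)); b1 = np.zeros(3)
W2 = rng.normal(size=(1, 3)); b2 = np.zeros(1)

def forward(x):
    # Hidden layer: affine transformation followed by a nonlinear activation
    h = relu(W1 @ x + b1)
    # Output layer: sigmoid squashes the output into (0, 1),
    # suitable for binary classification
    return sigmoid(W2 @ h + b2)

x = rng.normal(size=4)
y = forward(x)
```

Without the nonlinear activation, stacking layers would collapse into a single affine map, which is the key point behind question 2.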
| 677 | +<!-- !split --> |
| 678 | +<h2 id="test-yourself-deep-learning-2" class="anchor">Test yourself: Deep learning 2 </h2> |
| 679 | + |
| 680 | +<ol> |
| 681 | +<li> Describe the architecture of a typical Convolutional Neural Network (CNN).</li> |
| 682 | +<li> What is the vanishing gradient problem in neural networks, and how can it be fixed?</li> |
| 683 | +<li> When training an artificial neural network, what could be the reason that the cost/loss does not decrease during the first few epochs?</li> |
| 684 | +<li> How does L1/L2 regularization affect a neural network?</li> |
| 685 | +<li> What are the advantages of deep learning over traditional methods such as linear or logistic regression?</li> |
| 686 | +</ol> |
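For question 2, the vanishing gradient problem can be illustrated numerically: in backpropagation through many sigmoid layers, the gradient picks up one factor of the sigmoid derivative per layer, and since that derivative never exceeds 0.25 the product shrinks geometrically with depth (the depth of 20 chosen here is just for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # Derivative of the sigmoid; its maximum value is 0.25, at z = 0
    s = sigmoid(z)
    return s * (1.0 - s)

# Even in the best case (z = 0 in every layer), a 20-layer chain of
# sigmoids multiplies the gradient by 0.25 per layer:
depth = 20
grad_factor = sigmoid_prime(0.0) ** depth   # 0.25**20, about 9.1e-13
```

This is one reason ReLU-type activations (whose derivative is 1 on the active region) and architectures with skip connections are popular remedies.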
| 687 | +<!-- !split --> |
| 688 | +<h2 id="test-yourself-optimization-part" class="anchor">Test yourself: Optimization part </h2> |
| 689 | + |
| 690 | +<ol> |
| 691 | +<li> Which basic mathematical root-finding method underlies essentially all gradient descent approaches (stochastic and non-stochastic)?</li> |
| 692 | +<li> And why don't we use it directly? Stated differently, why do we introduce the learning rate as a parameter?</li> |
| 693 | +<li> What might happen if you set the momentum hyperparameter too close to 1 (e.g., 0.9999) when using an optimizer for the learning rate?</li> |
| 694 | +<li> Why should we use stochastic gradient descent instead of plain gradient descent?</li> |
| 695 | +<li> Which parameters would you need to tune when using a stochastic gradient descent approach?</li> |
| 696 | +</ol> |
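Questions 2–3 can be explored with a minimal NumPy sketch of gradient descent with momentum on a simple quadratic cost (the matrix, learning rate, and momentum value here are illustrative choices, not prescribed by the text):

```python
import numpy as np

# Quadratic cost f(x) = 0.5 x^T A x - b^T x, with gradient A x - b
# and exact minimizer x* = A^{-1} b, so convergence is easy to check.
A = np.array([[3.0, 0.2],
              [0.2, 2.0]])
b = np.array([1.0, 1.0])

def grad(x):
    return A @ x - b

x = np.zeros(2)          # starting point
v = np.zeros(2)          # momentum ("velocity") term
eta, momentum = 0.1, 0.9 # learning rate and momentum hyperparameters

for _ in range(500):
    # Momentum accumulates a decaying average of past gradients;
    # values of momentum very close to 1 can overshoot and oscillate
    v = momentum * v - eta * grad(x)
    x = x + v

x_star = np.linalg.solve(A, b)  # exact minimizer, for comparison
```

Replacing the full gradient by the gradient on a random minibatch turns this loop into stochastic gradient descent, which trades exactness per step for much cheaper iterations on large data sets.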
| 697 | +<!-- !split --> |
| 698 | +<h2 id="test-yourself-analysis-of-results" class="anchor">Test yourself: Analysis of results </h2> |
| 699 | +<ol> |
| 700 | +<li> How do you assess overfitting and underfitting?</li> |
| 701 | +<li> Why do we divide the data into training and test sets, and possibly also a validation set?</li> |
| 702 | +<li> Why would you use resampling methods in the data analysis? Name some widely used resampling methods.</li> |
| 703 | +</ol> |
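All three questions above can be tried out on synthetic data: hold out a test set to compare training and test errors (question 1 and 2), and bootstrap the training set to estimate the spread of the fitted parameters (question 3). The data-generating model and all sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a known linear model: y = 2x + 0.5 + noise
x = rng.uniform(0.0, 1.0, size=100)
y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=100)

# Hold out 20% as a test set; comparing train and test MSE is the
# basic diagnostic for over- and underfitting
idx = rng.permutation(100)
train, test = idx[:80], idx[80:]

coef = np.polyfit(x[train], y[train], deg=1)   # fit on training data only
mse_train = np.mean((np.polyval(coef, x[train]) - y[train]) ** 2)
mse_test = np.mean((np.polyval(coef, x[test]) - y[test]) ** 2)

# Bootstrap: refit on resamples of the training set (drawn with
# replacement) to estimate the variability of the fitted slope
slopes = []
for _ in range(200):
    s = rng.choice(train, size=train.size, replace=True)
    slopes.append(np.polyfit(x[s], y[s], deg=1)[0])
slope_std = np.std(slopes)
```

Cross-validation, the jackknife, and the bootstrap used here are the resampling methods most commonly mentioned in this context.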
646 | 704 | <!-- !split --> |
647 | 705 | <h2 id="types-of-machine-learning" class="anchor">Types of machine learning </h2> |
648 | 706 |
|