
Course Notes: Neural Networks and Deep Learning

Course notes aggregated from Roger Grosse and Jimmy Ba's Neural Networks and Deep Learning class at the University of Toronto.


Table of Contents

  1. Introduction

    • Introduction to the course content and foundational concepts.
  2. Linear_Regression

    • Introduction to linear regression.
    • Mathematical foundations and applications.
  3. Linear_Classifiers

    • Overview of linear classifiers.
    • Perceptron, support vector machines, and logistic regression.
  4. Training_a_Classifier

    • Techniques for training classifiers.
    • Loss functions, gradient descent, and overfitting.
  5. Multilayer_Perceptrons

    • Detailed explanation of multilayer perceptrons (MLPs).
    • Activation functions, architecture, and training.
  6. Backpropagation

    • Understanding backpropagation.
    • Derivation and practical implementation in neural networks.
  7. Distributed_Representations

    • Concept of distributed representations in neural networks.
    • Word embeddings and feature learning.
  8. Automatic_Differentiation

    • Overview of automatic differentiation.
    • Techniques and applications in deep learning.
  9. Optimization

    • Methods for optimization in machine learning.
    • Gradient descent variants and other optimization algorithms.
  10. Convolutional_Networks

    • Introduction to convolutional neural networks (CNNs).
    • Architectures, convolution operations, and applications.
  11. Image_Classification

    • Techniques and models for image classification.
    • CNNs, data augmentation, and transfer learning.
  12. Generalization

    • Concepts of generalization in machine learning models.
    • Bias-variance tradeoff and regularization techniques.
  13. Recurrent_Neural_Nets

    • Recurrent neural networks (RNNs) and their applications.
    • Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU).
  14. Exploding_and_Vanishing_Gradients

    • Challenges of training deep networks.
    • Techniques to address exploding and vanishing gradients.
  15. Autoregressive_and_Reversible_Models

    • Overview of autoregressive and reversible models.
    • Applications in sequence modeling and generative models.
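As a small taste of the material covered in the Linear_Regression, Training_a_Classifier, and Optimization sections, here is a minimal sketch of fitting a linear model with batch gradient descent on mean squared error. The data and hyperparameters are illustrative, not taken from the notes:

```python
import numpy as np

# Illustrative synthetic data: y = 2x + 1 plus a little noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=100)

w, b = 0.0, 0.0   # model parameters: slope and intercept
lr = 0.1          # learning rate

for _ in range(500):
    y_hat = w * x + b
    err = y_hat - y
    # Gradients of the mean squared error (1/2N) * sum(err^2)
    grad_w = np.mean(err * x)
    grad_b = np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # should land close to the true slope 2 and intercept 1
```

The same update rule (parameter minus learning rate times gradient) generalizes directly to the classifiers and multilayer perceptrons in later sections; only the model and loss change.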