Neural Network Visualizer - A step to understand DL better

A small talk:

This repository collects what I have learned about neural networks since I started studying them in 2018, and my attempts to visualize those ideas through code. I believe the best way to learn something is through hands-on experience: we usually start programming with one idea in mind, run into multiple errors, and those errors end up teaching us far more than we planned. On that note, I have tried to implement many core concepts from scratch using plain NumPy. After a lot of effort most of them worked, and I decided to share this work with the community so that new learners get a better understanding of the building blocks of neural nets.

It is common practice to download a network pretrained on a famous dataset and achieve top accuracy, but that approach leaves little room to tweak the model or understand its foundations. Simply calling models from PyTorch or TensorFlow does not teach the underlying math, or even how plain gradient descent and optimizers like SGD and Adam work. So this repository starts from scratch and only later moves to models from torch (those implementations are highly optimized and necessary for bigger tasks). The idea is to understand the math and the inner workings of neural nets before jumping straight to applying torch models. I have to thank Ian Goodfellow's Deep Learning book, from which I gained my theoretical knowledge of DL, countless Medium blogs, of course the best teacher YouTube (seriously, the world is a better place because of you xD), and finally Stack Overflow for support with my uncountable errors.

This is my work from 2018 until now: a small collection of basic implementations of key DL concepts. I will keep adding tutorial-style material to this repository as I implement it. Most of the content is in PyTorch because that is the framework I am most comfortable with.

Order of uploads (please work through them in this order, since many notebooks build on previous ones):

  1. Vectors.ipynb - Intro to vectors, with implementations of dot products and linear combinations.
  2. MPNeuronAndPerceptron.ipynb - Intro to the MP neuron and the perceptron.
  3. SigmoidNeuron.ipynb - Visualising a sigmoid neuron built from scratch (a minimal sketch of such a neuron appears after this list).
  4. SigmoidNeuron_RealWorldData.ipynb (keep mobile_cleaned.csv in the same directory) - Visualising a sigmoid neuron on real-world data.
  5. FeedForwardNetwork_new.ipynb (check out FFNetworkMultiClass.png, FFNetworkSingle.png and SimpleNetwork.png to understand the architectures implemented) - Creating feedforward neural nets from scratch in NumPy, with multi-class classification.
  6. ScalarBackpropagation.ipynb (check out FirstNetwork.png and SecondNetwork.png to understand the architectures implemented) - Implements backpropagation for the same network used in the feedforward case.
  7. VectorizedFeedForwardNetworks.ipynb (check out FirstNetwork.png and SecondNetwork.png to understand the architectures implemented) - Same as the previous notebook, but with vectorized weights and inputs, which improves performance considerably over scalar backpropagation.
  8. GDAlgorithms.ipynb - Uses the sigmoid neuron setup and implements vanilla GD, mini-batch GD, momentum and Nesterov accelerated gradient descent, visualising the loss plot and how the model moves toward the minimum (see the optimizer update sketch after this list).
  9. GDAlgorithms_cont.ipynb - Same as the previous notebook, with a few more well-known optimizers added: Adagrad, RMSProp and Adam.
  10. VectorizedGDAlgorithms.ipynb - Switches to vectorized notation, sets up vectorized inputs and weights, and runs all the optimizers to see which performs best with which hyperparameters.
  11. InitialisationActivationFunctions.ipynb - Tries initialisation methods such as He, Xavier, zeros and plain random init, along with activation functions such as sigmoid, tanh, ReLU and leaky ReLU.
  12. OverfittingAndRegularisation.ipynb - Demonstrates overfitting and counters it with L2 regularisation; also tries adding noise to the inputs to reduce overfitting.
  13. PytorchIntro.ipynb - A basic intro to PyTorch and its environment: moving data to the GPU and using automatic differentiation on a larger problem.
  14. FFNetworksWithPyTorch.ipynb - Focuses on PyTorch's nn module and different ways of defining architectures (sequential, parametric and plain linear), as well as moving both the model and the data to the GPU (see the nn.Sequential sketch after this list).
  15. PyTorchCNN.ipynb - Moves on to convolutional neural nets, which deal with image inputs (though they are not limited to images). Thanks to local connectivity and weight sharing, CNNs need far fewer parameters than a fully connected net and capture spatial structure much more efficiently. We implement a single convolution layer, then a deeper convolutional network, and finally touch on a well-known architecture, LeNet (a minimal CNN sketch appears after this list).
  16. LargeCNNs.ipynb - Looks into larger, more complex CNN architectures such as VGGNet, ResNet and Inception, implemented on the CIFAR-10 dataset; also has a look at the concept of deep copies.
  17. CNNVisualisation.ipynb (keep data.zip in the same directory) - Shows how torchvision's dataset loaders work with a custom dataset, and explores visualisations such as occlusion analysis (occlusion sensitivity is a simple technique for finding which parts of an image matter most to a deep network's classification; see the sketch after this list) and filter visualisation (looking at filter outputs helps us understand what the model perceives when it looks at a patch of an image).
  18. BatchNorm_Dropout.ipynb - Looks at two well-known regularisers: batch normalization (normalizing activations over each mini-batch to zero mean and unit variance, followed by a learnable scale and shift) and dropout (randomly zeroing activations during training, which effectively trains an ensemble of loosely coupled sub-networks); a short usage sketch follows this list.
  19. HyperparameterTuning_MLFlow.ipynb - Uses the open-source tool MLflow to track hyperparameter tuning runs and find the setup that works most effectively; MLflow also logs a lot of information that can be highly insightful.
  20. RNNs.ipynb (keep name2lang.txt in the same directory) - Moves into sequence learning problems with RNNs (recurrent neural networks); a custom language dataset is used, and well-known variants such as LSTMs and GRUs are implemented.
  21. BatchSeqModels.ipynb (keep name2lang.txt in the same directory) - Introduces batching of sequential data and its effect on the output; also covers moving the data to the GPU.
  22. EncoderDecoderArchitecture.ipynb (keep NEWS2012RefEnHi.xml and NEWS2012TrainingEnHi.xml in the same directory) - Carries out transliteration, converting words from one script to another (here from English to Hindi). An encoder-decoder architecture is implemented: the encoder passes the input English word through a deep network to produce an internal, machine-readable representation, and the decoder converts (decodes) that representation back into a human-readable Hindi word.
  23. Will be added soon!
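
To give a feel for the from-scratch style of the early notebooks, here is a minimal sketch of a sigmoid neuron (notebook 3) trained with plain gradient descent on a squared-error loss. The class and variable names are my own for illustration and are not taken from the notebooks:

```python
import numpy as np

class SigmoidNeuron:
    """A single neuron y = sigmoid(w.x + b), trained with batch gradient descent."""

    def __init__(self, n_inputs):
        self.w = np.random.randn(n_inputs)
        self.b = 0.0

    def forward(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.w + self.b)))

    def fit(self, X, y, epochs=5000, lr=0.5):
        for _ in range(epochs):
            y_pred = self.forward(X)
            # Gradient of the mean squared error w.r.t. w and b
            err = (y_pred - y) * y_pred * (1 - y_pred)
            self.w -= lr * (X.T @ err) / len(X)
            self.b -= lr * err.mean()

# Toy usage: learn the OR function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)
neuron = SigmoidNeuron(2)
neuron.fit(X, y)
print(neuron.forward(X).round(2))
```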
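
The gradient-descent notebooks (8-10) compare different parameter-update rules. As a reference, this is a hedged sketch of the momentum and Adam updates for a single parameter vector; the hyperparameter values are the usual defaults, not necessarily the ones used in the notebooks:

```python
import numpy as np

def momentum_step(w, grad, v, lr=0.01, beta=0.9):
    """Momentum: accumulate a velocity and move along it."""
    v = beta * v + lr * grad
    return w - v, v

def adam_step(w, grad, m, s, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam: per-parameter step sizes from first and second moment estimates."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (mean of gradients)
    s = beta2 * s + (1 - beta2) * grad ** 2     # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)                # bias correction for early steps
    s_hat = s / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(s_hat) + eps), m, s

# Toy usage: minimise f(w) = ||w||^2 with Adam
w = np.array([5.0, -3.0])
m, s = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 501):
    grad = 2 * w                                # gradient of ||w||^2
    w, m, s = adam_step(w, grad, m, s, t, lr=0.1)
print(w)  # ends near the minimum at [0, 0]
```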
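
For the PyTorch feedforward notebook (14), the typical nn.Sequential pattern looks something like the block below. The layer sizes, dataset and training loop are made up purely for illustration:

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# A small fully connected classifier: 2 inputs -> 4 classes
model = nn.Sequential(
    nn.Linear(2, 64),
    nn.ReLU(),
    nn.Linear(64, 4),
).to(device)                        # move the parameters to the GPU if available

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake data standing in for a real dataset
X = torch.randn(256, 2, device=device)
y = torch.randint(0, 4, (256,), device=device)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)     # forward pass
    loss.backward()                 # backprop through the whole graph
    optimizer.step()                # parameter update
```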
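
For the CNN notebooks (15-16), a minimal LeNet-style network in PyTorch might look like the following. The exact layer sizes are illustrative and assume 1x28x28 grayscale inputs, not necessarily the setup used in the notebooks:

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Two conv blocks followed by a fully connected classifier (LeNet-style)."""

    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),   # 1x28x28 -> 6x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),             # -> 16x10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 16x5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Sanity check on a fake batch of 28x28 images
x = torch.randn(8, 1, 28, 28)
print(SmallCNN()(x).shape)  # torch.Size([8, 10])
```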
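
Notebook 17 mentions occlusion analysis; the basic idea can be sketched as below, assuming `model` is a trained classifier and `image` a single input tensor of shape (C, H, W). The patch size, stride and fill value are arbitrary choices, not the ones used in the notebook:

```python
import torch

def occlusion_map(model, image, target_class, patch=8, stride=4, fill=0.0):
    """Slide a blank patch over the image and record how the target-class
    probability drops: large drops mark regions the model relies on."""
    model.eval()
    _, H, W = image.shape
    heat = torch.zeros((H - patch) // stride + 1, (W - patch) // stride + 1)
    with torch.no_grad():
        base = model(image.unsqueeze(0)).softmax(dim=1)[0, target_class]
        for i, top in enumerate(range(0, H - patch + 1, stride)):
            for j, left in enumerate(range(0, W - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, top:top + patch, left:left + patch] = fill
                score = model(occluded.unsqueeze(0)).softmax(dim=1)[0, target_class]
                heat[i, j] = base - score   # large value = important region
    return heat
```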
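
Notebook 18 covers batch normalization and dropout; in PyTorch they are just extra layers, as in this illustrative block (the layer sizes are made up). Both layers behave differently in training and evaluation mode, so remember to switch with model.train() and model.eval():

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 256),
    nn.BatchNorm1d(256),   # normalize each feature over the mini-batch, then scale/shift
    nn.ReLU(),
    nn.Dropout(p=0.5),     # randomly zero half the activations during training
    nn.Linear(256, 10),
)

model.train()   # BatchNorm uses batch statistics, Dropout is active
model.eval()    # BatchNorm uses running statistics, Dropout is a no-op
```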

Contributions

Do not hesitate to contribute by filing an issue or opening a PR!

Do let me know if there is any area where you would like a notebook addressing a particular DL concept. Raise an issue so that I can know your views!