**Neural Networks and Deep Learning**
**Books**
+ [Ian Goodfellow, Yoshua Bengio, Aaron Courville. Deep Learning. 2016](https://www.deeplearningbook.org/)
+ [Hastie, Tibshirani and Friedman. The Elements of Statistical Learning. 2nd edition, 2009](https://web.stanford.edu/~hastie/ElemStatLearn/)
+ [Molnar, Christoph. Interpretable Machine Learning. Lulu.com, 2020.](https://christophm.github.io/interpretable-ml-book/)
+ [Strang, Gilbert. Linear algebra and learning from data. Wellesley-Cambridge Press, 2019.](https://math.mit.edu/~gs/learningfromdata/)
**Papers**
+ [100 best NLP Papers](https://github.com/mhagiwara/100-nlp-papers)
+ [Papers on Text Style Transfer](https://github.com/fuzhenxin/Style-Transfer-in-Text)
**References**
+ Neural Networks and Deep Learning [ref1](http://neuralnetworksanddeeplearning.com/chap1.html), [ref2](https://towardsdatascience.com/neural-network-architectures-156e5bad51ba), [ref3](https://medium.com/intuitive-deep-learning/intuitive-deep-learning-part-1a-introduction-to-neural-networks-d7b16ebf6b99), [ref4](https://towardsdatascience.com/a-weird-introduction-to-deep-learning-7828803693b0), [ref5](https://ujjwalkarn.me/2016/08/09/quick-intro-neural-networks/), [ref6](https://www.analyticsvidhya.com/blog/2018/05/deep-learning-faq/), [ref7](https://www.kdnuggets.com/2018/02/8-neural-network-architectures-machine-learning-researchers-need-learn.html), [ref8](https://towardsdatascience.com/newbies-guide-to-deep-learning-6bf601c5a98e), [ref9](https://towardsdatascience.com/understanding-neural-networks-19020b758230), [ref10](https://towardsdatascience.com/understanding-neural-networks-what-how-and-why-18ec703ebd31), [simple implementation](https://towardsdatascience.com/lets-code-a-neural-network-in-plain-numpy-ae7e74410795), [book](http://www.deeplearningbook.org/contents/TOC.html), [viz](https://towardsdatascience.com/activation-maps-for-deep-learning-models-in-a-few-lines-of-code-ed9ced1e8d21), [video](https://www.youtube.com/watch?v=aircAruvnKk&t=0s&index=1&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi), [meme](https://indico.io/blog/wp-content/uploads/2016/02/inception_meme.jpg)
+ Deep Learning and NLP [ref1](https://medium.com/dair-ai/deep-learning-for-nlp-an-overview-of-recent-trends-d0d8f40a776d)
+ Neural Network concepts
  + [General Math](https://towardsdatascience.com/https-medium-com-piotr-skalski92-deep-dive-into-deep-networks-math-17660bc376ba)
  + [Linear](https://medium.com/datathings/linear-layers-explained-in-a-simple-way-2319a9c2d1aa) and [dense](https://medium.com/datathings/dense-layers-explained-in-a-simple-way-62fe1db0ed75) layers
+ Weight Initialization [ref1](https://towardsdatascience.com/deep-learning-best-practices-1-weight-initialization-14e5c0295b94), [ref2](https://towardsdatascience.com/random-initialization-for-neural-networks-a-thing-of-the-past-bfcdd806bf9e)
+ Weight Averaging [ref1](https://towardsdatascience.com/stochastic-weight-averaging-a-new-way-to-get-state-of-the-art-results-in-deep-learning-c639ccf36a)
+ Hyperparameter Tuning [ref1](https://www.analyticsvidhya.com/blog/2018/11/neural-networks-hyperparameter-tuning-regularization-deeplearning/), [ref2](https://towardsdatascience.com/hyper-parameters-in-action-a524bf5bf1c)
+ Gradient Descent [ref1](https://towardsdatascience.com/gradient-descent-algorithm-and-its-variants-10f652806a3), [ref2](https://towardsdatascience.com/learning-parameters-part-5-65a2f3583f7d), [ref3](https://towardsdatascience.com/an-intuitive-explanation-of-gradient-descent-83adf68c9c33), [video](https://www.youtube.com/watch?v=IHZwWFHWa-w&t=0s&index=2&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi)
+ Backpropagation [ref1](https://www.youtube.com/watch?v=Ilg3gGewQ5U&t=0s&index=3&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi)
+ Loss Functions [ref1](https://medium.com/deep-learning-demystified/loss-functions-explained-3098e8ff2b27)
+ Convolutional Neural Networks [ref1](https://towardsdatascience.com/intuitively-understanding-convolutions-for-deep-learning-1f6f42faee1), [ref2](https://towardsdatascience.com/light-on-math-machine-learning-intuitive-guide-to-convolution-neural-networks-e3f054dd5daa), [ref3](https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/), [ref4](https://towardsdatascience.com/gentle-dive-into-math-behind-convolutional-neural-networks-79a07dd44cf9), [ref5](https://www.saama.com/blog/different-kinds-convolutional-filters/), [ref6](https://software.intel.com/en-us/articles/hands-on-ai-part-16-modern-deep-neural-network-architectures-for-image-classification), [ref7](https://towardsdatascience.com/convolutional-neural-networks-the-biologically-inspired-model-f2d23a301f71), [ref8](https://medium.com/analytics-vidhya/deep-learning-methods-1700548a3093), [ref9](https://towardsdatascience.com/advanced-topics-in-deep-convolutional-neural-networks-71ef1190522d), [ref10](https://www.analyticsvidhya.com/blog/2020/02/mathematics-behind-convolutional-neural-network/), [ref11](https://www.analyticsvidhya.com/blog/2020/02/cnn-vs-rnn-vs-mlp-analyzing-3-types-of-neural-networks-in-deep-learning/), [architectures](https://towardsdatascience.com/illustrated-10-cnn-architectures-95d78ace614d), [viz](https://towardsdatascience.com/understanding-your-convolution-network-with-visualizations-a4883441533b)
+ CNNs for NLP [ref1](http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/), [ref2](https://towardsdatascience.com/deep-learning-for-nlp-anns-rnns-and-lstms-explained-95866c1db2e4)
+ CNNs for object detection [ref1](https://towardsdatascience.com/evolution-of-object-detection-and-localization-algorithms-e241021d8bad)
+ CNNs for Time Series [ref1](https://blog.goodaudience.com/introduction-to-1d-convolutional-neural-networks-in-keras-for-time-sequences-3a7ff801a2cf)
+ Capsule Networks [ref1](https://medium.com/ai%C2%B3-theory-practice-business/understanding-hintons-capsule-networks-part-i-intuition-b4b559d1159b)
+ RNNs (Sequence Models) [ref1](https://towardsdatascience.com/recurrent-neural-networks-explained-ffb9f94c5e09), [ref2](http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/), [ref3](https://towardsdatascience.com/understanding-rnn-and-lstm-f7cdf6dfc14e)
+ Attention Models [ref1](https://towardsdatascience.com/breaking-bert-down-430461f60efb)
+ LSTMs and GRUs [ref1](https://towardsdatascience.com/illustrated-guide-to-lstms-and-gru-s-a-step-by-step-explanation-44e9eb85bf21), [ref2](https://towardsdatascience.com/what-is-a-recurrent-nns-and-gated-recurrent-unit-grus-ea71d2a05a69), [ref3](https://towardsdatascience.com/forward-and-backpropagation-in-grus-derived-deep-learning-5764f374f3f5), [ref4](https://towardsdatascience.com/understanding-gru-networks-2ef37df6c9be), [ref5](http://colah.github.io/posts/2015-08-Understanding-LSTMs/), [ref6](https://towardsdatascience.com/visualising-lstm-activations-in-keras-b50206da96ff)
+ Word Embeddings [ref1](https://towardsdatascience.com/introduction-to-word-embeddings-4cf857b12edc), [ref2](https://towardsdatascience.com/introduction-to-word-embedding-and-word2vec-652d0c2060fa), [ref3](https://towardsdatascience.com/what-the-heck-is-word-embedding-b30f67f01c81), [ref4](https://towardsdatascience.com/word-embeddings-for-nlp-5b72991e01d4), [ref5](https://towardsdatascience.com/why-do-we-use-embeddings-in-nlp-2f20e1b632d2), [ref6](https://medium.com/deep-learning-demystified/deep-nlp-word-vectors-with-word2vec-d62cb29b40b3), [ref7](https://towardsdatascience.com/comparing-word-embeddings-c2efd2455fe3)
+ Word2vec [ref1](https://towardsdatascience.com/word2vec-made-easy-139a31a4b8ae), [ref2](https://towardsdatascience.com/light-on-math-machine-learning-intuitive-guide-to-understanding-word2vec-e0128a460f0f), [ref3](https://towardsdatascience.com/using-word2vec-for-better-embeddings-of-categorical-features-de75020e1233), [ref4](https://www.analyticsvidhya.com/blog/2017/06/word-embeddings-count-word2veec/), [ref5](https://towardsdatascience.com/an-implementation-guide-to-word2vec-using-numpy-and-google-sheets-13445eebd281), [ref6](https://towardsdatascience.com/using-word2vec-to-analyze-news-headlines-and-predict-article-success-cdeda5f14751), [ref7](https://pathmind.com/wiki/word2vec), [ref8](https://towardsdatascience.com/different-techniques-to-represent-words-as-vectors-word-embeddings-3e4b9ab7ceb4), [video](https://www.youtube.com/watch?v=GMCwS7tS5ZM&t=1292s), [en-us trained models](http://ahogrammer.com/2017/01/20/the-list-of-pretrained-word-embeddings/), [pt-br trained models](http://www.nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc), [ge-de trained models](https://deepset.ai/german-word-embeddings), [pre-trained models](https://www.analyticsvidhya.com/blog/2020/03/pretrained-word-embeddings-nlp/)
+ Char2vec [ref1](https://hackernoon.com/chars2vec-character-based-language-model-for-handling-real-world-texts-with-spelling-errors-and-a3e4053a147d)
+ Sentence Embeddings [ref1](https://towardsdatascience.com/fse-2b1ffa791cf9), [ref2](https://towardsdatascience.com/word2vec-for-phrases-learning-embeddings-for-more-than-one-word-727b6cf723cf)
+ Doc2vec [ref1](https://medium.com/scaleabout/a-gentle-introduction-to-doc2vec-db3e8c0cce5e), [ref2](https://towardsdatascience.com/document-embedding-techniques-fed3e7a6a25d), [ref3](https://medium.com/@eggie5/search-query-embeddings-using-query2vec-f5931df27d79)
+ Beyond Word Embeddings [ref1](https://towardsdatascience.com/beyond-word-embeddings-part-1-an-overview-of-neural-nlp-milestones-82b97a47977f?), [ref2](https://towardsdatascience.com/beyond-word-embeddings-part-2-word-vectors-nlp-modeling-from-bow-to-bert-4ebd4711d0ec), [ref3](https://towardsdatascience.com/beyond-word-embeddings-part-3-four-common-flaws-in-state-of-the-art-neural-nlp-models-c1d35d3496d0), [ref4](https://towardsdatascience.com/from-pre-trained-word-embeddings-to-pre-trained-language-models-focus-on-bert-343815627598), [ref5](https://towardsdatascience.com/deep-transfer-learning-for-natural-language-processing-text-classification-with-universal-1a2c69e5baa9), [ref6](https://medium.com/swlh/semantic-search-with-nlp-86084ca81247), [ref7](https://ruder.io/word-embeddings-2017/)
+ GloVe [ref1](https://medium.com/@ria.kulshrestha16/nlp-101-negative-sampling-and-glove-936c88f3bc68), [ref2](https://towardsdatascience.com/light-on-math-ml-intuitive-guide-to-understanding-glove-embeddings-b13b4f19c010), [ref3](https://towardsdatascience.com/nlp-101-negative-sampling-and-glove-936c88f3bc68)
+ FastText [ref1](https://towardsdatascience.com/super-easy-way-to-get-sentence-embedding-using-fasttext-in-python-a70f34ac5b7c), [ref2](https://towardsdatascience.com/supercharging-word-vectors-be80ee5513d)
+ Misspelling Oblivious Word Embeddings (MOE) [ref1](https://ai.facebook.com/blog/-a-new-model-for-word-embeddings-that-are-resilient-to-misspellings-/), [ref2](https://haptik.ai/tech/extract-spelling-mistakes-fasttext/)
+ Transformers [ref1](https://jalammar.github.io/illustrated-transformer/), [ref2](https://medium.com/inside-machine-learning/what-is-a-transformer-d07dd1fbec04), [ref3](https://medium.com/saarthi-ai/transformers-attention-based-seq2seq-machine-translation-a28940aaa4fe), [ref4](https://towardsdatascience.com/beyond-bert-6f51a8bc5ce1), [ref5](https://medium.com/analytics-vidhya/hugging-face-transformers-how-to-use-pipelines-10775aa3db7e)
+ BERT [ref1](https://towardsdatascience.com/bert-for-dummies-step-by-step-tutorial-fb90890ffe03), [ref2](https://medium.com/swlh/a-simple-guide-on-using-bert-for-text-classification-bbf041ac8d04), [ref3](https://www.analyticsvidhya.com/blog/2019/09/demystifying-bert-groundbreaking-nlp-framework/), [ref4](https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270), [ref5](https://medium.com/sciforce/googles-bert-changing-the-nlp-landscape-5f4a7bf65cc5), [ref6](https://www.analyticsvidhya.com/blog/2019/03/pretrained-models-get-started-nlp/), [ref7](https://medium.com/swlh/simplerepresentations-bert-roberta-xlm-xlnet-and-distilbert-features-for-any-nlp-task-468b5ceb6f0e?), [ref8](https://medium.com/swlh/bert-bidirectional-encoder-representations-from-transformers-c1ba3ef5e2f4), [ref9](https://medium.com/analytics-vidhya/3-simple-tricks-to-get-the-most-out-of-your-bert-based-text-similarity-system-5b0f2ffa035a), [ref10](https://medium.com/@edwardcqian/machine-comprehension-with-bert-6eadf16c87c1), [ref11](https://towardsdatascience.com/nlp-extract-contextualized-word-embeddings-from-bert-keras-tf-67ef29f60a7b)
+ ALBERT [ref1](https://github.com/google-research/ALBERT), [ref2](https://towardsdatascience.com/fine-tune-albert-with-pre-training-on-custom-corpus-f56ea3cfdc82)
+ XLNet [ref1](https://towardsdatascience.com/what-is-xlnet-and-why-it-outperforms-bert-8d8fce710335)
+ ULMFiT [ref1](https://medium.com/bitbroker-labs/universal-language-model-fine-tuning-for-text-classification-ulmfit-73ea2b383926)
+ GPT-3 [ref1](https://towardsdatascience.com/gpt3-the-dream-machine-in-real-world-c99592d4842f)
+ Reinforcement Learning [ref1](https://towardsdatascience.com/introduction-to-various-reinforcement-learning-algorithms-i-q-learning-sarsa-dqn-ddpg-72a5e0cb6287), [ref2](https://towardsdatascience.com/introduction-to-various-reinforcement-learning-algorithms-part-ii-trpo-ppo-87f2c5919bb9), [ref3](https://towardsdatascience.com/reinforcement-learning-demystified-36c39c11ec14), [ref4](https://towardsdatascience.com/reinforcement-learning-demystified-markov-decision-processes-part-1-bf00dda41690), [ref5](https://blog.insightdatascience.com/reinforcement-learning-from-scratch-819b65f074d8), [ref6](https://towardsdatascience.com/reinforcement-learning-with-python-8ef0242a2fa2), [programming resource](https://gym.openai.com/docs/)
+ Transfer Learning [ref1](https://towardsdatascience.com/a-comprehensive-hands-on-guide-to-transfer-learning-with-real-world-applications-in-deep-learning-212bf3b2f27a), [ref2](https://towardsdatascience.com/transfer-learning-from-pre-trained-models-f2393f124751), [ref3](https://towardsdatascience.com/style-up-your-photos-with-a-touch-of-deep-learning-magic-60a003c676f9), [ref4](https://towardsdatascience.com/transfer-learning-and-image-classification-using-keras-on-kaggle-kernels-c76d3b030649), [ref5](https://towardsdatascience.com/advanced-topics-in-neural-networks-f27fbcc638ae), [ref6](https://medium.com/modern-nlp/transfer-learning-in-nlp-f5035cc3f62f), [ref7](https://medium.com/@matthew_stewart/neural-style-transfer-and-visualization-of-convolutional-networks-7362f6cf4b9b), [ref8](https://medium.com/modern-nlp/3-ways-to-make-new-language-models-f3642e3a4816)
+ Autoencoders [ref1](https://medium.com/intuitive-deep-learning/autoencoders-neural-networks-for-unsupervised-learning-83af5f092f0b), [ref2](https://towardsdatascience.com/a-high-level-guide-to-autoencoders-b103ccd45924), [ref3](https://towardsdatascience.com/auto-encoder-what-is-it-and-what-is-it-used-for-part-1-3e5c6f017726), [ref4](https://towardsdatascience.com/everything-you-need-to-know-about-autoencoders-in-tensorflow-b6a63e8255f0), [ref5](https://www.analyticsvidhya.com/blog/2020/02/what-is-autoencoder-enhance-image-resolution/)
+ Generative Adversarial Networks [ref1](https://www.analyticsvidhya.com/blog/2017/06/introductory-generative-adversarial-networks-gans/), [ref2](https://www.analyticsvidhya.com/blog/2020/01/generative-models-gans-computer-vision/), [ref3](https://towardsdatascience.com/an-easy-introduction-to-generative-adversarial-networks-6f8498dc4bcd), [ref4](https://towardsdatascience.com/gan-by-example-using-keras-on-tensorflow-backend-1a6d515a60d0), [ref5](https://medium.com/@mattiaspinelli/simple-generative-adversarial-network-gans-with-keras-1fe578e44a87), [ref6](https://towardsdatascience.com/style-transfer-with-gans-on-hd-images-88e8efcf3716), [ref7](https://medium.com/@gregoiredeletang/how-gans-really-work-2e1db1f407bb), [ref8](https://towardsdatascience.com/pix2pix-gan-in-tensorflow-2-0-fe0ab475c713), [GANS and Deepfakes](https://towardsdatascience.com/realistic-deepfakes-colab-e13ef7b2bba7), [Colab Notebooks](https://towardsdatascience.com/12-colab-notebooks-that-matter-e14ce1e3bdd0)
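Several of the references above (gradient descent, backpropagation, loss functions, and the "simple implementation" NumPy link) walk through training a tiny network by hand. As a minimal sketch of those ideas together, and not the code of any particular article (the architecture and hyperparameters here are our own illustrative choices), a one-hidden-layer network trained on XOR with plain NumPy:

```python
import numpy as np

# One-hidden-layer network learning XOR: forward pass, backpropagation
# (chain rule by hand), and a plain gradient-descent update.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(2000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))  # mean squared error

    # Backward pass: gradients of the loss w.r.t. each parameter
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0, keepdims=True)
    d_h = (d_out @ W2.T) * h * (1 - h)
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0, keepdims=True)

    # Gradient descent step
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The loss should fall steadily over the iterations; swapping the loss, activation, or update rule is a good way to exercise the concepts covered by the links above.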
**Software**
+ [Anaconda - Jupyter - Python](https://www.anaconda.com/products/individual) (Python Environment)
+ Jupyter [ref1](http://jupyter.org/), [ref2](https://github.com/jupyterlab/jupyterlab), [ref3](https://www.analyticsvidhya.com/blog/2018/05/starters-guide-jupyter-notebook/), [ref4](https://towardsdatascience.com/data-science-for-startups-r-python-2ca2cd149c5c), [ref5](https://towardsdatascience.com/a-very-simple-demo-of-interactive-controls-on-jupyter-notebook-4429cf46aabd), [ref6](https://towardsdatascience.com/10-simple-hacks-to-speed-up-your-data-analysis-in-python-ec18c6396e6b), [ref7](https://towardsdatascience.com/google-drive-google-colab-github-dont-just-read-do-it-5554d5824228?), [ref8](https://github.com/xtreamsrl/jupytemplate)
+ Numpy [ref1](https://docs.scipy.org/doc/numpy-dev/user/quickstart.html), [ref2](https://www.datacamp.com/community/tutorials/python-numpy-tutorial), [ref3](https://towardsdatascience.com/reshaping-numpy-arrays-in-python-a-step-by-step-pictorial-tutorial-aed5f471cf0b)
+ Pandas [ref1](http://proquest.safaribooksonline.com/9781449323592), [ref2](http://pandas.pydata.org/pandas-docs/stable/), [ref3](https://www.machinelearningplus.com/python/101-pandas-exercises-python/), [ref4](https://medium.com/@nik.piepenbreier/learn-how-to-easily-do-3-advanced-excel-tasks-in-python-925a6b7dd081), [styling](https://towardsdatascience.com/my-top-4-functions-to-style-the-pandas-dataframe-932cdc79be39)
+ [Pandas Machine Learning](http://pandas-ml.readthedocs.io/en/stable/)
+ Machine Learning Tools
  + TensorFlow [ref1](https://www.tensorflow.org/), [ref2](https://www.analyticsvidhya.com/blog/2020/03/tensorflow-serving-deploy-deep-learning-models/), [ref3](https://towardsdatascience.com/the-ultimate-beginner-guide-to-tensorflow-af82fd4b8626), [ref4](https://towardsdatascience.com/3-ways-to-create-a-machine-learning-model-with-keras-and-tensorflow-2-0-de09323af4d3), [ref5](https://towardsdatascience.com/battle-of-the-deep-learning-frameworks-part-i-cff0e3841750)
    + TF.Text [ref1](https://medium.com/tensorflow/introducing-tf-text-438c8552bd5e)
    + TF Datasets [ref1](https://medium.com/tensorflow/introducing-tensorflow-datasets-c7f01f7e19f3)
  + Keras [ref1](https://keras.io/), [ref2](https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html), [ref3](https://towardsdatascience.com/4-awesome-things-you-can-do-with-keras-and-the-code-you-need-to-make-it-happen-858f022eec85)
  + PyTorch [ref1](https://pytorch.org/), [Comparison Tensorflow vs PyTorch](https://towardsdatascience.com/pytorch-vs-tensorflow-spotting-the-difference-25c75777377b), [Comparison Tensorflow vs PyTorch 2](https://towardsdatascience.com/pytorch-vs-tensorflow-in-code-ada936fd5406)
  + Hugging Face [ref1](https://www.analyticsvidhya.com/blog/2020/06/hugging-face-tokenizers-nlp-library/)
  + [Gensim](https://radimrehurek.com/gensim/)
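As a quick taste of the NumPy material linked above (the quickstart and the reshaping tutorial), a small sketch of array creation, reshaping, and broadcasting; the values are illustrative:

```python
import numpy as np

# Array creation and reshaping: same 12 elements, viewed as a 3x4 matrix.
a = np.arange(12)        # [0, 1, ..., 11]
m = a.reshape(3, 4)      # 3 rows x 4 columns

# Broadcasting: subtract a per-column mean without an explicit loop.
col_means = m.mean(axis=0)   # shape (4,)
centered = m - col_means     # (3, 4) - (4,) broadcasts across rows

print(m.shape, col_means, centered.sum())
```

After centering, each column sums to zero, which is easy to verify by hand on a matrix this small.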