diff --git a/README.md b/README.md
index 88a5c41..964f364 100644
--- a/README.md
+++ b/README.md
@@ -5,4 +5,41 @@
 To avoid writing the same person, please report the person's name in https://docs.google.com/spreadsheets/d/153XruMO7DPONzBTkxh8ZoYSto1E_2zO021vs0prWZ_Q/edit?usp=sharing
 First come first serve!
-------
-Write here
+# Thang Luong
+![Thang Luong](https://avatars2.githubusercontent.com/u/396613?v=4&s=460 "Thang Luong")
+# Motivation
+ Anyone who pays attention to Deep Learning research in NLP, especially in Neural Machine Translation (NMT), will definitely have heard of **Thang Luong**. His most famous work is [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025). The multiplicative attention mechanism he proposed for sequence-to-sequence training has been applied to many Deep Learning tasks in recent years (a short sketch of the score function appears at the end of this page).
+ Because of my recent research interests, I have mainly been paying attention to Deep Learning for NLP tasks. I have spent a lot of time and energy studying the whole process and the concepts of NMT, and through that process I have come to understand how deeply the attention mechanism affects NMT.
+
+# Introduction
+ Thang Luong is now a researcher at Google Brain who focuses on Neural Machine Translation and on Deep Learning for Natural Language Processing tasks. After finishing his bachelor's degree at the National University of Singapore, he pursued a Ph.D. at Stanford University with a focus on Neural Machine Translation and other language-model research.
+
+# Influence
+ Here is the link where he introduces all of his research work on NMT in person: [lmthang/thesis](https://github.com/lmthang/thesis)
+ Besides his most famous work proposing the attention mechanism, I'd like to present some of his other research that I'm really impressed with:
+
+ [Addressing the rare word problem in neural machine translation](https://arxiv.org/pdf/1410.8206)
+
+ [A hierarchical neural autoencoder for paragraphs and documents](https://arxiv.org/pdf/1506.01057)
+
+ [When Are Tree Structures Necessary for Deep Learning of Representations?](https://arxiv.org/pdf/1503.00185)
+
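+# A quick sketch of multiplicative attention
+ To make the "multiplicative attention" mentioned above concrete, here is a minimal sketch of the *general* score function from [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025). The notation ($h_t$ for the current decoder state, $\bar{h}_s$ for the encoder states, $W_a$ and $W_c$ for learned weights) follows the paper, but this is my own condensed summary, not the author's exact presentation.
+
+```latex
+% Multiplicative ("general") alignment score between the current
+% decoder state h_t and each encoder state \bar{h}_s:
+\mathrm{score}(h_t, \bar{h}_s) = h_t^{\top} W_a \bar{h}_s
+
+% A softmax over source positions s turns scores into alignment weights:
+a_t(s) = \frac{\exp(\mathrm{score}(h_t, \bar{h}_s))}
+              {\sum_{s'} \exp(\mathrm{score}(h_t, \bar{h}_{s'}))}
+
+% Context vector: the alignment-weighted average of the encoder states:
+c_t = \sum_{s} a_t(s) \, \bar{h}_s
+
+% The context vector and decoder state combine into the attentional
+% hidden state used for prediction:
+\tilde{h}_t = \tanh(W_c [c_t; h_t])
+```
+ The paper also defines *dot* and *concat* variants of the score; *general* is the multiplicative one, so called because the score is a bilinear product of the two hidden states.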