From ff942cd60f1025d5b50ffd400d68edeeb7d80e41 Mon Sep 17 00:00:00 2001
From: Tommy Ho
Date: Thu, 21 Sep 2017 15:28:24 +0800
Subject: [PATCH 1/4] Update README.md

---
 README.md | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/README.md b/README.md
index 88a5c41..41ea995 100644
--- a/README.md
+++ b/README.md
@@ -6,3 +6,18 @@ https://docs.google.com/spreadsheets/d/153XruMO7DPONzBTkxh8ZoYSto1E_2zO021vs0prW
 First come first serve!
 -------
 Write here
+
+# Thang Luong
+
+# Motivation
+ Anyone who is paying attention to Deep Learning research in NLP, especially Neural Machine Translation, will definitely have heard of **Thang Luong**. His most famous work is [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025). The multiplicative attention mechanism he proposed for sequence-to-sequence training has been applied in many Deep Learning tasks in recent years.
+ Because of my recent research and interests, I mostly pay attention to Deep Learning for NLP tasks.
+
+
+# Introduction
+
+# Influence
+
+
+
+

From fa3340ff832fb00b8aa301010ca922632cc89898 Mon Sep 17 00:00:00 2001
From: Tommy Ho
Date: Mon, 25 Sep 2017 00:03:10 +0800
Subject: [PATCH 2/4] update readme.md

---
 README.md | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index 41ea995..ec8663c 100644
--- a/README.md
+++ b/README.md
@@ -5,19 +5,21 @@ To avoid writing the same person, please report the person's name in
 https://docs.google.com/spreadsheets/d/153XruMO7DPONzBTkxh8ZoYSto1E_2zO021vs0prWZ_Q/edit?usp=sharing
 First come first serve!
 -------
-Write here
-
 # Thang Luong
 
 # Motivation
  Anyone who is paying attention to Deep Learning research in NLP, especially Neural Machine Translation, will definitely have heard of **Thang Luong**. His most famous work is [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025). The multiplicative attention mechanism he proposed for sequence-to-sequence training has been applied in many Deep Learning tasks in recent years.
- Because of my recent research and interests, I mostly pay attention to Deep Learning for NLP tasks.
-
+ Because of my recent research and interests, I mainly pay attention to Deep Learning for NLP tasks. I have spent a lot of time and energy studying the whole process and concepts of NMT, and through that process I have come to understand how deeply the attention mechanism affects NMT.
 
 # Introduction
+ Thang Luong is now a researcher at Google Brain who focuses on Neural Machine Translation and Deep Learning for Natural Language Processing tasks. After finishing his bachelor's degree at the National University of Singapore, he pursued a Ph.D. at Stanford University, focusing on Neural Machine Translation and other language modeling research.
 
 # Influence
-
+ Here is the link where he presents all his NMT research work himself: [lmthang/thesis](https://github.com/lmthang/thesis)
+ Besides his most famous work of proposing the attention mechanism, I'd like to present some of his other research that really impressed me.
+ [Addressing the rare word problem in neural machine translation](https://arxiv.org/pdf/1410.8206)
+ [A hierarchical neural autoencoder for paragraphs and documents](https://arxiv.org/pdf/1506.01057)
+ [When Are Tree Structures Necessary for Deep Learning of Representations?](https://arxiv.org/pdf/1503.00185)
 
 
 

From 882e1862776526a494879f807b84d47c9393066c Mon Sep 17 00:00:00 2001
From: Tommy Ho
Date: Mon, 25 Sep 2017 00:04:53 +0800
Subject: [PATCH 3/4] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index ec8663c..6f51b88 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@ https://docs.google.com/spreadsheets/d/153XruMO7DPONzBTkxh8ZoYSto1E_2zO021vs0prW
 First come first serve!
 -------
 # Thang Luong
-
+![Thang Luong](https://avatars2.githubusercontent.com/u/396613?v=4&s=460 "Thang Luong")
 # Motivation
  Anyone who is paying attention to Deep Learning research in NLP, especially Neural Machine Translation, will definitely have heard of **Thang Luong**. His most famous work is [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025). The multiplicative attention mechanism he proposed for sequence-to-sequence training has been applied in many Deep Learning tasks in recent years.
  Because of my recent research and interests, I mainly pay attention to Deep Learning for NLP tasks. I have spent a lot of time and energy studying the whole process and concepts of NMT, and through that process I have come to understand how deeply the attention mechanism affects NMT.

From 7cc4b69466752fce00b7a0ef32b34daebdaffce9 Mon Sep 17 00:00:00 2001
From: Tommy Ho
Date: Mon, 25 Sep 2017 00:05:18 +0800
Subject: [PATCH 4/4] Update README.md

---
 README.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/README.md b/README.md
index 6f51b88..964f364 100644
--- a/README.md
+++ b/README.md
@@ -17,8 +17,11 @@ First come first serve!
 # Influence
  Here is the link where he presents all his NMT research work himself: [lmthang/thesis](https://github.com/lmthang/thesis)
  Besides his most famous work of proposing the attention mechanism, I'd like to present some of his other research that really impressed me.
+ [Addressing the rare word problem in neural machine translation](https://arxiv.org/pdf/1410.8206)
+ [A hierarchical neural autoencoder for paragraphs and documents](https://arxiv.org/pdf/1506.01057)
+ [When Are Tree Structures Necessary for Deep Learning of Representations?](https://arxiv.org/pdf/1503.00185)
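
A note on the attention mechanism referenced in the patches above: in "Effective Approaches to Attention-based Neural Machine Translation", the multiplicative ("general") variant scores the current decoder state h_t against each encoder state h_s as score(h_t, h_s) = h_t^T W_a h_s, and the softmaxed scores weight the encoder states into a context vector. Below is a minimal NumPy sketch of that scoring step only; it is an illustration, not the paper's code, and the function and variable names (`luong_general_attention`, `h_t`, `h_s`, `W_a`) are my own.

```python
import numpy as np

def luong_general_attention(h_t, h_s, W_a):
    """Sketch of the multiplicative ("general") attention of Luong et al. (2015).

    h_t : (d,)   current decoder hidden state
    h_s : (n, d) encoder hidden states, one row per source position
    W_a : (d, d) learned weight matrix
    """
    # score(h_t, h_s) = h_t^T W_a h_s, computed for all source positions at once
    scores = h_s @ (W_a @ h_t)            # shape (n,)
    # softmax over source positions gives the alignment weights a_t(s)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # shape (n,)
    # context vector c_t: alignment-weighted average of the encoder states
    context = weights @ h_s               # shape (d,)
    return context, weights

# Toy usage with random states (d = hidden size, n = source length).
rng = np.random.default_rng(0)
d, n = 4, 6
context, weights = luong_general_attention(
    rng.normal(size=d), rng.normal(size=(n, d)), rng.normal(size=(d, d)))
print(weights.sum())  # alignment weights sum to 1
```

In the paper, the context vector is then combined with the decoder state as h̃_t = tanh(W_c [c_t; h_t]) before predicting the next word; the sketch stops at the context vector.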