Commit cef0574 (parent 87f9bda): 69 changed files with 2,009 additions and 68 deletions.
https://arxiv.org/abs/2102.12092

Zero-Shot Text-to-Image Generation (Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever)

Contains OpenAI's training tips: PowerSGD, working around 16-bit precision issues, and so on.
https://arxiv.org/abs/2110.14883

Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training (Shenggui Li, Jiarui Fang, Zhengda Bian, Hongxin Liu, Yuliang Liu, Haichen Huang, Boxiang Wang, Yang You)
...1/221125 Solving math word problems with process- and outcome-based feedback.md (3 additions)

https://arxiv.org/abs/2211.14275

Solving math word problems with process- and outcome-based feedback (Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, Irina Higgins)
papers/2021/221204 Languages You Know Influence Those You Learn.md (3 additions)

https://arxiv.org/abs/2212.01757

Languages You Know Influence Those You Learn: Impact of Language Characteristics on Multi-Lingual Text-to-Text Transfer (Benjamin Muller, Deepanshu Gupta, Siddharth Patwardhan, Jean-Philippe Fauconnier, David Vandyke, Sachin Agarwal)
https://arxiv.org/abs/2212.08073

Constitutional AI: Harmlessness from AI Feedback (Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, Jared Kaplan)
... Upgrading Multilingual Machine Translation Models to Support More Languages.md (3 additions)

https://arxiv.org/abs/2302.03528

Efficiently Upgrading Multilingual Machine Translation Models to Support More Languages (Simeng Sun, Maha Elbayad, Anna Sun, James Cross)
papers/2023/230209 Efficient Attention via Control Variates.md (3 additions)

https://arxiv.org/abs/2302.04542

Efficient Attention via Control Variates (Lin Zheng, Jianbo Yuan, Chong Wang, Lingpeng Kong)
papers/2023/230209 In-Context Learning with Many Demonstration Examples.md (11 additions)

https://arxiv.org/abs/2302.04931

In-Context Learning with Many Demonstration Examples (Mukai Li, Shansan Gong, Jiangtao Feng, Yiheng Xu, Jun Zhang, Zhiyong Wu, Lingpeng Kong)

Transformer LMs don't have much context length to spare, and adding prompts and examples for in-context learning makes it even tighter. This paper proposes a fix for that problem. The core is really a long-range efficient attention, plus length extrapolation that exploits the locality of the efficient attention together with a circular positional embedding. For the efficient attention they use EVA (https://arxiv.org/abs/2302.04542). (Surprisingly, the author lists overlap by only a single person.)

The EVA paper's exposition is complicated, but going by this paper's summary it is a combination of chunking, efficient attention with pooling over the remote features outside each chunk, and plain attention over the remote features and the features within the chunk. By that description it looks like window attention combined with long-range features.

I'm curious how OpenAI and Google are handling the context length problem these days. (Is it still just GPT-3 as-is?)

#efficient_attention #transformer
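The scheme as summarized above can be sketched roughly in numpy. This is my own simplification of the summary, not EVA's actual control-variate formulation; the function names and the mean pooling of remote chunks are my assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def chunked_attention(q, k, v, chunk_size):
    # Each query gets plain attention over its own chunk, plus one
    # mean-pooled key/value per remote chunk (the long-range features).
    n, d = q.shape
    n_chunks = n // chunk_size
    kc = k.reshape(n_chunks, chunk_size, d)
    vc = v.reshape(n_chunks, chunk_size, d)
    k_pool = kc.mean(axis=1)  # one pooled key per chunk
    v_pool = vc.mean(axis=1)  # one pooled value per chunk
    out = np.empty_like(q)
    for c in range(n_chunks):
        remote = [j for j in range(n_chunks) if j != c]
        keys = np.concatenate([kc[c], k_pool[remote]], axis=0)
        vals = np.concatenate([vc[c], v_pool[remote]], axis=0)
        qs = q[c * chunk_size:(c + 1) * chunk_size]
        out[c * chunk_size:(c + 1) * chunk_size] = (
            softmax(qs @ keys.T / np.sqrt(d)) @ vals
        )
    return out
```

Per query this attends over chunk_size + n_chunks - 1 keys instead of n, which is where the efficiency would come from; with a single chunk it degenerates to plain full attention.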
...0 The Wisdom of Hindsight Makes Language Models Better Instruction Followers.md (7 additions)

https://arxiv.org/abs/2302.05206

The Wisdom of Hindsight Makes Language Models Better Instruction Followers (Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, Joseph E. Gonzalez)

Instruction tuning without RL. It works by forming triplets from an instruction prompt, a query, and an answer sampled from the model, then editing the instruction prompt so that the answer's score goes up. Scoring aside, editing the instruction prompt is the tricky part; here the prompt itself is phrased as "generate a correct answer" / "generate a wrong answer", and the edit is taking the negation. Hmm.

#instruct #reinforcement_learning
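The relabeling step might be sketched like this. The instruction strings, the threshold, and the function name are placeholders of mine, not the paper's exact wording.

```python
POS = "Generate a correct answer to the question."
NEG = "Generate a wrong answer to the question."

def relabel(query, sampled_answer, score, threshold=0.5):
    # If the sampled answer scores well, keep the positive instruction;
    # otherwise negate the instruction so the same sample is still a
    # valid supervised target. No RL signal is needed: every triplet
    # can be fed to plain supervised fine-tuning.
    instruction = POS if score >= threshold else NEG
    return (instruction, query, sampled_answer)
```

The point is that even "bad" samples become usable training data once the instruction is flipped in hindsight.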
...ribution and Fluency Tradeoffs for Retrieval-Augmented Large Language Models.md (9 additions)

https://arxiv.org/abs/2302.05578

Characterizing Attribution and Fluency Tradeoffs for Retrieval-Augmented Large Language Models (Renat Aksitov, Chung-Ching Chang, David Reitter, Siamak Shakeri, Yunhsuan Sung)

Retrieval augmentation is regarded as a way to mitigate LM hallucination, but it is not as simple as retrieving something and prepending it. The authors expect a trade-off between fluency (generating plausible, high-quality text) and attribution (generating text faithful to the evidence), develop ways to measure both metrics automatically, and analyze how the scores change under various conditions.

The takeaway: use a good retriever, feed only as much retrieved text as fits within the context length, and for small models, draw multiple samples and re-rank them.

#retrieval #llm
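The sample-and-rerank recipe for small models might look roughly like this; the token-overlap scorer is a toy stand-in of mine for whatever attribution metric is actually used.

```python
def token_overlap(candidate, evidence):
    # Toy attribution proxy: fraction of candidate tokens that appear
    # in the evidence text. A real setup would use a learned metric.
    tokens = candidate.lower().split()
    ev = set(evidence.lower().split())
    return sum(t in ev for t in tokens) / max(len(tokens), 1)

def sample_and_rerank(samples, evidence, score=token_overlap):
    # Among several sampled generations, keep the one most faithful
    # to the retrieved evidence.
    return max(samples, key=lambda s: score(s, evidence))
```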
What surprised me recently is that people are fine-tuning text2img generative models to build checkpoints for generating images in all sorts of styles. Also that methods like LoRA (https://arxiv.org/abs/2106.09685), which at the time I brushed off with "ah, so you could do that too", turn out to have been adopted precisely because they make tuning feasible on limited compute. (Thanks to that, LoRA may well become one of the most successful fine-tuning methods.) I had not followed the scene since NovelAI appeared and its checkpoint leaked and circulated under the radar; in the meantime these trends have swelled and spilled over to the point that they now show up even in channels I can reach.
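Part of why LoRA caught on is how small the mechanism itself is: freeze the pretrained weight and train only a low-rank update. A minimal numpy sketch (the alpha / r scaling follows the LoRA paper; the function name and shapes are my framing):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0):
    # W: frozen pretrained weight, shape (out_dim, in_dim).
    # A: (r, in_dim) and B: (out_dim, r) are the only trained parameters,
    # so the trainable count is r * (in_dim + out_dim) instead of
    # in_dim * out_dim.
    r = A.shape[0]
    return x @ (W + (alpha / r) * (B @ A)).T
```

B is typically initialized to zero, so training starts exactly from the pretrained model's behavior.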
https://arxiv.org/abs/2302.05872

I$^2$SB: Image-to-Image Schrödinger Bridge (Guan-Horng Liu, Arash Vahdat, De-An Huang, Evangelos A. Theodorou, Weili Nie, Anima Anandkumar)

I thought the optimal-transport acrobatics of the GAN era had passed with the arrival of diffusion, but now we get acrobatics combining optimal transport with SDEs. It applies a Schrödinger bridge to image restoration; strip away all the math and the core seems to be sampling an intermediate point between the two endpoints of the process, the input and the output, passing that sample through the model, and having it predict the difference between input and output.

Interestingly, with this approach super-resolution exhibits a process in which the image progressively improves from the low-res input toward high-res.

#ddpm #sde #image_restoration
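As I read it, building one training pair under this scheme might look like the following. This is my reading of the core idea, not the paper's exact parameterization; in particular the bridge noise schedule here is a guess on my part.

```python
import numpy as np

def bridge_training_pair(x0, x1, rng):
    # Sample a point on the bridge between the clean image x0 and the
    # degraded image x1: linear interpolation plus noise whose scale
    # vanishes at both endpoints. The model would be trained to map
    # (xt, t) to the endpoint difference.
    t = rng.uniform(0.0, 1.0)
    sigma = np.sqrt(t * (1.0 - t))
    xt = (1.0 - t) * x0 + t * x1 + sigma * rng.standard_normal(x0.shape)
    target = x1 - x0  # regression target: the difference between endpoints
    return t, xt, target
```

Sampling would then walk from the degraded endpoint toward the clean one using the predicted difference, which matches the progressive low-res-to-high-res behavior noted above.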
papers/2023/230213 3D-aware Blending with Generative NeRFs.md (7 additions)

https://arxiv.org/abs/2302.06608

3D-aware Blending with Generative NeRFs (Hyunsu Kim, Gayoung Lee, Yunjey Choi, Jin-Hwa Kim, Jun-Yan Zhu)

Image blending in a 3D generative model. Blending as if they were 2D images makes the result look like a 2D image pasted on top, so 3D-aware blending is needed. They first align the images using the 3D structure, then optimize the latent code with a perceptual loss on the images and a loss on the density difference. Interesting.

#image_editing #3d_generative_model
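The optimization step might be sketched like this. A hypothetical skeleton only: the loss callables, the weighting, and the finite-difference gradients are all my stand-ins for the paper's actual losses and backprop through a generative NeRF.

```python
import numpy as np

def optimize_latent(z_init, perceptual_loss, density_loss, lam=1.0,
                    lr=0.1, steps=100, eps=1e-4):
    # Gradient descent on the latent code, minimizing a weighted sum of
    # an image-space perceptual loss and a density-difference loss.
    # Finite differences stand in for autodiff through the generator.
    z = np.asarray(z_init, dtype=float).copy()
    total = lambda u: perceptual_loss(u) + lam * density_loss(u)
    for _ in range(steps):
        grad = np.zeros_like(z)
        for i in range(z.size):
            dz = np.zeros_like(z)
            dz.flat[i] = eps
            grad.flat[i] = (total(z + dz) - total(z - dz)) / (2 * eps)
        z -= lr * grad
    return z
```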