
Commit 23288af

PyTorch 2.1.2 설치 문서 반영 (update the installation docs for the PyTorch 2.1.2 release)
1 parent 8c6b7d4 commit 23288af


8 files changed: +119 / -177 lines changed


_get_started/installation/linux.md

Lines changed: 22 additions & 13 deletions
@@ -1,7 +1,7 @@
 # Installing on Linux
 {:.no_toc}
 
-PyTorch can be installed and used on various Linux distributions. Depending on your system and compute requirements, your experience with PyTorch on Linux may vary in terms of processing time. It is recommended, but not required, that your Linux system has an NVIDIA GPU in order to harness the full power of PyTorch's [CUDA](https://developer.nvidia.com/cuda-zone) [support](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html?highlight=cuda#cuda-tensors)..
+PyTorch can be installed and used on various Linux distributions. Depending on your system and compute requirements, your experience with PyTorch on Linux may vary in terms of processing time. It is recommended, but not required, that your Linux system has an NVIDIA or AMD GPU in order to harness the full power of PyTorch's [CUDA](https://developer.nvidia.com/cuda-zone) [support](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html?highlight=cuda#cuda-tensors) or [ROCm](https://docs.amd.com) support.
 
 ## Prerequisites
 {: #linux-prerequisites}

@@ -16,7 +16,7 @@ PyTorch is supported on Linux distributions that use [glibc](https://www.gnu.org
 * [Fedora](https://getfedora.org/), minimum version 24
 * [Mint](https://linuxmint.com/download.php), minimum version 14
 * [OpenSUSE](https://software.opensuse.org/), minimum version 42.1
-* [PCLinuxOS](https://www.pclinuxos.com/get-pclinuxos/), minimum version 2014.7
+* [PCLinuxOS](https://www.pclinuxos.com/), minimum version 2014.7
 * [Slackware](http://www.slackware.com/getslack/), minimum version 14.2
 * [Ubuntu](https://www.ubuntu.com/download/desktop), minimum version 13.04
 

@@ -25,7 +25,7 @@ PyTorch is supported on Linux distributions that use [glibc](https://www.gnu.org
 ### Python
 {: #linux-python}
 
-Python 3.7 or greater is generally installed by default on any of our supported Linux distributions, which meets our recommendation.
+Python 3.8-3.11 is generally installed by default on any of our supported Linux distributions, which meets our recommendation.
 
 > Tip: By default, you will have to use the command `python3` to run Python. If you want to use just the command `python`, instead of `python3`, you can symlink `python` to the `python3` binary.
 

@@ -40,8 +40,6 @@ If you decide to use APT, you can run the following command to install it:
 sudo apt install python
 ```
 
-> It is recommended that you use Python 3.6, 3.7 or 3.8, which can be installed via any of the mechanisms above .
-
 > If you use [Anaconda](#anaconda) to install PyTorch, it will install a sandboxed version of Python that will be used for running PyTorch applications.
 
 ### Package Manager

@@ -80,28 +78,37 @@ sudo apt install python3-pip
 ### Anaconda
 {: #linux-anaconda}
 
-#### No CUDA
+#### No CUDA/ROCm
 
-To install PyTorch via Anaconda, and do not have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) system or do not require CUDA, in the above selector, choose OS: Linux, Package: Conda and CUDA: None.
+To install PyTorch via Anaconda, and do not have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) or [ROCm-capable](https://docs.amd.com) system or do not require CUDA/ROCm (i.e. GPU support), in the above selector, choose OS: Linux, Package: Conda, Language: Python and Compute Platform: CPU.
 Then, run the command that is presented to you.
 
 #### With CUDA
 
 To install PyTorch via Anaconda, and you do have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) system, in the above selector, choose OS: Linux, Package: Conda and the CUDA version suited to your machine. Often, the latest CUDA version is better.
 Then, run the command that is presented to you.
 
+#### With ROCm
+
+PyTorch via Anaconda is not supported on ROCm currently. Please use pip instead.
+
 
 ### pip
 {: #linux-pip}
 
 #### No CUDA
 
-To install PyTorch via pip, and do not have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) system or do not require CUDA, in the above selector, choose OS: Linux, Package: Pip and CUDA: None.
+To install PyTorch via pip, and do not have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) or [ROCm-capable](https://docs.amd.com) system or do not require CUDA/ROCm (i.e. GPU support), in the above selector, choose OS: Linux, Package: Pip, Language: Python and Compute Platform: CPU.
 Then, run the command that is presented to you.
 
 #### With CUDA
 
-To install PyTorch via pip, and do have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) system, in the above selector, choose OS: Linux, Package: Pip and the CUDA version suited to your machine. Often, the latest CUDA version is better.
+To install PyTorch via pip, and do have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) system, in the above selector, choose OS: Linux, Package: Pip, Language: Python and the CUDA version suited to your machine. Often, the latest CUDA version is better.
+Then, run the command that is presented to you.
+
+#### With ROCm
+
+To install PyTorch via pip, and do have a [ROCm-capable](https://docs.amd.com) system, in the above selector, choose OS: Linux, Package: Pip, Language: Python and the ROCm version supported.
 Then, run the command that is presented to you.
 
 ## Verification

@@ -126,7 +133,7 @@ tensor([[0.3380, 0.3845, 0.3217],
 [0.4675, 0.3947, 0.1426]])
 ```
 
-Additionally, to check if your GPU driver and CUDA is enabled and accessible by PyTorch, run the following commands to return whether or not the CUDA driver is enabled:
+Additionally, to check if your GPU driver and CUDA/ROCm is enabled and accessible by PyTorch, run the following commands to return whether or not the GPU driver is enabled (the ROCm build of PyTorch uses the same semantics at the python API level (https://github.com/pytorch/pytorch/blob/master/docs/source/notes/hip.rst#hip-interfaces-reuse-the-cuda-interfaces), so the below commands should also work for ROCm):
 
 ```python
 import torch

@@ -141,8 +148,10 @@ For the majority of PyTorch users, installing from a pre-built binary via a pack
 ### Prerequisites
 {: #linux-prerequisites-2}
 
-1. Install [Anaconda](#anaconda)
-2. Install [CUDA](https://developer.nvidia.com/cuda-downloads), if your machine has a [CUDA-enabled GPU](https://developer.nvidia.com/cuda-gpus).
+1. Install [Anaconda](#anaconda) or [Pip](#pip)
+2. If you need to build PyTorch with GPU support
+   a. for NVIDIA GPUs, install [CUDA](https://developer.nvidia.com/cuda-downloads), if your machine has a [CUDA-enabled GPU](https://developer.nvidia.com/cuda-gpus).
+   b. for AMD GPUs, install [ROCm](https://docs.amd.com), if your machine has a [ROCm-enabled GPU](https://docs.amd.com)
 3. Follow the steps described here: [https://github.com/pytorch/pytorch#from-source](https://github.com/pytorch/pytorch#from-source)
 
-You can verify the installation as described [above](#linux-verification).
+You can verify the installation as described [above](#linux-verification).
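For reference, a minimal sketch (not part of this commit) of the GPU check the updated linux.md text describes; it assumes a standard PyTorch 2.x install, and the ROCm build exposes AMD GPUs through the same `torch.cuda` interface, per the hip.rst note linked in the hunk above:

```python
import torch

# True if PyTorch can see a GPU: CUDA on NVIDIA cards, and on the ROCm
# build the same call reports AMD GPUs, since it reuses the CUDA API.
print(torch.cuda.is_available())

# Optional extra detail when a GPU is visible.
if torch.cuda.is_available():
    print(torch.cuda.device_count(), torch.cuda.get_device_name(0))
```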

_get_started/installation/mac.md

Lines changed: 9 additions & 10 deletions
@@ -1,8 +1,7 @@
 # Installing on macOS
 {:.no_toc}
 
-PyTorch can be installed and used on macOS.
-Depending on the system you install PyTorch on and the GPU available to it, your experience with PyTorch on a Mac may vary in terms of processing speed.
+PyTorch can be installed and used on macOS. Depending on the system you install PyTorch on and the GPU available to it, your experience with PyTorch on a Mac may vary in terms of processing speed.
 
 ## Requirements
 {: #mac-prerequisites}

@@ -14,7 +13,7 @@ PyTorch can be installed on macOS 10.15 (Catalina) or later.
 ### Python
 {: #mac-python}
 
-We recommend using Python 3.7 or later. It can be installed via the Anaconda package manager (see [below](#anaconda)]), [HomeBrew](https://brew.sh), or the [Python website](https://www.python.org/downloads/mac-osx/).
+We recommend using a Python version between 3.8 and 3.11. It can be installed via the Anaconda package manager (see [below](#아나콘다)), [HomeBrew](https://brew.sh), or the [Python website](https://www.python.org/downloads/mac-osx/).
 
 ### Package Manager
 {: #mac-package-manager}

@@ -28,17 +27,17 @@ We recommend Anaconda, which makes it easy to isolate your Python and PyTorch installation
 If you use the command-line installer, copy and paste the installer link, or on an Intel Mac you can run the following:
 
 ```bash
-# The Anaconda version may differ depending on when you install it.`
+# The Anaconda version may differ depending on when you install it.
 curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
 sh Miniconda3-latest-MacOSX-x86_64.sh
-# Choose the options when prompted; the defaults are generally fine.`
+# Choose the options when prompted; the defaults are generally fine.
 ```
 Or, on an M1 Mac, you can run the following:
 ```bash
-# The Anaconda version may differ depending on when you install it.`
+# The Anaconda version may differ depending on when you install it.
 curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh
 sh Miniconda3-latest-MacOSX-arm64.sh
-# Choose the options when prompted; the defaults are generally fine.`
+# Choose the options when prompted; the defaults are generally fine.
 ```
 #### pip
 

@@ -96,12 +95,12 @@ tensor([[0.3380, 0.3845, 0.3217],
 ## Building from source
 {: #mac-from-source}
 
-For the majority of PyTorch users, it is best to use a pre-built binary installed via a package manager.
-If you want to use the latest PyTorch code that has not been officially released, or you are testing or developing PyTorch core, you will need to build PyTorch yourself.
+For the majority of PyTorch users, it is best to use a pre-built binary installed via a package manager. If you want to use the latest PyTorch code that has not been officially released, or you are testing or developing PyTorch core, you will need to build PyTorch yourself.
+
 ### Requirements
 {: #mac-prerequisites-2}
 
-1. [Optional] Install [Anaconda](#anaconda)
+1. [Optional] Install [Anaconda](#아나콘다)
 2. Build by following [https://github.com/pytorch/pytorch#from-source](https://github.com/pytorch/pytorch#from-source) (English)
 
 You can verify that the installation succeeded by referring to the [section above](#mac-verification).
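As a small, hypothetical companion to the version bump above (not part of the commit), a check like the following confirms the running interpreter falls in the 3.8-3.11 range the updated macOS page recommends:

```python
import sys

# The updated page recommends Python 3.8-3.11 for PyTorch on macOS.
assert (3, 8) <= sys.version_info[:2] <= (3, 11), sys.version
print("Python", sys.version.split()[0], "is within the recommended range")
```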

_get_started/installation/windows.md

Lines changed: 4 additions & 4 deletions
@@ -18,7 +18,7 @@ PyTorch is supported on the following Windows distributions:
 ### Python
 {: #windows-python}
 
-Currently, PyTorch on Windows only supports Python 3.7-3.9; Python 2.x is not supported.
+Currently, PyTorch on Windows only supports Python 3.8-3.10; Python 2.x is not supported.
 
 As it is not installed by default on Windows, there are multiple ways to install Python:
 

@@ -28,9 +28,9 @@ As it is not installed by default on Windows, there are multiple ways to install
 
 > If you use Anaconda to install PyTorch, it will install a sandboxed version of Python that will be used for running PyTorch applications.
 
-> If you decide to use Chocolatey, and haven't installed Chocolatey yet, ensure that you are [running your command prompt as an administrator](https://www.howtogeek.com/194041/how-to-open-the-command-prompt-as-administrator-in-windows-8.1/).
+> If you decide to use Chocolatey, and haven't installed Chocolatey yet, ensure that you are running your command prompt as an administrator.
 
-For a Chocolatey-based install, run the following command in an [administrative command prompt](https://www.howtogeek.com/194041/how-to-open-the-command-prompt-as-administrator-in-windows-8.1/):
+For a Chocolatey-based install, run the following command in an administrative command prompt:
 
 ```bash
 choco install python

@@ -131,4 +131,4 @@ For the majority of PyTorch users, installing from a pre-built binary via a pack
 3. If you want to build on Windows, Visual Studio with MSVC toolset, and NVTX are also needed. The exact requirements of those dependencies could be found out [here](https://github.com/pytorch/pytorch#from-source).
 4. Follow the steps described here: [https://github.com/pytorch/pytorch#from-source](https://github.com/pytorch/pytorch#from-source)
 
-You can verify the installation as described [above](#windows-verification).
+You can verify the installation as described [above](#windows-verification).
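The verification step referenced at the end of this file is not shown in the hunks; a minimal sketch of it, assuming the standard random-tensor check the get-started pages use, is:

```python
import torch

# Constructing and printing a random tensor should succeed and produce
# output like the tensor([[...]]) lines quoted in the Linux/macOS hunks.
x = torch.rand(5, 3)
print(x)
```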

_get_started/pytorch.md

Lines changed: 8 additions & 8 deletions
@@ -283,7 +283,7 @@ The minifier automatically reduces the issue you are seeing to a small snippet o
 
 If you are not seeing the speedups that you expect, then we have the **torch.\_dynamo.explain** tool that explains which parts of your code induced what we call “graph breaks”. Graph breaks generally hinder the compiler from speeding up the code, and reducing the number of graph breaks likely will speed up your code (up to some limit of diminishing returns).
 
-You can read about these and more in our [troubleshooting guide](https://pytorch.org/docs/stable/dynamo/troubleshooting.html).
+You can read about these and more in our [troubleshooting guide](https://pytorch.org/docs/stable/torch.compiler_troubleshooting.html).
 
 ### Dynamic Shapes
 

@@ -363,10 +363,10 @@ We have built utilities for partitioning an FX graph into subgraphs that contain
 
 We are super excited about the direction that we’ve taken for PyTorch 2.0 and beyond. The road to the final 2.0 release is going to be rough, but come join us on this journey early-on. If you are interested in deep-diving further or contributing to the compiler, please continue reading below which includes more information on how to get started (e.g., tutorials, benchmarks, models, FAQs) and **Ask the Engineers: 2.0 Live Q&A Series** starting this month. Additional resources include:
 
-- Getting Started @ [https://pytorch.org/docs/stable/dynamo/get-started.html](https://pytorch.org/docs/stable/dynamo/get-started.html)
-- Tutorials @ [https://pytorch.org/tutorials/](https://pytorch.org/tutorials/)
-- Documentation @ [https://pytorch.org/docs/stable](https://pytorch.org/docs/stable) and [http://pytorch.org/docs/stable/dynamo](http://pytorch.org/docs/stable/dynamo)
-- Developer Discussions @ [https://dev-discuss.pytorch.org](https://dev-discuss.pytorch.org)
+- [Getting Started](https://pytorch.org/docs/stable/torch.compiler_get_started.html)
+- [Tutorials](https://pytorch.org/tutorials/)
+- [Documentation](https://pytorch.org/docs/stable)
+- [Developer Discussions](https://dev-discuss.pytorch.org)
 
 <script page-id="pytorch" src="{{ site.baseurl }}/assets/menu-tab-selection.js"></script>
 <script src="{{ site.baseurl }}/assets/quick-start-module.js"></script>

@@ -496,7 +496,7 @@ In 2.0, if you wrap your model in `model = torch.compile(model)`, your model goe
 3. Graph compilation, where the kernels call their corresponding low-level device-specific operations.
 
 9. **What new components does PT2.0 add to PT?**
-- **TorchDynamo** generates FX Graphs from Python bytecode. It maintains the eager-mode capabilities using [guards](https://pytorch.org/docs/stable/dynamo/guards-overview.html#caching-and-guards-overview) to ensure the generated graphs are valid ([read more](https://dev-discuss.pytorch.org/t/torchdynamo-an-experiment-in-dynamic-python-bytecode-transformation/361))
+- **TorchDynamo** generates FX Graphs from Python bytecode. It maintains the eager-mode capabilities using [guards](https://pytorch.org/docs/stable/torch.compiler_guards_overview.html#caching-and-guards-overview) to ensure the generated graphs are valid ([read more](https://dev-discuss.pytorch.org/t/torchdynamo-an-experiment-in-dynamic-python-bytecode-transformation/361))
 - **AOTAutograd** to generate the backward graph corresponding to the forward graph captured by TorchDynamo ([read more](https://dev-discuss.pytorch.org/t/torchdynamo-update-6-training-support-with-aotautograd/570)).
 - **PrimTorch** to decompose complicated PyTorch operations into simpler and more elementary ops ([read more](https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-2/645)).
 - **\[Backend]** Backends integrate with TorchDynamo to compile the graph into IR that can run on accelerators. For example, **TorchInductor** compiles the graph to either **Triton** for GPU execution or **OpenMP** for CPU execution ([read more](https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747)).

@@ -511,10 +511,10 @@ DDP and FSDP in Compiled mode can run up to 15% faster than Eager-Mode in FP32
 The [PyTorch Developers forum](http://dev-discuss.pytorch.org/) is the best place to learn about 2.0 components directly from the developers who build them.
 
 13. **Help my code is running slower with 2.0’s Compiled Mode!**
-The most likely reason for performance hits is too many graph breaks. For instance, something innocuous as a print statement in your model’s forward triggers a graph break. We have ways to diagnose these - read more [here](https://pytorch.org/docs/stable/dynamo/faq.html#why-am-i-not-seeing-speedups).
+The most likely reason for performance hits is too many graph breaks. For instance, something innocuous as a print statement in your model’s forward triggers a graph break. We have ways to diagnose these - read more [here](https://pytorch.org/docs/stable/torch.compiler_faq.html#why-am-i-not-seeing-speedups).
 
 14. **My previously-running code is crashing with 2.0’s Compiled Mode! How do I debug it?**
-Here are some techniques to triage where your code might be failing, and printing helpful logs: [https://pytorch.org/docs/stable/dynamo/faq.html#why-is-my-code-crashing](https://pytorch.org/docs/stable/dynamo/faq.html#why-is-my-code-crashing).
+Here are some techniques to triage where your code might be failing, and printing helpful logs: [https://pytorch.org/docs/stable/torch.compiler_faq.html#why-is-my-code-crashing](https://pytorch.org/docs/stable/torch.compiler_faq.html#why-is-my-code-crashing).
 
 ## Ask the Engineers: 2.0 Live Q&A Series
 
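The pytorch.md hunks above center on `torch.compile`, graph breaks, and the `torch._dynamo.explain` diagnostic. A hedged sketch of that workflow follows; the exact `explain` signature has varied across 2.x releases, so treat it as an assumption to check against your installed version:

```python
import torch
import torch._dynamo

def toy_model(x):
    y = x * 2
    # A Python side effect such as print() forces a graph break, the kind of
    # slowdown the FAQ entries in the diff above discuss.
    print("inside the model")
    return torch.relu(y)

compiled = torch.compile(toy_model)   # wrap the callable, as the FAQ describes
out = compiled(torch.randn(8))

# Assumed 2.1-era API: explain() reports where graph breaks occurred.
report = torch._dynamo.explain(toy_model)(torch.randn(8))
print(report)
```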
Lines changed: 3 additions & 57 deletions
@@ -1,58 +1,4 @@
 <div class="sticky-top get-started-locally-sidebar">
-  <p id="get-started-shortcuts-menu">Shortcuts</p>
-  <ul id="get-started-locally-sidebar-list">
-    <li>
-      <a href="#overview">Overview</a>
-    </li>
-    <li>
-      <a href="#pytorch-2x-faster-more-pythonic-and-as-dynamic-as-ever">
-        PyTorch 2.x: faster, more pythonic and as dynamic as ever
-      </a>
-    </li>
-    <li>
-      <a href="#testimonials">Testimonials</a>
-    </li>
-
-    <li>
-      <a href="#motivation">Motivation</a>
-    </li>
-    <li>
-      <a href="#technology-overview">Technology Overview</a>
-    </li>
-    <li>
-      <a href="#user-experience">User Experience</a>
-    </li>
-
-    <li>
-      <a href="#distributed">Distributed</a>
-    </li>
-
-    <li>
-      <a href="#developervendor-experience">Developer/Vendor Experience</a>
-    </li>
-
-    <li>
-      <a href="#final-thoughts">Final Thoughts</a>
-    </li>
-    <li>
-      <a href="#accelerating-hugging-face-and-timm-models-with-pytorch-20">
-        Accelerating Hugging Face And Timm Models With Pytorch 2.0
-      </a>
-    </li>
-    <li>
-      <a href="#requirements">Requirements</a>
-    </li>
-    <li>
-      <a href="#getting-started">Getting Started</a>
-    </li>
-    <li>
-      <a href="#faqs">FAQs</a>
-    </li>
-    <li>
-      <a href="#ask-the-engineers-20-live-qa-series">Ask The Engineers 2.0 Live Q&A Series</a>
-    </li>
-    <li>
-      <a href="#watch-the-talks-from-pytorch-conference">Watch The Talks From Pytorch Conference</a>
-    </li>
-  </ul>
-</div>
+  <p id="get-started-shortcuts-menu">바로가기</p>
+  <ul id="get-started-locally-sidebar-list"></ul>
+</div>
