_get_started/installation/linux.md (22 additions, 13 deletions)
@@ -1,7 +1,7 @@
 # Installing on Linux
 {:.no_toc}

-PyTorch can be installed and used on various Linux distributions. Depending on your system and compute requirements, your experience with PyTorch on Linux may vary in terms of processing time. It is recommended, but not required, that your Linux system has an NVIDIA GPU in order to harness the full power of PyTorch's [CUDA](https://developer.nvidia.com/cuda-zone) [support](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html?highlight=cuda#cuda-tensors).
+PyTorch can be installed and used on various Linux distributions. Depending on your system and compute requirements, your experience with PyTorch on Linux may vary in terms of processing time. It is recommended, but not required, that your Linux system has an NVIDIA or AMD GPU in order to harness the full power of PyTorch's [CUDA](https://developer.nvidia.com/cuda-zone) [support](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html?highlight=cuda#cuda-tensors) or [ROCm](https://docs.amd.com) support.

 ## Prerequisites
 {: #linux-prerequisites}
@@ -16,7 +16,7 @@ PyTorch is supported on Linux distributions that use [glibc](https://www.gnu.org
 * [Fedora](https://getfedora.org/), minimum version 24
 * [Mint](https://linuxmint.com/download.php), minimum version 14
 * [OpenSUSE](https://software.opensuse.org/), minimum version 42.1
-* [PCLinuxOS](https://www.pclinuxos.com/get-pclinuxos/), minimum version 2014.7
+* [PCLinuxOS](https://www.pclinuxos.com/), minimum version 2014.7
 * [Slackware](http://www.slackware.com/getslack/), minimum version 14.2
 * [Ubuntu](https://www.ubuntu.com/download/desktop), minimum version 13.04

@@ -25,7 +25,7 @@ PyTorch is supported on Linux distributions that use [glibc](https://www.gnu.org
 ### Python
 {: #linux-python}

-Python 3.7 or greater is generally installed by default on any of our supported Linux distributions, which meets our recommendation.
+Python 3.8-3.11 is generally installed by default on any of our supported Linux distributions, which meets our recommendation.

 > Tip: By default, you will have to use the command `python3` to run Python. If you want to use just the command `python`, instead of `python3`, you can symlink `python` to the `python3` binary.

@@ -40,8 +40,6 @@ If you decide to use APT, you can run the following command to install it:
 sudo apt install python
 ```

-> It is recommended that you use Python 3.6, 3.7 or 3.8, which can be installed via any of the mechanisms above.
-
 > If you use [Anaconda](#anaconda) to install PyTorch, it will install a sandboxed version of Python that will be used for running PyTorch applications.

 ### Package Manager
@@ -80,28 +78,37 @@ sudo apt install python3-pip
 ### Anaconda
 {: #linux-anaconda}

-#### No CUDA
+#### No CUDA/ROCm

-To install PyTorch via Anaconda, and do not have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) system or do not require CUDA, in the above selector, choose OS: Linux, Package: Conda and CUDA: None.
+To install PyTorch via Anaconda, and do not have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) or [ROCm-capable](https://docs.amd.com) system or do not require CUDA/ROCm (i.e. GPU support), in the above selector, choose OS: Linux, Package: Conda, Language: Python and Compute Platform: CPU.
 Then, run the command that is presented to you.

 #### With CUDA

 To install PyTorch via Anaconda, and you do have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) system, in the above selector, choose OS: Linux, Package: Conda and the CUDA version suited to your machine. Often, the latest CUDA version is better.
 Then, run the command that is presented to you.

+#### With ROCm
+
+PyTorch via Anaconda is not supported on ROCm currently. Please use pip instead.
+

 ### pip
 {: #linux-pip}

 #### No CUDA

-To install PyTorch via pip, and do not have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) system or do not require CUDA, in the above selector, choose OS: Linux, Package: Pip and CUDA: None.
+To install PyTorch via pip, and do not have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) or [ROCm-capable](https://docs.amd.com) system or do not require CUDA/ROCm (i.e. GPU support), in the above selector, choose OS: Linux, Package: Pip, Language: Python and Compute Platform: CPU.
 Then, run the command that is presented to you.

 #### With CUDA

-To install PyTorch via pip, and do have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) system, in the above selector, choose OS: Linux, Package: Pip and the CUDA version suited to your machine. Often, the latest CUDA version is better.
+To install PyTorch via pip, and do have a [CUDA-capable](https://developer.nvidia.com/cuda-zone) system, in the above selector, choose OS: Linux, Package: Pip, Language: Python and the CUDA version suited to your machine. Often, the latest CUDA version is better.
+Then, run the command that is presented to you.
+
+#### With ROCm
+
+To install PyTorch via pip, and do have a [ROCm-capable](https://docs.amd.com) system, in the above selector, choose OS: Linux, Package: Pip, Language: Python and the ROCm version supported.
@@ … @@
-Additionally, to check if your GPU driver and CUDA is enabled and accessible by PyTorch, run the following commands to return whether or not the CUDA driver is enabled:
+Additionally, to check if your GPU driver and CUDA/ROCm is enabled and accessible by PyTorch, run the following commands to return whether or not the GPU driver is enabled (the ROCm build of PyTorch uses the same semantics at the python API level (https://github.com/pytorch/pytorch/blob/master/docs/source/notes/hip.rst#hip-interfaces-reuse-the-cuda-interfaces), so the below commands should also work for ROCm):

 ```python
 import torch
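The python block in this hunk is cut off after `import torch`; as a rough sketch of the kind of check the surrounding text describes (not the verbatim contents of the file, which are not shown here), the verification looks something like:

```python
import torch

# True when PyTorch can see a GPU. The ROCm build reuses the torch.cuda
# namespace, so the same call covers AMD GPUs as well.
print(torch.cuda.is_available())

# torch.version.cuda is set on CUDA builds and torch.version.hip on ROCm
# builds; whichever does not apply is None.
print(torch.version.cuda, torch.version.hip)
```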
@@ -141,8 +148,10 @@ For the majority of PyTorch users, installing from a pre-built binary via a package manager
 ### Prerequisites
 {: #linux-prerequisites-2}

-1. Install [Anaconda](#anaconda)
-2. Install [CUDA](https://developer.nvidia.com/cuda-downloads), if your machine has a [CUDA-enabled GPU](https://developer.nvidia.com/cuda-gpus).
+1. Install [Anaconda](#anaconda) or [Pip](#pip)
+2. If you need to build PyTorch with GPU support
+   a. for NVIDIA GPUs, install [CUDA](https://developer.nvidia.com/cuda-downloads), if your machine has a [CUDA-enabled GPU](https://developer.nvidia.com/cuda-gpus).
+   b. for AMD GPUs, install [ROCm](https://docs.amd.com), if your machine has a [ROCm-enabled GPU](https://docs.amd.com)
 3. Follow the steps described here: [https://github.com/pytorch/pytorch#from-source](https://github.com/pytorch/pytorch#from-source)

-You can verify the installation as described [above](#linux-verification).
+You can verify the installation as described [above](#linux-verification).
_get_started/installation/mac.md (9 additions, 10 deletions)
@@ -1,8 +1,7 @@
 # Installing on macOS
 {:.no_toc}

-PyTorch can be installed and used on macOS.
-Depending on the system you are installing on and the GPU available, your experience with PyTorch on a Mac may vary in terms of processing speed.
+PyTorch can be installed and used on macOS. Depending on the system you are installing on and the GPU available, your experience with PyTorch on a Mac may vary in terms of processing speed.

 ## Prerequisites
 {: #mac-prerequisites}
@@ -14,7 +13,7 @@ PyTorch can be installed on macOS 10.15 (Catalina) or later.
 ### Python
 {: #mac-python}

-Using Python 3.7 or later is recommended. It can be installed via the Anaconda package manager (see [below](#anaconda)), [HomeBrew](https://brew.sh), or the [Python website](https://www.python.org/downloads/mac-osx/).
+Using Python 3.8 to 3.11 is recommended. It can be installed via the Anaconda package manager (see [below](#아나콘다)), [HomeBrew](https://brew.sh), or the [Python website](https://www.python.org/downloads/mac-osx/).

 ### Package Manager
 {: #mac-package-manager}
@@ -28,17 +27,17 @@ Anaconda, which makes it easy to isolate your Python and PyTorch installation environments
 If you use the command-line installer, you can copy and paste the installer link, or run it as shown below on an Intel Mac.
@@ … @@
-For the majority of PyTorch users, installing a pre-built binary via a package manager is best.
-If you want to use the latest PyTorch code that has not been officially released yet, or if you are testing or developing PyTorch core, you must build PyTorch yourself.
+For the majority of PyTorch users, installing a pre-built binary via a package manager is best. If you want to use the latest PyTorch code that has not been officially released yet, or if you are testing or developing PyTorch core, you must build PyTorch yourself.
_get_started/installation/windows.md (4 additions, 4 deletions)
@@ -18,7 +18,7 @@ PyTorch is supported on the following Windows distributions:
 ### Python
 {: #windows-python}

-Currently, PyTorch on Windows only supports Python 3.7-3.9; Python 2.x is not supported.
+Currently, PyTorch on Windows only supports Python 3.8-3.10; Python 2.x is not supported.

 As it is not installed by default on Windows, there are multiple ways to install Python:

@@ -28,9 +28,9 @@ As it is not installed by default on Windows, there are multiple ways to install Python:

 > If you use Anaconda to install PyTorch, it will install a sandboxed version of Python that will be used for running PyTorch applications.

-> If you decide to use Chocolatey, and haven't installed Chocolatey yet, ensure that you are [running your command prompt as an administrator](https://www.howtogeek.com/194041/how-to-open-the-command-prompt-as-administrator-in-windows-8.1/).
+> If you decide to use Chocolatey, and haven't installed Chocolatey yet, ensure that you are running your command prompt as an administrator.

-For a Chocolatey-based install, run the following command in an [administrative command prompt](https://www.howtogeek.com/194041/how-to-open-the-command-prompt-as-administrator-in-windows-8.1/):
+For a Chocolatey-based install, run the following command in an administrative command prompt:

 ```bash
 choco install python
@@ -131,4 +131,4 @@ For the majority of PyTorch users, installing from a pre-built binary via a package manager
 3. If you want to build on Windows, Visual Studio with MSVC toolset, and NVTX are also needed. The exact requirements of those dependencies could be found out [here](https://github.com/pytorch/pytorch#from-source).
 4. Follow the steps described here: [https://github.com/pytorch/pytorch#from-source](https://github.com/pytorch/pytorch#from-source)

-You can verify the installation as described [above](#windows-verification).
+You can verify the installation as described [above](#windows-verification).
_get_started/pytorch.md (8 additions, 8 deletions)
@@ -283,7 +283,7 @@ The minifier automatically reduces the issue you are seeing to a small snippet of code

 If you are not seeing the speedups that you expect, then we have the **torch.\_dynamo.explain** tool that explains which parts of your code induced what we call “graph breaks”. Graph breaks generally hinder the compiler from speeding up the code, and reducing the number of graph breaks likely will speed up your code (up to some limit of diminishing returns).

-You can read about these and more in our [troubleshooting guide](https://pytorch.org/docs/stable/dynamo/troubleshooting.html).
+You can read about these and more in our [troubleshooting guide](https://pytorch.org/docs/stable/torch.compiler_troubleshooting.html).

 ### Dynamic Shapes

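As a quick illustration of the `torch._dynamo.explain` tool mentioned in the hunk above, a minimal sketch (assuming a recent 2.x release, where `explain(fn)(inputs)` returns a printable report; early 2.0 builds took the example inputs directly as `explain(fn, inputs)`):

```python
import torch
import torch._dynamo as dynamo

def fn(x):
    y = x.sin()
    print("checkpoint")  # a Python side effect Dynamo cannot trace, so it causes a graph break
    return y.cos()

# The report lists how many graphs were captured, which ops landed in each one,
# and the reason for every graph break.
explanation = dynamo.explain(fn)(torch.randn(8))
print(explanation)
```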
@@ -363,10 +363,10 @@ We have built utilities for partitioning an FX graph into subgraphs that contain

 We are super excited about the direction that we’ve taken for PyTorch 2.0 and beyond. The road to the final 2.0 release is going to be rough, but come join us on this journey early-on. If you are interested in deep-diving further or contributing to the compiler, please continue reading below which includes more information on how to get started (e.g., tutorials, benchmarks, models, FAQs) and **Ask the Engineers: 2.0 Live Q&A Series** starting this month. Additional resources include:

-- Getting Started @ [https://pytorch.org/docs/stable/dynamo/get-started.html](https://pytorch.org/docs/stable/dynamo/get-started.html)
-- Documentation @ [https://pytorch.org/docs/stable](https://pytorch.org/docs/stable) and [http://pytorch.org/docs/stable/dynamo](http://pytorch.org/docs/stable/dynamo)
@@ -496,7 +496,7 @@ In 2.0, if you wrap your model in `model = torch.compile(model)`, your model goes through 3 steps before execution
 3. Graph compilation, where the kernels call their corresponding low-level device-specific operations.

 9. **What new components does PT2.0 add to PT?**
-    - **TorchDynamo** generates FX Graphs from Python bytecode. It maintains the eager-mode capabilities using [guards](https://pytorch.org/docs/stable/dynamo/guards-overview.html#caching-and-guards-overview) to ensure the generated graphs are valid ([read more](https://dev-discuss.pytorch.org/t/torchdynamo-an-experiment-in-dynamic-python-bytecode-transformation/361))
+    - **TorchDynamo** generates FX Graphs from Python bytecode. It maintains the eager-mode capabilities using [guards](https://pytorch.org/docs/stable/torch.compiler_guards_overview.html#caching-and-guards-overview) to ensure the generated graphs are valid ([read more](https://dev-discuss.pytorch.org/t/torchdynamo-an-experiment-in-dynamic-python-bytecode-transformation/361))
     - **AOTAutograd** to generate the backward graph corresponding to the forward graph captured by TorchDynamo ([read more](https://dev-discuss.pytorch.org/t/torchdynamo-update-6-training-support-with-aotautograd/570)).
     - **PrimTorch** to decompose complicated PyTorch operations into simpler and more elementary ops ([read more](https://dev-discuss.pytorch.org/t/tracing-with-primitives-update-2/645)).
     - **\[Backend]** Backends integrate with TorchDynamo to compile the graph into IR that can run on accelerators. For example, **TorchInductor** compiles the graph to either **Triton** for GPU execution or **OpenMP** for CPU execution ([read more](https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747)).
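For reference, the one-line opt-in that these components sit behind, as described in the question above (a minimal sketch; the toy model and input shapes are placeholders):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8))

# TorchDynamo captures FX graphs from the Python bytecode, AOTAutograd derives
# the backward graph, and the default TorchInductor backend compiles both.
compiled_model = torch.compile(model)

x = torch.randn(32, 64)
out = compiled_model(x)   # the first call triggers compilation; later calls reuse it
out.sum().backward()
```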
@@ -511,10 +511,10 @@ DDP and FSDP in Compiled mode can run up to 15% faster than Eager-Mode in FP32
 The [PyTorch Developers forum](http://dev-discuss.pytorch.org/) is the best place to learn about 2.0 components directly from the developers who build them.

 13. **Help my code is running slower with 2.0’s Compiled Mode!**
-    The most likely reason for performance hits is too many graph breaks. For instance, something innocuous as a print statement in your model’s forward triggers a graph break. We have ways to diagnose these - read more [here](https://pytorch.org/docs/stable/dynamo/faq.html#why-am-i-not-seeing-speedups).
+    The most likely reason for performance hits is too many graph breaks. For instance, something innocuous as a print statement in your model’s forward triggers a graph break. We have ways to diagnose these - read more [here](https://pytorch.org/docs/stable/torch.compiler_faq.html#why-am-i-not-seeing-speedups).

 14. **My previously-running code is crashing with 2.0’s Compiled Mode! How do I debug it?**
-    Here are some techniques to triage where your code might be failing, and printing helpful logs: [https://pytorch.org/docs/stable/dynamo/faq.html#why-is-my-code-crashing](https://pytorch.org/docs/stable/dynamo/faq.html#why-is-my-code-crashing).
+    Here are some techniques to triage where your code might be failing, and printing helpful logs: [https://pytorch.org/docs/stable/torch.compiler_faq.html#why-is-my-code-crashing](https://pytorch.org/docs/stable/torch.compiler_faq.html#why-is-my-code-crashing).