Commit 5b58b63

[SYCL][NATIVECPU][DOCS] add Native CPU info to GettingStartedGuide (#19768)
Adds Native CPU to the Getting Started Guide, links the design doc, and adds a mention of the `-march` option in the design doc.
1 parent: d84a020

2 files changed: +18 −0

sycl/doc/GetStartedGuide.md

Lines changed: 16 additions & 0 deletions
@@ -12,6 +12,7 @@ and a wide range of compute accelerators such as GPU and FPGA.
 * [Build DPC++ toolchain with support for NVIDIA CUDA](#build-dpc-toolchain-with-support-for-nvidia-cuda)
 * [Build DPC++ toolchain with support for HIP AMD](#build-dpc-toolchain-with-support-for-hip-amd)
 * [Build DPC++ toolchain with support for HIP NVIDIA](#build-dpc-toolchain-with-support-for-hip-nvidia)
+* [Build DPC++ toolchain with support for Native CPU](#build-dpc-toolchain-with-support-for-native-cpu)
 * [Build DPC++ toolchain with support for ARM processors](#build-dpc-toolchain-with-support-for-arm-processors)
 * [Build DPC++ toolchain with additional features enabled that require runtime/JIT compilation](#build-dpc-toolchain-with-additional-features-enabled-that-require-runtimejit-compilation)
 * [Build DPC++ toolchain with device image compression support](#build-dpc-toolchain-with-device-image-compression-support)
@@ -123,6 +124,7 @@ flags can be found by launching the script with `--help`):
 * `--hip-platform` -> select the platform used by the hip backend, `AMD` or
   `NVIDIA` (see [HIP AMD](#build-dpc-toolchain-with-support-for-hip-amd) or see
   [HIP NVIDIA](#build-dpc-toolchain-with-support-for-hip-nvidia))
+* `--native_cpu` -> use the Native CPU backend (see [Native CPU](#build-dpc-toolchain-with-support-for-native-cpu))
 * `--enable-all-llvm-targets` -> build compiler (but not a runtime) with all
   supported targets
 * `--shared-libs` -> Build shared libraries
@@ -297,6 +299,13 @@ as well as the CUDA Runtime API to be installed, see [NVIDIA CUDA Installation
 Guide for
 Linux](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html).

+### Build DPC++ toolchain with support for Native CPU
+
+Native CPU is a CPU device which by default has no dependency other than DPC++. This device works with all CPU targets supported by the DPC++ runtime.
+Supported targets include x86, AArch64 and riscv_64.
+
+To enable Native CPU in a DPC++ build, add `--native_cpu` to the set of flags passed to `configure.py`.
+
 ### Build DPC++ toolchain with support for ARM processors

 There is no continuous integration for this, and there are no guarantees for supported platforms or configurations.
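
A minimal sketch of that configure-and-build step, assuming the `$DPCPP_HOME/llvm` checkout and the `buildbot/configure.py` / `buildbot/compile.py` helpers used elsewhere in the Getting Started Guide:

```bash
# Configure a DPC++ build with the Native CPU device enabled;
# the only change from a default configure is the extra --native_cpu flag.
python $DPCPP_HOME/llvm/buildbot/configure.py --native_cpu

# Build the toolchain as usual.
python $DPCPP_HOME/llvm/buildbot/compile.py
```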
@@ -696,6 +705,13 @@ clang++ -fsycl -fsycl-targets=nvptx64-nvidia-cuda \
   simple-sycl-app.cpp -o simple-sycl-app-cuda.exe
 ```

+When building for Native CPU use the SYCL target `native_cpu`:
+
+```bash
+clang++ -fsycl -fsycl-targets=native_cpu simple-sycl-app.cpp -o simple-sycl-app.exe
+```
+More Native CPU build options can be found in [SYCLNativeCPU.md](design/SYCLNativeCPU.md).
+
 **Linux & Windows (64-bit)**:

 ```bash

sycl/doc/design/SYCLNativeCPU.md

Lines changed: 2 additions & 0 deletions
@@ -105,6 +105,8 @@ Whole Function Vectorization is enabled by default, and can be controlled through
 * `-mllvm -sycl-native-cpu-no-vecz`: disable Whole Function Vectorization.
 * `-mllvm -sycl-native-cpu-vecz-width`: sets the vector width to the specified value, defaults to 8.

+The `-march=` option can be used to select specific target CPUs, which may improve performance of the vectorized code.
+
 For more details on how the Whole Function Vectorizer is integrated for SYCL Native CPU, refer to the [Technical details](#technical-details) section.

 # Code coverage
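
An illustrative way of combining these options when compiling for the `native_cpu` target; the `-march` value, the vector width of 16, and the file names are placeholders rather than recommendations:

```bash
# Compile for Native CPU, tune code generation for the build machine's CPU with -march,
# and override the default whole-function vectorization width of 8.
clang++ -fsycl -fsycl-targets=native_cpu -march=native \
  -mllvm -sycl-native-cpu-vecz-width=16 \
  simple-sycl-app.cpp -o simple-sycl-app.exe
```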
