[docs] clarify headers in development.md #4048

Open · wants to merge 2 commits into base: main
51 changes: 39 additions & 12 deletions docs/development.md
@@ -1,4 +1,4 @@
# Checkout and build from source
# Environment

## Check out the code

@@ -31,7 +31,6 @@ it with the following `apt` command on Ubuntu/Debian.
sudo apt install python3-dev
```


## (Optional) Set up pre-commit

This project uses [pre-commit](https://pre-commit.com/) in its CI. You can
@@ -45,15 +44,21 @@ pip install pre-commit
pre-commit install
```

## CMake Build
# Building

## With CMake

### Configure for Building...

Two setups are possible to build: in-tree and out-of-tree. The in-tree setup is the most straightforward, as it will build LLVM dependencies as well.

### Building torch-mlir in-tree
#### ...with LLVM "in-tree" using...

The following commands generate configuration files to build the project *in-tree*, that is, using llvm/llvm-project as the main build. This will build LLVM as well as torch-mlir and its subprojects. On Windows, use the "Developer PowerShell for Visual Studio" to ensure that the compiler and linker binaries are in the `PATH` variable.

The following command generates configuration files to build the project *in-tree*, that is, using llvm/llvm-project as the main build. This will build LLVM as well as torch-mlir and its subprojects. On Windows, use the "Developer PowerShell for Visual Studio" to ensure that the compiler and linker binaries are in the `PATH` variable.
##### ...Base + Optimization Options

This requires `lld`, `clang`, `ccache`, and other dependencies for building `libtorch` / `PyTorch` wheels from source. If you run into issues because of these, try the [simplified build command](#simplified-build).
This requires `lld`, `clang`, `ccache`, and other dependencies for building `libtorch` / `PyTorch` wheels from source. If you run into issues because of these, try the [simplified build command](#base-options).

```shell
cmake -GNinja -Bbuild \
@@ -85,7 +90,7 @@ cmake -GNinja -Bbuild \
-DLIBTORCH_VARIANT=shared
```

# Simplified build
##### ...Base Options

If you're running into issues with the above build command, consider using the following:

@@ -101,15 +106,15 @@ cmake -GNinja -Bbuild \
externals/llvm-project/llvm
```

#### Flags to enable MLIR debugging:
#### Options to enable MLIR debugging

* Enabling `--debug` and `--debug-only` flags (see [MLIR docs](https://mlir.llvm.org/getting_started/Debugging/)) for the `torch-mlir-opt` tool
```shell
-DCMAKE_BUILD_TYPE=RelWithDebInfo \ # or =Debug
-DLLVM_ENABLE_ASSERTIONS=ON \
```
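
Once a build configured with these options finishes, the extra logging can be exercised roughly as follows. This is only a sketch: the input file name is a placeholder, and the binary path and pass choice are assumptions based on the in-tree build layout used above.

```shell
# Hypothetical invocation: print detailed dialect-conversion logging while
# lowering a placeholder input file with the assertions-enabled build.
build/bin/torch-mlir-opt input.mlir \
  --convert-torch-to-linalg \
  --debug-only=dialect-conversion
```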

#### Flags to run end-to-end tests:
#### Options to run end-to-end tests

Running the end-to-end execution tests locally requires enabling the native PyTorch extension features and the JIT IR importer, which depends on the
former and defaults to `ON` if not changed:
@@ -118,7 +123,7 @@ former and defaults to `ON` if not changed:
-DTORCH_MLIR_ENABLE_JIT_IR_IMPORTER=ON \
```
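
With both options enabled and the Python environment from the sections below set up, the end-to-end suite can then be run. The driver location and flags shown here are assumptions based on the `projects/pt1` layout referenced elsewhere in this document, not something this diff specifies:

```shell
# Hypothetical invocation of the e2e test driver; the module path and flags
# are assumptions and may differ in your checkout.
cd projects/pt1
python -m e2e_testing.main --config=linalg -v
```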

### Building against a pre-built LLVM
#### ...with LLVM "out-of-tree"

If you have built llvm-project separately in the directory `$LLVM_INSTALL_DIR`, you can also build the project *out-of-tree* using the following command as template:
```shell
@@ -135,10 +140,10 @@ The same QoL CMake flags can be used to enable clang, ccache, and lld. Be sure t

Be aware that the installed version of LLVM needs in general to match the committed version in `externals/llvm-project`. Using a different version may or may not work.
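
One simple way to compare versions is to look up the commit pinned by the submodule; this is a plain-git sketch and assumes nothing beyond the `externals/llvm-project` path mentioned above:

```shell
# Show the LLVM commit pinned in externals/llvm-project so it can be
# compared against the separately installed LLVM's source revision.
git submodule status externals/llvm-project
```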


### Build commands
### Initiate Build

After either cmake run (in-tree/out-of-tree), use one of the following commands to build the project:

```shell
# Build just torch-mlir (not all of LLVM)
cmake --build build --target tools/torch-mlir/all
@@ -173,6 +178,7 @@ To test the MLIR output to torch dialect, you can use `test/python/fx_importer/b

Make sure you have activated the virtualenv and set the `PYTHONPATH` above
(if running on Windows, modify the environment variable as shown above):

```shell
source mlir_venv/bin/activate
export PYTHONPATH=`pwd`/build/tools/torch-mlir/python_packages/torch_mlir:`pwd`/test/python/fx_importer
@@ -187,6 +193,7 @@ This path doesn't give access to the current generation work that is being drive
and may lead to errors.

Same as above, but with different python path and example:

```shell
export PYTHONPATH=`pwd`/build/tools/torch-mlir/python_packages/torch_mlir:`pwd`/projects/pt1/examples
python projects/pt1/examples/torchscript_resnet18_all_output_types.py
@@ -197,6 +204,7 @@ This will display the Resnet18 network example in three dialects: TORCH, LINALG
The main functionality is on `torch_mlir.torchscript.compile()`'s `output_type`.

Ex:

```python
module = torch_mlir.torchscript.compile(resnet18, torch.ones(1, 3, 224, 224), output_type="torch")
```
@@ -206,6 +214,7 @@ module = torch_mlir.torchscript.compile(resnet18, torch.ones(1, 3, 224, 224), ou
## Jupyter

Jupyter notebook:

```shell
python -m ipykernel install --user --name=torch-mlir --env PYTHONPATH "$PYTHONPATH"
# Open in jupyter, and then navigate to
@@ -237,24 +246,29 @@ manually `source`'d in a shell.
Torch-MLIR can also be built using Bazel (apart from the official CMake build) for users that depend on Bazel in their workflows. To build `torch-mlir-opt` using Bazel, follow these steps:

1. Launch an interactive docker container with the required deps installed:

```shell
./utils/bazel/docker/run_docker.sh
```

Review comment on lines 248 to 252 (project member):

Thanks for adding the newline before the code block.

You should also indent these code blocks so they appear nested under the list items:

Suggested change:

1. Launch an interactive docker container with the required deps installed:
```shell
./utils/bazel/docker/run_docker.sh
```

1. Launch an interactive docker container with the required deps installed:
   ```shell
   ./utils/bazel/docker/run_docker.sh
   ```

First here is with the indentation, second is without:
[screenshot comparing the rendered list item with and without the indented code block]

2. Build torch-mlir:

```shell
bazel build @torch-mlir//:torch-mlir-opt
```

The built binary should be at `bazel-bin/external/torch-mlir/torch-mlir-opt`.

3. Test torch-mlir (lit test only):

```shell
bazel test @torch-mlir//test/...
```
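
As a quick sanity check of the binary built in step 2, the tool can be asked for its help text; only the path quoted above and the standard `--help` flag are assumed here:

```shell
# Verify the Bazel-built torch-mlir-opt runs and prints its options.
./bazel-bin/external/torch-mlir/torch-mlir-opt --help
```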

We welcome patches to torch-mlir's Bazel build. If you do contribute,
please complete your PR with an invocation of buildifier to ensure
the BUILD files are formatted consistently:

```shell
bazel run @torch-mlir//:buildifier
```
@@ -287,6 +301,7 @@ TM_PACKAGES="in-tree" ./build_tools/python_deploy/build_linux_packages.sh
### Out-of-Tree builds

Build LLVM/MLIR first and then build Torch-MLIR referencing that build

```shell
TM_PACKAGES="out-of-tree" ./build_tools/python_deploy/build_linux_packages.sh
```
@@ -339,38 +354,48 @@ The following additional environmental variables can be used to customize your d

* Custom Release Docker image:
Defaults to `stellaraccident/manylinux2014_x86_64-bazel-5.1.0:latest`

```shell
TM_RELEASE_DOCKER_IMAGE="stellaraccident/manylinux2014_x86_64-bazel-5.1.0:latest"
```

* Custom CI Docker image:
Defaults to `powderluv/torch-mlir-ci:latest`. This assumes an Ubuntu LTS like image. You can build your own with `./build_tools/docker/Dockerfile`

```shell
TM_CI_DOCKER_IMAGE="powderluv/torch-mlir-ci:latest"
```

* Custom Python Versions for Release builds:
Version of Python to use in Release builds. Ignored in CIs. Defaults to `cp39-cp39 cp310-cp310 cp312-cp312`

```shell
TM_PYTHON_VERSIONS="cp39-cp39 cp310-cp310 cp312-cp312"
```

* Location to store Release build wheels

```shell
TM_OUTPUT_DIR="./build_tools/python_deploy/wheelhouse"
```

* What "packages" to build:
Defaults to torch-mlir. Options are `torch-mlir out-of-tree in-tree`

```shell
TM_PACKAGES="torch-mlir out-of-tree in-tree"
```

* Use pre-built Pytorch:
Defaults to using pre-built Pytorch. Setting it to `OFF` builds from source

```shell
TM_USE_PYTORCH_BINARY="OFF"
```

* Skip running tests:
Skip running tests if you only want quick build iterations. Defaults to `OFF`

```shell
TM_SKIP_TESTS="OFF"
```
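
Combining several of the variables above, a customized invocation might look roughly like the following; the particular values are illustrative assumptions rather than a recommended configuration:

```shell
# Build only the torch-mlir package against pre-built PyTorch, skip tests,
# and collect the wheels in the default wheelhouse directory.
TM_PACKAGES="torch-mlir" \
TM_USE_PYTORCH_BINARY="ON" \
TM_SKIP_TESTS="ON" \
TM_OUTPUT_DIR="./build_tools/python_deploy/wheelhouse" \
./build_tools/python_deploy/build_linux_packages.sh
```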
@@ -389,6 +414,7 @@ CMAKE_GENERATOR=Ninja python setup.py bdist_wheel

To package a completed CMake build directory,
you can use the `TORCH_MLIR_CMAKE_BUILD_DIR` and `TORCH_MLIR_CMAKE_ALREADY_BUILT` environment variables:

```shell
TORCH_MLIR_CMAKE_BUILD_DIR=build/ TORCH_MLIR_CMAKE_ALREADY_BUILT=1 python setup.py bdist_wheel
```
@@ -490,6 +516,7 @@ Most of the unit tests use the [`FileCheck` tool](https://llvm.org/docs/CommandG
# PyTorch source builds and custom PyTorch versions

Torch-MLIR by default builds with the latest nightly PyTorch version. This can be toggled to build from latest PyTorch source with

```
-DTORCH_MLIR_USE_INSTALLED_PYTORCH=OFF
-DTORCH_MLIR_SRC_PYTORCH_REPO=vivekkhandelwal1/pytorch # Optional. Github path. Defaults to pytorch/pytorch