From 717ebb9862dc76724d106ae64d156472e424eecb Mon Sep 17 00:00:00 2001 From: Jacob Gordon Date: Mon, 23 Dec 2024 20:04:42 +0000 Subject: [PATCH 1/2] style(docs/development.md): ensures fenced code blocks are surrounded by blank lines - enforced by markdown linter --- docs/development.md | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) diff --git a/docs/development.md b/docs/development.md index 61c60a646a5c..717e9fd9e4f2 100644 --- a/docs/development.md +++ b/docs/development.md @@ -139,6 +139,7 @@ Be aware that the installed version of LLVM needs in general to match the commit ### Build commands After either cmake run (in-tree/out-of-tree), use one of the following commands to build the project: + ```shell # Build just torch-mlir (not all of LLVM) cmake --build build --target tools/torch-mlir/all @@ -173,6 +174,7 @@ To test the MLIR output to torch dialect, you can use `test/python/fx_importer/b Make sure you have activated the virtualenv and set the `PYTHONPATH` above (if running on Windows, modify the environment variable as shown above): + ```shell source mlir_venv/bin/activate export PYTHONPATH=`pwd`/build/tools/torch-mlir/python_packages/torch_mlir:`pwd`/test/python/fx_importer @@ -187,6 +189,7 @@ This path doesn't give access to the current generation work that is being drive and may lead to errors. Same as above, but with different python path and example: + ```shell export PYTHONPATH=`pwd`/build/tools/torch-mlir/python_packages/torch_mlir:`pwd`/projects/pt1/examples python projects/pt1/examples/torchscript_resnet18_all_output_types.py @@ -197,6 +200,7 @@ This will display the Resnet18 network example in three dialects: TORCH, LINALG The main functionality is on `torch_mlir.torchscript.compile()`'s `output_type`. Ex: + ```python module = torch_mlir.torchscript.compile(resnet18, torch.ones(1, 3, 224, 224), output_type="torch") ``` @@ -206,6 +210,7 @@ module = torch_mlir.torchscript.compile(resnet18, torch.ones(1, 3, 224, 224), ou ## Jupyter Jupyter notebook: + ```shell python -m ipykernel install --user --name=torch-mlir --env PYTHONPATH "$PYTHONPATH" # Open in jupyter, and then navigate to @@ -237,17 +242,21 @@ manually `source`'d in a shell. Torch-MLIR can also be built using Bazel (apart from the official CMake build) for users that depend on Bazel in their workflows. To build `torch-mlir-opt` using Bazel, follow these steps: 1. Launch an interactive docker container with the required deps installed: + ```shell ./utils/bazel/docker/run_docker.sh ``` 2. Build torch-mlir: + ```shell bazel build @torch-mlir//:torch-mlir-opt ``` + The built binary should be at `bazel-bin/external/torch-mlir/torch-mlir-opt`. 3. Test torch-mlir (lit test only): + ```shell bazel test @torch-mlir//test/... ``` @@ -255,6 +264,7 @@ bazel test @torch-mlir//test/... We welcome patches to torch-mlir's Bazel build. 
If you do contribute, please complete your PR with an invocation of buildifier to ensure the BUILD files are formatted consistently: + ```shell bazel run @torch-mlir//:buildifier ``` @@ -287,6 +297,7 @@ TM_PACKAGES="in-tree" ./build_tools/python_deploy/build_linux_packages.sh ### Out-of-Tree builds Build LLVM/MLIR first and then build Torch-MLIR referencing that build + ```shell TM_PACKAGES="out-of-tree" ./build_tools/python_deploy/build_linux_packages.sh ``` @@ -339,38 +350,48 @@ The following additional environmental variables can be used to customize your d * Custom Release Docker image: Defaults to `stellaraccident/manylinux2014_x86_64-bazel-5.1.0:latest` + ```shell TM_RELEASE_DOCKER_IMAGE="stellaraccident/manylinux2014_x86_64-bazel-5.1.0:latest" ``` + * Custom CI Docker image: Defaults to `powderluv/torch-mlir-ci:latest`. This assumes an Ubuntu LTS like image. You can build your own with `./build_tools/docker/Dockerfile` + ```shell TM_CI_DOCKER_IMAGE="powderluv/torch-mlir-ci:latest" ``` * Custom Python Versions for Release builds: Version of Python to use in Release builds. Ignored in CIs. Defaults to `cp39-cp39 cp310-cp310 cp312-cp312` + ```shell TM_PYTHON_VERSIONS="cp39-cp39 cp310-cp310 cp312-cp312" ``` * Location to store Release build wheels + ```shell TM_OUTPUT_DIR="./build_tools/python_deploy/wheelhouse" ``` * What "packages" to build: Defaults to torch-mlir. Options are `torch-mlir out-of-tree in-tree` + ```shell TM_PACKAGES="torch-mlir out-of-tree in-tree" ``` + * Use pre-built Pytorch: Defaults to using pre-built Pytorch. Setting it to `OFF` builds from source + ```shell TM_USE_PYTORCH_BINARY="OFF" ``` + * Skip running tests Skip running tests if you want quick build only iteration. Default set to `OFF` + ```shell TM_SKIP_TESTS="OFF" ``` @@ -389,6 +410,7 @@ CMAKE_GENERATOR=Ninja python setup.py bdist_wheel To package a completed CMake build directory, you can use the `TORCH_MLIR_CMAKE_BUILD_DIR` and `TORCH_MLIR_CMAKE_ALREADY_BUILT` environment variables: + ```shell TORCH_MLIR_CMAKE_BUILD_DIR=build/ TORCH_MLIR_CMAKE_ALREADY_BUILT=1 python setup.py bdist_wheel ``` @@ -490,6 +512,7 @@ Most of the unit tests use the [`FileCheck` tool](https://llvm.org/docs/CommandG # PyTorch source builds and custom PyTorch versions Torch-MLIR by default builds with the latest nightly PyTorch version. This can be toggled to build from latest PyTorch source with + ``` -DTORCH_MLIR_USE_INSTALLED_PYTORCH=OFF -DTORCH_MLIR_SRC_PYTORCH_REPO=vivekkhandelwal1/pytorch # Optional. Github path. Defaults to pytorch/pytorch From 14f95aec4cd5d0c09443c28f6c00563f0bafe987 Mon Sep 17 00:00:00 2001 From: Jacob Gordon Date: Tue, 17 Dec 2024 17:37:41 +0000 Subject: [PATCH 2/2] docs(development.md): clarifies header structure --- docs/development.md | 28 ++++++++++++++++------------ 1 file changed, 16 insertions(+), 12 deletions(-) diff --git a/docs/development.md b/docs/development.md index 717e9fd9e4f2..a3e20fbacc33 100644 --- a/docs/development.md +++ b/docs/development.md @@ -1,4 +1,4 @@ -# Checkout and build from source +# Environment ## Check out the code @@ -31,7 +31,6 @@ it with the following `apt` command on Ubuntu/Debian. sudo apt install python3-dev ``` - ## (Optional) Set up pre-commit This project uses [pre-commit](https://pre-commit.com/) in its CI. You can @@ -45,15 +44,21 @@ pip install pre-commit pre-commit install ``` -## CMake Build +# Building + +## With CMake + +### Configure for Building... Two setups are possible to build: in-tree and out-of-tree. 
The in-tree setup is the most straightforward, as it will build LLVM dependencies as well. -### Building torch-mlir in-tree +#### ...with LLVM "in-tree" using... -The following command generates configuration files to build the project *in-tree*, that is, using llvm/llvm-project as the main build. This will build LLVM as well as torch-mlir and its subprojects. On Windows, use the "Developer PowerShell for Visual Studio" to ensure that the compiler and linker binaries are in the `PATH` variable. +The following commands generate configuration files to build the project *in-tree*, that is, using llvm/llvm-project as the main build. This will build LLVM as well as torch-mlir and its subprojects. On Windows, use the "Developer PowerShell for Visual Studio" to ensure that the compiler and linker binaries are in the `PATH` variable. -This requires `lld`, `clang`, `ccache`, and other dependencies for building `libtorch` / `PyTorch` wheels from source. If you run into issues because of these, try the [simplified build command](#simplified-build). +##### ...Base + Optimization Options + +This requires `lld`, `clang`, `ccache`, and other dependencies for building `libtorch` / `PyTorch` wheels from source. If you run into issues because of these, try the [simplified build command](#base-options). ```shell cmake -GNinja -Bbuild \ @@ -85,7 +90,7 @@ cmake -GNinja -Bbuild \ -DLIBTORCH_VARIANT=shared ``` -# Simplified build +##### ...Base Options If you're running into issues with the above build command, consider using the following: @@ -101,7 +106,7 @@ cmake -GNinja -Bbuild \ externals/llvm-project/llvm ``` -#### Flags to enable MLIR debugging: +#### Options to enable MLIR debugging * Enabling `--debug` and `--debug-only` flags (see [MLIR docs](https://mlir.llvm.org/getting_started/Debugging/)) for the `torch-mlir-opt` tool ```shell @@ -109,7 +114,7 @@ cmake -GNinja -Bbuild \ -DLLVM_ENABLE_ASSERTIONS=ON \ ``` -#### Flags to run end-to-end tests: +#### Options to run end-to-end tests Running the end-to-end execution tests locally requires enabling the native PyTorch extension features and the JIT IR importer, which depends on the former and defaults to `ON` if not changed: @@ -118,7 +123,7 @@ former and defaults to `ON` if not changed: -DTORCH_MLIR_ENABLE_JIT_IR_IMPORTER=ON \ ``` -### Building against a pre-built LLVM +#### ...with LLVM "out-of-tree" If you have built llvm-project separately in the directory `$LLVM_INSTALL_DIR`, you can also build the project *out-of-tree* using the following command as template: ```shell @@ -135,8 +140,7 @@ The same QoL CMake flags can be used to enable clang, ccache, and lld. Be sure t Be aware that the installed version of LLVM needs in general to match the committed version in `externals/llvm-project`. Using a different version may or may not work. - -### Build commands +### Initiate Build After either cmake run (in-tree/out-of-tree), use one of the following commands to build the project:
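As a quick local check of a series like this, the patches can be applied with `git am` and the documentation hooks re-run before pushing. The sketch below is illustrative only: the patch filenames are placeholders for whatever `git format-patch` produced on your side, and it assumes the markdown linter mentioned in the first commit message is wired into the repository's pre-commit configuration.

```shell
# Apply both patches onto a clean torch-mlir checkout
# (the filenames are placeholders for the actual format-patch output).
git am 0001-style-docs-development-md.patch 0002-docs-development-md.patch

# Re-run the formatting hooks against the touched file; this assumes the
# markdown linter runs as one of the repository's pre-commit hooks.
pip install pre-commit
pre-commit run --files docs/development.md
```

If `git am` stops because the surrounding context no longer matches, `git am --3way` retries each patch with a three-way merge, and `git am --abort` restores the branch to its original state.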