Bump to upstream c808621
pranav-prakash committed May 13, 2021
2 parents 935cf01 + a47a234 commit 7bbd049
Showing 257 changed files with 23,997 additions and 1,696 deletions.
44 changes: 22 additions & 22 deletions README.md
@@ -1,40 +1,40 @@
<p align="center"><img width="50%" src="docs/images/ONNX_Runtime_logo_dark.png" /></p>

**ONNX Runtime** is a cross-platform **inference and training machine-learning accelerator** compatible with deep learning frameworks, PyTorch and TensorFlow/Keras, as well as classical machine learning libraries such as scikit-learn, and more.
**ONNX Runtime is a cross-platform inference and training machine-learning accelerator**.

# Systolic Quickstart

This is a fork of upstream onnxruntime modified to work on RISC-V platforms, with a particular focus on supporting the Gemmini accelerator. Gemmini is not strictly *required*, though, so this fork should also suit those wanting to perform CPU-only inference on a RISC-V platform. That path is less thoroughly tested and may be less performant: the main blocker is that the `sgemm` kernel is implemented via naive matmul. Ideally it would be linked against a proper BLAS implementation, but since the main goal of this fork was running quantized networks rather than floating-point inference, that remains a todo.
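For a sense of why this matters, here is an illustrative Python sketch contrasting a naive triple-loop matmul with a tuned BLAS call (the fork's actual kernel is C++; this is not its code):

```python
import numpy as np

def naive_sgemm(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """O(M*N*K) triple loop -- roughly the structure of an unoptimized sgemm."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((m, n), dtype=np.float32)
    for i in range(m):
        for j in range(n):
            for p in range(k):
                c[i, j] += a[i, p] * b[p, j]
    return c

a = np.random.rand(64, 64).astype(np.float32)
b = np.random.rand(64, 64).astype(np.float32)
# np.dot dispatches to a tuned BLAS, which computes the same product with
# blocking and vectorization and is typically orders of magnitude faster.
assert np.allclose(naive_sgemm(a, b), a @ b, atol=1e-3)
```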

Read [BUILD.md](systolic_runner/docs/BUILD.md) in `systolic_runner` (along with the other documentation there) for cross-compilation and usage instructions.

---

ONNX Runtime uses the portable [ONNX](https://onnx.ai) computation graph format, backed by execution providers optimized for operating systems, drivers and hardware.
**ONNX Runtime inference** can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. [Learn more &rarr;](https://www.onnxruntime.ai/docs/#onnx-runtime-for-inferencing)
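As a minimal sketch of the inference API (the model path, input shape, and input name below are placeholders, not part of this repository):

```python
import numpy as np
import onnxruntime as ort

# Load a model; ORT selects the best available execution provider by default.
session = ort.InferenceSession("model.onnx")  # placeholder path

# Read the graph's declared first input to build the feed dictionary.
input_meta = session.get_inputs()[0]
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # shape is model-specific

# Run the model; passing None for output names returns all outputs.
outputs = session.run(None, {input_meta.name: x})
print(outputs[0].shape)
```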

Common use cases for ONNX Runtime:

* Improve inference performance for a wide variety of ML models
* Reduce time and cost of training large models
* Train in Python but deploy into a C#/C++/Java app
* Run with optimized performance on different hardware and operating systems
* Support models created in several different frameworks

[ONNX Runtime inference](https://www.onnxruntime.ai/docs/get-started/inference.html) APIs have been stable and production-ready since the [1.0 release](https://github.com/microsoft/onnxruntime/releases/tag/v1.0.0) in October 2019, and can enable faster customer experiences and lower costs.

The [ONNX Runtime training](https://www.onnxruntime.ai/docs/get-started/training.html) feature was introduced in preview in May 2020. It supports acceleration of PyTorch training on multi-node NVIDIA GPUs for transformer models. Additional updates for this feature are coming soon.
**ONNX Runtime training** can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. [Learn more &rarr;](https://www.onnxruntime.ai/docs/#onnx-runtime-for-training)
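The "one-line addition" refers to wrapping an existing `torch.nn.Module`; a minimal sketch, assuming the `torch-ort` package is installed (the toy model and data are placeholders):

```python
import torch
from torch_ort import ORTModule  # assumes the torch-ort package is installed

# Any existing torch.nn.Module works; this tiny model is a stand-in.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 10), torch.nn.ReLU(), torch.nn.Linear(10, 1)
)
model = ORTModule(model)  # the one-line change: ORT now runs forward/backward

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

# The training loop itself is unchanged PyTorch.
for _ in range(10):
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```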


## Get Started

**http://onnxruntime.ai/**
* [Install](https://www.onnxruntime.ai/docs/get-started/install.html)
* [Inference](https://www.onnxruntime.ai/docs/get-started/inference.html)
* [Training](https://www.onnxruntime.ai/docs/get-started/training.html)
* [Documentation](https://www.onnxruntime.ai/docs/)
* [Samples and Tutorials](https://www.onnxruntime.ai/docs/tutorials/)
* [Build Instructions](https://www.onnxruntime.ai/docs/how-to/build.html)
* [Frequently Asked Questions](./docs/FAQ.md)
* [Overview](https://www.onnxruntime.ai/docs/)
* [Tutorials](https://www.onnxruntime.ai/docs/tutorials/)
* [Inferencing](https://www.onnxruntime.ai/docs/tutorials/inferencing/)
* [Training](https://www.onnxruntime.ai/docs/tutorials/training/)
* [How To](https://www.onnxruntime.ai/docs/how-to)
* [Install](https://www.onnxruntime.ai/docs/how-to/install.html)
* [Build](https://www.onnxruntime.ai/docs/how-to/build/)
* [Tune performance](https://www.onnxruntime.ai/docs/how-to/tune-performance.html)
* [Quantize models](https://www.onnxruntime.ai/docs/how-to/quantization.html)
* [Deploy on mobile](https://www.onnxruntime.ai/docs/how-to/deploy-on-mobile.html)
* [Use custom ops](https://www.onnxruntime.ai/docs/how-to/add-custom-op.html)
* [Add a new EP](https://www.onnxruntime.ai/docs/how-to/add-execution-provider.html)
* [Reference](https://www.onnxruntime.ai/docs/reference)
* [API documentation](https://www.onnxruntime.ai/docs/reference/api/)
* [Execution Providers](https://www.onnxruntime.ai/docs/reference/execution-providers/)
* [Releases and servicing](https://www.onnxruntime.ai/docs/reference/releases-servicing.html)
* [Citing](https://www.onnxruntime.ai/docs/reference/citing.html)
* [Additional resources](https://www.onnxruntime.ai/docs/resources/)

## Build Pipeline Status
|System|CPU|GPU|EPs|
@@ -49,7 +49,7 @@ Common use cases for ONNX Runtime:

## Data/Telemetry

This project may collect usage data and send it to Microsoft to help improve our products and services. See the [privacy statement](docs/Privacy.md) for more details.
Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the [privacy statement](docs/Privacy.md) for more details.

## Contributions and Feedback

28 changes: 28 additions & 0 deletions ThirdPartyNotices.txt
@@ -4685,3 +4685,31 @@ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND

_____

react-native

MIT License

https://github.com/facebook/react-native

Copyright (c) Facebook, Inc. and its affiliates.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

_____
2 changes: 2 additions & 0 deletions cmake/CMakeLists.txt
@@ -210,6 +210,8 @@ if (onnxruntime_ENABLE_BITCODE)
  endif()
  set(CMAKE_XCODE_ATTRIBUTE_ENABLE_BITCODE YES)
  set(CMAKE_XCODE_ATTRIBUTE_BITCODE_GENERATION_MODE "bitcode")
else()
  set(CMAKE_XCODE_ATTRIBUTE_ENABLE_BITCODE NO)
endif()

if (onnxruntime_ENABLE_MEMORY_PROFILE)
4 changes: 2 additions & 2 deletions cmake/onnxruntime_java.cmake
@@ -166,7 +166,7 @@ if (CMAKE_SYSTEM_NAME STREQUAL "Android")
add_custom_command(TARGET onnxruntime4j_jni POST_BUILD COMMAND ${CMAKE_COMMAND} -E create_symlink $<TARGET_FILE:onnxruntime> ${ANDROID_PACKAGE_ABI_DIR}/$<TARGET_LINKER_FILE_NAME:onnxruntime>)
add_custom_command(TARGET onnxruntime4j_jni POST_BUILD COMMAND ${CMAKE_COMMAND} -E create_symlink $<TARGET_FILE:onnxruntime4j_jni> ${ANDROID_PACKAGE_ABI_DIR}/$<TARGET_LINKER_FILE_NAME:onnxruntime4j_jni>)
# Generate the Android AAR package
add_custom_command(TARGET onnxruntime4j_jni POST_BUILD COMMAND ${GRADLE_EXECUTABLE} -b build-android.gradle -c settings-android.gradle build -DjniLibsDir=${ANDROID_PACKAGE_JNILIBS_DIR} -DbuildDir=${ANDROID_PACKAGE_OUTPUT_DIR} WORKING_DIRECTORY ${JAVA_ROOT})
add_custom_command(TARGET onnxruntime4j_jni POST_BUILD COMMAND ${GRADLE_EXECUTABLE} -b build-android.gradle -c settings-android.gradle build -DjniLibsDir=${ANDROID_PACKAGE_JNILIBS_DIR} -DbuildDir=${ANDROID_PACKAGE_OUTPUT_DIR} -DminSdkVer=${ANDROID_MIN_SDK} WORKING_DIRECTORY ${JAVA_ROOT})

if (onnxruntime_BUILD_UNIT_TESTS)
set(ANDROID_TEST_PACKAGE_ROOT ${JAVA_ROOT}/src/test/android)
@@ -180,6 +180,6 @@ if (CMAKE_SYSTEM_NAME STREQUAL "Android")
add_custom_command(TARGET onnxruntime4j_jni POST_BUILD COMMAND ${CMAKE_COMMAND} -E create_symlink ${ANDROID_PACKAGE_OUTPUT_DIR}/outputs/aar/onnxruntime-debug.aar ${ANDROID_TEST_PACKAGE_LIB_DIR}/onnxruntime-debug.aar)
# Build Android test apk for java package
add_custom_command(TARGET onnxruntime4j_jni POST_BUILD COMMAND ${GRADLE_EXECUTABLE} clean WORKING_DIRECTORY ${ANDROID_TEST_PACKAGE_DIR})
add_custom_command(TARGET onnxruntime4j_jni POST_BUILD COMMAND ${GRADLE_EXECUTABLE} assembleDebug assembleDebugAndroidTest WORKING_DIRECTORY ${ANDROID_TEST_PACKAGE_DIR})
add_custom_command(TARGET onnxruntime4j_jni POST_BUILD COMMAND ${GRADLE_EXECUTABLE} assembleDebug assembleDebugAndroidTest -DminSdkVer=${ANDROID_MIN_SDK} WORKING_DIRECTORY ${ANDROID_TEST_PACKAGE_DIR})
endif()
endif()
4 changes: 4 additions & 0 deletions docs/ContribOperators.md
@@ -3063,6 +3063,10 @@ This version of the operator has been available since version 1 of the 'com.micr
#### Attributes

<dl>
<dt><tt>coordinate_transformation_mode</tt> : string</dt>
<dd></dd>
<dt><tt>mode</tt> : string</dt>
<dd></dd>
<dt><tt>scales</tt> : list of ints</dt>
<dd></dd>
</dl>
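These attributes are supplied when the node is constructed; a hedged sketch using `onnx.helper`, where the operator name `MyContribOp` and all attribute values are placeholders (the real operator name falls outside this hunk):

```python
from onnx import helper

# Hypothetical node in the com.microsoft contrib domain; "MyContribOp" is a
# placeholder -- the actual operator name is not shown in this diff excerpt.
node = helper.make_node(
    "MyContribOp",
    inputs=["X"],
    outputs=["Y"],
    domain="com.microsoft",
    coordinate_transformation_mode="half_pixel",  # string attribute
    mode="nearest",                               # string attribute
    scales=[1, 1, 2, 2],                          # list-of-ints attribute
)
print(node)
```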
111 changes: 0 additions & 111 deletions docs/ONNX_Runtime_Mobile_NNAPI_perf_considerations.md

This file was deleted.
