Before sending your pull requests, make sure you have followed this list.
- Read the contributing guidelines.
- Read the Code of Conduct.
- Ensure you have signed the Contributor License Agreement (CLA).
- Check that your changes are consistent with the guidelines.
- Check that your changes are consistent with the coding style.
- Run the unit tests.
We'd love to accept your patches! Before we can take them, we have to jump a couple of legal hurdles.
Please fill out either the individual or corporate Contributor License Agreement (CLA).
- If you are an individual writing original source code and you're sure you own the intellectual property, then you'll need to sign an individual CLA.
- If you work for a company that wants to allow you to contribute your work, then you'll need to sign a corporate CLA.
Follow either of the two links above to access the appropriate CLA and instructions for how to sign and return it. Once we receive it, we'll be able to accept your pull requests.
NOTE: Only original source code from you and other people that have signed the CLA can be accepted into the main repository.
If you have improvements to TensorFlow, send us your pull requests! For those just getting started, GitHub has a how-to.
TensorFlow team members will be assigned to review your pull requests. Once a pull request is approved and passes continuous integration checks, a TensorFlow team member will apply the "ready to pull" label to your change. This means we are working on getting your pull request submitted to our internal repository. After the change has been submitted internally, your pull request will be merged automatically on GitHub.
If you want to contribute, start working through the TensorFlow codebase, navigate to the GitHub "issues" tab, and start looking through interesting issues. If you are not sure where to start, try one of the smaller/easier issues here, i.e., issues with the "good first issue" label, and then take a look at the issues with the "contributions welcome" label. These are issues that we believe are particularly well suited for outside contributions, often because we probably won't get to them right now. If you decide to start on an issue, leave a comment so that other people know that you're working on it. If you want to help out but would rather not work alone, use the issue comment thread to coordinate.
Before sending your pull request for review, make sure your changes are consistent with the guidelines and follow the TensorFlow coding style.
- Include unit tests when you contribute new features, as they help to a) prove that your code works correctly, and b) guard against future breaking changes to lower the maintenance cost.
- Bug fixes also generally require unit tests, because the presence of bugs usually indicates insufficient test coverage.
- Keep API compatibility in mind when you change code in core TensorFlow, e.g., code in tensorflow/core and tensorflow/python. TensorFlow has passed version 1.0 and hence cannot make non-backward-compatible API changes without a major release. Reviewers of your pull request will comment on any API compatibility issues.
- When you contribute a new feature to TensorFlow, the maintenance burden is (by default) transferred to the TensorFlow team. This means that the benefit of the contribution must be compared against the cost of maintaining the feature.
- Full new features (e.g., a new op implementing a cutting-edge algorithm) typically will live in tensorflow/addons to get some airtime before a decision is made regarding whether they are to be migrated to the core.
- As every PR requires several CPU/GPU hours of CI testing, we discourage submitting PRs to fix one typo, one warning, etc. We recommend fixing the same issue at least at the file level (e.g., fix all typos in a file, fix all compiler warnings in a file, etc.).
Include a license at the top of new files.
- C/C++ license example
- Python license example
- Java license example
- Go license example
- Bash license example
- HTML license example
- JavaScript/TypeScript license example
Bazel BUILD files also need to include a license section, e.g., BUILD example.
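As a concrete illustration, new Python or Bash files typically start with the standard Apache 2.0 header shown below. This is a sketch only; copy the exact text from the linked license examples for your language and fill in the year:

```bash
# Copyright <year> The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```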
Changes to TensorFlow C++ code should conform to the Google C++ Style Guide.
Use clang-format to check the formatting of your C/C++ changes. To install clang-format on Ubuntu 16.04, do:
apt-get install -y clang-format
You can check a C/C++ file by doing:
clang-format <my_cc_file> --style=google > /tmp/my_cc_file.cc
diff <my_cc_file> /tmp/my_cc_file.cc
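If the diff shows formatting differences, clang-format can also rewrite the file in place instead of writing to a temporary copy. A minimal sketch using the standard -i flag (make sure the file is under version control first):

```bash
# Reformat the file in place using the Google style
clang-format -i --style=google <my_cc_file>
```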
Changes to TensorFlow Python code should conform to the Google Python Style Guide.
Use pylint to check your Python changes. To install pylint and check a file with pylint against TensorFlow's custom style definition:
pip install pylint
pylint --rcfile=tensorflow/tools/ci_build/pylintrc myfile.py
Note that pylint --rcfile=tensorflow/tools/ci_build/pylintrc should be run from the top-level tensorflow directory.
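To lint only the Python files touched by your change instead of listing them by hand, one option is to feed the output of git diff to pylint. This is a sketch that assumes your branch is based on the upstream master branch:

```bash
# Lint every modified Python file against TensorFlow's pylint configuration
git diff --name-only master -- '*.py' | xargs pylint --rcfile=tensorflow/tools/ci_build/pylintrc
```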
- Google Java Style Guide
- Google JavaScript Style Guide
- Google Shell Style Guide
- Google Objective-C Style Guide
If you have Docker installed on your system, you can perform a sanity check on your changes by running the command:
tensorflow/tools/ci_build/ci_build.sh CPU tensorflow/tools/ci_build/ci_sanity.sh
This will catch most license, Python coding style and BUILD file issues that may exist in your changes.
There are two ways to run TensorFlow unit tests.
- Using tools and libraries installed directly on your system.
Refer to the CPU-only developer Dockerfile and GPU developer Dockerfile for the required packages. Alternatively, use the said Docker images, e.g., tensorflow/tensorflow:devel and tensorflow/tensorflow:devel-gpu, for development to avoid installing the packages directly on your system (in which case remember to change directory from /root to /tensorflow once you get into the running container so bazel can find the tensorflow workspace).
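For example, a minimal way to get into the CPU development container looks roughly like this. This is a sketch; the image tag and workspace path are taken from the note above and may change between releases:

```bash
# Start an interactive shell in the CPU development image
docker run -it tensorflow/tensorflow:devel bash
# The container starts in /root; switch to the TensorFlow workspace so bazel can find it
cd /tensorflow
```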
Once you have the packages installed, you can run a specific unit test with bazel as follows:
If the tests are to be run on GPU, add CUDA paths to LD_LIBRARY_PATH and add the cuda option flag:
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export flags="--config=opt --config=cuda -k"
For example, to run all tests under tensorflow/python, do:
bazel test ${flags} //tensorflow/python/...
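To run a single test target rather than a whole package, name it explicitly. The target below is purely illustrative, and --test_output=errors simply prints logs for failing tests to the console:

```bash
# Run one hypothetical test target and show output only for failures
bazel test ${flags} --test_output=errors //tensorflow/python/kernel_tests:my_op_test
```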
- Using Docker and TensorFlow's CI scripts.
# Install Docker first, then this will build and run CPU tests
tensorflow/tools/ci_build/ci_build.sh CPU bazel test //tensorflow/...
See TensorFlow Builds for details.
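The ci_build.sh script accepts an arbitrary bazel command, so you can also scope the containerized run to the part of the tree you changed, for example (illustrative target pattern):

```bash
# Build and test only the Python packages inside the CI container
tensorflow/tools/ci_build/ci_build.sh CPU bazel test //tensorflow/python/...
```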