
Faster CPython Benchmark Infrastructure

🔒 ▶️ START A BENCHMARK RUN

Results

Here are some recent and important revisions. 👉 Complete list of results.

Most recent pystats on main (22a4421)

linux x86_64 (linux)

| date | fork/ref | hash/flags | vs. 3.12.6 | vs. 3.13.0rc2 | vs. base |
| --- | --- | --- | --- | --- | --- |
| 2025-01-17 | python/d4544cb232dee5f836a6 | d4544cb (NOGIL) | 1.071x ↓ 📄📈 | 1.103x ↓ 📄📈 | 1.145x ↓ 📄📈🧠 |
| 2025-01-17 | python/d4544cb232dee5f836a6 | d4544cb | 1.091x ↑ 📄📈 | 1.048x ↑ 📄📈 | |
| 2025-01-16 | python/b44ff6d0df9ec2d60be6 | b44ff6d (NOGIL) | 1.062x ↓ 📄📈 | 1.096x ↓ 📄📈 | 1.132x ↓ 📄📈🧠 |
| 2025-01-16 | python/b44ff6d0df9ec2d60be6 | b44ff6d | 1.081x ↑ 📄📈 | 1.038x ↑ 📄📈 | |
| 2025-01-15 | python/d05140f9f77d7dfc753d | d05140f (NOGIL) | 1.073x ↓ 📄📈 | 1.103x ↓ 📄📈 | 1.129x ↓ 📄📈🧠 |
| 2025-01-15 | python/d05140f9f77d7dfc753d | d05140f | 1.074x ↑ 📄📈 | 1.034x ↑ 📄📈 | |
| 2025-01-15 | python/6e4f64109b0eb6c9f1b5 | 6e4f641 | 1.067x ↑ 📄📈 | 1.023x ↑ 📄📈 | |
| 2025-01-15 | python/6e4f64109b0eb6c9f1b5 | 6e4f641 (NOGIL) | 1.097x ↓ 📄📈 | 1.128x ↓ 📄📈 | 1.148x ↓ 📄📈🧠 |
| 2025-01-15 | python/b52de22ac345ad8583bc | b52de22 | 1.051x ↑ 📄📈 | 1.010x ↑ 📄📈 | |
| 2025-01-15 | python/b52de22ac345ad8583bc | b52de22 (NOGIL) | 1.102x ↓ 📄📈 | 1.134x ↓ 📄📈 | 1.142x ↓ 📄📈🧠 |

linux x86_64 (vultr)

| date | fork/ref | hash/flags | vs. 3.12.6 | vs. 3.13.0rc2 | vs. base |
| --- | --- | --- | --- | --- | --- |
| 2025-01-17 | python/d4544cb232dee5f836a6 | d4544cb (NOGIL) | 1.060x ↓ 📄📈 | 1.090x ↓ 📄📈 | 1.135x ↓ 📄📈🧠 |
| 2025-01-17 | python/d4544cb232dee5f836a6 | d4544cb | 1.087x ↑ 📄📈 | 1.048x ↑ 📄📈 | |
| 2025-01-16 | python/211f41316b7f205d18eb | 211f413 | 1.088x ↑ 📄📈 | 1.050x ↑ 📄📈 | 1.003x ↓ 📄📈🧠 |
| 2025-01-16 | python/211f41316b7f205d18eb | 211f413 (NOGIL) | 1.046x ↓ 📄📈 | 1.076x ↓ 📄📈 | |
| 2025-01-16 | python/b44ff6d0df9ec2d60be6 | b44ff6d (NOGIL) | 1.047x ↓ 📄📈 | 1.077x ↓ 📄📈 | 1.126x ↓ 📄📈🧠 |
| 2025-01-16 | python/b44ff6d0df9ec2d60be6 | b44ff6d | 1.090x ↑ 📄📈 | 1.052x ↑ 📄📈 | |
| 2025-01-16 | python/f48702dade921beed3e2 | f48702d | 1.091x ↑ 📄📈 | 1.053x ↑ 📄📈 | 1.007x ↓ 📄📈🧠 |
| 2025-01-16 | python/3893a92d956363fa2443 | 3893a92 | 1.099x ↑ 📄📈 | 1.060x ↑ 📄📈 | |
| 2025-01-16 | mpage/aa_test_2 | f1b75b3 | 1.084x ↑ 📄📈 | 1.046x ↑ 📄📈 | 1.013x ↓ 📄📈🧠 |
| 2025-01-16 | mpage/aa_test_1 | de82556 | 1.092x ↑ 📄📈 | 1.054x ↑ 📄📈 | 1.000x ↑ 📄📈🧠 |
| 2025-01-16 | mpage/aa_test_0 | 54bb905 | 1.080x ↑ 📄📈 | 1.042x ↑ 📄📈 | 1.007x ↓ 📄📈🧠 |
| 2025-01-16 | mpage/gh_115999_specialize | 65a6ad3 | 1.088x ↑ 📄📈 | 1.050x ↑ 📄📈 | 1.001x ↓ 📄📈🧠 |
| 2025-01-16 | mpage/gh_115999_specialize | 65a6ad3 (NOGIL) | 1.059x ↓ 📄📈 | 1.090x ↓ 📄📈 | 1.013x ↓ 📄📈🧠 |
| 2025-01-15 | python/d05140f9f77d7dfc753d | d05140f (NOGIL) | 1.053x ↓ 📄📈 | 1.084x ↓ 📄📈 | 1.127x ↓ 📄📈🧠 |
| 2025-01-15 | python/d05140f9f77d7dfc753d | d05140f | 1.083x ↑ 📄📈 | 1.045x ↑ 📄📈 | |
| 2025-01-15 | mpage/gh_115999_specialize | 73aa5a2 | 1.107x ↑ 📄📈 | 1.068x ↑ 📄📈 | 1.021x ↑ 📄📈🧠 |
| 2025-01-15 | mpage/gh_115999_specialize | 73aa5a2 (NOGIL) | 1.058x ↓ 📄📈 | 1.088x ↓ 📄📈 | 1.004x ↓ 📄📈🧠 |
| 2025-01-15 | mpage/gh_115999_specialize | 3cedaed (NOGIL) | 1.053x ↓ 📄📈 | 1.084x ↓ 📄📈 | 1.000x ↑ 📄📈🧠 |
| 2025-01-15 | mpage/gh_115999_specialize | 3cedaed | 1.102x ↑ 📄📈 | 1.063x ↑ 📄📈 | 1.016x ↑ 📄📈🧠 |
| 2025-01-15 | python/6e4f64109b0eb6c9f1b5 | 6e4f641 | 1.110x ↑ 📄📈 | 1.071x ↑ 📄📈 | |
| 2025-01-15 | python/6e4f64109b0eb6c9f1b5 | 6e4f641 (NOGIL) | 1.059x ↓ 📄📈 | 1.089x ↓ 📄📈 | 1.153x ↓ 📄📈🧠 |
| 2025-01-15 | python/b52de22ac345ad8583bc | b52de22 | 1.107x ↑ 📄📈 | 1.068x ↑ 📄📈 | |
| 2025-01-15 | python/b52de22ac345ad8583bc | b52de22 (NOGIL) | 1.066x ↓ 📄📈 | 1.096x ↓ 📄📈 | 1.156x ↓ 📄📈🧠 |
| 2025-01-14 | python/1c13c56a34fc4c4d8969 | 1c13c56 (NOGIL) | 1.167x ↓ 📄📈 | 1.194x ↓ 📄📈 | |
| 2025-01-13 | python/3efe28a40b136164f0d3 | 3efe28a (NOGIL) | 1.167x ↓ 📄📈 | 1.195x ↓ 📄📈 | |
| 2025-01-11 | python/553cdc6d6856c1b4539a | 553cdc6 (NOGIL) | 1.160x ↓ 📄📈 | 1.187x ↓ 📄📈 | 1.002x ↓ 📄📈🧠 |
| 2024-12-20 | python/128cc47fbd44e3e09c50 | 128cc47 (NOGIL) | 1.191x ↓ 📄📈 | 1.218x ↓ 📄📈 | 1.003x ↓ 📄📈🧠 |
| 2024-12-20 | python/78ffba4221dcb2e39fd5 | 78ffba4 (NOGIL) | 1.190x ↓ 📄📈 | 1.217x ↓ 📄📈 | |

* indicates that the exact same version of pyperformance was not used.

Longitudinal speed improvement

Improvement of the geometric mean of key merged benchmarks, computed with pyperf compare. The results have a resolution of 0.01 (1%).
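
This kind of geometric-mean comparison can also be reproduced locally with pyperf, given two result files: pyperf compare_to prints a per-benchmark table followed by a "Geometric mean" summary line. The .json file names below are placeholders for illustration, not files guaranteed to exist in this repository.

# Compare a head run against a baseline run; the final "Geometric mean"
# line is the kind of figure summarized above. File names are hypothetical.
python -m pyperf compare_to baseline-3.12.6.json head-nogil.json --table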

Configuration speed improvement

Documentation

Running benchmarks from the GitHub web UI

Visit the 🔒 benchmark action and click the "Run Workflow" button.

The available parameters are:

  • fork: The fork of CPython to benchmark. If benchmarking a pull request, this would normally be your GitHub username.
  • ref: The branch, tag or commit SHA to benchmark. If a SHA, it must be the full SHA, since finding it by a prefix is not supported.
  • machine: The machine to run on. One of linux-amd64 (default), windows-amd64, darwin-arm64 or all.
  • benchmark_base: If checked, the base of the selected branch will also be benchmarked. The base is determined by running git merge-base upstream/main $ref.
  • pystats: If checked, collect the pystats from running the benchmarks.

To watch the progress of the benchmark, select it from the 🔒 benchmark action page. It may be canceled from there as well. To show only your benchmark workflows, select your GitHub ID from the "Actor" dropdown.
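
If you prefer the command line, progress can also be followed with standard gh subcommands (a generic gh usage sketch, not a workflow specific to this repo):

# List recent runs of the benchmark workflow and note the ID of yours
gh run list --workflow=benchmark.yml --limit 10

# Stream the status of that run until it completes
gh run watch <run-id>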

When the benchmarking is complete, the results are published to this repository and will appear in the complete table. Each set of benchmarks will have:

  • The raw .json results from pyperformance.
  • Comparisons against important reference releases, as well as the merge base of the branch if benchmark_base was selected. These include:
    • A markdown table produced by pyperf compare_to.
    • A set of "violin" plots showing the distribution of results for each benchmark.

The most convenient way to get results locally is to clone this repo and git pull from it.
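
For example, using the standard GitHub clone URL for this repository:

# One-time clone of the results repository
git clone https://github.com/facebookexperimental/free-threading-benchmarking.git
cd free-threading-benchmarking

# Later, pick up newly published results
git pull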

Running benchmarks from the GitHub CLI

To automate benchmarking runs, it may be more convenient to use the GitHub CLI. Once you have gh installed and configured, you can run benchmarks by cloning this repository and then, from inside it, running:

gh workflow run benchmark.yml -f fork=me -f ref=my_branch

Any of the parameters described above can be set at the command line using the -f key=value syntax.
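
For example, to benchmark a branch on all machine types and also benchmark its merge base (passing the boolean as the string true is an assumption about how the workflow declares its inputs):

gh workflow run benchmark.yml -f fork=me -f ref=my_branch -f machine=all -f benchmark_base=true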

Collecting Linux perf profiling data

To collect Linux perf sampling-profile data for a benchmarking run, run the _benchmark action and check the perf checkbox. Follow this with a run of the _generate action to regenerate the plots.
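
In principle this can also be triggered from the CLI; the workflow file names and the perf input below are guesses based on the action names above, so check the actual workflow files before relying on them:

# Hypothetical: start a run of the _benchmark workflow with perf sampling enabled
gh workflow run _benchmark.yml -f fork=me -f ref=my_branch -f perf=true

# Hypothetical: regenerate the plots afterwards
gh workflow run _generate.yml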

License

This repo is licensed under the BSD 3-Clause License, as found in the LICENSE file.
