
Jacobi Poly



Documentation: https://jacobi-poly.readthedocs.io

Source Code: https://github.com/34j/jacobi-poly


Compute Jacobi polynomials from order 0 to N efficiently in NumPy / PyTorch / JAX / any array API-compatible library.
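Evaluating every order up to N in one pass is what makes this efficient: the classical three-term recurrence (a textbook identity, DLMF 18.9.2; that the implementation uses exactly this form is an assumption, not a documented fact) gives

P_0^{(\alpha,\beta)}(x) = 1, \qquad P_1^{(\alpha,\beta)}(x) = \tfrac{1}{2}(\alpha-\beta) + \tfrac{1}{2}(\alpha+\beta+2)\,x,

2n(n+\alpha+\beta)(2n+\alpha+\beta-2)\,P_n^{(\alpha,\beta)}(x) = (2n+\alpha+\beta-1)\bigl[(2n+\alpha+\beta)(2n+\alpha+\beta-2)\,x + \alpha^2-\beta^2\bigr]\,P_{n-1}^{(\alpha,\beta)}(x) - 2(n+\alpha-1)(n+\beta-1)(2n+\alpha+\beta)\,P_{n-2}^{(\alpha,\beta)}(x), \quad n \ge 2,

so P_0 through P_{N-1} fall out of a single O(N) sweep per evaluation point, rather than the O(N^2) work of evaluating each order independently.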

Installation

Install this via pip (or your favourite package manager):

pip install jacobi-poly

Usage

from jacobi_poly import jacobi_all, gegenbauer_all, legendre_all
import torch

# Both CPU (Numba) and CUDA (Numba for CUDA) are supported.
torch.set_default_device("cuda")

x = torch.asarray(1.0)  # the points to evaluate at
alpha = torch.asarray(2.0)
beta = torch.asarray(3.0)
n_end = 4  # from order 0 to order 3

# Jacobi polynomial from order 0 to 3
jacobi = jacobi_all(x, alpha=alpha, beta=beta, n_end=n_end)
print(jacobi)
# tensor([ 1.,  3.,  6., 10.], device='cuda:0')

# Gegenbauer polynomial from order 0 to 3
gegenbauer = gegenbauer_all(x, alpha=alpha, n_end=n_end)
print(gegenbauer)
# tensor([ 1.0000,  4.0000, 10.0000, 20.0000], device='cuda:0')

# Generalized Legendre polynomial from order 0 to 3
# With ndim == 3 this coincides with the ordinary Legendre polynomial
legendre = legendre_all(x, n_end=n_end, ndim=3)
print(legendre)
# tensor([1., 1., 1., 1.], device='cuda:0')
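The same functions work with NumPy arrays (and, by the array API support above, JAX). A minimal sketch; the exact agreement between the alpha = beta = 0 Jacobi case and legendre_all with ndim=3 is assumed from the standard normalization P_n^(0,0) = P_n, not taken from the docs:

import numpy as np
from jacobi_poly import jacobi_all, legendre_all

x = np.linspace(-1.0, 1.0, 5)  # a batch of evaluation points

# P_n^(0,0) is the ordinary Legendre polynomial P_n
jac = jacobi_all(x, alpha=np.asarray(0.0), beta=np.asarray(0.0), n_end=4)

# ndim == 3 also recovers ordinary Legendre (per the comment above)
leg = legendre_all(x, n_end=4, ndim=3)

print(np.allclose(jac, leg))  # expected: True, up to floating-point error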

Contributors ✨

Thanks go to these wonderful people (emoji key):

This project follows the all-contributors specification. Contributions of any kind are welcome!

Credits

Copier

This package was created with Copier and the browniebroke/pypackage-template project template.
