[pre-commit.ci] pre-commit autoupdate #865


Open: wants to merge 1 commit into main from pre-commit-ci-update-config

Conversation

pre-commit-ci[bot] (Contributor) commented Apr 21, 2025

hameerabbasi previously approved these changes Apr 21, 2025

@hameerabbasi (Collaborator)
cc @mtsokol @willow-ahrens Seems like we're ignoring keepdims here; assuming it's True. Was this related to the change discussed in the dev meeting two weeks ago?
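For reference, the Array API behavior in question: `sum(..., keepdims=True)` must retain the reduced axis with size 1, while the default drops it. A minimal illustration using NumPy (which follows the same convention) of what "ignoring keepdims; assuming it's True" would look like shape-wise:

```python
import numpy as np

# Per the Array API, sum(..., keepdims=True) retains the reduced axis
# with size 1; keepdims=False (the default) drops that axis entirely.
x = np.ones((3, 4))

flat = np.sum(x, axis=0)                 # keepdims dropped -> shape (4,)
kept = np.sum(x, axis=0, keepdims=True)  # keepdims kept   -> shape (1, 4)

print(flat.shape, kept.shape)
```

A backend that always behaves like the `keepdims=True` branch would return `(1, 4)` even when the caller asked for `(4,)`.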


codspeed-hq bot commented Apr 21, 2025

CodSpeed Performance Report

Merging #865 will degrade performance by 20.03%

Comparing pre-commit-ci-update-config (70ca6ca) with main (0044437)

Summary

⚡ 1 improvement
❌ 1 regression (👁 1)
✅ 338 untouched benchmarks

Benchmarks breakdown

| Benchmark | BASE | HEAD | Change |
| --- | --- | --- | --- |
| `test_index_fancy[side=100-rank=1-format='coo']` | 1.4 ms | 1.1 ms | +17.66% |
| 👁 `test_index_slice[side=100-rank=2-format='gcxs']` | 1.9 ms | 2.4 ms | -20.03% |

@willow-ahrens (Collaborator)

We never supported keepdims; a few days ago we merged a change that we thought would add support for it, but now it seems to be broken, though only in the lazy case. I'll investigate.
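The failure mode described here — correct eagerly, wrong only when deferred — typically happens when a lazy wrapper records a reduction but forgets to forward one of its keyword arguments into the deferred call. A pure-Python sketch of the pattern (none of these names come from Finch; this is an illustration, not its implementation):

```python
import numpy as np

class LazyTensor:
    """Hypothetical toy lazy tensor: operations build up a thunk that is
    only applied to the underlying data when compute() is called."""

    def __init__(self, data, thunk=None):
        self.data = data
        self.thunk = thunk or (lambda a: a)

    def sum(self, axis=None, keepdims=False):
        prev = self.thunk
        # Correct: forward keepdims into the deferred call. Dropping it
        # here would make eager and lazy results disagree in shape.
        return LazyTensor(
            self.data,
            lambda a: np.sum(prev(a), axis=axis, keepdims=keepdims),
        )

    def compute(self):
        return self.thunk(self.data)

eager = np.sum(np.eye(5), axis=0, keepdims=True)                  # shape (1, 5)
lazy = LazyTensor(np.eye(5)).sum(axis=0, keepdims=True).compute()  # shape (1, 5)
print(eager.shape == lazy.shape)
```

If the lambda inside `sum` omitted `keepdims=keepdims`, only the lazy path would silently fall back to the default and return shape `(5,)` — the eager path would still look correct.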

@willow-ahrens (Collaborator)

By the way: Should these finch tests be moved to finch-tensor-python, or at least called from CI there?

@willow-ahrens (Collaborator)

I cannot reproduce the failing case locally. I am getting:

>>> import numpy
>>> import scipy.sparse as sps
>>> import finch
>>> A = finch.lazy(finch.Tensor(numpy.eye(5)))
>>> B = finch.lazy(finch.Tensor(sps.csr_matrix(numpy.eye(5))))
>>> finch.compute(finch.sum(finch.add(A, B), axis=0)).todense()
array([2., 2., 2., 2., 2.])

@willow-ahrens (Collaborator)

Is the problem just that we need to cut a new release of finch-tensor-python?

@mtsokol (Collaborator) commented Apr 21, 2025

I see that the new release arrived. I cleared CI caches and re-triggered jobs.

@willow-ahrens (Collaborator)

Was that the fix, then? Perhaps I should have bumped the breaking version number with the last Finch.jl release. It was a bugfix, but a breaking one.

@mtsokol (Collaborator) commented Apr 21, 2025

> Was that the fix, then? Perhaps I should have bumped the breaking version number with the last Finch.jl release. It was a bugfix, but a breaking one.

Yes, the new release of finch-tensor-python fixed it! Previously, an older finch-tensor-python was installed alongside the newest Finch.jl, so the Python layer was missing the keepdims handling (I'm surprised that it installed the newest Finch.jl, as we also hardcode the Finch version).
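One way to surface this kind of stale-dependency mismatch at import time rather than in CI is a minimal version-floor check. A sketch using only the stdlib (the helper names here are hypothetical, and real code should prefer `packaging.version` for full PEP 440 handling):

```python
from importlib.metadata import PackageNotFoundError, version

def meets_floor(installed: str, required: str) -> bool:
    """Compare dotted numeric versions, e.g. '0.11.9' >= '0.11.5'.
    Simplified: assumes purely numeric components."""
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(installed) >= to_tuple(required)

def check_dependency(name: str, required: str) -> None:
    """Raise ImportError if `name` is missing or older than `required`."""
    try:
        installed = version(name)
    except PackageNotFoundError:
        raise ImportError(f"{name} is not installed (need >= {required})")
    if not meets_floor(installed, required):
        raise ImportError(f"{name} {installed} is too old (need >= {required})")

print(meets_floor("0.11.9", "0.11.5"))
```

Calling something like `check_dependency("finch-tensor", "<minimum version>")` at package import would have turned the silent missing-keepdims behavior into a loud error.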

The Numba Array API error is unrelated, IMO; for the Finch Array API I need to adjust the skips and xfails file.
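For context on the skips-and-xfails adjustment: conformance suites like array-api-tests are commonly driven by a plain-text file listing test ids to skip or expect-fail. A hypothetical sketch of parsing such a listing (the file format, test ids, and function name here are assumptions, not the suite's actual API):

```python
def load_ids(text: str) -> set[str]:
    """Parse a skips/xfails listing: one test id per line,
    '#' starts a comment, blank lines are ignored."""
    ids = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if line:
            ids.add(line)
    return ids

xfails = load_ids("""
# known Finch edge cases, see finch-tensor/Finch.jl#743
array_api_tests/test_statistical_functions.py::test_sum
array_api_tests/test_indexing.py::test_take  # flaky
""")
print(sorted(xfails))
```

A test runner would then compare each collected test id against this set and mark matches as expected failures instead of hard errors.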

mtsokol previously approved these changes Apr 21, 2025

@mtsokol (Collaborator) commented Apr 21, 2025

The additional Numba and Finch Array API test failures are due to the new hypothesis version, IMO.

@mtsokol (Collaborator) commented Apr 22, 2025

A few edge cases need to be resolved first to make the Finch Array API job green (described in finch-tensor/Finch.jl#743).

@pre-commit-ci pre-commit-ci bot force-pushed the pre-commit-ci-update-config branch from 9f9575e to a62552e Compare April 28, 2025 18:36
@pre-commit-ci pre-commit-ci bot force-pushed the pre-commit-ci-update-config branch from a62552e to dd9b9eb Compare May 5, 2025 18:46
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.11.5 → v0.11.9](astral-sh/ruff-pre-commit@v0.11.5...v0.11.9)
@pre-commit-ci pre-commit-ci bot force-pushed the pre-commit-ci-update-config branch from dd9b9eb to 70ca6ca Compare May 12, 2025 18:38