
[discussion][donotmerge]: Copy Python implementation for float::div_euclid #133485

Open · wants to merge 1 commit into base: master

Conversation

tesuji
Contributor

@tesuji tesuji commented Nov 26, 2024

cuviper pointed out a licensing issue with copying Python code, which as I understand it is PSF-owned (GPL-compatible).
This PR cannot be merged as is. I am leaving this PR open in the meantime for more discussion.


This is a breaking change that tries to fix https://internals.rust-lang.org/t/bug-rounding-error-that-break-the-ensurance-of-f32-div-euclid/21917.

In summary, Rust's float::div_euclid and Python's divmod disagree with each other:

                          Python    Rust    This PR
div_euclid(11.0, 1.1)        9.0    10.0        9.0
div_euclid(-11.0, 1.1)     -10.0    -9.0      -10.0
div_euclid(11.0, -1.1)     -10.0   -10.0      -10.0
div_euclid(-11.0, -1.1)      9.0    11.0        9.0
div_euclid(0.5, 1.1)         0.0     0.0        0.0
div_euclid(-0.5, 1.1)       -1.0    -1.0       -1.0
div_euclid(0.5, -1.1)       -1.0     0.0       -1.0
div_euclid(-0.5, -1.1)       0.0     1.0        0.0

Personally I think Python's behavior is more correct, because given real numbers a and b,
with q the Euclidean quotient of a and b and r the Euclidean remainder,
q*b + r should be equal to a.
Take the first example, 11.0 and 1.1: currently with Rust, q*b + r is 10.0*1.1 + 1.0999998 ≈ 12.1, which is not equal to 11.0.

FIXME: The Python link uses fmod to get a more exact remainder, but this PR only uses the raw % operator.
Is this a problem in practice?

Reference:
* <https://github.com/python/cpython/blob/3.13/Objects/floatobject.c#L662>

r? @ghost
@rustbot
Collaborator

rustbot commented Nov 26, 2024

r? @cuviper

rustbot has assigned @cuviper.
They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.

Use r? to explicitly pick a reviewer

@rustbot rustbot added S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-libs Relevant to the library team, which will review and decide on the PR/issue. labels Nov 26, 2024
@tesuji tesuji changed the title breaking change: Reimplements float::div_euclid by copying Python implementation breaking change: Copy Python implementation for float::div_euclid Nov 26, 2024
@Urgau Urgau added the I-libs-api-nominated Nominated for discussion during a libs-api team meeting. label Nov 26, 2024
@Noratrieb
Member

Per https://doc.rust-lang.org/std/primitive.f32.html#method.div_euclid: the documentation clearly states that this method is precise to a rounded infinite-precision result, so its current implementation is just wrong and the new implementation is correct. libs-api can have a look, but this seems very correct to me.

@traviscross
Contributor

traviscross commented Nov 26, 2024

To inline the problem a bit further:

fn main() {
    let (lhs, rhs) = (11.0f64, 1.1f64);
    // From the docs of `div_euclid`:
    //
    // This computes the integer `n`...
    let n = lhs.div_euclid(rhs);
    let r = lhs.rem_euclid(rhs);
    dbg!(lhs, rhs, n, r);
    // lhs = 11.0
    // rhs = 1.1
    // n = 10.0
    // r = 1.0999999999999992
    //
    // ...such that `self = n * rhs + self.rem_euclid(rhs)`.
    assert_eq!(lhs, n * rhs + r);
    //~^ PANIC
    // assertion `left == right` failed
    //  left: 11.0
    // right: 12.1
}

Playground link

That is, the behavior contradicts what our documentation states (and what good mathematical sense would suggest) about the invariant that div_euclid and rem_euclid hold in relation to each other.

@cuviper
Member

cuviper commented Nov 26, 2024

Per https://doc.rust-lang.org/std/primitive.f32.html#method.div_euclid: the documentation clearly states that this method is precise to a rounded infinite-precision result, so its current implementation is just wrong and the new implementation is correct. libs-api can have a look, but this seems very correct to me.

The documentation is not necessarily wrong because rounding from an infinite result can include rounding up. It's definitely bad that div/rem aren't in sync though.

I'm worried about the license implications of blatantly copying Python too. If that's indeed a problem, we may need someone else to do a clean-room fix.

@traviscross
Contributor

Agreed. Over in...

...it was suggested that:

...it should be possible to achieve a consistent behavior by just calculating one euclidean function based on the other one. E.g.

pub fn rem_euclid(self, rhs: f64) -> f64 {
    self - self.div_euclid(rhs) * rhs
}

We could do something like that.

@cuviper
Member

cuviper commented Nov 26, 2024

On rounding:

  • 1.1f32 is 1.10000002384185791015625
  • Wolfram Alpha says 11 divided by that is 9.9999997832558418782008370876130822005470839295152332671071558192...
  • This rounds up to 10f32! The closest smaller value would be 9.99999904632568359375.

@traviscross
Contributor

traviscross commented Nov 26, 2024

This paper has an extensive analysis of this problem:

There is a companion paper:

@tczajka

tczajka commented Nov 26, 2024

Personally I think the Python's behavior is more correct.

Are you going for "more correct" or actually correct in all cases? I believe that the proposed (Python) implementation only works correctly in some cases: if the integer result can be exactly represented (such as in the example of 9). But it doesn't necessarily round correctly when the result can't be exactly represented (very large integer results). More care (and a proof) is necessary to make sure of correct rounding in all cases.
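
To make the representability concern concrete, a minimal sketch (not from this PR) of why large quotients force rounding in f32:

fn main() {
    // f32 represents every integer up to 2^24 exactly; beyond that, some
    // integers (such as 2^25 + 1) have no exact representation, so a
    // div_euclid result of that size must itself be rounded carefully.
    let big: u32 = (1 << 25) + 1; // 33_554_433
    assert_ne!(big as f32 as u32, big); // rounds to 33_554_432
}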

@tczajka

tczajka commented Nov 26, 2024

  • 1.1f32 is 1.10000002384185791015625
  • Wolfram Alpha says 11 divided by that is 9.9999997832558418782008370876130822005470839295152332671071558192...
  • This rounds up to 10f32! The closest smaller value would be 9.99999904632568359375.

The documentation guarantees "the rounded infinite-precision result". Therefore the correct behavior is unambiguously 9f32:

  1. Compute floor(11 / 1.10000002384185791015625) = floor(9.99999978.....) = 9 in infinite precision.
  2. Round the infinite-precision result, 9, to the nearest representable value, which is 9f32. No rounding necessary in this case because 9 can be exactly represented.
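
As a sanity check (a sketch, not from the thread; f64 has more than enough precision to stand in for the infinite-precision step on this particular example):

fn main() {
    let rhs = 1.1f32 as f64; // exactly 1.10000002384185791015625
    let q = (11.0f64 / rhs).floor(); // step 1: floor(9.99999978...) = 9.0
    assert_eq!(q as f32, 9.0f32); // step 2: rounding to f32 is exact here
}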

@Neutron3529
Contributor

Neutron3529 commented Nov 26, 2024

Maybe we need an unsafe, since the function will not always yield results we want to have.

Plus, since it is documented that 0 <= rem < divisor.abs(), div_euclid(-11.0, -1.1) should be 10 rather than 9, since with infinite precision { -1.1f64 as f128 * 10f64 as f128 } < { -11.0f64 as f128 } is true, and -1.1f64 * 9 > -11.0. Otherwise, calling div_euclid for integers will yield inconsistent results.

@tczajka

tczajka commented Nov 26, 2024

Python divmod doesn't implement Euclidean division; it implements flooring division, which differs when the divisor is negative. (-0.5).div_euclid(-1.1) should be 1.0, not 0.0.
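
To illustrate the difference, a sketch (the helper names are mine, not from this PR): flooring division rounds the exact quotient toward negative infinity, so the remainder takes the sign of the divisor, while Euclidean division keeps the remainder in [0, |b|).

fn div_floor(a: f64, b: f64) -> f64 {
    // Flooring division: round the exact quotient toward negative infinity.
    (a / b).floor()
}

fn div_euclid_ref(a: f64, b: f64) -> f64 {
    // Euclidean division: the integer q such that a = q*b + r with 0 <= r < |b|.
    // (Same shape as the current core implementation, with its known caveats.)
    let q = (a / b).trunc();
    if a % b < 0.0 {
        if b > 0.0 { q - 1.0 } else { q + 1.0 }
    } else {
        q
    }
}

fn main() {
    // -0.5 / -1.1 = 0.4545..., so flooring gives 0.0, while Euclidean
    // division gives 1.0 so that the remainder (about 0.6) stays non-negative.
    assert_eq!(div_floor(-0.5, -1.1), 0.0);
    assert_eq!(div_euclid_ref(-0.5, -1.1), 1.0);
}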

@traviscross
Contributor

traviscross commented Nov 26, 2024

Having analyzed this a bit, and having looked at the Rust implementation in core (but notably, not the code in this PR or that for any other implementation), and having read through the paper I cited above, my feeling is increasingly that any fix other than...

pub fn rem_euclid(self, rhs: f64) -> f64 {
    self - self.div_euclid(rhs) * rhs
}

...should be accompanied by a (preferably machine-checked) proof of correctness.

@Neutron3529

This comment was marked as duplicate.

@Neutron3529
Contributor

Neutron3529 commented Nov 26, 2024

Having analyzed this a bit, and having looked at the Rust implementation in core (but notably, not the code in this PR or that for any other implementation), and having read through the paper I cited above, my feeling is increasingly that any fix other than...

pub fn rem_euclid(self, rhs: f64) -> f64 {
    self - self.div_euclid(rhs) * rhs
}

...should be accompanied by a (preferably machine-checked) proof of correctness.

I suppose we should modify div_euclid/rem_euclid a little:

If we want to ensure self == self.div_euclid(div) * div + self.rem_euclid(div),
we have the following issues:

  1. an accuracy issue: self.rem_euclid(div) now has fewer significant bits
  2. we cannot ensure 0 <= self.rem_euclid(div) < div.abs()

I have another idea: mathematically, ensuring self == self.div_euclid(div) * div + self.rem_euclid(div) is equivalent to ensuring (self - self.rem_euclid(div)) / div == self.div_euclid(div), which agrees with the current definition for integers, and this definition determines the return value of self.div_euclid(div).

What's more, since the calculation of self.div_euclid(div) involves calculating self.rem_euclid(div) first, it might be better to allow returning the remainder as well, which leads to a pub fn divmod(self, div: Self) -> (Self, Self).

It seems introducing such a divmod requires an ACP.


impl:

pub fn div_euclid(self, div: Self) -> Self { self.divmod_euclid(div).0 }
pub fn divmod_euclid(self, div: Self) -> (Self, Self) {
    // keep rem_euclid as-is
    let rem = self.rem_euclid(div);
    (((self - rem) / div).round(), rem) // the `.round()` might not be necessary, but I cannot be sure.
}

@traviscross
Contributor

traviscross commented Nov 26, 2024

Yes, in my analysis I'd walked down this same path, done this same rearrangement of terms, and then upon hitting...

(((self - rem)/div).round(), rem) // the `.round()` might not be necessary, but I cannot be sure.

...decided that I wasn't sure either, but from having read that paper I knew the question was subtle, and so that's why I say it feels like it calls for a proof.

The point about losing bits is a real consideration, though, and would deserve its own analysis.

@tczajka

tczajka commented Nov 26, 2024

This paper has an extensive analysis of this problem:

The paper only addresses the easy case, where the inputs are positive and the result is small enough that it can be exactly represented in floating point.

Summary of the paper: even in that case you don't always get correct results if you do floating point division (we already know this from the example here), unless you change the rounding mode first.

But there is a relatively easy solution for this case: scale mantissas to be integers, and use integer division (u128 / u64 -> u64)!

A somewhat harder case is when the resulting integer is larger than 2^53 (or larger than 2^64), in which case getting the rounding correct is a little bit more subtle, but this is still doable using an integer u128 / u64 -> u64 division and a possible adjustment by 1.

There are additional cases for negative numbers.

I can implement this (with proofs).
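
A minimal sketch of the mantissa-scaling idea for the easy case (positive, normal f32 inputs; subnormals, signs, huge exponent differences, and the harder rounding cases are deliberately out of scope; the helper names are mine):

fn div_floor_by_integers(a: f32, b: f32) -> Option<f32> {
    assert!(a > 0.0 && b > 0.0 && a.is_normal() && b.is_normal());
    if a < b {
        return Some(0.0); // the exact quotient is in (0, 1), so its floor is 0
    }
    let (ma, ea) = decompose(a);
    let (mb, eb) = decompose(b);
    // a / b = (ma * 2^(ea - eb)) / mb, and ea >= eb because a >= b here,
    // so the floor can be computed with exact integer division.
    let shift = (ea - eb) as u32;
    if shift > 100 {
        return None; // would overflow the u128; out of scope for this sketch
    }
    let q = ((ma as u128) << shift) / (mb as u128);
    // The cast rounds to nearest (ties to even) if q is too big for f32.
    Some(q as f32)
}

// Split a positive normal f32 into (integer mantissa with the implicit bit,
// exponent of that integer mantissa).
fn decompose(x: f32) -> (u64, i32) {
    let bits = x.to_bits();
    let exp = ((bits >> 23) & 0xff) as i32;
    let frac = (bits & 0x7f_ffff) as u64;
    ((1 << 23) | frac, exp - 127 - 23)
}

fn main() {
    // The running example: the exact quotient is 9.99999978..., so its floor is 9.
    assert_eq!(div_floor_by_integers(11.0, 1.1), Some(9.0));
}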

@tesuji tesuji changed the title breaking change: Copy Python implementation for float::div_euclid [discussion] Copy Python implementation for float::div_euclid Nov 26, 2024
@tesuji tesuji changed the title [discussion] Copy Python implementation for float::div_euclid [discussion][donotmerge]: Copy Python implementation for float::div_euclid Nov 26, 2024
@rustbot
Collaborator

rustbot commented Nov 26, 2024

Failed to set assignee to ghost: invalid assignee

Note: Only org members with at least the repository "read" role, users with write permissions, or people who have commented on the PR may be assigned.

@cuviper
Member

cuviper commented Nov 26, 2024

  • 1.1f32 is 1.10000002384185791015625
  • Wolfram Alpha says 11 divided by that is 9.9999997832558418782008370876130822005470839295152332671071558192...
  • This rounds up to 10f32! The closest smaller value would be 9.99999904632568359375.

The documentation guarantees "the rounded infinite-precision result". Therefore the correct behavior is unambiguously 9f32:

1. Compute floor(11 /  1.10000002384185791015625) = floor(9.99999978.....) = 9 in infinite precision.

Oh. 🤦 Yes, I neglected the floor.

On the other hand, rem_euclid actually gets the true result:

  • 11 - 9 * 1.10000002384185791015625 = 1.09999978542327880859375
  • 11f32.rem_euclid(1.1) prints 1.0999998, but the raw 0x3f8ccccb is precisely 1.09999978542327880859375

I'm sure there are other inputs that would have some rounding error, but I wonder if this is generally more accurate than div_euclid?

So regarding this suggested change:

pub fn rem_euclid(self, rhs: f64) -> f64 {
    self - self.div_euclid(rhs) * rhs
}

The current rem_euclid is the simpler of the two, and if that's really more accurate, maybe we should leave that and define this the other way:

pub fn div_euclid(self, rhs: f64) -> f64 {
    (self - self.rem_euclid(rhs)) / rhs
}

(might need to round as well in case that is slightly off in either direction)
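
For what it's worth, the motivating example does come out right under that alternative definition (a quick check, not a general argument):

fn main() {
    let (x, y) = (11.0f32, 1.1f32);
    // rem_euclid is exact here (1.09999978542327880859375); the subtraction
    // and the division each round, but land back on exactly 9.0 for these inputs.
    let q = (x - x.rem_euclid(y)) / y;
    assert_eq!(q, 9.0);
}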

@cuviper
Member

cuviper commented Nov 26, 2024

@rustbot author
(since they've tagged "[discussion][donotmerge]" now)

@rustbot rustbot added S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Nov 26, 2024
@joshtriplett
Member

cc @BartMassey for help with floating point precision and confusion.

@joshtriplett
Member

👍 for computing one in terms of the other to guarantee by construction that they always have the expected relationship.

Big shrug for the change to the observed behavior of div_euclid. The current one seems broken, the Python one also seems differently broken, and in both cases I think using these methods on floating-point (rather than integer) values seems deeply questionable.

@Amanieu
Member

Amanieu commented Nov 26, 2024

We discussed this in the libs-api meeting and concluded that we definitely need div_euclid and rem_euclid to give consistent results as per the relationship described in the documentation.

As such, we are happy with a breaking change here that makes the result "more correct".

@rfcbot fcp merge

@Amanieu Amanieu removed the I-libs-api-nominated Nominated for discussion during a libs-api team meeting. label Nov 26, 2024
@rfcbot

rfcbot commented Nov 26, 2024

Team member @Amanieu has proposed to merge this. The next step is review by the rest of the tagged team members:

Concerns:

Once a majority of reviewers approve (and at most 2 approvals are outstanding), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up!

See this document for info about what commands tagged team members can give me.

@rfcbot rfcbot added proposed-final-comment-period Proposed to merge/close by relevant subteam, see T-<team> label. Will enter FCP once signed off. disposition-merge This issue / PR is in PFCP or FCP with a disposition to merge it. labels Nov 26, 2024
@tczajka

tczajka commented Nov 26, 2024

👍 for computing one in terms of the other to guarantee by construction that they always have the expected relationship.

This doesn't actually work (or at least, I don't see why it would work, haven't looked for an explicit counterexample).

The guarantee currently in the docs, as I interpret it, is this:

Define round(x) = the closest representable number to x.

For given real numbers a, b define real numbers q, r such that a = b * q + r, q is an integer, 0 <= r < |b|. That's the "infinite precision" part. Then:

rem_euclid(a, b) = round(r)
div_euclid(a, b) = round(q)

Currently, rem_euclid satisfies this. Proof: the first step in rem_euclid is %, which is always exact (no rounding at all). The second step is a single addition, which rounds once. Hence there is at most one rounding, at the end of the computation, as required.

div_euclid does not currently satisfy it.

The proposed alternative implementation in terms of rem_euclid also does not satisfy it. I think? Or does it? It seems unlikely it would satisfy it, given that it involves three operations, each of which rounds (rem_euclid, subtraction, division). It is possible that I'm wrong and that it might actually work, with the different errors somehow canceling out, but if so, that would be very subtle and interesting and would require a careful analysis. To me it seems more likely that it doesn't work.
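
For reference, a free-function paraphrase of (what I believe is) the current rem_euclid in core, which makes the "at most one rounding" structure visible:

fn rem_euclid(x: f64, rhs: f64) -> f64 {
    // `%` on floats is fmod, which is exact; the only rounding happens in the
    // single addition on the negative branch.
    let r = x % rhs;
    if r < 0.0 { r + rhs.abs() } else { r }
}

fn main() {
    assert_eq!(rem_euclid(11.0, 1.1), 11f64.rem_euclid(1.1));
    assert_eq!(rem_euclid(-11.0, 1.1), (-11f64).rem_euclid(1.1));
}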

@cuviper
Member

cuviper commented Nov 26, 2024

Incurring several roundings in div_euclid would still be better than the current discrepancy.

@tczajka

tczajka commented Nov 26, 2024

Incurring several roundings in div_euclid would still be better than the current discrepancy.

I agree that it's an improvement. It could be replaced again later with a fully correct algorithm if this one still has bad cases (I give it maybe a 25% chance that it might be always correct).

@cuviper
Member

cuviper commented Nov 26, 2024

pub fn div_euclid(self, rhs: f64) -> f64 {
    (self - self.rem_euclid(rhs)) / rhs
}

(might need to round as well in case that is slightly off in either direction)

I played a bit with quickcheck and found cases that do need rounding to get an integer result.
(these are in f32)

[src/main.rs:5:5] x = 1160.0
[src/main.rs:5:5] y = 1.4927723
[src/main.rs:6:5] x.div_euclid(y) = 777.0
[src/main.rs:7:5] x.rem_euclid(y) = 0.11589122
[src/main.rs:8:5] (x - x.rem_euclid(y)) / y = 777.00006

This one is right on the edge:

[src/main.rs:5:5] x = 1.3880658e27
[src/main.rs:5:5] y = 2.4548464e20
[src/main.rs:6:5] x.div_euclid(y) = 5654390.0
[src/main.rs:7:5] x.rem_euclid(y) = 2.1400623e20
[src/main.rs:8:5] (x - x.rem_euclid(y)) / y = 5654389.5

And there are probably more errors once you get past the mantissa's integer accuracy. Not necessarily worse than the status quo though, since that's attempting +/-1.0 adjustments that are also bad past the mantissa size.

Also, that subtraction can overflow, like with x = f32::MIN and y = x / 1.5:

[src/main.rs:5:5] x = -3.4028235e38
[src/main.rs:5:5] y = -2.268549e38
[src/main.rs:6:5] x.div_euclid(y) = 2.0
[src/main.rs:7:5] x.rem_euclid(y) = 1.1342745e38
[src/main.rs:8:5] (x - x.rem_euclid(y)) / y = inf

I did not look at any non-normal inputs yet.
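
For reproducibility, a property along these lines (my reconstruction with the quickcheck crate, not necessarily the exact harness used above) will turn up such inputs under cargo test:

use quickcheck::quickcheck; // quickcheck = "1"

quickcheck! {
    // The alternative definition should produce an integer
    // (or a non-finite value, covering the overflow case noted above).
    fn alt_div_euclid_is_integral(x: f32, y: f32) -> bool {
        if !x.is_finite() || !y.is_finite() || y == 0.0 {
            return true; // skip degenerate inputs
        }
        let q = (x - x.rem_euclid(y)) / y;
        !q.is_finite() || q.fract() == 0.0
    }
}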

@cuviper
Member

cuviper commented Nov 26, 2024

For the FCP: I'm ok with changing the implementation, but probably not this exact PR due to licensing.

@rfcbot reviewed
@rfcbot concern We should not copy Python-licensed code

@cuviper
Member

cuviper commented Nov 26, 2024

(Trying again without the edit...)

@rfcbot reviewed
@rfcbot concern We should not copy Python-licensed code

@tczajka

tczajka commented Nov 26, 2024

I think that even after this is merged, there needs to be a separate issue that tracks that there is a discrepancy between what the documentation guarantees and what is implemented -- the documentation specifies perfect rounding in the "Precision" section for div_euclid, unlike some other operations like ln, sin, etc that just say "Unspecified precision". The issue should also track the possible unexpected overflow.

Consider loosening the guarantee in the docs for now. But I think it would be useful to try to eventually get that guarantee for this operation. It is feasible to implement without very complicated algorithms like what you'd need for perfect rounding in ln, sin, etc.

@the8472
Member

the8472 commented Nov 26, 2024

the documentation specifies perfect rounding in the "Precision" section for div_euclid

Is that impossible / hard to achieve? Or does it conflict with the other guarantee?

@tczajka

tczajka commented Nov 26, 2024

Is that impossible / hard to achieve? Or does it conflict with the other guarantee?

No, there is no conflict, and it's possible. It's just that the proposed one-liner above doesn't do it because of a sequence of approximations.

If nobody else solves it, I'm planning to give it a try later on.

@theemathas
Contributor

What should the result of div_euclid be if the real-number-result is an integer which cannot be represented exactly with an f32?

@BartMassey
Contributor

What should the result of div_euclid be if the real-number-result is an integer which cannot be represented exactly with an f32?

Great question.

Give me a couple of days to talk to my floating-point-expert friends and try to figure out what might be best. I think there's no big panic on this (?), so let's try to get it as right as we can on the first try rather than just merging a potential improvement.

@tczajka

tczajka commented Nov 27, 2024

What should the result of div_euclid be if the real-number-result is an integer which cannot be represented exactly with an f32?

The integer should get rounded, the same way a u64 gets rounded when converted to f32. Specifically: the closest representable number should be selected, and if there is a tie then the one with even mantissa should be selected.
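
Rust's integer-to-float casts already behave that way (round to nearest, ties to even), so that part comes for free. A small check:

fn main() {
    // 33_554_434 lies exactly halfway between the two nearest f32 values,
    // 33_554_432 and 33_554_436; the tie goes to the even mantissa.
    let x: u64 = (1 << 25) + 2;
    assert_eq!(x as f32, 33_554_432.0f32);
}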

@BartMassey
Contributor

Just a note that I've been working on this for the last few days, but American Thanksgiving and a couple of other things have gotten in the way, and the project has turned into more of a thing than I anticipated. Will try to get something written up tomorrow.

@BartMassey
Contributor

Sigh. I'm working on a writeup of my findings so far, but today just uncovered a bunch more rabbit holes. Very brief summary, probably full of errors:

  • Neither div_euclid nor rem_euclid is likely currently correct, due to double rounding and mis-rounding. In the running example with 11.0 and 1.1 it's div_euclid that gets the rounding wrong, but rem_euclid can too (as documented in the manual).
  • If we want div_euclid to be perfect, the most straightforward way is to do integer computations and then convert back to floats appropriately: probably manually, since LLVM, and thus Rust, doesn't support IEEE rounding modes, which are needed for this.
  • LLVM's fmod implementation already does integer math, so we shouldn't lose performance for rem_euclid; we could just hack up that or one of several other such implementations for the Euclidean case. We almost certainly would lose performance for div_euclid, since we would be substituting a bunch of integer math for a fast(ish) FPU instruction.
  • Another way to get a correct div_euclid is to just do the floating division with appropriate rounding. This is fairly straightforward if Rust / LLVM will allow controlling the rounding mode; I haven't figured out how to get it right without that, because with round-to-nearest, just using (n/d).floor() vs (n/d).ceil() double-rounds and is insufficient: it may be too late by the time the outer operation gets to it.
  • At first glance it looks to me like the Python fdivmod implementation is not actually Euclidean. Certainly the negative remainder given a negative numerator and divisor is a problem, but there's also a round-to-nearest step in there.

More in a while. I wouldn't recommend doing anything until we have an implementation that likely does the right thing. It's sat this long; it can sit a while longer.
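
On the point about (n/d).floor(): the running example already shows why flooring after a round-to-nearest division comes too late (using the f32 values worked out earlier in the thread):

fn main() {
    // 11.0f32 / 1.1f32 is exactly 9.99999978... before rounding, but
    // round-to-nearest already bumps it up to 10.0, so the floor() that
    // follows can no longer recover the true floor of the quotient, 9.
    let q = (11.0f32 / 1.1f32).floor();
    assert_eq!(q, 10.0f32); // not the 9.0 we want
}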

@traviscross
Contributor

traviscross commented Dec 2, 2024

This quandary goes even deeper. I found a 2001 paper on this subject by Daan Leijen (of Koka), "Division and Modulus for Computer Scientists". In it, he gives a proof of correctness for the algorithm for div_euclid and rem_euclid ("Algorithm E").

On that basis, I implemented the algorithm in Rust here:

But then, much to my surprise, given that the proof looks reasonable to me, and given that my original encoding of it in Rust seemed correct and well founded, it didn't actually solve the problem.

Why?

In my initial encoding of this algorithm in Rust, I relied on our documentation for impl Rem for f64 which says clearly that:

The remainder has the same sign as the dividend and is computed as: x - (x / y).trunc() * y.

And yet, this is trivially false:

fn main() {
    let (x, y) = (11f64, 1.1f64);
    assert_eq!(x - (x / y).trunc() * y, x % y);
    //~^ PANIC
    // assertion `left == right` failed
    // left: 0.0
    // right: 1.0999999999999992
}

@quaternic

In my encoding of this algorithm in Rust, I relied on our documentation for impl Rem for f64 which says clearly that:

The remainder has the same sign as the dividend and is computed as: x - (x / y).trunc() * y.

Yes, that's misleading and leaves out the crucial qualification of

... as if computed without intermediate rounding.

Compare to how the equivalent fmod is documented: https://en.cppreference.com/w/cpp/numeric/math/fmod

The floating-point remainder of the division operation x / y calculated by this function is exactly the value x - iquot * y, where iquot is x / y with its fractional part truncated.

And later, in the Notes section

The expression x - std::trunc(x / y) * y may not equal std::fmod(x, y), when the rounding of x / y to initialize the argument of std::trunc loses too much precision (example: x = 30.508474576271183309, y = 6.1016949152542370172).

@quaternic

quaternic commented Dec 2, 2024

@BartMassey

Neither div_euclid nor rem_euclid is likely currently correct, due to double rounding and mis-rounding. In the running example with 11.0 and 1.1 it's div_euclid that gets the rounding wrong, but rem_euclid can too (as documented in the manual).

I believe the current rem_euclid to be correct, actually. If the intended semantic follows the usual rule of "exact result rounded to the return type", then the edge case where (-tiny).rem_euclid(large) == large is just a consequence of necessarily having to round large - tiny. (That's the only case where the exact remainder isn't necessarily representable.)
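
That edge case is easy to exhibit (a sketch with arbitrarily chosen magnitudes):

fn main() {
    // The exact remainder is large - tiny, which is not representable and
    // rounds back up to `large` itself, so the result equals the divisor.
    let tiny = f64::MIN_POSITIVE;
    let large = 1e300f64;
    assert_eq!((-tiny).rem_euclid(large), large);
}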

If we want div_euclid to be perfect, the most straightforward way is to do integer computations and then convert back to floats appropriately: probably manually, since LLVM and thus Rust doesn't support IEEE rounding modes, which are needed for this.

I agree that this is the best way forward.

LLVM's fmod implementation already does integer math, so we shouldn't lose performance for rem_euclid; we could just hack up that or one of several other such implementations for the Euclidean case. We almost certainly would lose performance for div_euclid, since we would be substituting a fast(ish) FPU instruction for a bunch of integer math.

It might not be too bad for div_euclid; the current implementation unconditionally computes self % rhs, which calls fmod, although LLVM can reuse a single fmod call for both if div_euclid and rem_euclid are called together.
