IA der term #1137
base: master
Conversation
Pull Request Test Coverage Report for Build 12155700400
Warning: This coverage report may be inaccurate. This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.
Details
💛 - Coveralls
@CarterDW it seems like flake8 is failing, and the coverage (i.e. lines of code tested by new unit tests) has fallen a bit. I may be able to look at the actual implementation towards the end of the week.
Attempted reformatting to fit flake8 format
Attempted to better fit flake8 formatting requirements
I'm not sure about the coverage, but I went through and reformatted the code, so hopefully it should pass flake8 now.
Changed the statements on lines 192-203 so that they fit flake8
Thanks a lot @CarterDW and @jablazek (and apologies for the delay!). Just a couple of comments.
Also, we probably want to add a benchmark test, the same way we did for the previous FastPT terms.
BTW, I remember before FastPT couldn't compute all the galaxy x IA perturbative terms. Is that still the case? I remember we have a TODO for this :-)
pyccl/nl_pt/ept.py
Outdated
# ak power spectrum
pksa = {}
if 'nonlinear' in [self.ak2_pk_kind]:
    pksa['nonlinear'] = np.array([cosmo.nonlin_matter_power(
        self.k_s, a) for a in self.a_s])
if 'linear' in [self.ak2_pk_kind]:
    pksa['linear'] = np.array([cosmo.linear_matter_power(self.k_s, a)
                               for a in self.a_s])
if 'pt' in [self.ak2_pk_kind]:
    if 'linear' in pksa:
        pka = pksa['linear']
    else:
        pka = np.array([cosmo.linear_matter_power(self.k_s, a)
                        for a in self.a_s])
    pka += self._g4T*self.one_loop_dd[0]
    pksa['pt'] = pka
self.pk_ak = pksa[self.ak2_pk_kind]
Maybe I missed something, but wouldn't you achieve the same just by adding self.ak2_pk_kind to the previous if-else statement, and then doing self.pk_ak = pks[self.ak2_pk_kind]?
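The refactor suggested here could look something like the sketch below. This is only a hypothetical reconstruction: the pk_funcs callables stand in for cosmo.linear_matter_power / cosmo.nonlin_matter_power, and none of these names come from the actual PR.

```python
import numpy as np

def build_pk_ak(pk_funcs, k_s, a_s, ak2_pk_kind, g4T=1.0, one_loop=0.0):
    """Build the requested spectra in one dict keyed by kind, then
    select the final one with a single lookup (the reviewer's idea)."""
    pks = {}
    if ak2_pk_kind == 'nonlinear':
        pks['nonlinear'] = np.array([pk_funcs['nonlinear'](k_s, a)
                                     for a in a_s])
    if ak2_pk_kind in ('linear', 'pt'):
        pks['linear'] = np.array([pk_funcs['linear'](k_s, a)
                                  for a in a_s])
    if ak2_pk_kind == 'pt':
        # one-loop correction added on top of the linear spectrum
        pks['pt'] = pks['linear'] + g4T * one_loop
    # single lookup replaces the separate if-chain for the assignment
    return pks[ak2_pk_kind]
```

The point of the design is that only the kinds actually needed are computed, and the final selection collapses to one dict access.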
pyccl/nl_pt/ept.py
Outdated
usefptk (:obj:`bool`): if `True`, will use the FAST-PT IA k2
    term instead of the CCL IA k2 term
Why not require that people have the latest version of FastPT? My first reaction is that this adds unnecessary complexity that will need to be deprecated afterwards anyway (with the headache it entails, since it changes the API).
I think there are two issues. One, for the basic k^2 term, we don't need FAST-PT, and we want the flexibility to choose the P_delta from the options CCL has access to. There may be future versions of this term that need some calculation to be done in FAST-PT.
Two, more generally, do we want to demand the latest (or fairly recent) FAST-PT for all CCL ept users? Probably good in the long term. Right now, FAST-PT is also changing rapidly, and is actually a bit behind in some functionality, so we don't expect all CCL users to actually have access to the bleeding edge.
OK, I see, thanks @jablazek.
Then, how about:
- Remove the usefptk option from this function.
- Allow for ak2_pk_kind to take an additional option (e.g. 'pt_IA_k2'), in which case the calculator uses the fast-pt prediction.

Then, if users select this option, you check if the fast-pt version allows for it and throw an error if it doesn't. My main aim here is not polluting the API with arguments that we may need to deprecate soon-ish, since they become a headache.
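A minimal sketch of the validation being proposed, with all names hypothetical: the extra 'pt_IA_k2' option is only accepted when the installed fastpt module actually exposes the needed term, and we fail loudly otherwise.

```python
def validate_ak2_pk_kind(kind, fastpt_module=None):
    """Accept 'pt_IA_k2' only if fastpt provides the IA derivative term.
    'IA_der' is a placeholder attribute name, not the real FAST-PT API."""
    allowed = ('linear', 'nonlinear', 'pt', 'pt_IA_k2')
    if kind not in allowed:
        raise ValueError(f"ak2_pk_kind must be one of {allowed}, got {kind!r}")
    if kind == 'pt_IA_k2':
        if fastpt_module is None or not hasattr(fastpt_module, 'IA_der'):
            raise NotImplementedError(
                "ak2_pk_kind='pt_IA_k2' requires a FAST-PT version "
                "that provides the IA derivative term")
    return kind
```

This keeps the public signature unchanged: the new behavior lives entirely inside the existing ak2_pk_kind argument, so nothing needs to be deprecated later if the option is dropped.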
As per my message above, I've implemented what I meant in a new commit here. Please take a look and feel free to disagree with me!
Thanks @damonge! (and Happy New Year!) Indeed, the nl bias x IA terms have been a goal for a long time, and we now have them! Thanks to some work from @CarterDW, as well as from the UTD group and a former student working with me, we have a couple of independent FAST-PT implementations of (most of) these terms. There is an active dev branch of FAST-PT where we are doing the cross-checks.

Regarding benchmarks, we can generate the Pks from FAST-PT at a given cosmology and redshift (or two) and then store them as a static dat file for comparison. It's not independent, since they both use FAST-PT, but it does test the plumbing and API.
Great news! Happy to have those terms in CCL when you think they're ready!

Regarding benchmarks: yep, that sounds good. I agree it's not independent, but it's a good test nevertheless, in case we modify anything further down the line (e.g. interpolation/extrapolation schemes or whatever) that changes the accuracy of the calculation. This is what we did for all the other fast-pt terms (in fact, you can probably build on this benchmark test script and on the script that was used to generate the associated data from fastpt).
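The benchmark pattern being discussed can be sketched as follows. This is an illustration only: the two-column file layout (k, then P(k)) and the function name are assumptions, not the actual pt_bm.py code.

```python
import numpy as np

def check_against_benchmark(k, pk, benchmark_file, rtol=1e-5):
    """Compare a freshly computed spectrum against a static .dat file
    generated once from FAST-PT (columns assumed: k, P(k))."""
    data = np.loadtxt(benchmark_file)
    pk_bench = np.interp(k, data[:, 0], data[:, 1])
    np.testing.assert_allclose(pk, pk_bench, rtol=rtol)
```

The static file is committed once and rarely regenerated, so the test catches accidental changes to the calculation (interpolation, extrapolation, plumbing) rather than validating the theory itself.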
Apologies for not seeing this email before. For the moment I think we're shelving this PR, as we work to also add in galaxy bias x IA terms and try to fix some strange results, thanks!
…On Tue, Jun 18, 2024 at 3:49 AM David Alonso wrote:
@CarterDW @jablazek can I check what the status is? Thanks!
OK, thanks!
Hi @damonge. Sorry for the slow updates. A bit more info: we did our code comparison between the two implementations of the NL bias x IA terms. Most of them agreed, with two exceptions. We have been tracking down the differences and should have that resolved soon. However, @CarterDW, is there a reason to not finish the PR for the derivative term and then do a separate one for the NL bias terms and the t_ij terms?
The main concern I would have is the behavior of the derivative term, since that is one of the terms I've been looking at as the cause of some divergences in the tracers when it is set to a non-zero value.

I will try to do some more debugging this week, and if I can sort that out then I agree, doing just the derivative term should provide no issues.
@CarterDW, we sorted out the unexpected high-k behavior, right?
OK, great!
Hello @damonge! I am new to writing unit tests/benchmarks and wanted to ask you a question. I am trying to write tests to help close out this PR associated with the derivative terms @CarterDW is working on adding in. If I wrote something along the lines of this (using the current update, I know we don't want to continue with the usefptk argument, but just hypothetically):

def test_ept_derivative_eq():

Would this be considered a unit test? My idea was just to compare two pks that will both use the Pak2 derivative term in their corrections, comparing the built-in CCL vs the built-in FAST-PT term. Would this be rigorous enough, or would something along the lines of the script you linked above, using data from an edited version of this with the derivative coded in, be more rigorous and necessary?

Hello, happy Thanksgiving! @jablazek, @CarterDW and I met to discuss and further clarify what we are looking to do here. We had some questions we wanted to ask for clarification about the benchmarks/data/codes/pt_bm.py that was linked above:
Just to add to @dderienzo's comment above. I've heard different opinions on whether it is better to use Plin, P1loop, or P_NL (e.g. from HMCode) for the derivative term. We are coding in all three options, but it looks like only one is being tested?
@jablazek @dderienzo @CarterDW Regarding how it'd be best to test this: can I check how fastpt calculates the derivative term? Is it literally just pk_linear * k^2? Or is there more to it? If the former, I would say that a) we don't need a new benchmark for this (we can just have a unit test that checks that indeed ccl does k^2*pk when told to), and b) we should simplify the API to not allow using fastpt to calculate the term itself (just for simplicity, not because the fastpt calculation is wrong). Does this make sense? Sorry if I'm missing something.
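The unit test suggested here could look something like the sketch below. apply_k2_term and test_k2_term are stand-in names for illustration, not the actual pyccl functions; the stand-in spectrum is arbitrary.

```python
import numpy as np

def apply_k2_term(k, pk):
    """Derivative (k^2) term: literally k^2 * P(k)."""
    return k**2 * pk

def test_k2_term():
    # Check that the derivative-term path really multiplies
    # the input spectrum by k^2, with no extra factors.
    k = np.logspace(-3, 1, 64)
    pk = 1.0 / (1.0 + k**2)  # arbitrary smooth stand-in spectrum
    np.testing.assert_allclose(apply_k2_term(k, pk), k**2 * pk)
```

Because the operation is analytic, no external benchmark file is needed: the test just verifies the plumbing does what the documentation says.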
I think the PR is almost ready (pending the unit test above), but you will need to sync it with the latest version of the master branch before we can merge it, due to conflicts (I've checked and these should be easy to resolve, but I can't do it because this lives in a fork).
@damonge: right now, FAST-PT just returns k^2*Plin as the derivative term. This was intended as a placeholder to allow more complex terms (e.g. coming from EFT, etc.) that would be calculated in FAST-PT. Obviously, for just Plin, there is no need to go through FAST-PT, but we thought that we would add this while making the broader changes to allow future functionality. If you prefer to simplify the API for now and add back complexity if/when needed, that would be fine too. Either way, I agree that we don't need to benchmark Plin.
To the other question: I'm still a bit confused as to how "independent" pt_bm.py is intended to be. Is it manually re-run when there are intentional updates (either to CCL or FAST-PT) to create a new baseline, or is it always re-run with the latest version of everything (in which case it would miss, e.g., a bug introduced into FAST-PT)?
@jablazek Sounds good. In that case I would remove the fast-pt option, and we can always add it back in the future if fastpt starts doing anything fancier. I'm happy to do that as an edit to my previous commit if that's easier.

The benchmark scripts are almost never run. These were run to generate the original benchmark files and, unless there are significant changes to the codes they depend on (e.g. fastpt in this case), they are never run again.
I implemented in pyccl/ept.py a new IA k2 derivative term, adding it to the existing auto- and cross-correlations with a ck constant defined in pyccl/tracers.py, based on the pre-existing c1 constant. The user can either choose to use an in-built CCL k2 term, or call FAST-PT and use the newly implemented FAST-PT derivative term. This is set to use CCL's term by default so older versions of FAST-PT can still be used with ept.py.
I also added functionality in pyccl/ept.py to check whether the installed FAST-PT can be imported, and whether it provides the new derivative term, if the user wishes to use the FAST-PT derivative term.