
alternative way to adjust area issues in FATES twostream #1310

Merged
merged 7 commits into NGEET:main on Feb 19, 2025

Conversation

rgknox
Contributor

@rgknox rgknox commented Jan 13, 2025

Description:

It is possible for the cohorts to take up more horizontal area than there is ground, simply as a result of numerical imprecision elsewhere in FATES. When this happens, we need to force the total area of the two-stream scattering elements to be exactly 1 rather than slightly more. One option is to simply chop off some area from the largest scattering element. This is simple, but it does not conserve total scattering depth. The other option (which we already allow) is to chop off that area but also increase the LAI+SAI so that the element's scattering depth is preserved. The latter is conservative, but it creates indexing problems when the LAI and SAI are increased enough that the element uses more depth bins than FATES anticipated.
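A minimal sketch of the two strategies, using placeholder names (excess, elem_area, elem_lai, elem_sai, vai_scaler) rather than the actual FATES two-stream data structures:

! Illustrative only: placeholder names, not the committed FATES code.
real(r8) :: excess      ! horizontal area overshoot beyond 1.0
real(r8) :: vai_scaler  ! factor to inflate LAI+SAI when conserving depth

excess = canopy_frac(ican) - 1._r8

if (do_simple_area_correct) then
   ! Simple: trim the largest element's area.
   ! Total scattering depth, area*(LAI+SAI), is NOT conserved.
   elem_area = elem_area - excess
else
   ! Conservative: trim area and inflate LAI+SAI so that area*(LAI+SAI)
   ! is unchanged; this can require more depth bins than FATES allocates.
   ! (Assumes excess < elem_area; see the negative-area discussion below.)
   vai_scaler = elem_area / (elem_area - excess)
   elem_area  = elem_area - excess
   elem_lai   = elem_lai  * vai_scaler
   elem_sai   = elem_sai  * vai_scaler
end if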

Collaborators:

@jennykowalcz

Expectation of Answer Changes:

Good chance there will be very slight differences in two-stream tests. There should be no scientifically meaningful impacts to results.

Checklist

If this is your first time contributing, please read the CONTRIBUTING document.

All checklist items must be checked to enable merging this pull request:

Contributor

  • The in-code documentation has been updated with descriptive comments

Integrator

  • FATES PASS/FAIL regression tests were run
  • Evaluation of test results for answer changes was performed and results provided

Test Results:

TBD

CTSM (or) E3SM (specify which) test hash-tag:

CTSM (or) E3SM (specify which) baseline hash-tag:

FATES baseline hash-tag:

Test Output:

@jennykowalcz jennykowalcz linked an issue Jan 14, 2025 that may be closed by this pull request
Contributor

@jennykowalcz jennykowalcz left a comment


This looks good. I tested this PR with one of the cases that encountered this error. Without this fix, the case fails with

219: forrtl: severe (408): fort: (2): Subscript #3 of the array PARPROF_PFT_DIR_Z has value 12 which is greater than the upper bound of 11

but with this fix the case runs successfully well past this point.

Contributor

@glemieux glemieux left a comment


This looks good, although I have one question about a possibly necessary check against a negative area.

! call endrun(msg=errMsg(sourcefile, __LINE__))
! Test out a simpler way to correct area errors
if(do_simple_area_correct) then
   twostr%scelg(ican,icolmax)%area = twostr%scelg(ican,icolmax)%area - (canopy_frac(ican)-1._r8)
Contributor

The canopy_frac(ican) here is going to be greater than 1 at this point in the code, correct? Is it possible that the value of canopy_frac(ican)-1._r8 could be greater than the element's area? If so, we might end up with a negative final area and should have a limiter against this.

Contributor Author

The area being used here is from the element with the largest area in the group, and the group's areas sum to 1. In a worst-case scenario, say we had 1000 cohorts all sharing one layer; the area of the largest element would then be no less than 0.001, so the error would have to exceed 0.001 to drive it negative. Therefore, I don't think this should be a problem, but it shouldn't hurt to add a check and complain/fail gracefully if it does happen.

Contributor Author

@rgknox rgknox Feb 4, 2025

I added a check and put it behind a debug flag for now. I can run some tests to see whether it triggers, and then set the debug flag back to false so that it doesn't slow down the run. Note that we really should not have layers that are this overfull, because we try to make the cohorts fit correctly during the PPA demotion/promotion scheme. The methods here tighten the area convergence tolerance further, so that it doesn't introduce energy balance errors into the model.
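For reference, a minimal sketch of the kind of debug-gated guard described here; the flag name debug, the log messages, and the comparison are assumptions rather than the committed FATES code:

! Illustrative only: a debug-gated guard against driving the largest
! element's area negative; names and messages are placeholders.
if (debug) then
   if (twostr%scelg(ican,icolmax)%area < (canopy_frac(ican) - 1._r8)) then
      write(fates_log(),*) 'two-stream area correction exceeds the largest element area'
      write(fates_log(),*) ' canopy_frac(ican): ', canopy_frac(ican)
      write(fates_log(),*) ' largest element area: ', twostr%scelg(ican,icolmax)%area
      call endrun(msg=errMsg(sourcefile, __LINE__))
   end if
end if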

Contributor Author

A 30-day f45 test generated no errors on the checks, so I am switching the developer debug flag back to false.

@rgknox
Contributor Author

rgknox commented Feb 19, 2025

Tests pass on derecho:

./cs.status.fails
0219-103038de_gnu: 10 tests
    FAIL ERS_D_Ld20.f45_f45_mg37.I2000Clm50FatesRs.derecho_gnu.clm-FatesColdTwoStreamNoCompFixedBioGeo COMPARE_base_rest (EXPECTED FAILURE)
    FAIL PEM_D_Ld20.5x5_amazon.I2000Clm50FatesRs.derecho_gnu.clm-FatesColdSeedDisp COMPARE_base_modpes (EXPECTED FAILURE)
    FAIL SMS_D_Ld3.f09_g17.I2000Clm60FatesSpCruRsGs.derecho_gnu.clm-FatesColdSatPhen_prescribed RUN time=87 (EXPECTED FAILURE)
    PASS SMS_Ld10_D_Mmpi-serial.CLM_USRDAT.I1PtClm60Fates.derecho_gnu.clm-FatesPRISM--clm-NEON-FATES-YELL SHAREDLIB_BUILD time=123 (UNEXPECTED: expected FAIL)
    PASS SMS_Ld10_D_Mmpi-serial.CLM_USRDAT.I1PtClm60Fates.derecho_gnu.clm-FatesPRISM--clm-NEON-FATES-YELL RUN time=101 (UNEXPECTED: expected FAIL)

 
0219-103038de_int: 38 tests
    FAIL ERP_P256x2_Ld30.f45_f45_mg37.I2000Clm60FatesRs.derecho_intel.clm-mimicsFatesCold RUN time=44 (EXPECTED FAILURE)
    PEND ERP_P256x2_Ld30.f45_f45_mg37.I2000Clm60FatesRs.derecho_intel.clm-mimicsFatesCold COMPARE_base_rest
    FAIL ERS_D_Ld20.f45_f45_mg37.I2000Clm50FatesRs.derecho_intel.clm-FatesColdTwoStream COMPARE_base_rest (EXPECTED FAILURE)
    FAIL ERS_Ld60.f45_f45_mg37.I2000Clm50FatesCruRsGs.derecho_intel.clm-FatesColdST3 RUN time=26 (EXPECTED FAILURE)
    FAIL PVT_Lm3.f45_f45_mg37.I2000Clm50FatesCruRsGs.derecho_intel.clm-FatesLUPFT RUN time=1233 (EXPECTED FAILURE)
    PASS SMS_Ld10_D_Mmpi-serial.CLM_USRDAT.I1PtClm60Fates.derecho_intel.clm-FatesFireLightningPopDens--clm-NEON-FATES-NIWO SHAREDLIB_BUILD time=97 (UNEXPECTED: expected FAIL)

 
0219-103038de_nvh: 1 test

 
========================================================================
Non-PASS results for select phases:
TPUTCOMP non-passes: 0
MEMCOMP non-passes: 0

@rgknox rgknox merged commit 655d784 into NGEET:main Feb 19, 2025
1 check was pending
Successfully merging this pull request may close these issues.

Error message about PARPROF_PFT_DIR_Z