
Boundary Condition Patches #819


Merged · 43 commits · May 5, 2025

Conversation

@okBrian (Contributor) commented Apr 14, 2025

Description

Added Boundary Condition Patches (Rectangle, Circle, and Line Segment).

Type of change

  • New feature (non-breaking change which adds functionality)

Scope

  • This PR comprises a set of related changes with a common goal

How Has This Been Tested?

  • Tested locally with custom test cases that exercise the different boundary patches in each direction, using GCC 14.2.1
  • Used the 2D_jet example to test on Phoenix, on Frontier, and locally.

Test Configuration:

  • GCC 14.2.1
  • GT Phoenix
  • Frontier

Checklist

  • I have added comments for the new code
  • I have added Doxygen docstrings to the new code
  • I have made corresponding changes to the documentation (docs/)
  • I have added regression tests to the test suite so that people can verify in the future that the feature is behaving as expected
  • I have added example cases in examples/ that demonstrate my new feature performing as expected.
    They run to completion and demonstrate "interesting physics"
  • I ran ./mfc.sh format before committing my code
  • New and existing tests pass locally with my changes, including with GPU capability enabled (both NVIDIA hardware with NVHPC compilers and AMD hardware with CRAY compilers) and disabled
  • This PR does not introduce any repeated code (it follows the DRY principle)
  • I cannot think of a way to condense this code and reduce any introduced additional line count

If your code changes any code source files (anything in src/simulation)

To make sure the code is performing as expected on GPU devices, I have:

  • Checked that the code compiles using NVHPC compilers
  • Checked that the code compiles using CRAY compilers
  • Ran the code on either V100, A100, or H100 GPUs and ensured the new feature performed as expected (the GPU results match the CPU results)
  • Ran the code on MI200+ GPUs and ensured the new feature performed as expected (the GPU results match the CPU results)
  • Enclosed the new feature in nvtx ranges so that it can be identified in profiles
  • Ran an Nsight Systems profile using ./mfc.sh run XXXX --gpu -t simulation --nsys, and have attached the output file (.nsys-rep) and plain-text results to this PR
    (the nsys report is too large to attach to this PR)
  • Ran an Omniperf profile using ./mfc.sh run XXXX --gpu -t simulation --omniperf, and have attached the output file and plain-text results to this PR
  • Ran my code using various numbers of GPUs (1, 2, and 8, for example) in parallel and made sure that the results scale similarly to runs without the new code/feature

codecov bot commented Apr 14, 2025

Codecov Report

Attention: Patch coverage is 29.71666% with 1017 lines in your changes missing coverage. Please review.

Project coverage is 43.15%. Comparing base (068da2c) to head (9e68edd).
Report is 6 commits behind head on master.

Files with missing lines | Patch % | Lines
src/common/m_boundary_common.fpp 20.65% 673 Missing and 7 partials ⚠️
src/pre_process/m_boundary_conditions.fpp 38.33% 100 Missing and 11 partials ⚠️
src/common/m_mpi_common.fpp 7.29% 89 Missing ⚠️
src/pre_process/m_patches.fpp 58.82% 22 Missing and 6 partials ⚠️
src/simulation/m_boundary_conditions.fpp 59.09% 14 Missing and 4 partials ⚠️
src/post_process/m_data_input.f90 67.56% 0 Missing and 12 partials ⚠️
src/pre_process/m_checker.fpp 33.33% 10 Missing and 2 partials ⚠️
src/pre_process/m_initial_condition.fpp 42.10% 9 Missing and 2 partials ⚠️
src/simulation/m_rhs.fpp 27.27% 0 Missing and 8 partials ⚠️
src/simulation/m_bubbles_EL_kernels.fpp 0.00% 6 Missing and 1 partial ⚠️
... and 12 more
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #819      +/-   ##
==========================================
- Coverage   43.63%   43.15%   -0.48%     
==========================================
  Files          66       68       +2     
  Lines       19835    20262     +427     
  Branches     2433     2427       -6     
==========================================
+ Hits         8655     8745      +90     
- Misses       9688    10054     +366     
+ Partials     1492     1463      -29     


@sbryngelson sbryngelson marked this pull request as ready for review April 16, 2025 14:13
@sbryngelson sbryngelson requested review from a team and sbryngelson as code owners April 16, 2025 14:13
@sbryngelson (Member) commented:
some suggestions:

Reviewing GitHub PR

1  API & data‑layout changes

File-by-file comments:

  • m_boundary_common.fpp — New optional code path for bc_type == -17 (ghost-cell extrapolation vs. Dirichlet) gated by #ifdef MFC_PRE_PROCESS. The behavioural difference between simulation and pre-process is now hidden in nested #ifdef/elseif blocks (≈6 places). Consider collapsing this behind a small pure function, e.g. logical function is_ghost_patch(bc_code, build_phase), to avoid N compile-time branches and keep the call sites clean.
  • m_boundary_common.fpp — Addition of macro templates COLOR_FUNC_* and heavy macro nesting. This gives nice brevity, but the code now relies on four layers of textual substitution (#:def → C preprocessor → OpenACC → Fortran). In practice this is hard to debug with optimisation plus GPU offload enabled. A generic elemental/pure procedure with an assumed-rank c_div argument would be easier to profile and keeps the OpenACC loop separate from the boundary logic.
  • m_derived_types.fpp — Removed vel, alpha_rho, alpha, pres from the patch/plane derived type. Double-check every caller; several post-processing kernels (e.g. statistics routines) still index these components on master. If they are truly redundant, mark the commit message breaking-API and bump the module major version.
  • m_boundary_common.fpp / s_populate_capillary_buffers — The function now receives bc_type and uses helper scalars bcxb, bcxe, bcyb, … A nice step toward removing the long cascade of elseifs. However, the new scalars are implicitly imported globals; make them arguments or members of a boundary-info derived type so that the routine stays pure and testable.
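
The pure-function helper suggested for collapsing the #ifdef branches could be sketched as follows (is_ghost_patch and its arguments are illustrative names, not existing MFC API):

    ! Sketch only: assumes the pre-process/simulation distinction is passed
    ! in as a logical argument rather than resolved by the preprocessor.
    pure logical function is_ghost_patch(bc_code, is_pre_process)
       integer, intent(in) :: bc_code
       logical, intent(in) :: is_pre_process
       is_ghost_patch = (bc_code == -17) .and. is_pre_process
    end function is_ghost_patch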

2  Maintainability & readability

  1. Replace magic numbers with an enumerator module
    -1 ↔ periodic, -2 ↔ slip, -16 ↔ no‑slip, -17 ↔ ghost‑patch is now sprinkled in ~40 places. Add an enum, bind(c) or at least integer, parameter :: BC_PERIODIC = -1, … to a central header. It eliminates comment rot and helps the debugger show symbolic names.
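
    A minimal version of such a central module might look like this (the module and parameter names are illustrative suggestions, not existing MFC identifiers; the codes match the mapping above):

    ! Sketch only: names are illustrative, not existing MFC symbols.
    module m_bc_codes
       implicit none
       integer, parameter :: BC_PERIODIC    = -1
       integer, parameter :: BC_SLIP        = -2
       integer, parameter :: BC_NO_SLIP     = -16
       integer, parameter :: BC_GHOST_PATCH = -17
    end module m_bc_codes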

  2. Reduce pre‑processor surface area
    The project already uses OpenACC, MPI and metaprogramming macros. Each new #ifdef multiplies test cases (SIMULATION × PRE_PROCESS × GPU/CPU × DIM). Where possible:

    select case (phase)
    case (PHASE_PREPROCESS)
       call handle_ghost()
    case default
       call handle_dirichlet()
    end select

    This keeps the logic inside the language and tools like fpm/f18 can still inline and remove dead code.

  3. Vector‑friendly access
    The new macro builds lines like

    c_divs(i)%sf(-j,k,l) = -c_divs(i)%sf(j-1,k,l)

    At run‑time that is fine, but on GPUs the negative index forces an address‑check. A simple local pointer such as

    real(wp), pointer :: left(:,:,:)
    left => c_divs(i)%sf
    left(-j,k,l) = -left(j-1,k,l)

    lets the compiler expose aligned loads/stores.


3  Correctness & safety

  • Intent declarations – bc_type is intent(in) in the new signature but is also dereferenced on the device. If an external routine ever writes to it, race conditions appear because there is no update device/host. Add intent(in) explicitly and compile with -fcheck=all once.
  • Ghost‑patch extrapolation order – The macro PRIM_GHOST_CELL_EXTRAPOLATION_BC copies cell 0 into all j ghost cells; previously you used linear extrapolation from cell 1. Validate high‑order capillary terms—the curvature operator is sensitive to constant padding.
  • Range checks – Several loops now run from -buff_size to m+buff_size on arrays with base‑1 allocations. Enable -fcheck=bounds in CI to avoid silent OOB writes.

4  Suggested follow‑ups

  1. Unit‑test the new patch type: A 1‑D linear advection with bc = -17 should conserve the step function within machine epsilon.
  2. Refactor boundary handling into a generically typed strategy object so the simulation loop calls call patch_handler%apply(q). That will let you add future BCs (sponge, inflow) without re‑touching the hot loops.
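
One possible shape for such a strategy object, sketched with hypothetical names (patch_handler_t, apply_iface — neither exists in MFC today):

    ! Hypothetical sketch of a polymorphic boundary-condition handler;
    ! concrete BCs (sponge, inflow, ghost patch) would extend this type.
    module m_patch_handler
       implicit none
       type, abstract :: patch_handler_t
       contains
          procedure(apply_iface), deferred :: apply
       end type patch_handler_t

       abstract interface
          subroutine apply_iface(self, q)
             import :: patch_handler_t
             class(patch_handler_t), intent(in) :: self
             real, intent(inout) :: q(:, :, :)
          end subroutine apply_iface
       end interface
    end module m_patch_handler

The simulation loop would then dispatch through call patch_handler%apply(q), so adding a new BC never touches the hot loops directly.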

@wilfonba (Contributor) commented May 5, 2025

@sbryngelson I think this is more or less ready to merge. A lot of the suggestions from whatever LLM model you ran this through are only relevant to old commits. The bcx[y,z]b[e] -> derived type suggestion sounds worthwhile, but I think it's better suited for another PR. I've opened issue #828 to track this.

@sbryngelson sbryngelson merged commit 8230af1 into MFlowCode:master May 5, 2025
40 of 45 checks passed
4 participants