
Elm-pb example with relaxing phi #3081

Open · wants to merge 3 commits into next
Conversation

bendudson (Contributor) commented Mar 5, 2025

Adds an option phi_boundary_relax. If set to true, the radial boundary conditions on the potential phi are relaxed towards zero gradient over a given timescale.

Adapted from Hermes-3
(https://github.com/boutproject/hermes-3/blob/master/src/vorticity.cxx#L261).
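
For context, the relaxation can be pictured as a first-order decay of the boundary value towards the zero-gradient target. The sketch below is only illustrative: the function, variable and option names (including the assumed timescale option, analogous to Hermes-3's phi_boundary_timescale) are not taken from this PR.

// Minimal sketch of the relaxation idea; not the PR's exact code.
// "current" is the boundary value used on the previous step, "target"
// the value just inside the domain (what a zero-gradient boundary
// would give), "dt" the time since the last update and "tau" the
// user-set relaxation timescale. BoutReal is aliased to double so the
// snippet stands alone.
using BoutReal = double;

BoutReal relax_boundary(BoutReal current, BoutReal target, BoutReal dt,
                        BoutReal tau) {
  BoutReal weight = dt / tau;  // fraction of the gap closed this step
  if (weight > 1.0) {
    weight = 1.0;              // do not overshoot the target
  }
  return current + weight * (target - current);
}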

bendudson marked this pull request as draft March 5, 2025 01:07
github-actions bot left a comment

clang-tidy made some suggestions

philocal += phi(mesh->xstart, j, k);
}
}
MPI_Comm comm_inner = mesh->getYcomm(0);
warning: no header providing "MPI_Comm" is directly included [misc-include-cleaner]

examples/elm-pb/elm_pb.cxx:23:

+ #include <mpi.h>

}
MPI_Comm comm_inner = mesh->getYcomm(0);
int np;
MPI_Comm_size(comm_inner, &np);
warning: no header providing "MPI_Comm_size" is directly included [misc-include-cleaner]

              MPI_Comm_size(comm_inner, &np);
              ^

MPI_Comm comm_inner = mesh->getYcomm(0);
int np;
MPI_Comm_size(comm_inner, &np);
MPI_Allreduce(&philocal, &phivalue, 1, MPI_DOUBLE, MPI_SUM, comm_inner);
warning: no header providing "MPI_Allreduce" is directly included [misc-include-cleaner]

              MPI_Allreduce(&philocal, &phivalue, 1, MPI_DOUBLE, MPI_SUM, comm_inner);
              ^

MPI_Comm comm_inner = mesh->getYcomm(0);
int np;
MPI_Comm_size(comm_inner, &np);
MPI_Allreduce(&philocal, &phivalue, 1, MPI_DOUBLE, MPI_SUM, comm_inner);
warning: no header providing "MPI_SUM" is directly included [misc-include-cleaner]

              MPI_Allreduce(&philocal, &phivalue, 1, MPI_DOUBLE, MPI_SUM, comm_inner);
                                                                 ^
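
All four warnings point at the same fix: the new boundary-averaging code calls MPI directly, so elm_pb.cxx should include <mpi.h> itself rather than rely on a transitive include. For reference, here is a hedged, self-contained sketch of the pattern the fragments above implement: sum phi over the inner radial boundary and reduce across the Y communicator. The function name, the firstX() guard and the normalisation are illustrative assumptions, and the header paths assume the bout/ include layout.

#include <mpi.h>  // MPI_Comm, MPI_Comm_size, MPI_Allreduce, MPI_SUM

#include <bout/field3d.hxx>
#include <bout/mesh.hxx>

// Average phi over the inner radial boundary (sketch only).
BoutReal innerBoundaryAverage(Mesh* mesh, const Field3D& phi) {
  BoutReal phivalue = 0.0;
  if (mesh->firstX()) {  // only processors touching the inner boundary take part
    BoutReal philocal = 0.0;
    for (int j = mesh->ystart; j <= mesh->yend; j++) {
      for (int k = 0; k < mesh->LocalNz; k++) {
        philocal += phi(mesh->xstart, j, k);  // sum local boundary values
      }
    }
    MPI_Comm comm_inner = mesh->getYcomm(0);  // Y communicator at the inner boundary
    int np;
    MPI_Comm_size(comm_inner, &np);           // processors sharing that boundary
    MPI_Allreduce(&philocal, &phivalue, 1, MPI_DOUBLE, MPI_SUM, comm_inner);
    // Turn the global sum into an average over every point that contributed,
    // assuming each processor summed the same number of points.
    phivalue /= np * (mesh->yend - mesh->ystart + 1) * mesh->LocalNz;
  }
  return phivalue;
}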

@@ -1494,7 +1681,11 @@
// Jpar
Field3D B0U = B0 * U;
mesh->communicate(B0U);
ddt(Jpar) = -Grad_parP(B0U, loc) / B0 + eta * Delp2(Jpar);
if (laplace_perp) {
ddt(Jpar) = -Grad_parP(B0U, loc) / B0 + eta * Laplace_perp(Jpar);
warning: no header providing "ddt" is directly included [misc-include-cleaner]

        ddt(Jpar) = -Grad_parP(B0U, loc) / B0 + eta * Laplace_perp(Jpar);
        ^
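
Aside from the include note, this hunk adds a laplace_perp switch for the resistive term in the Jpar equation. A hedged reconstruction of how the branch likely reads after the change is below; the else clause is inferred from the line the hunk removes, not copied from the PR.

// Jpar (sketch of the branch implied by the diff above)
Field3D B0U = B0 * U;
mesh->communicate(B0U);
if (laplace_perp) {
  // Laplace_perp = Laplace - Laplace_par: keeps geometric y-derivative
  // contributions that Delp2 drops
  ddt(Jpar) = -Grad_parP(B0U, loc) / B0 + eta * Laplace_perp(Jpar);
} else {
  // Original form: Delp2 is the X-Z approximation to the perpendicular Laplacian
  ddt(Jpar) = -Grad_parP(B0U, loc) / B0 + eta * Delp2(Jpar);
}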

bendudson marked this pull request as ready for review March 5, 2025 20:50
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>