Skip writing data if no output is provided
fsimonis committed Feb 6, 2025
1 parent 815f28d commit ca74675
Showing 3 changed files with 26 additions and 14 deletions.
1 change: 1 addition & 0 deletions changelog-entries/225.md
@@ -0,0 +1 @@
+- Added support for solver `B` to skip writing data when `precice-aste-run` is run without the `--output` flag.
9 changes: 8 additions & 1 deletion docs/README.md
@@ -148,7 +148,14 @@ precice-aste-run -p A --mesh fine_mesh --data "dummyData"
 precice-aste-run -p B --mesh coarse_mesh --data "mappedData" --output mappedMesh
 ```
 
-While the example above executes the mapping in serial, `precice-aste-run` can be executed in parallel (using MPI). However, this requires a partitioned mesh (one per parallel rank). In order to decompose a single mesh appropriately, the tools `precice-aste-partition` and `precice-aste-join` can be used.
+Exporting the output mesh on participant B is optional and can be disabled by omitting the `--output` and `--data` flags from the command line. This is useful for repeatedly running the application to collect multiple runtime measurements, as it speeds up the process and avoids wear on the storage.
+
+```bash
+precice-aste-run -p A --mesh fine_mesh --data "dummyData"
+precice-aste-run -p B --mesh coarse_mesh # no output mesh
+```
+
+While the examples above execute the mapping in serial, `precice-aste-run` can be executed in parallel (using MPI). However, this requires a partitioned mesh (one per parallel rank). In order to decompose a single mesh appropriately, the tools `precice-aste-partition` and `precice-aste-join` can be used.
 
 {% tip %}
 If you want to reproduce a specific setup of your solvers, you can use the [export functionality](https://precice.org/configuration-export.html#enabling-exporters) of preCICE and use the generated meshes directly in `precice-aste-run`. If you run your solver in parallel, preCICE exports the decomposed meshes directly, so that no further partitioning is required.
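Not part of the commit: the parallel mode mentioned in the new README paragraph has no example in this hunk. The sketch below shows what such a run could look like, under the assumptions of two ranks per participant, mesh directories already partitioned (e.g. with `precice-aste-partition`), and each participant normally launched in its own terminal.

```bash
# Sketch under assumptions (not from this commit): meshes already
# decomposed into one partition per rank; participants are normally run
# in separate terminals (backgrounded here for brevity).
mpirun -n 2 precice-aste-run -p A --mesh fine_mesh --data "dummyData" &
mpirun -n 2 precice-aste-run -p B --mesh coarse_mesh --data "mappedData" --output mappedMesh
wait
```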
30 changes: 17 additions & 13 deletions src/modes.cpp
@@ -274,21 +274,25 @@ void aste::runMapperMode(const aste::ExecutionContext &context, const OptionMap
 
   // Write out results in same format as data was read
   if (asteConfiguration.participantName == "B") {
-    auto meshname = asteConfiguration.asteInterfaces.front().meshes.front();
-    auto filename = fs::path(options["output"].as<std::string>());
-    if (context.rank == 0 && fs::exists(filename)) {
-      if (context.isParallel() && !filename.parent_path().empty()) {
-        auto dir = filename.parent_path();
-        fs::remove_all(dir);
-        fs::create_directory(dir);
-      } else if (!context.isParallel()) {
-        fs::remove(filename);
+    if (options["output"].empty()) {
+      ASTE_INFO << "Not writing results as no output was provided";
+    } else {
+      auto meshname = asteConfiguration.asteInterfaces.front().meshes.front();
+      auto filename = fs::path(options["output"].as<std::string>());
+      if (context.rank == 0 && fs::exists(filename)) {
+        if (context.isParallel() && !filename.parent_path().empty()) {
+          auto dir = filename.parent_path();
+          fs::remove_all(dir);
+          fs::create_directory(dir);
+        } else if (!context.isParallel()) {
+          fs::remove(filename);
+        }
       }
+      MPI_Barrier(MPI_COMM_WORLD);
+      //
+      ASTE_INFO << "Writing results to " << options["output"].as<std::string>();
+      meshname.save(asteConfiguration.asteInterfaces.front().mesh, options["output"].as<std::string>());
     }
-    MPI_Barrier(MPI_COMM_WORLD);
-    //
-    ASTE_INFO << "Writing results to " << options["output"].as<std::string>();
-    meshname.save(asteConfiguration.asteInterfaces.front().mesh, options["output"].as<std::string>());
   }
   preciceInterface.finalize();
   return;
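Not part of the commit: the shape of the change above is "guard all output work behind a check for the `--output` option". Below is a minimal standalone C++17 sketch of that pattern, using `std::optional` in place of ASTE's option map; the `Mesh`, `saveMesh`, and `writeResults` names are hypothetical and not ASTE's real API.

```cpp
#include <filesystem>
#include <iostream>
#include <optional>
#include <string>

namespace fs = std::filesystem;

// Hypothetical stand-in for ASTE's mesh type; not the real API.
struct Mesh {};

// Hypothetical writer; the real code calls meshname.save(...).
void saveMesh(const Mesh &mesh, const fs::path &file)
{
  std::cout << "Writing results to " << file << '\n';
}

// Same guard pattern as the commit: skip all filesystem work when the
// user did not request an output file.
void writeResults(const Mesh &mesh, const std::optional<std::string> &output)
{
  if (!output) {
    std::cout << "Not writing results as no output was provided\n";
    return; // nothing to clean up, nothing to write
  }
  const fs::path filename{*output};
  if (fs::exists(filename)) {
    fs::remove(filename); // remove a stale result from a previous run
  }
  saveMesh(mesh, filename);
}

int main()
{
  writeResults(Mesh{}, std::nullopt);           // skips writing
  writeResults(Mesh{}, std::string{"out.vtu"}); // writes out.vtu
}
```

Putting the early `return` (or `else` branch, as in the commit) before any filesystem calls also avoids the pre-change failure mode of touching `options["output"]` when no output was given.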
