Commit
Change defaults for loop experiments back to max_num_embs=float('inf')
jackraymond committed Dec 19, 2024
1 parent 3e435cc commit 771159d
Showing 3 changed files with 13 additions and 16 deletions.
9 changes: 4 additions & 5 deletions tutorial_code/example1_1_fm_loop_balancing.py
@@ -244,7 +244,7 @@ def main(
     coupling: float = -0.2,
     num_iters: int = 100,
     num_iters_unshimmed_flux: int = 10,
-    max_num_emb: int = 1,
+    max_num_emb: float = float('Inf'),
     use_cache: bool = True,
 ) -> None:
     """Main function to run example.
"""Main function to run example.
@@ -264,10 +264,9 @@ def main(
             of flux_biases. Defaults to 100.
         num_iters_unshimmed_J (int): Number of iterations without shimming of
             couplings. Defaults to 200.
-        max_num_emb (int): Maximum number of embeddings to use per programming.
-            Published tutorial data uses several parallel embeddings, but this
-            tutorial uses 1 (max_num_emb=1) by default to bypass the otherwise
-            slow search process.
+        max_num_emb (float): Maximum number of embeddings to use per programming.
+            Published tutorial data uses the maximum number the process can
+            accommodate.
         use_cache (bool): When True embeddings and data are read from
             (and saved to) local directories, repeated executions can reuse
             collected data. When False embeddings and data are recalculated on
11 changes: 5 additions & 6 deletions tutorial_code/example1_2_fm_loop_correlations.py
@@ -243,7 +243,7 @@ def main(
     num_iters: int = 300,
     num_iters_unshimmed_flux: int = 100,
     num_iters_unshimmed_J: int = 200,
-    max_num_emb: int = 1,
+    max_num_emb: float = float('Inf'),
     use_cache: bool = True,
 ) -> None:
     """Main function to run example.
"""Main function to run example.
@@ -259,10 +259,9 @@ def main(
         num_iters (int): Number of sequential programmings.
         num_iters_unshimmed_flux (int): Number of sequential programmings without flux shimming.
         num_iters_unshimmed_J (int): Number of sequential programmings without J shimming.
-        max_num_emb (int): Maximum number of embeddings to use per programming.
-            Published tutorial data uses several parallel embeddings, but this
-            tutorial uses 1 (max_num_emb=1) by default to bypass the otherwise
-            slow search process.
+        max_num_emb (float): Maximum number of embeddings to use per programming.
+            Published tutorial data uses the maximum number the process can
+            accommodate.
         use_cache (bool): When True embeddings and data are read from
             (and saved to) local directories, repeated executions can reuse
             collected data. When False embeddings and data are recalculated on
@@ -309,4 +308,4 @@ def main(


 if __name__ == "__main__":
-    main(use_cache=False)
+    main()
9 changes: 4 additions & 5 deletions tutorial_code/example2_2_frustrated_loop_anneal.py
@@ -275,7 +275,7 @@ def main(
     num_iters: int = 300,
     num_iters_unshimmed_flux: int = 100,
     num_iters_unshimmed_J: int = 200,
-    max_num_emb: int = 1,
+    max_num_emb: float = float('Inf'),
     use_cache: bool = True,
 ) -> None:
     """Main function to run example
"""Main function to run example
@@ -296,10 +296,9 @@ def main(
             iteratrions that doesn't shim flux_biases. Defaults to 100.
         num_iters_unshimmed_J (int): option to specify number of iterations
             that doesn't shim alpha_J. Defaults to 200.
-        max_num_emb (int): Maximum number of embeddings to use per programming.
-            Published tutorial data uses several parallel embeddings, but this
-            tutorial uses 1 (max_num_emb=1) by default to bypass the otherwise
-            slow search process.
+        max_num_emb (float): Maximum number of embeddings to use per programming.
+            Published tutorial data uses the maximum number the process can
+            accommodate.
         use_cache (bool): When True embeddings and data are read from
             (and saved to) local directories, repeated executions can reuse
             collected data. When False embeddings and data are recalculated on
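The annotation change from `int` to `float` exists so the default can be `float('Inf')`, which compares greater than any finite embedding count and therefore acts as "no limit" in a cap. A minimal sketch of that idiom, assuming a hypothetical `cap_embeddings` helper (not from the tutorial code):

```python
def cap_embeddings(found_embeddings, max_num_emb=float('inf')):
    """Return at most max_num_emb of the embeddings found.

    max_num_emb may be a finite int-valued limit or float('inf');
    infinity means keep everything, since it exceeds any finite count.
    """
    embeddings = list(found_embeddings)
    if max_num_emb == float('inf'):
        return embeddings                     # unbounded: use all embeddings
    return embeddings[: int(max_num_emb)]     # bounded: truncate to the cap


embs = [{"q0": 0}, {"q0": 4}, {"q0": 8}]      # three toy embeddings
all_embs = cap_embeddings(embs)               # default: all three kept
one_emb = cap_embeddings(embs, max_num_emb=1) # old default behavior: just one
```

Note that `float('Inf')` and `float('inf')` are the same value; Python parses the string case-insensitively.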
