
Commit 764b624

Authored by co63oc
fix some typos (#12265)
Signed-off-by: co63oc <[email protected]>
1 parent 6682956 commit 764b624

File tree

11 files changed: +21, -21 lines


examples/research_projects/geodiff/geodiff_molecule_conformation.ipynb

Lines changed: 1 addition & 1 deletion
```diff
@@ -1760,7 +1760,7 @@
 "clip_local = None\n",
 "clip_pos = None\n",
 "\n",
-"# constands for data handling\n",
+"# constants for data handling\n",
 "save_traj = False\n",
 "save_data = False\n",
 "output_dir = \"/content/\""
```

examples/research_projects/multi_subject_dreambooth_inpainting/README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -2,7 +2,7 @@
 
 Please note that this project is not actively maintained. However, you can open an issue and tag @gzguevara.
 
-[DreamBooth](https://huggingface.co/papers/2208.12242) is a method to personalize text2image models like stable diffusion given just a few(3~5) images of a subject. This project consists of **two parts**. Training Stable Diffusion for inpainting requieres prompt-image-mask pairs. The Unet of inpainiting models have 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself).
+[DreamBooth](https://huggingface.co/papers/2208.12242) is a method to personalize text2image models like stable diffusion given just a few(3~5) images of a subject. This project consists of **two parts**. Training Stable Diffusion for inpainting requires prompt-image-mask pairs. The Unet of inpainiting models have 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself).
 
 **The first part**, the `multi_inpaint_dataset.ipynb` notebook, demonstrates how make a 🤗 dataset of prompt-image-mask pairs. You can, however, skip the first part and move straight to the second part with the example datasets in this project. ([cat toy dataset masked](https://huggingface.co/datasets/gzguevara/cat_toy_masked), [mr. potato head dataset masked](https://huggingface.co/datasets/gzguevara/mr_potato_head_masked))
 
```
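For reference, the "5 additional input channels" in the changed line are on top of the 4 standard latent channels: an inpainting UNet concatenates the noisy latents (4), the VAE-encoded masked image (4), and the downsampled mask (1), for 9 input channels in total. A minimal sketch of that check; the checkpoint id is illustrative, and any Stable Diffusion inpainting checkpoint behaves the same way:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline

# Checkpoint id is illustrative; any SD inpainting checkpoint works.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-inpainting", torch_dtype=torch.float16
)

# 4 (noisy latents) + 4 (encoded masked image) + 1 (mask) = 9 input channels.
print(pipe.unet.config.in_channels)  # 9
```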

src/diffusers/guiders/frequency_decoupled_guidance.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -61,7 +61,7 @@ def project(v0: torch.Tensor, v1: torch.Tensor, upcast_to_double: bool = True) -
 def build_image_from_pyramid(pyramid: List[torch.Tensor]) -> torch.Tensor:
     """
     Recovers the data space latents from the Laplacian pyramid frequency space. Implementation from the paper
-    (Algorihtm 2).
+    (Algorithm 2).
     """
     # pyramid shapes: [[B, C, H, W], [B, C, H/2, W/2], ...]
     img = pyramid[-1]
```
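For context on the corrected docstring: reconstructing from a Laplacian pyramid walks from the coarsest level to the finest, upsampling the running image and adding each frequency band back in. A hedged sketch of that standard reconstruction (not this file's exact implementation, which may use a different upsampling filter):

```python
from typing import List

import torch
import torch.nn.functional as F


def reconstruct_from_laplacian_pyramid(pyramid: List[torch.Tensor]) -> torch.Tensor:
    # pyramid[0] is the finest band [B, C, H, W]; pyramid[-1] is the coarsest residual.
    img = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        # Upsample the running reconstruction to this band's resolution, then add the band.
        img = F.interpolate(img, size=band.shape[-2:], mode="bilinear", align_corners=False)
        img = img + band
    return img
```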

src/diffusers/hooks/faster_cache.py

Lines changed: 3 additions & 3 deletions
```diff
@@ -54,11 +54,11 @@ class FasterCacheConfig:
     Attributes:
         spatial_attention_block_skip_range (`int`, defaults to `2`):
             Calculate the attention states every `N` iterations. If this is set to `N`, the attention computation will
-            be skipped `N - 1` times (i.e., cached attention states will be re-used) before computing the new attention
+            be skipped `N - 1` times (i.e., cached attention states will be reused) before computing the new attention
             states again.
         temporal_attention_block_skip_range (`int`, *optional*, defaults to `None`):
             Calculate the attention states every `N` iterations. If this is set to `N`, the attention computation will
-            be skipped `N - 1` times (i.e., cached attention states will be re-used) before computing the new attention
+            be skipped `N - 1` times (i.e., cached attention states will be reused) before computing the new attention
             states again.
         spatial_attention_timestep_skip_range (`Tuple[float, float]`, defaults to `(-1, 681)`):
             The timestep range within which the spatial attention computation can be skipped without a significant loss
@@ -90,7 +90,7 @@ class FasterCacheConfig:
             from the conditional branch outputs.
         unconditional_batch_skip_range (`int`, defaults to `5`):
             Process the unconditional branch every `N` iterations. If this is set to `N`, the unconditional branch
-            computation will be skipped `N - 1` times (i.e., cached unconditional branch states will be re-used) before
+            computation will be skipped `N - 1` times (i.e., cached unconditional branch states will be reused) before
             computing the new unconditional branch states again.
         unconditional_batch_timestep_skip_range (`Tuple[float, float]`, defaults to `(-1, 641)`):
             The timestep range within which the unconditional branch computation can be skipped without a significant
```
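The skip-range semantics in these docstrings are easiest to see in use: with `spatial_attention_block_skip_range=2`, cached attention states are reused once between recomputations, and with `unconditional_batch_skip_range=5` the unconditional branch is recomputed every fifth step inside its timestep window. A sketch of enabling FasterCache based on the documented config fields; treat the checkpoint and the exact entry-point names as assumptions:

```python
import torch
from diffusers import CogVideoXPipeline, FasterCacheConfig, apply_faster_cache

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.to("cuda")

config = FasterCacheConfig(
    spatial_attention_block_skip_range=2,            # reuse cached states N - 1 = 1 time
    spatial_attention_timestep_skip_range=(-1, 681),
    unconditional_batch_skip_range=5,                # recompute the uncond branch every 5th step
    unconditional_batch_timestep_skip_range=(-1, 641),
    current_timestep_callback=lambda: pipe.current_timestep,
)
apply_faster_cache(pipe.transformer, config)
```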

src/diffusers/hooks/pyramid_attention_broadcast.py

Lines changed: 4 additions & 4 deletions
```diff
@@ -45,15 +45,15 @@ class PyramidAttentionBroadcastConfig:
         spatial_attention_block_skip_range (`int`, *optional*, defaults to `None`):
             The number of times a specific spatial attention broadcast is skipped before computing the attention states
             to re-use. If this is set to the value `N`, the attention computation will be skipped `N - 1` times (i.e.,
-            old attention states will be re-used) before computing the new attention states again.
+            old attention states will be reused) before computing the new attention states again.
         temporal_attention_block_skip_range (`int`, *optional*, defaults to `None`):
             The number of times a specific temporal attention broadcast is skipped before computing the attention
             states to re-use. If this is set to the value `N`, the attention computation will be skipped `N - 1` times
-            (i.e., old attention states will be re-used) before computing the new attention states again.
+            (i.e., old attention states will be reused) before computing the new attention states again.
         cross_attention_block_skip_range (`int`, *optional*, defaults to `None`):
             The number of times a specific cross-attention broadcast is skipped before computing the attention states
             to re-use. If this is set to the value `N`, the attention computation will be skipped `N - 1` times (i.e.,
-            old attention states will be re-used) before computing the new attention states again.
+            old attention states will be reused) before computing the new attention states again.
         spatial_attention_timestep_skip_range (`Tuple[int, int]`, defaults to `(100, 800)`):
             The range of timesteps to skip in the spatial attention layer. The attention computations will be
             conditionally skipped if the current timestep is within the specified range.
@@ -305,7 +305,7 @@ def _apply_pyramid_attention_broadcast_hook(
         block_skip_range (`int`):
             The number of times a specific attention broadcast is skipped before computing the attention states to
             re-use. If this is set to the value `N`, the attention computation will be skipped `N - 1` times (i.e., old
-            attention states will be re-used) before computing the new attention states again.
+            attention states will be reused) before computing the new attention states again.
         current_timestep_callback (`Callable[[], int]`):
             A callback function that returns the current inference timestep.
     """
```

src/diffusers/modular_pipelines/flux/denoise.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -220,7 +220,7 @@ def description(self) -> str:
         return (
             "Denoise step that iteratively denoise the latents. \n"
             "Its loop logic is defined in `FluxDenoiseLoopWrapper.__call__` method \n"
-            "At each iteration, it runs blocks defined in `sub_blocks` sequencially:\n"
+            "At each iteration, it runs blocks defined in `sub_blocks` sequentially:\n"
             " - `FluxLoopDenoiser`\n"
             " - `FluxLoopAfterDenoiser`\n"
             "This block supports both text2image and img2img tasks."
```

src/diffusers/modular_pipelines/modular_pipeline.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -229,7 +229,7 @@ class ModularPipelineBlocks(ConfigMixin, PushToHubMixin):
     Base class for all Pipeline Blocks: PipelineBlock, AutoPipelineBlocks, SequentialPipelineBlocks,
     LoopSequentialPipelineBlocks
 
-    [`ModularPipelineBlocks`] provides method to load and save the defination of pipeline blocks.
+    [`ModularPipelineBlocks`] provides method to load and save the definition of pipeline blocks.
 
     <Tip warning={true}>
 
@@ -1418,7 +1418,7 @@ def set_progress_bar_config(self, **kwargs):
 # YiYi TODO:
 # 1. look into the serialization of modular_model_index.json, make sure the items are properly ordered like model_index.json (currently a mess)
 # 2. do we need ConfigSpec? the are basically just key/val kwargs
-# 3. imnprove docstring and potentially add validator for methods where we accpet kwargs to be passed to from_pretrained/save_pretrained/load_components()
+# 3. imnprove docstring and potentially add validator for methods where we accept kwargs to be passed to from_pretrained/save_pretrained/load_components()
 class ModularPipeline(ConfigMixin, PushToHubMixin):
     """
     Base class for all Modular pipelines.
```
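Since `ModularPipelineBlocks` inherits from `ConfigMixin` and `PushToHubMixin`, the load/save methods the corrected docstring mentions should follow the usual diffusers conventions. A sketch under that assumption; the repo id is a placeholder and the `trust_remote_code` flag is an assumption:

```python
from diffusers.modular_pipelines import ModularPipelineBlocks

# Load a saved block definition from the Hub (repo id is a placeholder).
blocks = ModularPipelineBlocks.from_pretrained(
    "<user>/<modular-blocks-repo>", trust_remote_code=True
)

# Save the definition locally so it can be reloaded or pushed to the Hub.
blocks.save_pretrained("./my_blocks")
```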

src/diffusers/modular_pipelines/node_utils.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -384,14 +384,14 @@ def __init__(self, blocks, category=DEFAULT_CATEGORY, label=None, **kwargs):
         # pass or create a default param dict for each input
         # e.g. for prompt,
         # prompt = {
-        #     "name": "text_input", # the name of the input in node defination, could be different from the input name in diffusers
+        #     "name": "text_input", # the name of the input in node definition, could be different from the input name in diffusers
         #     "label": "Prompt",
         #     "type": "string",
         #     "default": "a bear sitting in a chair drinking a milkshake",
         #     "display": "textarea"}
         # if type is not specified, it'll be a "custom" param of its own type
         # e.g. you can pass ModularNode(scheduler = {name :"scheduler"})
-        # it will get this spec in node defination {"scheduler": {"label": "Scheduler", "type": "scheduler", "display": "input"}}
+        # it will get this spec in node definition {"scheduler": {"label": "Scheduler", "type": "scheduler", "display": "input"}}
         # name can be a dict, in that case, it is part of a "dict" input in mellon nodes, e.g. text_encoder= {name: {"text_encoders": "text_encoder"}}
         inputs = self.blocks.inputs + self.blocks.intermediate_inputs
         for inp in inputs:
```
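The comment block fixed above describes the per-input param dict that `ModularNode` uses to build its node definition. Consolidating the example from those comments (the constructor usage is shown schematically; only the dict shape comes from the source):

```python
# Param dict for the "prompt" input, mirroring the comment above: the node
# definition exposes it as "text_input", which differs from the diffusers name.
prompt_param = {
    "name": "text_input",
    "label": "Prompt",
    "type": "string",
    "default": "a bear sitting in a chair drinking a milkshake",
    "display": "textarea",
}

# Schematic: `blocks` would be a ModularPipelineBlocks instance.
# node = ModularNode(blocks, prompt=prompt_param, scheduler={"name": "scheduler"})
```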

src/diffusers/modular_pipelines/stable_diffusion_xl/denoise.py

Lines changed: 4 additions & 4 deletions
```diff
@@ -695,7 +695,7 @@ def description(self) -> str:
         return (
             "Denoise step that iteratively denoise the latents. \n"
             "Its loop logic is defined in `StableDiffusionXLDenoiseLoopWrapper.__call__` method \n"
-            "At each iteration, it runs blocks defined in `sub_blocks` sequencially:\n"
+            "At each iteration, it runs blocks defined in `sub_blocks` sequentially:\n"
             " - `StableDiffusionXLLoopBeforeDenoiser`\n"
             " - `StableDiffusionXLLoopDenoiser`\n"
             " - `StableDiffusionXLLoopAfterDenoiser`\n"
@@ -717,7 +717,7 @@ def description(self) -> str:
         return (
             "Denoise step that iteratively denoise the latents with controlnet. \n"
             "Its loop logic is defined in `StableDiffusionXLDenoiseLoopWrapper.__call__` method \n"
-            "At each iteration, it runs blocks defined in `sub_blocks` sequencially:\n"
+            "At each iteration, it runs blocks defined in `sub_blocks` sequentially:\n"
             " - `StableDiffusionXLLoopBeforeDenoiser`\n"
             " - `StableDiffusionXLControlNetLoopDenoiser`\n"
             " - `StableDiffusionXLLoopAfterDenoiser`\n"
@@ -739,7 +739,7 @@ def description(self) -> str:
         return (
             "Denoise step that iteratively denoise the latents(for inpainting task only). \n"
             "Its loop logic is defined in `StableDiffusionXLDenoiseLoopWrapper.__call__` method \n"
-            "At each iteration, it runs blocks defined in `sub_blocks` sequencially:\n"
+            "At each iteration, it runs blocks defined in `sub_blocks` sequentially:\n"
             " - `StableDiffusionXLInpaintLoopBeforeDenoiser`\n"
             " - `StableDiffusionXLLoopDenoiser`\n"
             " - `StableDiffusionXLInpaintLoopAfterDenoiser`\n"
@@ -761,7 +761,7 @@ def description(self) -> str:
         return (
             "Denoise step that iteratively denoise the latents(for inpainting task only) with controlnet. \n"
             "Its loop logic is defined in `StableDiffusionXLDenoiseLoopWrapper.__call__` method \n"
-            "At each iteration, it runs blocks defined in `sub_blocks` sequencially:\n"
+            "At each iteration, it runs blocks defined in `sub_blocks` sequentially:\n"
             " - `StableDiffusionXLInpaintLoopBeforeDenoiser`\n"
             " - `StableDiffusionXLControlNetLoopDenoiser`\n"
             " - `StableDiffusionXLInpaintLoopAfterDenoiser`\n"
```

src/diffusers/modular_pipelines/wan/denoise.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -253,7 +253,7 @@ def description(self) -> str:
         return (
             "Denoise step that iteratively denoise the latents. \n"
             "Its loop logic is defined in `WanDenoiseLoopWrapper.__call__` method \n"
-            "At each iteration, it runs blocks defined in `sub_blocks` sequencially:\n"
+            "At each iteration, it runs blocks defined in `sub_blocks` sequentially:\n"
             " - `WanLoopDenoiser`\n"
             " - `WanLoopAfterDenoiser`\n"
             "This block supports both text2vid tasks."
```
