Spider-Gwen SFW und NSFW #83

37 changes: 9 additions & 28 deletions built-in-nodes/loaders/checkpoint_loader.mdx
---
title: "Load Checkpoint"
---

The Load Checkpoint node loads a diffusion model; diffusion models are used to denoise latents. This node also provides the appropriate VAE and CLIP models.

## Inputs

<ResponseField name="ckpt_name">
The name of the checkpoint file to load.
</ResponseField>


## Outputs

<ResponseField name="MODEL">

The model used for denoising latents.

</ResponseField>

<ResponseField name="CLIP">

The CLIP model used for encoding text prompts.
</ResponseField>

<ResponseField name="VAE">

The VAE model used for encoding and decoding images to and from latent space.
</ResponseField>
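When a workflow is submitted through ComfyUI's API, the Load Checkpoint node appears as a `CheckpointLoaderSimple` entry, and downstream nodes reference its three outputs by slot index (MODEL = 0, CLIP = 1, VAE = 2). The sketch below is illustrative, not from this document: the node ids, the checkpoint filename, and the downstream node wiring are all hypothetical placeholders.

```python
# Minimal sketch of a ComfyUI API-format workflow fragment.
# Node ids ("4", "6", "8"), the checkpoint filename, and the
# referenced node "7" are illustrative assumptions.
import json

workflow = {
    "4": {
        "class_type": "CheckpointLoaderSimple",  # the Load Checkpoint node
        "inputs": {"ckpt_name": "sd_v1-5.safetensors"},
    },
    "6": {
        "class_type": "CLIPTextEncode",
        # ["4", 1] means: output slot 1 (CLIP) of node "4"
        "inputs": {"text": "a photo of a cat", "clip": ["4", 1]},
    },
    "8": {
        "class_type": "VAEDecode",
        # ["4", 2] means: output slot 2 (VAE) of node "4"
        "inputs": {"samples": ["7", 0], "vae": ["4", 2]},
    },
}

print(json.dumps(workflow["4"], indent=2))
```

A sampler node would consume output slot 0 (the MODEL) the same way, e.g. `"model": ["4", 0]`.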