Adding a Model Node
- Drag a Model node from the Node Palette
- Click the node to open the model selector
- Browse or search for a model
- Click to select and configure
Finding Models
Search
Type keywords in the search bar:
- Model names: “flux”, “sdxl”, “stable-diffusion”
- Capabilities: “upscale”, “remove background”, “face swap”
- Categories: “image generation”, “video”, “audio”
Browse by Category
Models are organized into categories:
- Image Generation: Create images from text prompts
- Image-to-Image: Transform existing images
- Upscaling: Increase image resolution
- Video: Generate or transform videos
- Audio: Speech, music, and sound
- Text: Language models for text processing
Popular Models
Image Generation
- Flux Schnell - Fast, high-quality image generation
- Flux Pro - Premium quality, more detail
- SDXL - Stable Diffusion XL base model
- Stable Diffusion 3 - Latest SD version
Image Enhancement
- Real-ESRGAN - General purpose upscaling
- Clarity Upscaler - Detail-preserving upscale
- GFPGAN - Face enhancement
Image Editing
- RemBG - Background removal
- Inpainting Models - Fill or replace regions
- ControlNet - Guided image generation
Video
- Stable Video Diffusion - Image to video
- AnimateDiff - Animation generation
- Video Upscalers - Enhance video quality
Configuring Models
Model Parameters
Each model has unique parameters. Common ones include:

| Parameter | Description | Typical Values |
|---|---|---|
| prompt | Text description of desired output | Any text |
| negative_prompt | What to avoid in the output | “blurry, low quality” |
| width / height | Output dimensions | 512-2048 pixels |
| num_outputs | Number of images to generate | 1-4 |
| guidance_scale | How closely to follow the prompt | 1-20 (typically 7.5) |
| num_inference_steps | Generation quality/speed trade-off | 20-100 |
| seed | Random seed for reproducibility | Any integer |
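These parameters are typically supplied together as a key-value payload. A hypothetical request body for an image-generation model might look like the sketch below; the field names mirror the table above, but the exact schema depends on the individual model.

```python
# Hypothetical parameter payload for an image-generation model.
# Field names mirror the common parameters above; the exact schema
# varies per model.
params = {
    "prompt": "a lighthouse at sunset, oil painting",
    "negative_prompt": "blurry, low quality",
    "width": 1024,
    "height": 1024,
    "num_outputs": 1,
    "guidance_scale": 7.5,       # how closely to follow the prompt
    "num_inference_steps": 30,   # quality/speed trade-off
    "seed": 42,                  # fixing the seed makes runs reproducible
}

# Quick sanity checks against the typical ranges in the table:
assert 512 <= params["width"] <= 2048
assert 1 <= params["guidance_scale"] <= 20
assert 1 <= params["num_outputs"] <= 4
```

Re-running with the same seed and parameters should reproduce the same output, which is useful when you want to tweak one parameter at a time.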
Dynamic Inputs
Model inputs appear automatically based on the selected model.
Connecting to Model Inputs
Instead of typing values directly, connect nodes to model inputs.
Model Execution
Running Models
Models execute when:
- All required inputs have data
- An Export node is run
- The workflow is triggered via webhook
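The first condition can be sketched as a simple readiness check. This is illustrative only; the node structure and field names below are made up, not the tool's actual data model.

```python
# Illustrative sketch: a model node runs only once every required
# input has data. The node structure here is hypothetical.
def is_ready(node: dict) -> bool:
    """Return True when all required inputs have a non-None value."""
    return all(
        inp.get("value") is not None
        for inp in node["inputs"]
        if inp.get("required")
    )

node = {
    "model": "flux-schnell",
    "inputs": [
        {"name": "prompt", "required": True, "value": "a red fox"},
        {"name": "seed", "required": False, "value": None},  # optional, may stay empty
    ],
}
print(is_ready(node))  # True: the only required input has data
```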
Execution Status
Watch the node status indicator:
- Gray: Not started
- Blue spinner: Running
- Green check: Completed
- Red X: Failed
Viewing Results
After execution:
- Preview appears in the node
- Click preview to view full size
- Check inspector for run history
- Access outputs in connected nodes
Handling Errors
Common model errors:

| Error | Cause | Solution |
|---|---|---|
| “Invalid prompt” | Prompt too long or contains banned content | Shorten or modify the prompt |
| “Insufficient credits” | Account needs credits | Add credits in settings |
| “Model timeout” | Generation took too long | Try a simpler prompt or a smaller size |
| “NSFW content” | Content policy violation | Modify the prompt content |
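If you trigger workflows programmatically (for example via webhook), the table above can be turned into a small remediation lookup. The error strings and helper below are assumptions for illustration, not part of a real SDK.

```python
# Hypothetical remediation lookup based on the error table above.
# The error strings are assumed, not a documented error contract.
REMEDIES = {
    "Invalid prompt": "Shorten or modify the prompt",
    "Insufficient credits": "Add credits in settings",
    "Model timeout": "Try a simpler prompt or a smaller size",
    "NSFW content": "Modify the prompt content",
}

def suggest_fix(error_message: str) -> str:
    """Match a known error substring and return the suggested fix."""
    for error, remedy in REMEDIES.items():
        if error.lower() in error_message.lower():
            return remedy
    return "Check the model's documentation"

print(suggest_fix('Run failed: "Model timeout" after 300s'))
# Try a simpler prompt or a smaller size
```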
Chaining Models
Sequential Processing
Connect models in sequence for multi-stage processing.
Parallel Processing
Create variations by branching.
Image-to-Image Workflows
Use generated images as input for other models.
Combining AI + Tools
Mix AI models with tool nodes.
Advanced Techniques
ControlNet
Use ControlNet models for guided generation:
- Canny: Edge-guided generation
- Depth: Depth map-guided
- Pose: Human pose-guided
- Scribble: Sketch-guided
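As a rough illustration, a ControlNet setup pairs one of these conditioning types with a guide image. The configuration below is hypothetical; the actual parameter names depend on the specific ControlNet model.

```python
# Hypothetical ControlNet node configuration for illustration only.
controlnet_config = {
    "conditioning": "canny",        # one of: canny, depth, pose, scribble
    "guide_image": "sketch.png",    # image that constrains the layout
    "conditioning_scale": 0.8,      # 0-1: how strongly the guide applies
    "prompt": "a modern glass house, photorealistic",
}

VALID_CONDITIONING = {"canny", "depth", "pose", "scribble"}
assert controlnet_config["conditioning"] in VALID_CONDITIONING
```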
Inpainting
Edit specific regions of images.
Style Transfer
Apply artistic styles to images.
Model Costs
Models consume credits based on:
- Model complexity
- Output size and count
- Processing time
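As a back-of-the-envelope sketch, costs roughly scale with output size and count times a per-model rate. The formula and rates below are invented for illustration; actual pricing is model-specific.

```python
# Invented cost model for illustration only: credits scale with
# pixel area, output count, and a per-model base rate.
def estimate_credits(base_rate: float, width: int, height: int,
                     num_outputs: int) -> float:
    """Estimate credits as base_rate (credits/megapixel) x area x count."""
    megapixels = (width * height) / 1_000_000
    return round(base_rate * megapixels * num_outputs, 2)

# e.g. a premium model at 2.0 credits/megapixel, two 1024x1024 outputs:
print(estimate_credits(2.0, 1024, 1024, 2))  # 4.19
```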