Cordage integrates with Replicate to provide access to thousands of AI models. This guide covers how to find, configure, and chain models in your workflows.

Adding a Model Node

  1. Drag a Model node from the Node Palette
  2. Click the node to open the model selector
  3. Browse or search for a model
  4. Click to select and configure

Finding Models

Type keywords in the search bar:
  • Model names: “flux”, “sdxl”, “stable-diffusion”
  • Capabilities: “upscale”, “remove background”, “face swap”
  • Categories: “image generation”, “video”, “audio”

Browse by Category

Models are organized into categories:
  • Image Generation: Create images from text prompts
  • Image-to-Image: Transform existing images
  • Upscaling: Increase image resolution
  • Video: Generate or transform videos
  • Audio: Speech, music, and sound
  • Text: Language models for text processing
Popular Models

Image Generation
  • Flux Schnell - Fast, high-quality image generation
  • Flux Pro - Premium quality, more detail
  • SDXL - Stable Diffusion XL base model
  • Stable Diffusion 3 - Latest SD version

Upscaling & Enhancement
  • Real-ESRGAN - General purpose upscaling
  • Clarity Upscaler - Detail-preserving upscale
  • GFPGAN - Face enhancement

Image-to-Image
  • RemBG - Background removal
  • Inpainting Models - Fill or replace regions
  • ControlNet - Guided image generation

Video
  • Stable Video Diffusion - Image to video
  • AnimateDiff - Animation generation
  • Video Upscalers - Enhance video quality

Configuring Models

Model Parameters

Each model has unique parameters. Common ones include:
Parameter             Description                          Typical Values
prompt                Text description of desired output   Any text
negative_prompt       What to avoid in the output          "blurry, low quality"
width / height        Output dimensions                    512-2048 pixels
num_outputs           Number of images to generate         1-4
guidance_scale        How closely to follow the prompt     1-20 (typically 7.5)
num_inference_steps   Generation quality/speed             20-100
seed                  Random seed for reproducibility      Any integer
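
Outside the canvas, these same parameters map onto the input dictionary of Replicate's Python client. A minimal sketch, assuming the replicate package and a REPLICATE_API_TOKEN in the environment; the slug and accepted keys are examples and vary per model:

```python
import replicate  # pip install replicate; reads REPLICATE_API_TOKEN from the env

# Common parameters become keys of the input dict. The accepted keys vary
# per model -- check the model card. Some models also need an explicit
# ":<version-hash>" suffix on the slug.
output = replicate.run(
    "stability-ai/sdxl",  # example slug
    input={
        "prompt": "a misty mountain landscape at dawn",
        "negative_prompt": "blurry, low quality",
        "width": 1024,
        "height": 1024,
        "num_outputs": 1,
        "guidance_scale": 7.5,
        "num_inference_steps": 30,
        "seed": 42,  # fix for reproducible results
    },
)
print(output)  # typically a list of generated image URLs
```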

Dynamic Inputs

Model inputs appear automatically based on the selected model:
[Model: Flux Schnell]
    ├── prompt (required)
    ├── aspect_ratio
    ├── num_outputs
    ├── output_format
    └── output_quality
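
The same information is queryable through Replicate's API. A sketch using the Python client to list a model version's input schema (the layout follows Replicate's OpenAPI convention):

```python
import replicate

# A model version's OpenAPI schema describes its inputs -- presumably the
# same information the node's dynamic input ports are generated from.
model = replicate.models.get("black-forest-labs/flux-schnell")
schema = model.latest_version.openapi_schema
input_schema = schema["components"]["schemas"]["Input"]

required = set(input_schema.get("required", []))
for name, spec in input_schema["properties"].items():
    marker = "required" if name in required else "optional"
    print(f"{name} ({marker}): {spec.get('description', '')}")
```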

Connecting to Model Inputs

Instead of typing values directly, connect nodes to model inputs:
[Text Input: "landscape"] ──prompt──> [Flux Model]
[Image Input: reference.jpg] ──image──> [Img2Img Model]
[Number: 1024] ──width──> [Model]
This enables dynamic, reusable workflows.
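
The equivalent pattern in Replicate's Python client is wiring connected values, including open file handles, into the input dict. A sketch, with an assumed img2img-capable slug:

```python
import replicate

# Connected values become entries in the input dict. Replicate accepts
# URLs or open file handles for image inputs.
with open("reference.jpg", "rb") as image:
    output = replicate.run(
        "stability-ai/sdxl",  # example img2img-capable slug
        input={
            "prompt": "landscape",  # <- Text Input node
            "image": image,         # <- Image Input node
            "width": 1024,          # <- Number node
        },
    )
```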

Model Execution

Running Models

Models execute when:
  1. All required inputs have data
  2. An Export node is run
  3. The workflow is triggered via webhook
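
For the webhook case, a sketch of the triggering request; the URL and payload shape below are hypothetical placeholders, so copy the real webhook URL from your workflow's trigger settings:

```python
import requests  # pip install requests

# Hypothetical trigger call -- the endpoint and payload are placeholders.
resp = requests.post(
    "https://app.cordage.example/webhooks/<workflow-id>",
    json={"prompt": "a misty mountain landscape"},
    timeout=30,
)
resp.raise_for_status()
```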

Execution Status

Watch the node status indicator:
  • Gray: Not started
  • Blue spinner: Running
  • Green check: Completed
  • Red X: Failed
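
The same lifecycle is visible when polling Replicate directly. A sketch using the Python client's prediction API:

```python
import time
import replicate

model = replicate.models.get("black-forest-labs/flux-schnell")
prediction = replicate.predictions.create(
    version=model.latest_version,
    input={"prompt": "a lighthouse in a storm"},
)

# Replicate's prediction statuses map roughly onto the node indicators:
# "starting"/"processing" -> blue spinner, "succeeded" -> green check,
# "failed"/"canceled" -> red X.
while prediction.status not in ("succeeded", "failed", "canceled"):
    time.sleep(2)
    prediction.reload()

print(prediction.status, prediction.output)
```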

Viewing Results

After execution:
  • Preview appears in the node
  • Click preview to view full size
  • Check inspector for run history
  • Access outputs in connected nodes

Handling Errors

Common model errors:
Error                    Cause                                        Solution
"Invalid prompt"         Prompt too long or contains banned content   Shorten or modify prompt
"Insufficient credits"   Account needs credits                        Add credits in settings
"Model timeout"          Generation took too long                     Try simpler prompt or smaller size
"NSFW content"           Content policy violation                     Modify prompt content
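
When calling Replicate programmatically, these failures surface as exceptions. A sketch, assuming the exception classes exposed by the replicate Python client:

```python
import replicate
from replicate.exceptions import ModelError, ReplicateError

try:
    output = replicate.run(
        "black-forest-labs/flux-schnell",
        input={"prompt": "a quiet harbor at dusk"},
    )
except ModelError as err:
    # The prediction itself failed (content policy, timeout, bad input...).
    print(f"model failed: {err}")
except ReplicateError as err:
    # API-level problems, e.g. authentication or insufficient credits.
    print(f"API error: {err}")
```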

Chaining Models

Sequential Processing

Connect models in sequence for multi-stage processing:
[Text] ──> [Generate Model] ──> [Upscale Model] ──> [Export]
Each model’s output becomes the next model’s input.
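
The same chain expressed with Replicate's Python client; nightmareai/real-esrgan is a public Real-ESRGAN port, and the input names here follow its model card at the time of writing:

```python
import replicate

# Stage 1: generate a base image.
images = replicate.run(
    "black-forest-labs/flux-schnell",
    input={"prompt": "a watercolor city skyline"},
)

# Stage 2: feed the first output into the upscaler. Output items
# stringify to URLs in recent client versions.
upscaled = replicate.run(
    "nightmareai/real-esrgan",
    input={"image": str(images[0]), "scale": 4},
)
print(upscaled)
```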

Parallel Processing

Create variations by branching:
                    ┌──> [Style A Model] ──> [Export A]
[Text Input] ───────┤
                    └──> [Style B Model] ──> [Export B]
Both models process the same prompt simultaneously.
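
Programmatically, branching is just concurrent calls sharing one prompt. A sketch using a thread pool; the two slugs are examples:

```python
import replicate
from concurrent.futures import ThreadPoolExecutor

prompt = "a fox in the snow"  # one shared Text Input

def run_model(slug: str):
    return replicate.run(slug, input={"prompt": prompt})

# Two branches running concurrently, like the diagram above.
with ThreadPoolExecutor() as pool:
    style_a = pool.submit(run_model, "black-forest-labs/flux-schnell")
    style_b = pool.submit(run_model, "black-forest-labs/flux-dev")
    results = [style_a.result(), style_b.result()]
```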

Image-to-Image Workflows

Use generated images as input for other models:
[Text] ──> [Base Model] ──> [Enhancement Model] ──> [Final Model] ──> [Export]

Combining AI + Tools

Mix AI models with tool nodes:
[Text] ──> [Model] ──> [Crop] ──> [Resize] ──> [Export]
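
A sketch of the same pipeline in code, standing in Pillow for the Crop/Resize tool nodes; output handling assumes recent replicate client versions, where output items stringify to URLs:

```python
from io import BytesIO

import replicate
import requests
from PIL import Image  # pip install pillow

images = replicate.run(
    "black-forest-labs/flux-schnell",
    input={"prompt": "a tabby cat on a windowsill"},
)

# Download the generated image, then apply the tool steps locally.
img = Image.open(BytesIO(requests.get(str(images[0])).content))
img = img.crop((0, 0, 768, 768))  # Crop node equivalent
img = img.resize((512, 512))      # Resize node equivalent
img.save("final.png")             # Export node equivalent
```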

Advanced Techniques

ControlNet

Use ControlNet models for guided generation:
[Reference Image] ──control_image──> [ControlNet Model]
[Prompt Text] ──────prompt─────────>         │
                                             ▼
                                         [Output]
ControlNet types:
  • Canny: Edge-guided generation
  • Depth: Depth map-guided
  • Pose: Human pose-guided
  • Scribble: Sketch-guided
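
A canny-guided sketch using jagilley/controlnet-canny, a public ControlNet model on Replicate; treat the input names as assumptions and confirm them on the model card:

```python
import replicate

# Canny (edge-guided) generation. The "image"/"prompt" input names follow
# the model card at the time of writing -- verify before relying on them.
with open("reference.jpg", "rb") as control_image:
    output = replicate.run(
        "jagilley/controlnet-canny",
        input={
            "image": control_image,                  # edge-detection source
            "prompt": "a cyberpunk street at night", # text guidance
        },
    )
```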

Inpainting

Edit specific regions of images:
[Source Image] ──image───> [Inpainting Model]
[Mask Image] ────mask────>         │
[Edit Prompt] ───prompt──>         │
                                   ▼
                               [Output]
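
A sketch against stability-ai/stable-diffusion-inpainting; the image/mask/prompt input names follow its model card, but verify them for the version you use:

```python
import replicate

# White areas of the mask are regenerated from the prompt; black areas
# are kept.
with open("source.png", "rb") as image, open("mask.png", "rb") as mask:
    output = replicate.run(
        "stability-ai/stable-diffusion-inpainting",
        input={
            "image": image,
            "mask": mask,
            "prompt": "a red brick wall",
        },
    )
```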

Style Transfer

Apply artistic styles:
[Content Image] ──> [Style Model] ──> [Output]
[Style Image] ────>

Model Costs

Models consume credits based on:
  • Model complexity
  • Output size and count
  • Processing time
Check the model card for pricing information before running.
Use faster models like Flux Schnell during development, then switch to higher-quality models for final outputs.
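
One way to wire that switch in code, with example slugs and a hypothetical FINAL_RENDER environment flag:

```python
import os
import replicate

# Hypothetical FINAL_RENDER flag: cheap/fast model while iterating,
# higher-quality model for the final render. Slugs are examples.
final = os.environ.get("FINAL_RENDER") == "1"
slug = "black-forest-labs/flux-pro" if final else "black-forest-labs/flux-schnell"

output = replicate.run(slug, input={"prompt": "product shot of a ceramic mug"})
```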

Best Practices

  • Test with small outputs - Use lower resolution during testing to save credits.
  • Use seeds for reproducibility - Set a seed when you want consistent results (see the sketch below).
  • Chain strategically - Not every workflow needs multiple models. Start simple.
  • Check model documentation - Each model has unique capabilities and limitations.
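
A sketch of the seed practice; it assumes the model honors the seed input (most image models on Replicate do):

```python
import replicate

# Same model + same inputs + same seed should give the same image,
# which makes A/B tests of other parameters meaningful.
inputs = {"prompt": "an isometric village", "seed": 1234}
first = replicate.run("black-forest-labs/flux-schnell", input=inputs)
second = replicate.run("black-forest-labs/flux-schnell", input=inputs)
```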