Automatically generate new video content from prompts, scripts, or topics using AI. AI Content Generation is designed for creators who want to produce video from scratch — no camera or footage required.

How It Works

The AI Content Generation tile takes your text prompt or topic and produces a complete video with visuals, narration, and structure. You define the subject matter, style, and format, and Mosaic handles the rest — generating scenes, selecting or creating visuals, and assembling the final output.

Input & Settings

Prompt / Topic

Describe what the video should be about. Examples:
  • “5 tips for productivity while working from home”
  • “A 60-second explainer on how solar panels work”
  • “An Instagram Reel about the benefits of meditation”
Be specific about the subject, audience, and intended platform for best results.

Style

Choose the visual and tonal style of the generated content. Options may include:
  • Educational — clean, informative, structured
  • Social Media — fast-paced, bold, attention-grabbing
  • Cinematic — high production feel, dramatic pacing
  • Corporate — professional, polished, brand-safe

Duration

Set the target length of the generated video. Common options:
  • Short (15–30s) — TikTok, Reels, Shorts
  • Medium (60–90s) — Instagram, LinkedIn
  • Long (2–5 min) — YouTube, training content

Voice & Narration

Choose whether to include AI-generated narration and select a voice style. Options:
  • No narration — visuals and text only
  • Auto-narrate — AI writes and speaks the script
  • Custom script — you provide the narration text

Usage Recommendations

Use AI Content Generation to:
  • Rapidly produce social media content at scale
  • Create explainer videos from blog posts or articles
  • Generate video drafts for brainstorming and iteration
  • Build educational or training content without filming
AI Content Generation works great when combined with:
  • Captions (add subtitles for accessibility and engagement)
  • AI Music (add a background score)
  • Reframe (adapt output to different aspect ratios)
  • Destination (publish directly to social platforms)

API Info

  • Node ID: d898f2b1-3151-4231-a4d4-dd4d5a020b05

Node params

Each entry lists the parameter's type, whether it is required, its default value, and notes.
  • prompt (string, required, default ""): Primary generation instruction.
  • model (string enum, optional, UI default "kling-2.1-master"): Video model selector. One of "kling-2.1-master", "kling-1.6", "helix", "helix-fast", "veo-3.1", "veo-3.1-fast", "veo-3", "veo-3-fast", "sora-2", or "sora-2-pro".
  • aspect_ratio ("16:9" | "9:16", optional, default "16:9"): The current UI restricts output to these two ratios.
  • length_target (number of seconds, optional, default model-dependent): Model-specific increments and ranges apply.
  • seed_image_uri (string, optional, default unset): "First frame" style/structure guide.
  • last_frame_image (string, conditional/model-limited, default unset): Supported for Veo 3.1 paths; the UI gates availability by model.
  • reference_images (string[], conditional/model-limited, default unset): Up to 3 reference images (UI).
  • sora_reference_image (string, conditional/model-limited, default unset): Sora-specific image guidance field.

Parameter groups

  • Generation core: prompt, model, aspect_ratio, length_target
  • Image guidance (model-specific): seed_image_uri, last_frame_image, reference_images, sora_reference_image

Model constraints

  • Veo 3/3.1 families: length_target uses 8-second increments, typically 8–64 seconds.
  • Kling families: length_target uses 5-second increments.
  • Sora families: the planner path supports 4-second increments up to 60 seconds.
  • Legacy aliases helix and helix-fast are normalized to veo-3.1 and veo-3.1-fast.
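The increment and alias rules above can be sketched as a client-side pre-flight check. This is an illustrative helper, not part of the Mosaic API; the maximum values for the Kling families are left open because the docs state only their increments.

```python
# Illustrative validation of the model constraints described above.
# Function and table names are assumptions, not Mosaic API surface.

# Legacy aliases are normalized before lookup.
ALIASES = {"helix": "veo-3.1", "helix-fast": "veo-3.1-fast"}

# (increment_seconds, max_seconds) per model; None = no stated maximum.
RULES = {
    "veo-3": (8, 64),
    "veo-3-fast": (8, 64),
    "veo-3.1": (8, 64),
    "veo-3.1-fast": (8, 64),
    "kling-2.1-master": (5, None),
    "kling-1.6": (5, None),
    "sora-2": (4, 60),
    "sora-2-pro": (4, 60),
}

def validate_length(model: str, length_target: int) -> str:
    """Return the normalized model name if length_target fits its rules."""
    model = ALIASES.get(model, model)
    increment, maximum = RULES[model]
    if length_target % increment != 0:
        raise ValueError(f"{model} requires {increment}-second increments")
    if maximum is not None and length_target > maximum:
        raise ValueError(f"{model} supports at most {maximum} seconds")
    return model

print(validate_length("helix", 24))  # prints veo-3.1
```

Note that normalizing aliases first means callers can keep using "helix" and "helix-fast" while the constraint table only tracks the canonical Veo names.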

Scenario requirements

  • Keep image-mode inputs mutually consistent:
    • Reference-images mode conflicts with first/last frame mode in UI.
    • Sora paths use sora_reference_image rather than seed_image_uri.
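These mutual-exclusivity rules can be expressed as a short consistency check before submitting a request. The field names come from the parameter table above; the function itself is a sketch, not an official validator.

```python
def check_image_modes(params: dict) -> None:
    """Reject image-guidance combinations the UI disallows (illustrative)."""
    frame_mode = "seed_image_uri" in params or "last_frame_image" in params
    ref_mode = "reference_images" in params

    # Reference-images mode conflicts with first/last-frame mode.
    if frame_mode and ref_mode:
        raise ValueError("reference_images cannot be combined with frame guides")

    # Sora paths use sora_reference_image rather than seed_image_uri.
    if params.get("model", "").startswith("sora") and "seed_image_uri" in params:
        raise ValueError("use sora_reference_image for Sora models")

# A consistent Veo request passes silently:
check_image_modes({"model": "veo-3.1", "seed_image_uri": "gs://bucket/seed.png"})
```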

Example

{
  "prompt": "A cinematic product teaser with dramatic lighting and macro shots",
  "model": "veo-3.1",
  "aspect_ratio": "16:9",
  "length_target": 24,
  "seed_image_uri": "gs://bucket/seed-frame.png"
}