SOP: Content & Video Production Workflow

Status: DRAFT (V1.0)

Last Updated: 2025-12-22
Owner: Nqobizitha Mlilo


🎬 Core Video Stack

Our video production is built on a high-fidelity pipeline for editing, motion graphics, and audio.

  1. DaVinci Resolve: Primary nonlinear editor (NLE) for assembly and color grading.
    • Fusion Integration: Primary motion graphics and VFX work is moving into the Fusion page for node-based control.
  2. Cavalry: Used for real-time 2D procedural motion graphics.
  3. Blender: Used for 3D elements or hybrid 2D/3D visual effects within video content.

🎵 Audio & Voice Pipeline

Audio is treated as a core production element equal to visuals.

  1. Ableton Live: Used for music production, soundtrack scoring, and final audio narration processing.
  2. AI Voice Synthesis:
    • ElevenLabs: Primary tool for high-end AI narration and character voices.
    • Google Cloud TTS: Secondary tool for scalable voice content requirements.
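
For long scripts, cloud TTS services impose a per-request input limit (Google Cloud TTS documents a cap of roughly 5,000 bytes per request; verify against the current quota docs). A minimal sketch of splitting a narration script on sentence boundaries so each chunk fits one request — the limit value and sample script here are assumptions for illustration:

```python
import re

# Sketch: split a long narration script into TTS-sized chunks.
# The 5000-byte per-request cap is an assumption based on Google
# Cloud TTS quota documentation; confirm before relying on it.

def chunk_script(text: str, max_bytes: int = 5000) -> list[str]:
    """Split text on sentence boundaries so each chunk fits one TTS request."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        candidate = f"{current} {sentence}".strip()
        if len(candidate.encode("utf-8")) <= max_bytes:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = sentence  # this sentence starts a new chunk
    if current:
        chunks.append(current)
    return chunks

# Hypothetical script: 200 repetitions of two short narration lines.
script = "First line of narration. Second line. " * 200
parts = chunk_script(script)
print(len(parts), all(len(p.encode("utf-8")) <= 5000 for p in parts))
```

Note this sketch does not subdivide a single sentence that exceeds the limit on its own; production use would need a fallback split.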

🔄 Production Lifecycle

  1. Scripting & Audio First: Narration is generated (ElevenLabs) or recorded, then timed in Ableton to create the "heartbeat" of the video.
  2. Visual Asset Assembly: Bringing in Moho/Blender animations or live-action footage.
  3. Editing (Resolve): Assembling the visual story on the timeline.
  4. Motion Graphics (Fusion/Cavalry): Layering in typography, trackers, and VFX overlays.
  5. Color & Export: Final grading in Resolve to ensure the "Nafuna Look" is consistent across platforms.
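
Because the audio "heartbeat" is locked first, visual cuts are placed against narration timing. A minimal sketch of converting marker times (in seconds, as exported from an Ableton session) into timeline frame numbers — the 24 fps rate and the marker names are illustrative assumptions, not a house standard:

```python
# Sketch: convert audio marker times (seconds) to timeline frames.
# Assumes a 24 fps timeline; the marker list is hypothetical data.

FPS = 24

def seconds_to_frames(seconds: float, fps: int = FPS) -> int:
    """Round a marker time to the nearest timeline frame."""
    return round(seconds * fps)

# Hypothetical narration beats exported from the Ableton session.
markers = {"verse_1": 2.5, "chorus": 14.0, "outro": 41.75}
cut_points = {name: seconds_to_frames(t) for name, t in markers.items()}
print(cut_points)  # frame numbers at which to place edit cuts
```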

🤖 AI Video Production Pipeline

For Music Videos (Internal & Client), we use a cutting-edge AI generation pipeline that prioritizes consistency and cost-efficiency.

1. Pre-Production: Weavy Storyboarding

  • Platform: Weavy (Primary storyboard orchestrator).
  • Models:
    • Flux: High-fidelity base images.
    • SeedDream / Nano Banana: Specialized image editing and style-consistent character models.
  • Goal: Create a locked visual sequence before moving to motion.

2. Production: AI Video Generation

  • Platform: RunningHub (Used for cost-effective credit management).
  • Models:
    • Wan 2.2: The "Minimum Standard" model for routine movement and texture.
    • Wan 2.5: Used for complex shots and higher fidelity temporal consistency.
    • Wan Infinite Talk: Specialized for Lip-Sync and performance-driven shots (as used in Koji - Ndiye One).

3. Post-Production: High-End Refinement

  • Edit: Assembled in DaVinci Resolve.
  • Upscaling: Refined passes through Topaz or Resolve's built-in Super Scale if required.
  • Overlay: Final lighting and texture overlays added in Fusion to blend AI generations with live-action or clean NLE plates.

📈 Quality Standards

  • Temporal Stability: No jittering or "AI hallucinations" in final delivery.
  • Audio Clarity: No video leaves without an Ableton-processed audio pass.
  • The "Nafuna Look": High-contrast, vibrant color grading established in Resolve.
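
The temporal-stability bar above can be spot-checked numerically before delivery. A minimal sketch (pure Python, grayscale frames as nested lists; the threshold is an illustrative value, not a Nafuna delivery spec) that flags sudden frame-to-frame jumps of the kind AI generations can produce:

```python
# Sketch: flag jittery frames via mean absolute difference between
# consecutive grayscale frames. Threshold is illustrative only.

def mean_abs_diff(a, b):
    """Average per-pixel absolute difference between two frames."""
    total = sum(abs(pa - pb) for row_a, row_b in zip(a, b)
                for pa, pb in zip(row_a, row_b))
    pixels = len(a) * len(a[0])
    return total / pixels

def jitter_flags(frames, threshold=30.0):
    """Return indices of frames that jump too far from the previous one."""
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) > threshold]

# Tiny 2x2 example: frame 2 jumps sharply, frames 0 and 1 are stable.
frames = [
    [[10, 10], [10, 10]],
    [[12, 11], [10, 13]],      # small drift: acceptable
    [[200, 190], [180, 210]],  # sudden jump: flagged
]
print(jitter_flags(frames))  # -> [2]
```

In practice this check would run on frames decoded from the export, but the metric itself is the same.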
