Changelog

Stay up to date with the latest changes and improvements to Dal Nulla.

February 8, 2026

🎨 Advanced Generation Controls

  • Per-model parameters: Image nodes now show an “Advanced Settings” panel with Steps, CFG Scale, Negative Prompt, Seed, and Strength controls — all model-aware (e.g. FLUX hides Negative Prompt, Gemini shows no advanced section)
  • Video seed control: Video nodes with Runware models now expose a Seed parameter for reproducible generations
  • Strength slider: When a source image is connected, control how much it influences the result (0 = identical, 1 = full creative freedom)
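A minimal sketch of how model-aware control visibility like this might be wired up. The model names and control lists below are illustrative, not Dal Nulla's actual configuration:

```typescript
// Each model maps to the advanced controls it supports. Per the changelog,
// FLUX hides Negative Prompt and Gemini shows no advanced section at all.
type Control = "steps" | "cfgScale" | "negativePrompt" | "seed" | "strength";

const MODEL_CONTROLS: Record<string, Control[]> = {
  flux: ["steps", "cfgScale", "seed", "strength"], // no negative prompt
  gemini: [],                                      // no advanced panel
  default: ["steps", "cfgScale", "negativePrompt", "seed", "strength"],
};

// Resolve which controls to render for a given model, falling back to the
// full set for models without a specific entry.
function advancedControls(model: string): Control[] {
  return MODEL_CONTROLS[model] ?? MODEL_CONTROLS.default;
}
```

The inspector would then render only the controls returned for the currently selected model, so switching models automatically shows or hides parameters.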

🧩 Model Catalog Expansion

  • 25+ image models: Added Kling 2.6 Pro, Kling 2.5 Turbo Pro/Standard, Runway Gen-4 Image, Runway Aleph, GPT Image 1 Mini, Seedream 4.5/4.0, Wan2.6 Image
  • 20+ video models: Added Hailuo 2.3 Fast, Video-01 Director, Seedance 1.0 Pro Fast, and updated specs for all existing models with accurate durations and aspect ratios
  • New providers: ByteDance (Seedream image + Seedance video), Alibaba (Wan2.6)

🤝 Real-Time Collaboration Upgrades

  • Remote selection & hover: See which nodes your collaborators are selecting (solid border) or hovering (dashed border), with their cursor color
  • Canvas comments: Pin comments directly on the canvas — create via toolbar or right-click menu. Features reply threads, resolve, delete, and drag to reposition. Synced in real time via Yjs
  • Remote generation indicators: When a collaborator generates a node, you see their colored spinner and email — inspector buttons become read-only to prevent conflicts
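The selection/hover distinction above can be sketched as a pure mapping from a collaborator's interaction state to a border style. This is an illustrative sketch, not the app's actual rendering code:

```typescript
// Solid border for a remote selection, dashed for a remote hover,
// both tinted with that collaborator's assigned cursor color.
type RemoteState = "selected" | "hovered" | "idle";

interface BorderStyle {
  style: "solid" | "dashed" | "none";
  color: string;
}

function remoteBorder(state: RemoteState, cursorColor: string): BorderStyle {
  switch (state) {
    case "selected":
      return { style: "solid", color: cursorColor };
    case "hovered":
      return { style: "dashed", color: cursorColor };
    default:
      return { style: "none", color: "transparent" };
  }
}
```

Keeping this as a pure function makes it easy to drive from whatever presence data the sync layer broadcasts.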

✏️ Landing Page & UX

  • Redesigned Use-Cases section: Sticky scroll layout with active state indicators and fading screenshots
  • Redesigned Models section: Bento grid grouped by provider with color-coded model pills
  • Collaborative H1 animation: Dynamic typing effect simulating real-time collaboration on the homepage

February 7, 2026

🎨 Massive Model Expansion

  • 19 image models now available: Gemini, FLUX.2 (5 variants), FLUX.1.1 Pro/Ultra, Midjourney V7/V6.1/V6, GPT Image 1/1.5, DALL-E 3/2, Runway Gen-4 Image Turbo
  • 17 video models now available: Google Veo 3/3.1, xAI Grok, KlingAI v1/v1.5, Runway Gen-4 Turbo/Gen-4.5, MiniMax Hailuo 02/2.3/Video-01 Live, Seedance 1.0 Lite/Pro/1.5 Pro, Sora 2/2 Pro
  • Model selector on image and video nodes — pick your preferred model directly from the canvas
  • Smart inspector controls — duration, resolution, and aspect ratio options auto-adapt based on the selected model’s capabilities

🤝 Real-Time Collaboration

  • Live cursors: See collaborators’ cursors moving on the canvas in real time
  • Share & invite: Invite team members by email — if they don’t have an account, they’ll receive an invite link
  • Project notifications: Dashboard shows pending invitations with project title, role, and inviter
  • Shared file access: Generated content in collaborative projects is visible to all members
  • Conflict-free editing: Yjs CRDT-based sync ensures no data loss when multiple users edit simultaneously

🚀 Workflow Engine Improvements

  • Text-to-text connections: Text nodes now properly chain — upstream text combines with the node’s own instructions
  • Image-to-image connections: Source images are now correctly passed through workflow pipelines
  • Parallel node execution: Fixed SSE parser to correctly handle multiple nodes generating simultaneously
  • Auto-refresh URLs: Signed URLs now auto-refresh every 45 minutes — no more broken images after long sessions
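The URL-refresh behavior above can be sketched as a staleness check plus a periodic sweep. The 45-minute interval comes from the changelog; the function and field names are illustrative:

```typescript
// Signed URLs are treated as stale once they are 45 minutes old, matching
// the auto-refresh interval described in the changelog.
const REFRESH_INTERVAL_MS = 45 * 60 * 1000;

function needsRefresh(issuedAtMs: number, nowMs: number): boolean {
  return nowMs - issuedAtMs >= REFRESH_INTERVAL_MS;
}

// A periodic sweep collects the ids of URLs due for re-signing.
function staleUrlIds(
  urls: { id: string; issuedAtMs: number }[],
  nowMs: number,
): string[] {
  return urls.filter((u) => needsRefresh(u.issuedAtMs, nowMs)).map((u) => u.id);
}
```

A timer (e.g. `setInterval`) could call the sweep and request fresh signed URLs for the stale ids, which is what keeps images from breaking in long sessions.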

✏️ Editor & UX

  • Pricing page redesign with free plan section and usage estimates
  • Connection lines now use DOM-measured anchor points for pixel-perfect accuracy
  • Node width defaults fixed — Start nodes and other narrow nodes connect at the correct position
  • Mobile improvements: Fixed iOS auto-zoom on input focus, long-press behavior, sidebar sizing, and inspector panel height
  • Image/video history: Nodes now show the last generated content even after workflow cache is cleared

February 6, 2026

🎬 Workflow Runs & History

  • View past workflow runs with status and generated assets directly from the Run button
  • Nodes now show real-time loading state during workflow execution, even after page refresh
  • Email notification when a workflow completes or fails

🔗 Connections & Nodes

  • LinkedIn and Shopify nodes can now connect to all compatible node types
  • Video nodes now have a history carousel to browse and restore previous generations
  • Parallel execution for nodes that fork into multiple branches
  • Green glow effect on nodes that complete during a workflow run

✏️ Editor Improvements

  • Text nodes are now scrollable with trackpad
  • Smoother connection lines with DOM-based anchor positioning
  • Restored animated border on generating nodes

🤝 Multiplayer (Preview)

  • Foundation for real-time collaboration: project sharing, member invites, and sync infrastructure

February 2026

🎙️ AI Voice Nodes & Audio Support

We’ve introduced a powerful new way to control audio in your video projects.

  • AI Voice Nodes: Generate speech from text (TTS), clone voices from reference audio, and perform Speech-to-Speech (STS) dubbing for videos.
  • Audio Nodes: Upload or record audio directly in the editor to provide context for generations or use as a reference.
  • Video Dubbing: Connect a Video Node to an AI Voice Node to replace the voice while preserving the original performance timing.

🎨 Draw Nodes

  • Sketch to Image: You can now add Draw Nodes to the canvas. Sketch simple layouts or compositions and connect them to Image or Scene nodes to guide the visual structure of your generations.

📚 Documentation

  • Added comprehensive documentation for all new node types.
  • Updated the “Getting Started” and “Features” guides.

January 2026

🚀 Launch of Dal Nulla

  • Node-Based Editor: The core graph editor for non-linear video storytelling.
  • Scene Nodes: Text-to-Video generation with control over duration and aspect ratio.
  • Reference Consistency: Ability to link generated images as references for consistent characters.
  • AI Autofill: “Director Mode” to expand simple prompts into full storyboards.
  • Upscaler Nodes: AI enhancement for images and videos up to 4K.