Changelog

Stay up to date with the latest changes and improvements to Dal Nulla.

March 4, 2026

🔧 Reliability Improvements

  • Fixed empty string handling: Empty text results from workflow nodes are no longer silently dropped — they now propagate correctly through the pipeline
  • Fixed media disappearing: Images and videos no longer vanish after long editing sessions. Signed URLs now last 7 days (was 1 hour), and failed URL refreshes no longer overwrite working URLs
  • Loop context events: Downstream nodes in fan-out loops now correctly report which loop iteration they belong to

March 3, 2026

🔄 Unified Model Registry

  • Single source of truth: All 26 video, 13 image, and 12 text models are now managed from a shared registry, ensuring consistent specs across the editor and backend
  • Runware provider removed: 15 Runware-exclusive models have been retired. All remaining models are served through Google and fal.ai providers

🛠️ Bug Fixes & Stability

  • Fixed canvas crash: The editor no longer crashes during rapid drag/zoom operations
  • Fixed “Run” button stuck: The workflow Run button now properly detects connection failures and stops after 3 retries instead of spinning forever
  • Fixed workflow errors swallowed: SSE workflow errors now correctly surface to the UI instead of being silently caught
  • Fixed template freeze: The “Script to Storyboard” template no longer freezes when applied
  • Fixed loop images: Images generated in loop mode now appear correctly in node history
  • Fixed silent connection failures: If/Else connection commits no longer fail silently
  • Fixed video-generator page: Videos now display correctly on the standalone video generator page
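
The bounded-retry behavior described for the Run button can be sketched like this (hypothetical names; the real executor may count attempts or back off differently):

```typescript
// Attempt an operation at most (1 + maxRetries) times, then surface
// the last error instead of retrying forever.
function runWithRetries<T>(attempt: () => T, maxRetries = 3): T {
  let lastError: unknown;
  for (let i = 0; i <= maxRetries; i++) {
    try {
      return attempt();
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError; // all attempts exhausted
}
```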

March 2, 2026

🔧 Workflow Engine Improvements

  • Data sync fix: Editing node content (prompts, settings) now correctly triggers re-sync to downstream connected nodes
  • Ghost connections fixed: Five connection types that were visually connected but silently dropped data at runtime now work correctly: start-frame and end-frame references on video nodes, reference images on text nodes, audio on text nodes, and source video on image nodes
  • Orphan cleanup: Deleting a node now properly cleans up all references in downstream nodes, preventing corrupted workflows

⚡ Architecture

  • Unified workflow executor: The streaming and non-streaming executors now share a single codebase, eliminating 3,500 lines of duplicated logic and ensuring consistent behavior

February 28, 2026

🖼️ Avatar Library

  • Pre-made avatars: Browse 26 AI-generated avatars across 5 categories (people, fantasy, animals, abstract, professional)
  • Drag and drop: Drag avatars directly from the sidebar onto the canvas to create Image nodes
  • Inspector picker: Choose avatars directly from the Image node inspector panel

🔍 SEO & Landing Pages

  • Nano Banana 2 model page: New dedicated landing page for the Nano Banana 2 image model with showcase examples
  • Structured data: Added BreadcrumbList and FAQPage JSON-LD schemas across public pages for better search results
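
BreadcrumbList markup of the kind mentioned here follows the schema.org vocabulary; an illustrative example (page names and URLs are placeholders, not the actual site structure):

```typescript
// JSON-LD object for a two-level breadcrumb trail, per schema.org's
// BreadcrumbList type. Serialized into a <script type="application/ld+json">.
const breadcrumbJsonLd = {
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  itemListElement: [
    { "@type": "ListItem", position: 1, name: "Home", item: "https://example.com/" },
    { "@type": "ListItem", position: 2, name: "Models", item: "https://example.com/models" },
  ],
};
```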

📝 Automated Blog

  • AI-powered blog: New automated blog generation system that creates articles from trending AI news, with email-based approval flow before publishing

February 27, 2026

🌗 Light & Dark Mode

  • Full theme support: The entire platform now supports light and dark modes with automatic system preference detection
  • Theme toggle: Switch between system, light, and dark themes from the footer or app sidebar
  • Design system: New CSS token system ensures consistent colors, backgrounds, and borders across all components

📊 Debug Analytics Dashboard

  • Generation tracking: All image, video, text, audio, and upscaler generations are now tracked with provider info
  • Team filtering: Exclude internal team members from analytics stats
  • Per-user averages: View average generations and credits per active user

📝 Text Node Enhancements

  • Max output tokens: Control the length of AI-generated text with a configurable token limit
  • Thinking budget: Enable and configure the AI reasoning/thinking budget for supported models (Gemini, Grok, OpenAI)

💰 Pricing Updates

  • Updated pricing margin to 1.40x across all models
  • Full pricing audit: corrected 8 model prices to match actual provider costs
  • Faster pricing page: loads instantly with static costs instead of server-fetched data

🏠 Homepage Refresh

  • Repositioned for performance marketing agencies: new hero, core pipeline pillars, and AI models bento grid

February 26, 2026

💰 New Pricing System

  • Unified pricing: All models now use a consistent margin formula with real-time cost tracking from providers
  • Actual cost tracking: Post-generation charges are now based on the real provider cost, not estimates

🖼️ Nano Banana 2

  • New free image model: Nano Banana 2 (powered by Gemini 3.1 Flash) is now available on the free plan with 1K/2K/4K resolution support

📱 Mirror Mode (Mobile)

  • Mobile companion: Open the same project on your phone while editing on desktop to see selected media fullscreen with download — perfect for quick previews and sharing

🔄 Model Update

  • Gemini 3.1 Pro: Upgraded from Gemini 3 Pro Preview (deprecated) to Gemini 3.1 Pro Preview with backward compatibility

February 25, 2026

🛠️ Bug Fixes

  • Fixed double generation: Clicking “Run” on any node no longer triggers duplicate API calls
  • Fixed token expiry: Generation functions now proactively refresh JWT tokens before they expire, eliminating intermittent “Expired token” errors
  • Video batch size: Scene/Video nodes now have a batch size dropdown (1-4 videos) in the inspector
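
Proactive token refresh of the kind described here typically decodes the JWT's exp claim and refreshes while a safety buffer still remains, instead of reacting to a 401. A hedged sketch with hypothetical names:

```typescript
// Refresh the token 1 minute before it actually expires.
const REFRESH_BUFFER_MS = 60_000;

function jwtExpiryMs(token: string): number {
  // A JWT is header.payload.signature; the payload is base64url-encoded JSON.
  const payload = JSON.parse(
    Buffer.from(token.split(".")[1], "base64url").toString("utf8")
  );
  return payload.exp * 1000; // `exp` is in seconds since the epoch
}

function needsRefresh(token: string, nowMs: number = Date.now()): boolean {
  return jwtExpiryMs(token) - nowMs < REFRESH_BUFFER_MS;
}
```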

February 18, 2026

🔌 MCP Server — Generate from CLI

  • Remote AI tools: New MCP server exposing 10 tools (generate image/video/text, enhance image, list models, credits, cost estimate, projects, workflows) for use with Claude Code, Cursor, VS Code, and other MCP-compatible editors
  • OAuth 2.1 auth: Connect with your Google login — no API keys needed

🔄 Loop Fan-out & Split Results

  • Fan-out mode: Loop iterations can now execute the entire downstream subgraph per iteration, not just collect results into an array
  • Split Results node: New node type to extract individual elements from collected loop results by index
  • Video extension: VEO videos can now be extended up to ~148 seconds (initial 8s + incremental extensions)

📖 API Reference Documentation

  • New docs pages: Added interactive API reference for Generate Image, Generate Video, and Generate Text endpoints with curl examples and response schemas

February 17, 2026

👥 Team Pricing & Billing

  • Team plans: New Team Pro ($100/mo) and Team Max ($250/mo) plans with shared credit pools
  • Team dashboard: Manage members, billing, usage, and projects from a unified team page
  • Role-based access: Owner, Admin, and Member roles with appropriate permissions for billing, member management, and project access
  • Per-member/project usage: Track credit consumption broken down by team member and project

🎬 Timeline Editor (Preview)

  • Story/Edit mode toggle: Switch between the node canvas and the timeline editor from the editor header

📖 Documentation Improvements

  • Nodes Reference: Added 8 new node types to the reference page (21 total), with a new “Social Export” category
  • Cookbook: Added 5 new workflow recipes (Character Consistency, Social Media Batch, Product Photography, API-Driven Slideshow, Voice-Over Documentary)

February 16, 2026

👋 Welcome Panel

  • Quick-start templates: New projects now show a welcome overlay with 7 clickable cards (Video, Image, Autofill, Shopify Images, Shopify Videos, Social Post, Blank Canvas) to get started instantly
  • Empty project start: New projects start with a clean canvas instead of a default scene node

🔒 Node Locking

  • Skip regeneration: Lock any generative node to preserve its current output during workflow runs — locked nodes reuse their cached result instead of calling the API again
  • Visual indicator: Locked nodes show a lock badge and subtle blue border
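
The lock check during a run amounts to a short-circuit before the generation call. A simplified sketch (names hypothetical; the real executor is asynchronous and caches richer results):

```typescript
// A generative node that may be locked to its last output.
interface GenNode {
  locked: boolean;
  cachedOutput?: string;
}

function runNode(node: GenNode, generate: () => string): string {
  if (node.locked && node.cachedOutput !== undefined) {
    return node.cachedOutput; // reuse the cached result, skip the API call
  }
  const output = generate();
  node.cachedOutput = output; // cache for future locked runs
  return output;
}
```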

📐 Progressive Disclosure

  • Renamed nodes: Scene → Video, Reference Image → Image, Canvas Draw → Draw, Card Designer → Card, Network Image → URL Image, Network Video → URL Video, AI Voice → Voice, Audio File → Audio
  • Collapsible inspector: Video, Image, and Text node inspectors now use collapsible sections — essential controls are visible by default, advanced settings are tucked away

February 15, 2026

🎬 Auto Caption Editor

  • Editor mode: New timeline-based caption editor — AI transcribes your video (word-level timestamps), then edit captions in a visual timeline with drag-to-resize cue blocks
  • Style controls: Customize font, color, size, and position of captions
  • Export: Download captions as .srt or .vtt files

⚡ Canvas Performance

  • 60fps interactions: All mouse, touch, and wheel events are now throttled via requestAnimationFrame for smooth canvas performance with 100+ nodes
  • Viewport culling: Connection lines and nodes outside the visible area are no longer rendered, significantly reducing GPU load
  • GPU acceleration: Canvas transforms use hardware acceleration for smoother panning and zooming
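
Frame-based throttling of the sort described above coalesces the many pointer/wheel events that arrive between frames and applies only the latest one per frame. A minimal sketch (hypothetical names; in the browser you would pass requestAnimationFrame as the scheduler):

```typescript
type Schedule = (cb: () => void) => void;

// Wrap a handler so that however many events arrive per frame,
// `apply` runs at most once per frame with the most recent value.
function frameThrottle<T>(apply: (value: T) => void, schedule: Schedule): (value: T) => void {
  let pending: T | undefined;
  let scheduled = false;
  return (value: T) => {
    pending = value; // always remember the latest event
    if (!scheduled) {
      scheduled = true;
      schedule(() => {
        scheduled = false;
        apply(pending as T); // apply once, with the newest value
      });
    }
  };
}
```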

February 14, 2026

⚡ Canvas Performance (continued)

  • Node rendering optimization: Batched DOM reads/writes, debounced resize observers, and memoized node dimensions for smoother interaction
  • Configurable quality: Performance settings allow trading visual quality for speed on lower-end devices

February 13, 2026

📤 Google Drive Export

  • Automatic upload: New Google Drive Export node — connect image, video, or text nodes to automatically upload workflow outputs to a Google Drive folder
  • Folder picker: Authenticate with Google and choose your destination folder, with optional subfolder creation
  • Naming patterns: Configure custom naming patterns for uploaded files

🎥 Video Input for Text & Image Nodes

  • Video understanding: Connect video nodes to text nodes for AI analysis and description of video content (Gemini-powered)
  • Video-to-image: Connect video nodes to image nodes to use video frames as references

🧠 AI If/Else Multimodal

  • Visual condition evaluation: AI If/Else nodes now accept images, videos, and audio as inputs — evaluate conditions based on visual or audio content, not just text

✏️ Text Node Editing

  • Edit generated text: Generated text is now directly editable — switch between Result and Prompt tabs, copy content to clipboard, and edits are preserved in history

🛠️ Bug Fixes

  • Fixed platform showing FREE plan even after purchasing a subscription (SQL JOIN fix)
  • Fixed image nodes stuck on “Loading…” during concurrent generation
  • Fixed text history saving resolved prompt instead of original
  • Fixed video-generator page to use model specs and access control
  • Fixed Video-to-Video “Run” button not sending video input
  • Fixed AI If/Else stale data when running workflows immediately after connecting nodes

February 12, 2026

🧩 New Models & Access Control

  • Riverflow 2.0 Pro & Fast: Two new image generation models — Pro for high-quality editing with text rendering, Fast for quick production use
  • PRO model badges: Free plan users can use Google models (Nano Banana, VEO, Gemini). All other models now require a Pro subscription, clearly marked with a PRO badge in the model selector

February 11, 2026

🃏 Card Designer

  • Full-screen visual editor: Double-click any Card Designer node to open a full-screen editor with click-to-select elements, drag to reposition, resize handles, and a floating style toolbar
  • Style controls: Change fonts (Google Fonts), colors, alignment, spacing, border radius, and opacity — all applied live in the preview
  • Reference image input: Connect Image, Draw, or Network Image nodes as visual references when generating card designs

🧠 AI Logic Branch

  • AI-powered branching: New node type that uses AI to evaluate conditions. Write a natural-language condition (e.g., “Does this text have positive sentiment?”) and the AI routes to TRUE or FALSE branch with reasoning

📋 Workflow Templates

  • Pre-built workflows: Right-click the canvas and choose from 5 ready-made templates: Script to Storyboard, Image to Animated Video, API Data to Visual, Shopify Product Cards, and Conditional A/B Content

🌐 Network Image Node

  • Display images from URLs: New node type that shows remote images from any URL — connect to JSON Parser, Fetch, or Text nodes to display external images on the canvas
  • Loop mode: Use multiple URLs from a Network Image node to generate one image per URL in batch

🔊 Audio Toggle & Variables

  • Mute video audio: Toggle audio on/off per video node — muted videos skip audio generation for lower costs
  • Colored variable tags: Global variables ({{variableName}}) in prompt textareas now render as colored badges — pink for valid variables, gray for unrecognized ones
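
The valid/unrecognized split can be sketched as a simple scan of the prompt against the set of known variables (hypothetical names, not the actual implementation):

```typescript
interface VariableTag {
  name: string;
  valid: boolean; // true → pink badge, false → gray badge
}

// Find every {{variableName}} occurrence and classify it against
// the set of defined global variables.
function classifyTags(prompt: string, known: Set<string>): VariableTag[] {
  const tags: VariableTag[] = [];
  for (const match of prompt.matchAll(/\{\{(\w+)\}\}/g)) {
    tags.push({ name: match[1], valid: known.has(match[1]) });
  }
  return tags;
}
```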

📊 Workflow Runs History

  • Dedicated runs page: New /app/workflow-runs page listing all past workflow runs across all projects with status, timing, and media thumbnails

🔗 JSON Parser & Data Pipelines

  • Flatten & pick-index syntax: Use ..key[] to flatten nested arrays and ..key[N] to pick the Nth element — powerful for processing API data
  • Loop source: JSON Parser and Split Text can now feed Image, Video, and Text nodes in loop mode — generate one output per item
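
One plausible reading of the flatten syntax, over an array of objects: ..key[] collects key from every element and flattens one level, and ..key[N] then picks element N of that list. A sketch under that assumption (the real parser also handles deeper nesting):

```typescript
// Collect `key` from each row; array values are flattened one level,
// scalars are kept as-is, missing keys contribute nothing.
function flattenKey(rows: Array<Record<string, unknown>>, key: string): unknown[] {
  return rows.flatMap((row) => {
    const value = row[key];
    return Array.isArray(value) ? value : value === undefined ? [] : [value];
  });
}

// ..key[N] — pick the Nth element of the flattened list.
function pickIndex(rows: Array<Record<string, unknown>>, key: string, n: number): unknown {
  return flattenKey(rows, key)[n];
}
```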

🛒 Shopify Improvements

  • Global variables in store URL: Use {{variableName}} in the Shopify store URL for dynamic data sources
  • Start node connection: Shopify nodes can now be triggered by the Start node in workflows

💰 Pricing Corrections

  • Corrected model pricing for all 56 models to ensure fair rates
  • Text generation cost reduced significantly per call
  • Unknown models now return a clear error instead of silently failing

🛠️ Bug Fixes

  • Fixed notification badge persisting after viewing shared projects
  • Fixed image/text history restore not working
  • Fixed Video and AI Voice being disabled in context menu when connecting nodes
  • Fixed workflow executor not picking up changes on rerun
  • Fixed login redirect showing empty project list until manual page refresh
  • Fixed video nodes stuck in “generating” after errors — Stop button now works correctly
  • Fixed video nodes getting stuck to cursor after dismissing context menu
  • Improved landing page performance (reduced layout shift, lazy-loaded components, optimized animations)

February 9, 2026

🔒 Viewer Mode

  • Read-only access: Viewers can browse the canvas, select nodes, and inspect settings — but cannot edit, drag, delete, duplicate, or generate
  • Full UI enforcement: All inputs, buttons, and controls are disabled for viewers. Undo/redo, keyboard shortcuts, context menu, and connection handles are hidden
  • “View Only” badge: A clear indicator in the editor header shows when you’re in view-only mode
  • Safe sharing: Project owners can now invite collaborators as viewers, perfect for client reviews and stakeholder previews

✏️ Canvas Improvements

  • Selection guide lines follow resize: Guide lines and corner dots now update in real-time when a node is resized, not just when moved or reselected
  • Fixed video node connection offset: Connection lines on video nodes now start from the correct position (was 80px off due to a width mismatch)

February 8, 2026

🎨 Advanced Generation Controls

  • Per-model parameters: Image nodes now show an “Advanced Settings” panel with Steps, CFG Scale, Negative Prompt, Seed, and Strength controls — all model-aware (e.g. FLUX hides Negative Prompt, Gemini shows no advanced section)
  • Video seed control: Video nodes with Runware models now expose a Seed parameter for reproducible generations
  • Strength slider: When a source image is connected, control how much it influences the result (0 = identical, 1 = full creative freedom)

🧩 Model Catalog Expansion

  • 25+ image models: Added Kling 2.6 Pro, Kling 2.5 Turbo Pro/Standard, Runway Gen-4 Image, Runway Aleph, GPT Image 1 Mini, Seedream 4.5/4.0, Wan2.6 Image
  • 20+ video models: Added Hailuo 2.3 Fast, Video-01 Director, Seedance 1.0 Pro Fast, and updated specs for all existing models with accurate durations and aspect ratios
  • New providers: ByteDance (Seedream image + Seedance video), Alibaba (Wan2.6)

🤝 Real-Time Collaboration Upgrades

  • Remote selection & hover: See which nodes your collaborators are selecting (solid border) or hovering (dashed border), with their cursor color
  • Canvas comments: Pin comments directly on the canvas — create via toolbar or right-click menu. Features reply threads, resolve, delete, and drag to reposition. Synced in real-time via Yjs
  • Remote generation indicators: When a collaborator generates a node, you see their colored spinner and email — inspector buttons become read-only to prevent conflicts

✏️ Landing Page & UX

  • Redesigned Use-Cases section: Sticky scroll layout with active state indicators and fading screenshots
  • Redesigned Models section: Bento grid grouped by provider with color-coded model pills
  • Collaborative H1 animation: Dynamic typing effect simulating real-time collaboration on the homepage

February 7, 2026

🎨 Massive Model Expansion

  • 19 image models now available: Gemini, FLUX.2 (5 variants), FLUX.1.1 Pro/Ultra, Midjourney V7/V6.1/V6, GPT Image 1/1.5, DALL-E 3/2, Runway Gen-4 Image Turbo
  • 17 video models now available: Google Veo 3/3.1, xAI Grok, KlingAI v1/v1.5, Runway Gen-4 Turbo/Gen-4.5, MiniMax Hailuo 02/2.3/Video-01 Live, Seedance 1.0 Lite/Pro/1.5 Pro, Sora 2/2 Pro
  • Model selector on image and video nodes — pick your preferred model directly from the canvas
  • Smart inspector controls — duration, resolution, and aspect ratio options auto-adapt based on the selected model’s capabilities

🤝 Real-Time Collaboration

  • Live cursors: See collaborators’ cursors moving on the canvas in real-time
  • Share & invite: Invite team members by email — if they don’t have an account, they’ll receive an invite link
  • Project notifications: Dashboard shows pending invitations with project title, role, and inviter
  • Shared file access: Generated content in collaborative projects is visible to all members
  • Conflict-free editing: Yjs CRDT-based sync ensures no data loss when multiple users edit simultaneously

🚀 Workflow Engine Improvements

  • Text-to-text connections: Text nodes now properly chain — upstream text combines with the node’s own instructions
  • Image-to-image connections: Source images are now correctly passed through workflow pipelines
  • Parallel node execution: Fixed SSE parser to correctly handle multiple nodes generating simultaneously
  • Auto-refresh URLs: Signed URLs now auto-refresh every 45 minutes — no more broken images after long sessions
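
Periodic URL refresh of this kind is typically an interval timer set comfortably inside the signing window. A hedged sketch (names hypothetical; error handling is deliberately forgiving so a transient failure keeps the old URL rather than replacing it):

```typescript
// Re-sign all media URLs every 45 minutes, well inside the expiry window.
const REFRESH_INTERVAL_MS = 45 * 60 * 1000;

function startUrlRefresh(
  refreshAll: () => Promise<void>,
  setTimer: (cb: () => void, ms: number) => unknown = (cb, ms) => setInterval(cb, ms)
): unknown {
  return setTimer(() => {
    // Swallow transient errors: a failed refresh leaves the current
    // (still valid) URLs in place until the next tick.
    refreshAll().catch(() => {});
  }, REFRESH_INTERVAL_MS);
}
```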

✏️ Editor & UX

  • Pricing page redesign with free plan section and usage estimates
  • Connection lines now use DOM-measured anchor points for pixel-perfect accuracy
  • Node width defaults fixed — Start nodes and other narrow nodes connect at the correct position
  • Mobile improvements: Fixed iOS auto-zoom on input focus, long-press behavior, sidebar sizing, and inspector panel height
  • Image/video history: Nodes now show the last generated content even after workflow cache is cleared

February 6, 2026

🎬 Workflow Runs & History

  • View past workflow runs with status and generated assets directly from the Run button
  • Nodes now show real-time loading state during workflow execution, even after page refresh
  • Email notification when a workflow completes or fails

🔗 Connections & Nodes

  • LinkedIn and Shopify nodes can now connect to all compatible node types
  • Video nodes now have a history carousel to browse and restore previous generations
  • Parallel execution for nodes that fork into multiple branches
  • Green glow effect on nodes that complete during a workflow run

✏️ Editor Improvements

  • Text nodes are now scrollable with trackpad
  • Smoother connection lines with DOM-based anchor positioning
  • Restored animated border on generating nodes

🤝 Multiplayer (Preview)

  • Foundation for real-time collaboration: project sharing, member invites, and sync infrastructure

February 2026

🎙️ AI Voice Nodes & Audio Support

We’ve introduced a powerful new way to control audio in your video projects.

  • AI Voice Nodes: Generate speech from text (TTS), clone voices from reference audio, and perform Speech-to-Speech (STS) dubbing for videos.
  • Audio Nodes: Upload or record audio directly in the editor to provide context for generations or use as a reference.
  • Video Dubbing: Connect a Video Node to an AI Voice Node to replace the voice while preserving the original performance timing.

🎨 Draw Nodes

  • Sketch to Image: You can now add Draw Nodes to the canvas. Sketch simple layouts or compositions and connect them to Image or Scene nodes to guide the visual structure of your generations.

📚 Documentation

  • Added comprehensive documentation for all new node types.
  • Updated the “Getting Started” and “Features” guides.

January 2026

🚀 Launch of Dal Nulla

  • Node-Based Editor: The core graph editor for non-linear video storytelling.
  • Scene Nodes: Text-to-Video generation with control over duration and aspect ratio.
  • Reference Consistency: Ability to link generated images as references for consistent characters.
  • AI Autofill: “Director Mode” to expand simple prompts into full storyboards.
  • Upscaler Nodes: AI enhancement for images and videos up to 4K.