Text Nodes
Text Nodes generate or hold text content using AI. They are the most versatile nodes in Dal Nulla: they can write scripts, analyze images, transcribe audio, and process video content through multimodal AI capabilities.
What is a Text Node?
A Text Node is the linguistic engine of your workflow. It can operate in three modes: generating new text with AI, returning fixed text as-is, or passing through text received from upstream connections. Text Nodes accept multimodal inputs (images, audio, video) and output text that drives downstream nodes like Scenes, Reference Images, and AI Voice.
Inputs & Outputs
| Port | Direction | Type | Description |
|---|---|---|---|
| input | Input | Text | Receives text from other Text Nodes, Concatenator, List Selector, JSON Parser, or Fetch |
| source | Input | Image | Receives images from Reference Image, Draw, Upscaler, or Network Image nodes for multimodal analysis |
| audio | Input | Audio | Receives audio from Audio Nodes for transcription or analysis |
| sourceVideo | Input | Video | Receives video from Scene Nodes for video analysis |
| output | Output | Text | Connects to Scene, Reference Image, AI Voice, other Text, Concatenator, If/Else, AI If/Else, HTML, LinkedIn, Shopify, TikTok, Instagram, Split Text, JSON Parser, Canvas, Network Image, Network Video, Google Drive Export |
Inspector Controls
Prompt Instruction
A text area where you write your prompt. In Generate mode, this is sent to the AI model along with any connected inputs. In Static mode, this text is returned exactly as written. Supports @tagName syntax to reference outputs from other tagged nodes.
Mode
Three options control how the node processes text:
- Static — Returns the prompt text exactly as written. Useful for fixed prompts, labels, or template text. Instant and free since no AI call is made.
- Generate — Sends the prompt to an AI model and returns the AI’s response. This is the default mode. Connected images, audio, or video are included as multimodal context.
- Original — Passthrough mode. Returns the text received from upstream connections without modification.
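The three modes amount to a simple dispatch on the node's configuration. A minimal sketch of that behavior in Python (the function and the `call_model` helper are hypothetical illustrations, not Dal Nulla's actual implementation):

```python
def run_text_node(mode, prompt, upstream_text=None, call_model=None):
    """Return a Text Node's output according to its mode.

    mode: "static" | "generate" | "original"
    call_model: hypothetical function that sends a prompt to the selected AI model.
    """
    if mode == "static":
        # Return the prompt exactly as written; no AI call, so instant and free.
        return prompt
    if mode == "generate":
        # Send the prompt (plus any connected multimodal context) to the model.
        return call_model(prompt)
    if mode == "original":
        # Passthrough: return upstream text unmodified.
        return upstream_text
    raise ValueError(f"unknown mode: {mode}")
```

For example, a node in Static mode with the prompt "hello" simply outputs "hello", while the same node in Original mode outputs whatever text arrived on its input port.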
Model
Select the AI model used for text generation. Only applies when Mode is set to Generate. Different models offer trade-offs between speed, quality, and cost.
Tag
Assign a tag like @script-1 to this node. Other nodes can then reference this node’s output in their prompts by writing @script-1. Tags must be unique across the project.
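Conceptually, tag references work like template substitution: each `@tagName` in a prompt is replaced by the tagged node's output before generation. A sketch of such a resolver (an assumption for illustration, not Dal Nulla's real code):

```python
import re

def resolve_tags(prompt, outputs):
    """Replace each @tagName in the prompt with the tagged node's output.

    outputs: dict mapping tag names (without the @) to their text output.
    Unknown tags are left in place unchanged.
    """
    def substitute(match):
        name = match.group(1)
        return outputs.get(name, match.group(0))

    # Assumes tag names may contain letters, digits, underscores,
    # and hyphens, e.g. @script-1.
    return re.sub(r"@([A-Za-z0-9_-]+)", substitute, prompt)
```

Under this sketch, the prompt "Narrate: @script-1" with `{"script-1": "A sunset."}` would resolve to "Narrate: A sunset.", which is why duplicate tags must be avoided: the resolver needs exactly one output per name.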
Use Global Context
Toggle to include the project’s global context in the AI prompt. When enabled, any Text Nodes marked as global context will be prepended to this node’s prompt, ensuring narrative consistency across the project.
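The prepending behavior can be pictured as simple string assembly. A hedged sketch (the function is hypothetical; the joining format is an assumption):

```python
def build_prompt(prompt, global_contexts, use_global_context):
    """Prepend project-wide context blocks to a node's prompt.

    global_contexts: list of texts from Text Nodes marked as global context.
    When the toggle is off (or no context exists), the prompt is unchanged.
    """
    if not use_global_context or not global_contexts:
        return prompt
    return "\n\n".join(global_contexts) + "\n\n" + prompt
```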
Count
Number of generations to produce in batch mode. Useful when you want multiple variations of the same prompt.
Loop Config
Configure loop execution for iterative generation. Allows the node to run multiple times with different inputs or parameters.
Available Models
| Model | Provider | Tier | Best For |
|---|---|---|---|
| Gemini 3 Flash Preview | Google | Free | Fast everyday tasks, summaries, translations |
| Gemini 3.1 Pro Preview | Google | Free | Complex reasoning, long-form content |
| Gemini 2.5 Pro | Google | Free | Reliable fallback, stable outputs |
| Grok 4.1 Fast | xAI | Pro | Quick creative writing |
| Grok 4.1 Reasoning | xAI | Pro | Complex analysis and reasoning |
| GPT-5.2 | OpenAI | Pro | Latest AI capabilities |
| GPT-5.1 | OpenAI | Pro | Advanced reasoning |
| GPT-5 | OpenAI | Pro | Next-gen text generation |
| GPT-5 Mini | OpenAI | Pro | Compact and fast |
| GPT-5 Nano | OpenAI | Pro | Ultra-fast responses |
| O3 | OpenAI | Pro | Deep reasoning tasks |
| O3 Deep Research | OpenAI | Pro | Research-grade analysis |
How to Use
- Add a Text Node to the canvas by right-clicking and selecting “Text Node” from the context menu, or by dragging it from the sidebar.
- Write your prompt instruction in the text area. Be specific about what you want the AI to produce.
- Select the mode: choose Generate for AI-powered text, Static for fixed text, or Original to pass through upstream text.
- Choose an AI model if using Generate mode. Free Gemini models are available for all users; Pro models require a subscription.
- Assign a tag (e.g., @script) if you want other nodes to reference this output in their prompts.
- Connect the output to downstream nodes such as Scene, Reference Image, AI Voice, or other Text Nodes.
- Run the workflow or click Generate in the inspector to produce text.
Workflow Examples
Script-to-Video Pipeline
Text Node (prompt: “Write a 30-second script about a sunset over the ocean”) connects to a Scene Node. The Scene Node uses the generated script as its prompt to produce a matching video clip.
Multimodal Image Analysis
Reference Image (photo of a building) connects to a Text Node via the source port. The Text Node prompt says “Describe this building’s architectural style in detail.” The AI analyzes the image and generates a description. That description then connects to another Scene Node to create a video about the building.
Audio Transcription Pipeline
Audio Node (recorded interview) connects to a Text Node via the audio port. The Text Node prompt says “Transcribe this audio and format it as a screenplay.” The transcribed and formatted text feeds into downstream nodes.
Tips & Best Practices
- Use Static mode for fixed text that does not need AI generation — it is instant and free.
- Assign tags (@script-1, @description) to reference text output in other nodes’ prompts using the @tagName syntax.
- For multimodal analysis, connect images, audio, or video to the appropriate input ports — the AI will “see” or “hear” the content automatically.
- Use Global Context to share project-wide information (character descriptions, style guides) with the text generation.
- Chain Text Nodes: use one for brainstorming, another for refining, and a third for formatting. Each step can use a different model.
- Connect multiple Text Nodes to a single Scene or Reference Image node to combine multiple perspectives (e.g., character description + location description).
- When writing prompts for Generate mode, be specific and detailed. Vague prompts lead to generic outputs.
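The chaining tip above (brainstorm, refine, format) is just sequential composition: each node's output becomes the next node's input. A minimal sketch, where each step stands in for a Text Node in Generate mode (all names here are illustrative, not part of Dal Nulla):

```python
def chain(text, steps):
    """Run text through a sequence of transformation steps, mimicking
    chained Text Nodes where each step may use a different model."""
    for step in steps:
        text = step(text)
    return text

# Illustrative stand-ins for three chained Text Nodes.
brainstorm = lambda t: t + " -> ideas"
refine = lambda t: t + " -> refined"
fmt = lambda t: t + " -> formatted"
```

Calling `chain("topic", [brainstorm, refine, fmt])` applies the steps left to right, just as a chain of connected Text Nodes runs upstream to downstream.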
Troubleshooting
Empty output
Check that Mode is set to “Generate” (not “Static” with an empty prompt). Verify that a model is selected in the inspector.
Model errors or timeouts
Free models (Gemini) have rate limits. If you receive errors, wait a moment and retry, or switch to a different model. Pro models generally have higher rate limits.
Tag not resolving
Make sure you are using the @tagName syntax (with the @ symbol) and that the tag is unique across the project. Duplicate tags will show a red warning in the inspector.
Multimodal not working
Ensure the source connection is going to the correct input port: images to source, audio to audio, video to sourceVideo. Not all models support multimodal inputs — use a vision-capable model.
AI hallucinations or inaccurate output
Provide clear, specific instructions in the prompt. Add constraints like “Only describe what you see” for image analysis, or “Do not invent information” for factual tasks.
See Also
- Scenes — Use text output to drive video generation
- Reference Images — Use text output as image prompts
- AI Voice Nodes — Convert text to speech
- Global Context — Project-wide text context
- Prompting Guide — Tips for writing effective prompts
- Prompt Concatenator — Combine multiple text sources
- Models — Full list of available AI models