Getting Started with AI Agents
AI Agents are in early alpha. Node interfaces, behaviors, and APIs are subject to change.
What you will learn
In this guide you'll build a working AI chatbot from scratch using three nodes. By the end you'll understand the core agent pattern and be ready to add tools, memory, and more.
Prerequisites
- An API key from OpenRouter, OpenAI, or another supported provider
- A XGENIA project open in the editor
Step 1: Add an LLM Provider
The LLM Provider node tells the agent which AI model to use.
- Open your component and add a new LLM Provider node from the AI Agents category in the node picker.
- In the property panel, configure:
  - Provider — Select your provider (e.g. OpenRouter)
  - Model — Enter a model identifier (e.g. anthropic/claude-sonnet-4)
  - API Key — Paste your API key
The node outputs an LLM Config object that other nodes consume.
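The exact shape of the LLM Config object is internal to XGENIA, but conceptually it bundles the three values configured above. A hypothetical sketch (field names are illustrative, not the actual format):

```json
{
  "provider": "OpenRouter",
  "model": "anthropic/claude-sonnet-4",
  "apiKey": "<your-api-key>"
}
```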
Step 2: Add an Agent
The Agent node is the brain of your setup.
- Add an AI Agent node from the AI Agents category.
- Connect the LLM Provider's LLM Config output → Agent's LLM Config input.
- Set the Agent's Instructions to describe your assistant's behavior:
You are a friendly assistant that helps users with their questions.
Keep responses concise and helpful.
The node outputs an Agent object once it has a valid LLM config.
Step 3: Add Agent Chat
The Agent Chat node sends messages and receives responses.
- Add an Agent Chat node from the AI Agents category.
- Connect the Agent's Agent output → Agent Chat's Agent input.
- Wire a Text Input node's value → Agent Chat's Message input.
- Wire a Button node's Click signal → Agent Chat's Send signal.
- Connect Agent Chat's Response output → a Text node to display the reply.
Your basic flow looks like this:
[Text Input] ─── Message ──→ [Agent Chat] ──→ [Text Display]
[Button] ──── Click/Send ──→ [Agent Chat]
[LLM Provider] → [Agent] ──→ [Agent Chat]
Step 4: Test It
- Preview your app
- Type a message in the text input
- Click the button
- The AI response should appear in the text display
Enabling Streaming
For a real-time typing effect, set the Agent Chat's Streaming input to true. Connect the Partial Response output to your text display to show tokens as they arrive.
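Conceptually, each partial response extends the text already on screen. A minimal TypeScript sketch of that accumulation (the function name is illustrative, not a XGENIA API):

```typescript
// Illustrative: accumulate streamed tokens into the displayed text.
let display = "";

function onPartialResponse(token: string): void {
  display += token; // append each token as it arrives
}

// Simulated token stream:
for (const token of ["Hel", "lo", " there", "!"]) {
  onPartialResponse(token);
}
// display === "Hello there!"
```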
Adding a Tool
Tools give the agent abilities beyond just generating text. To add a simple tool:
- Add an Agent Tool node to the same component as your Agent.
- Set the tool's Name (e.g. get_time) and Description (e.g. Returns the current time).
- Define the Parameters Schema as a JSON Schema (or leave it minimal for no-parameter tools).
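For the get_time example, a minimal Parameters Schema is an object with no properties; the optional timezone parameter below is an illustrative addition, not something the tool above requires (this is standard JSON Schema — the exact conventions XGENIA expects may differ):

```json
{
  "type": "object",
  "properties": {
    "timezone": {
      "type": "string",
      "description": "IANA timezone name, e.g. Europe/Berlin"
    }
  },
  "required": []
}
```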
With Auto-Discover Tools enabled (the default), the Agent automatically finds Tool nodes in the same component — no manual wiring needed.
When the agent calls the tool:
- The Tool node fires its Execute signal
- Your logic processes the request and sets the Result input
- Your logic fires the Done signal to return the result to the agent
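The Execute → Result → Done handshake can be pictured as a callback, sketched here in TypeScript (handleExecute, setResult, and done are illustrative names, not XGENIA APIs):

```typescript
// Illustrative sketch of the tool-execution handshake.
// When the agent calls get_time, the node fires Execute;
// your logic computes a value, sets Result, then fires Done.
function handleExecute(
  setResult: (value: string) => void, // stands in for the Result input
  done: () => void                    // stands in for the Done signal
): void {
  const now = new Date().toISOString(); // the tool's actual work
  setResult(now); // hand the value back to the agent
  done();         // signal completion so the agent can continue
}
```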
Next Steps
- LLM Provider — Advanced model configuration
- Agent — Instructions, tools, and retries
- Agent Tool — Build custom tools with JSON Schema
- Memory — Persist conversations across sessions
- Workflow — Chain agents into pipelines