Getting Started with AI Agents

Early Alpha Version

AI Agents are in early alpha. Node interfaces, behaviors, and APIs are subject to change.

What you will learn

In this guide you'll build a working AI chatbot from scratch using three nodes. By the end you'll understand the core agent pattern and be ready to add tools, memory, and more.

Prerequisites

  • An API key from OpenRouter, OpenAI, or another supported provider
  • An XGENIA project open in the editor

Step 1: Add an LLM Provider

The LLM Provider node tells the agent which AI model to use.

  1. Open your component and add a new LLM Provider node from the AI Agents category in the node picker.
  2. In the property panel, configure:
    • Provider — Select your provider (e.g. OpenRouter)
    • Model — Enter a model identifier (e.g. anthropic/claude-sonnet-4)
    • API Key — Paste your API key

The node outputs an LLM Config object that other nodes consume.
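The three properties above are bundled into that config object. As a minimal sketch of its shape (the field names here are assumptions for illustration; XGENIA's internal structure may differ):

```typescript
// Hypothetical shape of the LLM Config object produced by the
// LLM Provider node. Field names are assumed, not the XGENIA API.
interface LLMConfig {
  provider: string; // e.g. "openrouter"
  model: string;    // e.g. "anthropic/claude-sonnet-4"
  apiKey: string;   // the key pasted into the property panel
}

const config: LLMConfig = {
  provider: "openrouter",
  model: "anthropic/claude-sonnet-4",
  apiKey: "your-api-key-here",
};
```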

Step 2: Add an Agent

The Agent node is the brain of your setup.

  1. Add an AI Agent node from the AI Agents category.
  2. Connect the LLM Provider's LLM Config output → Agent's LLM Config input.
  3. Set the Agent's Instructions to describe your assistant's behavior:
     You are a friendly assistant that helps users with their questions.
     Keep responses concise and helpful.

The node outputs an Agent object once it has a valid LLM config.

Step 3: Add Agent Chat

The Agent Chat node sends messages and receives responses.

  1. Add an Agent Chat node from the AI Agents category.
  2. Connect the Agent's Agent output → Agent Chat's Agent input.
  3. Wire a Text Input node's value → Agent Chat's Message input.
  4. Wire a Button node's Click signal → Agent Chat's Send signal.
  5. Connect Agent Chat's Response output → a Text node to display the reply.

Your basic flow looks like this:

[Text Input] ─── Message ──→ [Agent Chat] ──→ [Text Display]
[Button] ──── Click/Send ──→ [Agent Chat]
[LLM Provider] → [Agent] ──→ [Agent Chat]
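Conceptually, the wiring above behaves like this sketch (the names here are hypothetical, not the XGENIA API): the Button's Click sends the Text Input's value to the agent, and the Response output lands in the Text display.

```typescript
// Conceptual model of the node wiring, not actual XGENIA code.
interface Agent {
  chat(message: string): string;
}

// Click/Send fires: pass the input's value to the agent,
// write the Response output into the display node's text.
function onSend(agent: Agent, inputValue: string, display: { text: string }): void {
  display.text = agent.chat(inputValue);
}

// A stand-in agent that echoes, just to exercise the flow.
const echoAgent: Agent = { chat: (m) => `You said: ${m}` };
const display = { text: "" };
onSend(echoAgent, "hello", display);
// display.text is now "You said: hello"
```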

Step 4: Test It

  1. Preview your app
  2. Type a message in the text input
  3. Click the button
  4. The AI response should appear in the text display

Enabling Streaming

For a real-time typing effect, set the Agent Chat's Streaming input to true. Connect the Partial Response output to your text display to show tokens as they arrive.
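The typing effect comes from appending each partial chunk to the text already shown. A minimal sketch of that accumulation (not the XGENIA API):

```typescript
// Each Partial Response chunk is appended to the text already
// displayed, so the reply appears to type itself out.
function accumulate(chunks: string[]): string[] {
  const states: string[] = [];
  let text = "";
  for (const chunk of chunks) {
    text += chunk;     // append the newly arrived tokens
    states.push(text); // what the display shows after each chunk
  }
  return states;
}

// accumulate(["Hel", "lo", "!"]) → ["Hel", "Hello", "Hello!"]
```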

Adding a Tool

Tools give the agent abilities beyond just generating text. To add a simple tool:

  1. Add an Agent Tool node into the same component as your Agent.
  2. Set the tool's Name (e.g. get_time) and Description (e.g. Returns the current time).
  3. Define the Parameters Schema as a JSON Schema (or leave it minimal for tools that take no parameters).
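For example, if get_time accepted a hypothetical timezone parameter, its Parameters Schema could look like this (standard JSON Schema; the parameter itself is invented for illustration):

```json
{
  "type": "object",
  "properties": {
    "timezone": {
      "type": "string",
      "description": "IANA timezone name, e.g. Europe/Berlin"
    }
  },
  "required": ["timezone"]
}
```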

With Auto-Discover Tools enabled (the default), the Agent automatically finds Tool nodes in the same component — no manual wiring needed.

When the agent calls the tool:

  1. The Tool node fires its Execute signal
  2. Your logic processes the request and sets the Result input
  3. Your logic fires the Done signal to return the result to the agent
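The three steps above can be sketched as a simple handler cycle (hypothetical names, not the XGENIA API): Execute invokes your logic, the computed value becomes the Result, and Done hands it back to the agent.

```typescript
// Conceptual model of the Execute -> Result -> Done cycle.
type ToolHandler = (args: Record<string, unknown>) => string;

function handleToolCall(handler: ToolHandler, args: Record<string, unknown>): string {
  // 1. Execute signal fires, invoking your logic
  const result = handler(args);
  // 2. The computed value is set as the Result input
  // 3. Done signal returns the result to the agent
  return result;
}

// A no-parameter get_time tool, matching the example above.
const getTime: ToolHandler = () => new Date().toISOString();
```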

Next Steps

  • LLM Provider — Advanced model configuration
  • Agent — Instructions, tools, and retries
  • Agent Tool — Build custom tools with JSON Schema
  • Memory — Persist conversations across sessions
  • Workflow — Chain agents into pipelines