# Agent Memory

> **Early Alpha Version:** AI Agents are in early alpha. Node interfaces, behaviors, and APIs are subject to change.
The Agent Memory node provides conversation persistence for agents. It stores messages per thread, enabling multi-turn conversations that survive across sessions.
## Inputs
| Input | Type | Default | Description |
|---|---|---|---|
| Thread ID | String | — | Identifier for the conversation thread; leave empty to auto-generate |
| Max Messages | Number | 50 | Maximum number of messages to keep per thread |
| Max Token Estimate | Number | 0 | Maximum estimated token count for the context window (0 = unlimited) |
| Storage Key | String | xgenia-agent-memory | Key used for local storage persistence |
| Add Message | Object | — | A message object to add, with `role` (`user` or `assistant`) and `content` fields |
| Add | Signal | — | Trigger to add the message to the thread |
| Clear | Signal | — | Clear all messages in the current thread |
| Retrieve | Signal | — | Load messages from storage for the current thread |
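The Add Message input documents only the `role` and `content` fields; a minimal TypeScript sketch of that shape (the type name is illustrative, not part of the node's API):

```typescript
// Shape of a message accepted by the Add Message input.
// Only `role` and `content` are documented; the type name is an assumption.
type AgentMessage = {
  role: "user" | "assistant";
  content: string;
};

const msg: AgentMessage = { role: "user", content: "What's the weather like?" };
```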
## Outputs
| Output | Type | Description |
|---|---|---|
| Messages | Array | All messages in the current thread |
| Message Count | Number | Total number of messages in the thread |
| Thread ID | String | The active thread identifier |
| Context Window | Array | Messages trimmed to fit within the token estimate limit |
| Updated | Signal | Fires whenever the message list changes |
| Error | String | Error message if an operation failed |
## Thread Management

Each conversation lives in a thread identified by a Thread ID:
- **Explicit ID:** Set the Thread ID to a known value (e.g. a user ID) to maintain separate conversations per user
- **Auto-generated:** Leave Thread ID empty and the node generates a unique ID automatically
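A sketch of how thread resolution might work, using an in-memory map as a stand-in for the node's store (function and variable names, and the auto-ID format, are assumptions):

```typescript
// In-memory stand-in for the node's per-thread message store.
const threads = new Map<string, { role: string; content: string }[]>();

// Explicit ID keeps one conversation per user; empty means auto-generate.
// The "thread-" prefix format is illustrative, not the node's real scheme.
function resolveThreadId(explicit?: string): string {
  return explicit && explicit.length > 0
    ? explicit
    : `thread-${Date.now()}-${Math.random().toString(36).slice(2, 8)}`;
}

const userThread = resolveThreadId("user-42"); // explicit: "user-42"
const anonThread = resolveThreadId();          // auto-generated
threads.set(userThread, []);
threads.set(anonThread, []);
```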
## Context Window

When Max Token Estimate is set above 0, the Context Window output provides a trimmed version of the message history that fits within the token budget. This helps keep API costs down in long conversations.
The token estimation uses a simple heuristic (characters ÷ 3) to approximate token count.
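The heuristic and the trimming can be sketched as follows; the characters ÷ 3 estimate is from the docs, but walking backwards so the newest messages are kept is an assumption about how the node trims:

```typescript
type Msg = { role: "user" | "assistant"; content: string };

// Heuristic from the docs: estimated tokens ≈ characters / 3.
const estimateTokens = (m: Msg): number => Math.ceil(m.content.length / 3);

// Keep the most recent messages whose combined estimate fits the budget.
// (Preferring newest messages is an assumption, not documented behavior.)
function contextWindow(messages: Msg[], maxTokens: number): Msg[] {
  if (maxTokens <= 0) return messages; // 0 = unlimited
  const out: Msg[] = [];
  let used = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const t = estimateTokens(messages[i]);
    if (used + t > maxTokens) break;
    out.unshift(messages[i]);
    used += t;
  }
  return out;
}
```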
## Typical Setup

```
[Agent Chat] ─── Response ───→ [Memory] ← Add Signal
      ↑                            │
      └──── Context Window ────────┘
```
1. After the agent responds, add both the user message and assistant response to Memory
2. Before the next message, retrieve the conversation history
3. Pass the Context Window as conversation context to maintain continuity
## Usage Tips
- Messages are persisted to local storage by default, so they survive page reloads
- Use different Storage Key values if you have multiple independent memory stores
- The Clear signal is useful for "New Conversation" buttons
- Max Messages prevents unbounded storage growth in long-running conversations
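A sketch of how Storage Key scoping and the Max Messages cap interact; `localStorage` is a browser API, so a plain object stands in here, and the `key:threadId` composite format is an assumption, not the node's actual scheme:

```typescript
// In-memory stand-in for window.localStorage.
const store: Record<string, string> = {};

// Persist a thread under a storage key, capped at the newest maxMessages.
function persist(
  storageKey: string,
  threadId: string,
  messages: { role: string; content: string }[],
  maxMessages: number,
): void {
  const trimmed = messages.slice(-maxMessages); // keep only the newest N
  store[`${storageKey}:${threadId}`] = JSON.stringify(trimmed);
}

function load(storageKey: string, threadId: string): { role: string; content: string }[] {
  const raw = store[`${storageKey}:${threadId}`];
  return raw ? JSON.parse(raw) : [];
}
```

Distinct Storage Key values produce disjoint key spaces, which is why multiple independent memory stores can coexist without colliding.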