# Setting Up HuggingFace
Before using any ML features in XGENIA, you need a HuggingFace API key.
## What you will learn in this guide
- How to create a HuggingFace account and generate an API token
- How to configure the token in XGENIA
- How the AI assistant uses the token
## Step 1: Get a HuggingFace Token
- Go to huggingface.co/settings/tokens
- Click New token
- Give it a name (e.g. "XGENIA")
- Select Write access (required for AutoTrain)
- Copy the token
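Before pasting the token anywhere, a quick local check can catch copy/paste mistakes. The only assumption here is the `hf_` prefix, which matches the placeholder tokens used later in this guide:

```bash
# Placeholder; substitute the token you just copied.
token="hf_your_token"

# HuggingFace tokens use the "hf_" prefix, so a truncated or
# mis-pasted token is easy to spot before configuring XGENIA.
case "$token" in
  hf_*) token_ok=yes ;;
  *)    token_ok=no ;;
esac
echo "token format ok: $token_ok"
```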
## Step 2: Add the Token in XGENIA
Open the Chat Panel settings (gear icon in the AI chat), then navigate to Providers. You will see a HuggingFace section:
- Paste your API key into the HuggingFace API Key field
- Click Verify to confirm it works
- The status indicator will turn green when verified
The token is stored securely in your local XGENIA settings and is never sent to any server except HuggingFace.
## Step 3: Verify via AI Chat
You can ask the AI assistant to check your token:
```
Check if my HuggingFace key is configured
```

The AI will call `check_hf_token` and report:
- Whether a key is configured
- Whether it has been verified
- Your HuggingFace username
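You can also run an equivalent check by hand against HuggingFace's standard `whoami-v2` API endpoint, which reports the username associated with a token (the token below is a placeholder; this is a direct HuggingFace call, not an XGENIA feature):

```bash
# A valid token returns a JSON object that includes your username;
# an invalid token returns an error response instead.
curl -s -H "Authorization: Bearer hf_your_token" \
  https://huggingface.co/api/whoami-v2
```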
## Token Scopes
| Feature | Required Scope |
|---|---|
| Model inference (predictions) | Read |
| AutoTrain (model training) | Write |
| Dataset uploads | Write |
For full ML capabilities, use a Write token.
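As a sanity check that the Read scope is sufficient for inference, you can call HuggingFace's hosted Inference API directly. The model name and token below are illustrative placeholders; any hosted model you have access to will work:

```bash
# POST an input to a hosted model; a Read-scoped token is enough.
curl -s https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english \
  -H "Authorization: Bearer hf_your_token" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "XGENIA makes setup easy"}'
```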
## ML Coordinator Server (Optional)
For advanced features like data analysis and retention prediction, the ML Coordinator server must be running:
```bash
cd packages/xgenia-ml-server/ml-coordinator
npm install
npm start
```
The server starts on `http://localhost:3001` and uses your HuggingFace token for API calls. You can also pass the token via an environment variable:

```bash
HF_ACCESS_TOKEN=hf_your_token npm start
```
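Once the server is up, you can confirm it is listening on port 3001. Hitting the root URL is only a liveness check; the server's actual routes are project-specific and not assumed here:

```bash
# Prints the HTTP status code returned on port 3001.
# Any response (even 404) confirms the process is listening.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3001
```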
## Docker (Alternative)
```bash
cd packages/xgenia-ml-server
HF_ACCESS_TOKEN=hf_your_token docker-compose up
```