Setting Up HuggingFace

Before using any ML features in XGENIA, you need a HuggingFace API key.

What you will learn in this guide

  • How to create a HuggingFace account and generate an API token
  • How to configure the token in XGENIA
  • How the AI assistant uses the token

Step 1: Get a HuggingFace Token

  1. Go to huggingface.co/settings/tokens
  2. Click New token
  3. Give it a name (e.g. "XGENIA")
  4. Select Write access (required for AutoTrain)
  5. Copy the token
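Before pasting the token into XGENIA, you can sanity-check it from a shell. The `hf_` prefix check below is purely local; the commented `curl` line calls HuggingFace's `whoami-v2` token-introspection endpoint and, for a valid token, returns JSON containing your username. The token value shown is a placeholder.

```shell
# Placeholder -- substitute the token you copied in step 5.
TOKEN="hf_your_token_here"

# Local format check: HuggingFace user access tokens start with "hf_".
case "$TOKEN" in
  hf_*) echo "token format: ok" ;;
  *)    echo "token format: unexpected" ;;
esac

# Full verification (needs network): ask HuggingFace who the token belongs to.
# curl -s -H "Authorization: Bearer $TOKEN" https://huggingface.co/api/whoami-v2
```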

Step 2: Add the Token in XGENIA

Open the Chat Panel settings (gear icon in the AI chat), then navigate to Providers. You will see a HuggingFace section:

  1. Paste your API key into the HuggingFace API Key field
  2. Click Verify to confirm it works
  3. The status indicator will turn green when verified

The token is stored securely in your local XGENIA settings and is never sent anywhere except to HuggingFace's API.

Step 3: Verify via AI Chat

You can ask the AI assistant to check your token:

Check if my HuggingFace key is configured

The AI will call check_hf_token and report:

  • Whether a key is configured
  • Whether it has been verified
  • Your HuggingFace username

Token Scopes

Feature | Required Scope
------- | --------------
Model inference (predictions) | Read
AutoTrain (model training) | Write
Dataset uploads | Write

For full ML capabilities, use a Write token.

ML Coordinator Server (Optional)

For advanced features like data analysis and retention prediction, the ML Coordinator server must be running:

cd packages/xgenia-ml-server/ml-coordinator
npm install
npm start

The server starts on http://localhost:3001 and uses your HuggingFace token for API calls. You can also pass the token via environment variable:

HF_ACCESS_TOKEN=hf_your_token npm start
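Passing the token inline works, but leaves it in your shell history. A minimal alternative is to keep it in a file and source it before starting the server; the `hf-token.env` file name below is an example, not an XGENIA convention, and the token value is a placeholder.

```shell
# Store the token once, with owner-only permissions.
echo 'export HF_ACCESS_TOKEN=hf_your_token' > ./hf-token.env
chmod 600 ./hf-token.env

# Source it into the current shell before running "npm start".
. ./hf-token.env
echo "token loaded: ${HF_ACCESS_TOKEN:+yes}"
```

After sourcing, `npm start` picks the token up from the environment, exactly as in the inline form above.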

Docker (Alternative)

cd packages/xgenia-ml-server
HF_ACCESS_TOKEN=hf_your_token docker-compose up
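The repository's actual docker-compose.yml is not shown here; as a rough sketch of the shape such a file takes, based only on the commands and port above (the service name and build path are assumptions):

```yaml
# Hypothetical sketch -- the real docker-compose.yml in the repo may differ.
services:
  ml-coordinator:
    build: ./ml-coordinator            # assumed build context
    ports:
      - "3001:3001"                    # matches the default port above
    environment:
      - HF_ACCESS_TOKEN=${HF_ACCESS_TOKEN}   # forwarded from the host shell
```

The `${HF_ACCESS_TOKEN}` interpolation is why the `HF_ACCESS_TOKEN=hf_your_token docker-compose up` invocation above reaches the container.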