Using aicentos with Hermes
Project Introduction
Hermes Agent is a general-purpose AI agent from Nous Research. It supports CLI chat, tool calling, memory, skills, gateways, and scheduled tasks. It can connect to officially supported cloud services or any OpenAI-compatible endpoint.
- Official Website: https://hermes-agent.nousresearch.com
- Documentation: https://hermes-agent.nousresearch.com/docs
- GitHub: https://github.com/NousResearch/hermes-agent
- Web3Hermes: https://web3hermes.com
Prerequisites
- aicentos API Key (Get from Console)
- git available on your machine
- python3 available on your machine
Install Hermes
Environment Requirements
- macOS / Linux / WSL2
- On Windows, you can install through PowerShell, but WSL2 is still the recommended path
- The installer handles Python, Node.js, ripgrep, and ffmpeg automatically
Recommended: Install Web3Hermes
If you want a desktop-browser UI optimized for users in mainland China, install Web3Hermes first. It is a lightweight web UI based on Hermes Agent. Official README: Web3CZ/Web3Hermes.
```bash
git clone https://github.com/Web3CZ/Web3Hermes.git
cd Web3Hermes
python3 bootstrap.py
```

You can also use the startup script from the project directory:

```bash
./start.sh
```

The service starts at http://127.0.0.1:8787 by default.
Install Hermes Agent CLI
macOS / Linux / WSL2:

```bash
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
```

Windows PowerShell:

```powershell
irm https://res1.hermesagent.org.cn/install.ps1 | iex
```

After installation, reload your shell config:

```bash
source ~/.zshrc
```

If you use bash, run:

```bash
source ~/.bashrc
```

If you use Windows PowerShell, just close and reopen the terminal.
Configure aicentos
Hermes officially recommends the interactive hermes model command for setup. For aicentos, choose Custom endpoint, because aicentos provides an OpenAI-compatible API.
If you want to complete the full post-install setup in one pass, you can also run:
```bash
hermes setup
```

If you only want to review or reconfigure tool permissions, run:

```bash
hermes tools
```

Option 1: Interactive setup with hermes model (Recommended)
Run:
```bash
hermes model
```

Fill in the prompts like this:

- Provider: Custom endpoint (self-hosted / VLLM / etc.)
- API base URL: https://www.aicentos.com/v1
- API key: your aicentos token
- Model name: gpt-5.4
- Context length: use at least 65536
After setup, Hermes writes the model, provider, and endpoint into ~/.hermes/config.yaml.
Important
Hermes expects a model with at least 64K context for real multi-step agent workflows. For custom endpoints, choose a model and context window that meet or exceed that requirement.
Option 2: Edit the config file manually
Hermes uses ~/.hermes/config.yaml as its main config file. If the directory does not exist yet, create it first:
```bash
mkdir -p ~/.hermes
touch ~/.hermes/config.yaml
touch ~/.hermes/.env
```

Then put your token into ~/.hermes/.env:

```
OPENAI_API_KEY=sk-your-aicentos-token
```

Then write the following into ~/.hermes/config.yaml:

```yaml
model:
  default: gpt-5.4
  provider: custom
  base_url: https://www.aicentos.com/v1
```

Tip
For custom endpoints, Hermes reads provider, default, and base_url from config.yaml. The API key can be written directly into config.yaml, or placed in ~/.hermes/.env as OPENAI_API_KEY as shown above. Using .env is recommended so the key stays out of the main config file.
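For reference, the .env file above follows the conventional KEY=VALUE dotenv format. The sketch below parses that format with only the standard library; it illustrates the file layout, not Hermes' actual loader, which may behave differently.

```python
# Sketch: parse a KEY=VALUE .env file such as ~/.hermes/.env.
# Mirrors the conventional dotenv format; Hermes' own loader may differ.

def parse_env(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines, skipping blank lines and # comments."""
    values: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # ignore blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

# parse_env("OPENAI_API_KEY=sk-your-aicentos-token")
# -> {"OPENAI_API_KEY": "sk-your-aicentos-token"}
```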
Switch Models
To use another model supported by aicentos, just change model.default or run hermes model again.
For example:
```yaml
model:
  default: claude-sonnet-4-5-20250929
  provider: custom
  base_url: https://www.aicentos.com/v1
```

Note
This assumes the model is available through aicentos's OpenAI-compatible endpoint and meets Hermes' minimum context requirement. If you are not sure about the exact model ID, check the Supported Models page first, then fill in the default field.
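If you prefer to check model IDs programmatically, OpenAI-compatible endpoints conventionally expose a GET /models route. This stdlib sketch builds and sends that request; the route and response shape follow the OpenAI convention, and whether aicentos supports it exactly this way is an assumption.

```python
# Sketch: list model IDs from an OpenAI-compatible endpoint via GET /models.
# Assumes the standard OpenAI response shape: {"data": [{"id": ...}, ...]}.
import json
import urllib.request

def build_models_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build a GET <base_url>/models request with a Bearer token."""
    return urllib.request.Request(
        base_url.rstrip("/") + "/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def list_model_ids(base_url: str, api_key: str) -> list[str]:
    """Fetch model IDs; requires network access and a valid token."""
    with urllib.request.urlopen(build_models_request(base_url, api_key)) as resp:
        data = json.load(resp)
    return [m["id"] for m in data.get("data", [])]

# Usage (needs a valid token):
# list_model_ids("https://www.aicentos.com/v1", "sk-your-aicentos-token")
```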
Start Using Hermes
Start an interactive session:
```bash
hermes
```

Or send a quick test message:

```bash
hermes chat -q "Reply in one sentence: https://www.aicentos.com/ is connected"
```

If the configuration is already working, you can also switch to another configured model inside the session with /model.
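You can also sanity-check the endpoint outside of Hermes by sending a minimal chat completion directly. The payload below follows the standard OpenAI-compatible /chat/completions shape; the base URL and model name are the values configured earlier in this guide.

```python
# Sketch: build a minimal POST /chat/completions request
# (standard OpenAI-compatible payload; endpoint behavior is assumed).
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str,
                       model: str, prompt: str) -> urllib.request.Request:
    """Build a one-message chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Usage (requires network access and a valid token):
# req = build_chat_request("https://www.aicentos.com/v1", "sk-your-aicentos-token",
#                          "gpt-5.4", "Reply with ok only")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```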
Verify Configuration
Run these checks first:
```bash
hermes doctor
hermes config check
hermes chat -q "Please reply with ok only" -Q
```

If the last command returns a normal response, the aicentos integration is working.
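If you want to script a quick pre-flight check before running those commands, the sketch below does a naive line-based scan of config.yaml for the three fields this guide sets. The field names come from the example config above; a real YAML parser would be more robust than this string matching.

```python
# Sketch: naive check that config.yaml contains the fields this guide sets.
# Field names are taken from the example config; not an official Hermes tool.
REQUIRED_FIELDS = ("default:", "provider:", "base_url:")

def missing_fields(config_text: str) -> list[str]:
    """Return which expected model fields are absent from the config text."""
    lines = [line.strip() for line in config_text.splitlines()]
    return [
        field for field in REQUIRED_FIELDS
        if not any(line.startswith(field) for line in lines)
    ]

# missing_fields("model:\n  default: gpt-5.4\n  provider: custom")
# -> ["base_url:"]
```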
FAQ
Why choose Custom endpoint?
Because Hermes treats any OpenAI-compatible API as provider: custom. aicentos fits that pattern, so you do not need a Hermes-specific adapter.
Why don't OPENAI_BASE_URL or LLM_MODEL work?
Hermes has removed support for those legacy environment variables. Model, provider, and endpoint settings now come from ~/.hermes/config.yaml.
What if the API key is configured but authentication still fails?
Check in this order:
- Make sure base_url is https://www.aicentos.com/v1
- Make sure the token comes from the aicentos Console
- Make sure model.default is a valid model ID such as gpt-5.4
- Make sure the model context length is at least 65536
- Run hermes config check and hermes doctor to see the exact error
Can I install Hermes directly on Windows?
Yes, PowerShell installation is possible, but WSL2 is still the safer default for compatibility and a smoother Unix-style workflow.