Your Sovereign AI
Command Center

HQ is a self-hosted, Discord-styled project hub with LLM-powered chat, parallel agent tasks, per-channel memory, and full multi-provider AI routing — no SaaS lock-in.

Everything you need, nothing you don't

Built for power users who want full control over their AI stack.

💬

Multi-Channel LLM Chat

Discord-style channels per topic — #research, #backend, #agents. Each channel has its own AI context and system prompt.

🤖

Parallel Agent Tasks

Spawn up to 10 background AI agents simultaneously. Each runs an autonomous tool-use loop with web search, step tracking, and cancellation.

🔑

Multi-Provider AI (BYOK)

Groq, Gemini, Together AI, Claude OAuth, OpenRouter, Cerebras, SambaNova, Mistral, Ollama. Account Stacking: pool multiple free-tier keys per provider — HQ treats them as a single high-bandwidth stream and jumps to the next healthy key the moment one is throttled.
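Under the hood, key pooling can be as simple as a round-robin scan that skips throttled keys. A minimal TypeScript sketch (all names hypothetical; HQ's actual implementation may differ):

```typescript
// Hypothetical sketch of per-provider key pooling with throttle-aware rotation.
type PooledKey = { key: string; throttledUntil: number };

class KeyPool {
  private i = 0;
  constructor(private keys: PooledKey[]) {}

  // Record that a key was rate-limited; skip it for `ms` milliseconds.
  throttle(key: string, ms: number, now = Date.now()): void {
    const hit = this.keys.find((k) => k.key === key);
    if (hit) hit.throttledUntil = now + ms;
  }

  // Round-robin: return the next key that is not currently throttled,
  // or null when every key in the pool is rate-limited.
  next(now = Date.now()): string | null {
    for (let tries = 0; tries < this.keys.length; tries++) {
      const k = this.keys[this.i];
      this.i = (this.i + 1) % this.keys.length;
      if (k.throttledUntil <= now) return k.key;
    }
    return null;
  }
}
```

The point of the rotation index is that successive requests spread across the pool instead of hammering the first key until it throttles.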

⏯️

Parallel Mode

Send one message to two models simultaneously and compare responses side-by-side. Great for evaluating outputs or getting a second opinion.
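Conceptually, Parallel Mode is a concurrent fan-out: the same prompt goes to two provider calls at once and the replies come back as a pair. A hedged sketch (function names are illustrative, not HQ's API):

```typescript
// Hypothetical sketch: fan one prompt out to two providers concurrently.
type Ask = (prompt: string) => Promise<string>;

async function parallelAsk(prompt: string, a: Ask, b: Ask): Promise<[string, string]> {
  // Promise.all starts both requests before awaiting either,
  // so the slower provider never blocks the faster one from starting.
  const [left, right] = await Promise.all([a(prompt), b(prompt)]);
  return [left, right];
}
```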

🧠

Infinite Project Context

Never repeat yourself. HQ's Handoff Protocol acts as a "Project Brain" that stays updated in real time, distilling facts and decisions from your chat into a persistent state so the AI (and your team) always knows exactly where the project stands.
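One plausible shape for that persistent state is a small, deduplicated record per channel. A sketch under assumed field names (the real Handoff Protocol schema may differ):

```typescript
// Hypothetical shape of a channel's persistent "Project Brain" state.
interface ProjectBrain {
  facts: string[];      // distilled facts ("API uses Postgres")
  decisions: string[];  // decisions made ("chose Drizzle over Prisma")
  nextSteps: string[];  // open action items
  updatedAt: string;    // ISO timestamp of the last distillation
}

// Merge newly distilled items into the persistent state, deduplicating
// so repeated mentions in chat never bloat the brain.
function updateBrain(
  brain: ProjectBrain,
  update: Partial<Omit<ProjectBrain, "updatedAt">>,
): ProjectBrain {
  const merge = (a: string[], b?: string[]) => Array.from(new Set([...a, ...(b ?? [])]));
  return {
    facts: merge(brain.facts, update.facts),
    decisions: merge(brain.decisions, update.decisions),
    nextSteps: merge(brain.nextSteps, update.nextSteps),
    updatedAt: new Date().toISOString(),
  };
}
```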

🔐

RBAC — Admin & Member Roles

Granular permissions system. Admins manage rate limits, restrict model access, view audit logs, and manage user roles from a 4-tab admin panel.

🌐

Web Browsing Agent

Every AI response has access to web tools: fetch any URL via Jina Reader, or search the web via Tavily or DuckDuckGo, inside an autonomous tool loop of up to 5 steps.
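The bounded tool loop can be pictured as: ask the model, and if it requests a tool, run it and feed the result back, stopping after five steps. A simplified synchronous sketch (real provider calls would be async; names are illustrative):

```typescript
// Hypothetical sketch of a bounded tool-use loop (capped at 5 steps).
type ToolCall = { tool: "fetch_url" | "web_search"; arg: string };
type ModelTurn = { toolCall?: ToolCall; answer?: string };

function toolLoop(
  callModel: (toolResults: string[]) => ModelTurn,
  runTool: (call: ToolCall) => string,
  maxSteps = 5,
): string {
  const toolResults: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const turn = callModel(toolResults);
    if (turn.answer !== undefined) return turn.answer; // model finished
    if (!turn.toolCall) break;                         // nothing left to do
    toolResults.push(runTool(turn.toolCall));          // feed result back
  }
  return "Tool-step limit reached without a final answer.";
}
```

The hard cap is what keeps an agent from looping forever on a flaky search.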

📡

Stack Status Monitor

Real-time provider health indicator in the composer. Green = healthy, amber = rotating to fallback key, red = all keys rate-limited.
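Mapping pool health to the three indicator colors is a small pure function. A sketch under assumed inputs (healthy vs. total keys per provider):

```typescript
// Hypothetical mapping from key-pool health to the composer indicator.
type StackStatus = "green" | "amber" | "red";

function stackStatus(healthyKeys: number, totalKeys: number): StackStatus {
  if (healthyKeys === 0) return "red";         // all keys rate-limited
  if (healthyKeys < totalKeys) return "amber"; // rotating to a fallback key
  return "green";                              // every key healthy
}
```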

📋

/handoff Command

One slash command generates a complete handoff document summarizing the channel state, decisions made, and next steps — ready for a new model or collaborator.

📦

Claude & ChatGPT Migration

Escape context lock-in. Import your entire Claude or ChatGPT history — conversations, projects, and memories — into HQ in one click. Your past context, finally portable.

Pricing / License

OSS Core is free forever. Pro Command License unlocks Account Stacking, unlimited agents, and history import.

COMMUNITY
$0
Self-hosted · OSS Core · free forever
  • ✅ Unlimited projects & channels
  • ✅ All 9+ AI providers (BYOK)
  • ✅ Single-key rotation per provider
  • ✅ 2 parallel agents
  • ✅ Basic Handoff Protocol memory
  • ✅ Admin panel & RBAC
Try Demo →
MOST POPULAR
PRO COMMAND LICENSE
$15/mo
or $150/yr · per instance
  • ✅ Everything in Community
  • ✅ Account Stacking — pool up to 10 keys/provider
  • ✅ 10 parallel agents
  • ✅ Claude & ChatGPT History Import
  • ✅ Advanced RBAC & per-user model locks
  • ✅ Priority support
Get Pro →
STUDIO
Custom
Team & enterprise pricing
  • ✅ Everything in Pro
  • ✅ Centralized key management
  • ✅ Full audit logs & compliance export
  • ✅ Priority web-search API credits
  • ✅ Dedicated onboarding
  • ✅ SLA & custom deployment
Contact us →

You always pay AI providers directly. Typical Groq costs: ~$0.06/M tokens — a team of 5 spends ≈ $2–5/mo total.
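That estimate is easy to sanity-check with a back-of-envelope calculator; the usage figure (10M tokens per user per month) is an assumption, not a measured number:

```typescript
// Hypothetical cost estimator; tokensPerUser is an assumed usage figure.
function monthlyCostUsd(users: number, tokensPerUser: number, usdPerMillionTokens: number): number {
  return (users * tokensPerUser * usdPerMillionTokens) / 1_000_000;
}
```

At the quoted Groq rate, 5 users at 10M tokens each comes out to roughly $3/month, inside the $2–5 range above.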

Built on open standards

Frontend: React + Vite + TailwindCSS + shadcn/ui
Backend: Express 5 + TypeScript
Database: PostgreSQL + Drizzle ORM
Auth: JWT + bcrypt
AI: Groq, Gemini, Together, Claude, OpenRouter, Ollama + more
Deploy: Self-hosted on Replit

Ready to take command?

Try the live demo instantly — no account setup required.