An extensible AI Agent CLI coding tool that provides a unified AI coding experience — freely switchable LLM backends, a horizontally scalable plugin ecosystem, and MCP as the universal tool bus.
Context build → request construction → tool dispatch → result feedback. A complete, inspectable loop with sub-agent parallelism and prompt-injection detection.
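The loop above can be sketched in miniature. Every function and field name below is an illustrative stand-in, not XiFan-Coder's real internals; the sketch only shows the shape of the cycle (build context, construct a request, dispatch tool calls, feed results back until the model answers):

```python
# Hedged sketch of the agent loop; all names are assumptions for illustration.
def build_context(task, history):
    return {"task": task, "history": list(history)}

def construct_request(context):
    return {"messages": [context["task"], *context["history"]]}

def dispatch_tools(response):
    # Stand-in for real tool execution: echo each requested call as a result.
    return [f"ran:{call}" for call in response.get("tool_calls", [])]

def agent_loop(task, llm, max_turns=4):
    history = []
    for _ in range(max_turns):
        request = construct_request(build_context(task, history))
        response = llm(request)
        if not response.get("tool_calls"):
            return response["answer"]             # model produced a final answer
        history.extend(dispatch_tools(response))  # feed tool results back in
    return None

# Toy model: asks for one tool call, then answers once it sees the result.
def toy_llm(request):
    if any(m.startswith("ran:") for m in request["messages"]):
        return {"answer": "done"}
    return {"tool_calls": ["pytest"]}

print(agent_loop("fix tests", toy_llm))  # prints: done
```

In the real loop each turn is inspectable, sub-agents can run such cycles in parallel, and tool results are screened for prompt injection before being fed back.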
Built-in drivers for Anthropic, OpenAI, and Ollama, plus an optional LiteLLM Proxy driver that routes to 100+ models. Switch backends with a single /model command.
First-class plugins: Aider (Git-aware multi-file edits), Open Interpreter (code execution), smol-dev (project scaffolding). MCP tool bus exposes tools to any IDE via stdio or WebSocket.
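As a hedged illustration, registering tools on the MCP bus might look like the config fragment below. The key names and the `xifan-mcp-aider` command are assumptions, not the documented schema; only the stdio and WebSocket transports come from the feature description above:

```json
{
  "mcpServers": {
    "aider": { "transport": "stdio", "command": "xifan-mcp-aider" },
    "ide-bridge": { "transport": "websocket", "url": "ws://localhost:8765" }
  }
}
```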
xifan-mem auto-captures observations, performs three-step progressive retrieval, and generates session summaries. SQLite + FTS5 for local, private persistence.
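The storage layer is plain SQLite with FTS5, so retrieval is ordinary full-text search. A minimal sketch of what an FTS5-backed observation store could look like (schema and table names are assumptions, not xifan-mem's actual layout):

```python
import sqlite3

# In-memory stand-in for xifan-mem's local SQLite store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE observations USING fts5(session, content)")
conn.executemany(
    "INSERT INTO observations VALUES (?, ?)",
    [
        ("s1", "refactored src/auth.ts to use async token refresh"),
        ("s1", "added unit tests for the login flow"),
        ("s2", "switched model backend to qwen2.5-coder"),
    ],
)

# Full-text match, best-ranked first (bm25 ranking is built into FTS5).
rows = conn.execute(
    "SELECT content FROM observations WHERE observations MATCH ? ORDER BY rank",
    ("token refresh",),
).fetchall()
print(rows[0][0])  # prints: refactored src/auth.ts to use async token refresh
```

Because everything lives in one local database file, memory stays private and can be inspected or deleted with any SQLite client.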
Real-time token metering, per-session cost summaries, and budget limits with warn / pause / stop policies. Know what every request costs before you run it.
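A budget policy might be expressed in project config along these lines. The key names below are hypothetical; only the warn / pause / stop policy values come from the feature description above:

```json
{
  "budget": {
    "sessionLimitUsd": 2.50,
    "warnAtUsd": 2.00,
    "onLimit": "pause"
  }
}
```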
Four-tier tool permissions (L0 read → L1 write → L2 shell → L3 dangerous). Headless mode denies elevated tools by default; API keys are read from env only, never written to disk.
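A permission ceiling could then be pinned per environment, as in the hypothetical fragment below. The keys are assumptions for illustration; the L0–L3 tiers and the stricter headless default come from the text above:

```json
{
  "permissions": {
    "maxTier": "L2",
    "headless": {
      "maxTier": "L0"
    }
  }
}
```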
C4 Level 2 container view. L1 core (Node.js only) runs independently; L2 enhancements (Python-based) plug in as optional capability extensions.
L1 core layer runs on Node.js ≥ 18 alone — no Python environment required.
```sh
# npm
npm install -g @xifan-coder/cli

# or pnpm
pnpm add -g @xifan-coder/cli
```
```sh
# Set API key via environment variable
export ANTHROPIC_API_KEY=sk-ant-...
# or OPENAI_API_KEY, or point at a local Ollama / LiteLLM Proxy

# Generate project-level XIFAN.md
xifan-coder init --config
```
```sh
# Interactive REPL
xifan-coder

# One-shot task
xifan-coder "refactor src/auth.ts"

# Use a specific model
xifan-coder --model qwen2.5-coder

# Headless mode for CI (denies write/shell by default)
xifan-coder --headless "run tests and fix failures"
```
Build from source or contribute:
github.com/e2ec-it/XiFanCoder

XiFan-Coder runs entirely on your local machine. Configuration, API keys, session history, and memory are stored locally — nothing is uploaded to E2E Consulting or any third-party telemetry service.
XiFan-Coder communicates only with the LLM endpoints and MCP servers you configure. No analytics, crash reporting, or telemetry SDKs are included.
If you have questions about this privacy policy, contact us at tech-support01@e2ec.biz
Last updated: April 7, 2026