An open-source orchestrator that gives Claude Code a persistent knowledge graph and code graph across your projects. The VCT Launcher is the hub that installs and connects everything. The Orchestrator wires up agents, skills, and hooks. Runs locally on your machine — open source, available now under AGPL-3.0.
Each tool is standalone and useful on its own, but they connect. The VCT Launcher is the hub: one place to install, activate, and launch everything. Tools share a common backend (a knowledge graph, a code graph, and a set of MCP servers), so the Orchestrator remembers context across sessions and connected tools reuse the same memory.
What it does day to day.
Most AI coding assistants reset between sessions. The Orchestrator doesn't. Every decision and pattern you capture lands in a local knowledge graph — embedded on your machine, no cloud step. The graph is shared across projects, so the JWT pattern you documented last month is searchable from the codebase you opened this morning. When the session resets, a snapshot preserves your active state and restores it automatically.
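A minimal sketch of what cross-project recall looks like, using an in-memory SQLite table as a stand-in (the real Orchestrator uses a local knowledge graph, and the schema, function names, and example patterns here are invented for illustration):

```python
# Hypothetical sketch: patterns captured in one project are searchable from any other.
import sqlite3

db = sqlite3.connect(":memory:")  # the real store persists on disk
db.execute("CREATE TABLE patterns (project TEXT, topic TEXT, body TEXT)")

def capture(project: str, topic: str, body: str) -> None:
    db.execute("INSERT INTO patterns VALUES (?, ?, ?)", (project, topic, body))

def recall(topic: str):
    # The search spans ALL projects, so last month's decision
    # is visible from the codebase you opened this morning.
    return db.execute(
        "SELECT project, body FROM patterns WHERE topic LIKE ?", (f"%{topic}%",)
    ).fetchall()

capture("auth-service", "jwt", "Use RS256; rotate keys via JWKS endpoint.")
capture("new-codebase", "logging", "Structured JSON logs only.")

print(recall("jwt"))  # the JWT pattern surfaces from a different project
```

The point of the sketch is the scoping: recall queries are keyed by topic, not by project, which is what makes the memory shared rather than per-repository.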
Source files are parsed, not just embedded. Tree-sitter extracts an AST across 10+ languages and builds typed collections — functions, classes, modules, APIs, and cross-service calls (with protocol and confidence). Optional deep analysis (opt-in, requires a separate JVM component) adds control-flow and data-flow depth. Queryable via MCP: ask "what calls validate_token?" and you get a structured answer in milliseconds. Re-indexed automatically on every code edit, so it never drifts out of sync.
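To show what "parsed, not just embedded" buys you, here is a toy version of the "what calls validate_token?" query. Python's built-in `ast` module stands in for Tree-sitter, and the sample source and function names are invented:

```python
# Sketch: answer "what calls validate_token?" from a parsed AST
# instead of a similarity search over embeddings.
import ast

SOURCE = """
def validate_token(tok): ...

def login(req):
    return validate_token(req.token)

def refresh(req):
    validate_token(req.token)
"""

def callers_of(source: str, target: str) -> list[str]:
    hits = []
    for fn in ast.walk(ast.parse(source)):
        if isinstance(fn, ast.FunctionDef):
            for node in ast.walk(fn):
                # Match direct calls by name; a real code graph also
                # resolves methods, imports, and cross-service calls.
                if (isinstance(node, ast.Call)
                        and isinstance(node.func, ast.Name)
                        and node.func.id == target):
                    hits.append(fn.name)
    return hits

print(callers_of(SOURCE, "validate_token"))  # ['login', 'refresh']
```

Because the answer comes from a structured index rather than a vector lookup, it is exact, and re-running the indexer after each edit is what keeps it from drifting out of sync.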
Shell commands are checked before execution, credentials are scanned on every write, and network fetches are guarded against internal IP access. Every subprocess starts with secrets stripped from its environment. The model cannot bypass any of this — enforcement is in the runner, not in the prompt. Before an Edit, the system pulls relevant patterns and decisions from the knowledge graph and injects them into context — just-in-time recall, not eager loading.
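A sketch of the environment-stripping step, assuming a simple substring denylist (the actual runner's rules are its own; `DENY_SUBSTRINGS` and the helper names here are illustrative):

```python
# Sketch: every subprocess starts with secrets stripped from its environment.
# Enforcement lives in the runner, not in the prompt.
import os
import subprocess

DENY_SUBSTRINGS = ("KEY", "TOKEN", "SECRET", "PASSWORD")  # illustrative list

def clean_env(env=None) -> dict:
    env = dict(os.environ if env is None else env)
    return {k: v for k, v in env.items()
            if not any(s in k.upper() for s in DENY_SUBSTRINGS)}

def run_stripped(cmd: list[str]) -> subprocess.CompletedProcess:
    # The model never chooses what the child process sees.
    return subprocess.run(cmd, env=clean_env(), capture_output=True, text=True)

env = {"PATH": "/usr/bin", "AWS_SECRET_ACCESS_KEY": "hunter2", "HOME": "/home/me"}
print(sorted(clean_env(env)))  # ['HOME', 'PATH']
```

Doing the filtering at spawn time, in the runner, is what makes the guarantee hold even when the model asks for something it shouldn't.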
Weaviate runs in a container on localhost. Two embedding models — one for text, one for code — selected automatically based on your hardware. GPU gives you the larger code model; CPU falls back gracefully. Nothing crosses the wire to a third party unless you explicitly opt in. Secrets stay in a local vault, scoped per project: one command runs any subprocess with the right credentials injected — nothing echoed to the terminal.
$ vct exec --project orchestrator \
--secret openai_key=OPENAI_API_KEY \
  -- python main.py

No CLI setup scripts. No config files to edit. The wizard handles install, dependencies, and the first sync.
The base Orchestrator and VCT Launcher are free and open source, released and downloadable now. Pro adds RL-scored retrieval reranking for €19/mo. Everything else is coming soon; join the waitlist to be notified.
The things people ask us most often.