An Elixir-native AI agent runtime built on OTP. Self-modifying, self-healing, and always running on the BEAM.
Vulcan runs as an OTP application — supervised processes, fault-tolerant by design. Every component is a process on the BEAM.
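The shape of such an OTP application can be sketched as follows. This is a minimal illustration, not Vulcan's actual supervision tree — the module names are assumptions:

```elixir
# Illustrative sketch only: Demo.* module names are assumptions,
# not Vulcan's real supervision tree.
defmodule Demo.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # Each subsystem is a supervised process; if one crashes,
      # only that child is restarted, not the whole application.
      {Registry, keys: :unique, name: Demo.Registry},
      {DynamicSupervisor, name: Demo.ChatSupervisor, strategy: :one_for_one}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: Demo.Supervisor)
  end
end
```

The `:one_for_one` strategy is what gives "fault-tolerant by design": a crash is isolated to the failing child and handled by its supervisor.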
HTMX + Alpine.js UI on port 4200. Server-rendered, no build step.
Each chat spawns a supervised process running a ReACT loop with tool use.
PubSub backbone — every action emits events. WebSocket + SSE for real-time UI.
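A topic-based event bus of this kind can be sketched with the standard-library `Registry` — a stand-in for whatever PubSub layer Vulcan actually uses:

```elixir
# Minimal event-bus sketch using Registry's :duplicate mode.
# Topic and payload names are illustrative, not Vulcan's event schema.
{:ok, _} = Registry.start_link(keys: :duplicate, name: EventBus)

# A subscriber registers interest in a topic...
{:ok, _} = Registry.register(EventBus, "chat:42", [])

# ...and any process can broadcast to every subscriber of that topic.
Registry.dispatch(EventBus, "chat:42", fn subscribers ->
  for {pid, _value} <- subscribers do
    send(pid, {:event, :tool_call, %{tool: "read"}})
  end
end)

receive do
  {:event, name, payload} -> IO.inspect({name, payload})
end
```

In a web UI, the subscriber would be the WebSocket or SSE handler, pushing each event straight to the browser.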
Hybrid BM25 + semantic search over a persistent knowledge graph. SQLite-backed.
Anthropic, OpenAI, Google Gemini, Ollama — streaming, multi-provider.
Vulcan has 11 native tools. These aren't plugins — they're Elixir modules that execute directly in the BEAM VM.
The iex tool is unique — it evaluates Elixir code directly in Vulcan's own BEAM VM. This means Vulcan can introspect its own state, hot-reload modules, and modify itself at runtime.
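Conceptually, evaluating code inside the running VM looks like this — a small sketch of the idea, not the tool's actual implementation:

```elixir
# Evaluate Elixir source in the same VM that is running this code.
# Here the snippet introspects the VM by counting live processes.
{result, _binding} =
  Code.eval_string("""
  length(Process.list())
  """)

IO.puts("live processes: #{result}")
```

Because the evaluated code runs in the same VM, it can call any loaded module, inspect process state, or trigger a recompile — which is exactly what makes self-modification possible.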
Vulcan can spawn specialist agents as separate OTP processes. Fan out work in parallel, collect results, keep context clean.
Chunked map/reduce over PDFs, code, and data files
Quality, security, and best practices analysis
Generate summary reports from structured data
Create HTMX components from data
Fetch and extract papers from the arXiv API

Read a GitHub issue, implement the fix, test, commit
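The fan-out/collect pattern behind these subagents can be sketched with `Task.Supervisor` — a simplified stand-in for Vulcan's actual subagent dispatch:

```elixir
# Hedged sketch: each "agent" here is just a function, standing in for
# a real subagent process with its own context.
{:ok, sup} = Task.Supervisor.start_link()

run_agent = fn task_name ->
  # In Vulcan this would dispatch a specialist subagent and await its result.
  "#{task_name}: done"
end

results =
  Task.Supervisor.async_stream(sup, ["summarize", "review", "report"], run_agent,
    max_concurrency: 3,
    timeout: 30_000
  )
  |> Enum.map(fn {:ok, result} -> result end)

IO.inspect(results)
```

Each task is a separate supervised process, so one slow or crashing subagent doesn't poison the parent's context or the other workers.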
Multi-step pipelines defined in YAML. Sequential steps, parallel groups, and subagent orchestration — all declarative.
Scrape arXiv → generate UI component → write report → notify Discord. Runs on schedule or on-demand.

Triage GitHub issues → plan sprint → generate kanban board → report → notify. Full software development lifecycle.
Read sprint plan → dispatch issue-worker subagents in priority order → close resolved issues automatically.
Deduplicate nodes, link orphans, merge redundant entities. Keeps the knowledge graph clean and connected.
Automations support parallel groups — steps wrapped in a parallel block run concurrently as separate subagents, then results are collected before the next sequential step.
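A pipeline with a parallel group might look something like this. The step and key names below are illustrative assumptions, not Vulcan's actual YAML schema:

```yaml
# Hypothetical schema — step and key names are illustrative only.
name: arxiv-digest
steps:
  - step: scrape
    prompt: "Fetch today's new arXiv papers on distributed systems"
  - parallel:            # these run concurrently as separate subagents
      - step: summarize
        prompt: "Summarize each paper in two sentences"
      - step: component
        prompt: "Generate an HTMX card component for the papers"
  - step: notify         # runs after all parallel results are collected
    prompt: "Post the digest to Discord"
```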
A knowledge graph that persists across sessions. Hybrid search combines BM25 keyword matching with semantic embeddings for accurate recall.
Concrete information — configs, decisions, bug fixes, API details
Named things — projects, people, services, repos, endpoints
Lessons learned, patterns observed, meta-knowledge
Typed relationships between nodes — builds a traversable graph
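Hybrid ranking of these nodes can be sketched as a weighted blend of the two scores. The weights and the assumption that both scores are normalized to 0.0–1.0 are illustrative, not Vulcan's actual formula:

```elixir
# Toy hybrid ranker: blend a BM25 keyword score with a semantic score per node.
# Weights and score ranges are assumptions for illustration.
defmodule HybridSearch do
  @bm25_weight 0.4
  @semantic_weight 0.6

  def rank(nodes, bm25_scores, semantic_scores) do
    nodes
    |> Enum.map(fn node ->
      score =
        @bm25_weight * Map.get(bm25_scores, node, 0.0) +
          @semantic_weight * Map.get(semantic_scores, node, 0.0)

      {node, score}
    end)
    |> Enum.sort_by(fn {_node, score} -> score end, :desc)
  end
end

HybridSearch.rank(
  ["fact:api-key", "entity:repo"],
  %{"fact:api-key" => 0.9},
  %{"entity:repo" => 0.8}
)
```

Blending the two signals is what makes recall robust: BM25 catches exact identifiers ("ECONNREFUSED", a repo name), while embeddings catch paraphrases the keywords miss.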
Vulcan can read, edit, and hot-reload its own source code. The BEAM's hot code swapping makes this safe — processes keep running while code is replaced.
Examine the current module with the read tool
Use edit for precise find-and-replace changes
Compile the file in-place with Code.compile_file/1 via the iex tool
Test the change immediately, then git commit the fix
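The compile step above can be demonstrated in isolation. This is a self-contained sketch — the file name and module are made up for illustration:

```elixir
# Write a trivial module to disk ("greeter.ex" is a hypothetical file name)...
File.write!("greeter.ex", """
defmodule Greeter do
  def hello, do: "hello, v2"
end
""")

# ...then compile and load it inside the running VM with Code.compile_file/1.
# No restart: existing processes pick up the new code on their next call.
[{Greeter, _bytecode}] = Code.compile_file("greeter.ex")

IO.puts(Greeter.hello())
```

This is the mechanism the walkthrough relies on: `Code.compile_file/1` replaces the loaded module while the rest of the system keeps running.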
This walkthrough was created by Vulcan itself — written as an HTML file to the sandbox directory, rendered by the Phoenix web layer, all without a restart. That's self-modification in action.
You're looking at it. The sandbox is where Vulcan generates UI components — dashboards, reports, tools — as HTML files that render inside the web UI.
Ask Vulcan to create a component — it writes HTML to ~/.vulcan/sandbox/
Components render in an iframe with live preview and source view
One click to deploy as a standalone app at /apps/name
Edit the file, refresh — instant feedback loop
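Under the hood, creating a component is just writing an HTML file to the sandbox path. A minimal sketch, with a made-up file name and markup:

```elixir
# Illustrative only: the file name and HTML content are invented for this example.
sandbox = Path.expand("~/.vulcan/sandbox")
File.mkdir_p!(sandbox)

html = """
<div class="card">
  <h2>System stats</h2>
  <p>Uptime, memory, process count</p>
</div>
"""

File.write!(Path.join(sandbox, "stats-dashboard.html"), html)
```

Because components are plain files, the "edit, refresh" loop needs no build step: the web layer serves whatever is on disk.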
Try it: go to Chats, start a new conversation, and ask "Create a dashboard that shows system stats". Vulcan will write the HTML, and it'll appear here in the sandbox.