Declarative AI Pipelines
for the Command Line
Define LLM workflows in YAML. Run Claude Code, Codex, and Gemini CLI in parallel.
Version control everything.
brew install kris-hansen/comanda/comanda
go install github.com/kris-hansen/comanda@latest
Comanda is building a home for declarative, CLI-native AI processing. Workflows should live in plain text, be easy to version, and run where developers already work: in terminals, repos, and automation. The vision also includes multi-model and multimodal workflows, so text, images, audio, documents, and different model backends can participate in the same declarative pipeline.
Comanda treats workflows as structured declarations, not ad hoc piles of glue code. The goal is repeatable orchestration that feels closer to infrastructure as code than one-off prompting.
The job of the harness is not to compete with the model. It should provide structure, inputs, and execution control, then step aside so the LLM can reason and do the work without unnecessary abstraction or ceremony.
Comanda is a pioneer in CLI-native agent orchestration. It was introduced in late 2024, when most agent tooling was SDK-first, such as LangChain and LangGraph, and most CLI tools were still limited to prompt execution rather than structured workflows.
That history matters because Comanda was designed around a simple belief: powerful models should be orchestrated where real work already happens — terminals, repos, scripts, CI, and operational environments — without forcing developers into heavyweight framework abstractions.
Comanda exists to give frontier models a clean execution surface. The harness should provide structure and repeatability, not become the main event. The model should do the work. The workflow should stay legible.
Agentic workflows should be predictable, declarative, and repeatable, like a Terraform plan, not a bag of scripts or a Python program drowning in opaque dependencies. Let the frontier models do the heavy lifting. Comanda gives them a clean, composable execution surface.
Run Claude Code, Codex, and Gemini CLI in parallel. Synthesize diverse AI perspectives into unified recommendations.
Define pipelines in version-controllable YAML. Share workflows with your team, run them in CI/CD.
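As a sketch of what such a pipeline might look like in practice (the step names, keys, and model identifiers below are illustrative assumptions, not Comanda's confirmed schema — see the project repository for the actual format):

```yaml
# Hypothetical two-step workflow sketch. Keys and model names are
# illustrative assumptions, not Comanda's documented schema.
summarize_step:
  input: notes.txt          # local file fed to the first model
  model: claude             # which backend handles this step
  action: "Summarize the key decisions in these notes."
  output: summary.md        # written to disk, easy to diff and commit

review_step:
  input: summary.md         # chains on the previous step's output
  model: gemini
  action: "Review this summary for gaps or inaccuracies."
  output: STDOUT            # print the final result to the terminal
```

Because the whole pipeline is plain YAML, it diffs cleanly in pull requests and can run unchanged in CI.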
Iterative refinement until the LLM decides work is complete. Quality gates, retries, and state management built in.
Works with pipes, redirects, and scripts. Process files, URLs, databases. Batch operations with wildcards.
Generate persistent code indexes. Multi-repo context for AI workflows. Compare and aggregate codebases.
Join the developers using Comanda to orchestrate AI pipelines.