Declarative AI Pipelines
for the Command Line

Define LLM workflows in YAML. Run Claude Code, Codex, and Gemini CLI in parallel.
Version control everything.

Quick Install

macOS (Homebrew)

brew install kris-hansen/comanda/comanda

Go Install

go install github.com/kris-hansen/comanda@latest

Download Binary

GitHub Releases →

Vision

Comanda is building a home for declarative, CLI-native AI processing. Workflows should live in plain text, be easy to version, and run where developers already work: in terminals, repos, and automation. The vision also includes multi-model and multimodal workflows, so text, images, audio, documents, and different model backends can all participate in the same declarative pipeline.

Declarative by default

Comanda treats workflows as structured declarations, not ad hoc piles of glue code. The goal is repeatable orchestration that feels closer to infrastructure as code than one-off prompting.
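To make the idea concrete, a minimal declarative pipeline might look like the sketch below. The step name, keys, and model identifier are illustrative assumptions, not a verified comanda schema; consult the project README for the exact syntax.

```yaml
# Hypothetical single-step pipeline. Keys (input/model/action/output)
# and the model name are illustrative, not confirmed comanda fields.
summarize:
  input: report.txt        # file to process
  model: gpt-4o            # provider-backed model
  action: "Summarize the key findings in five bullet points"
  output: STDOUT           # print, or write to a file for a later step
```

Because the whole workflow is one plain-text file, it diffs, reviews, and versions like any other piece of configuration.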

Harnesses should get out of the way

The job of the harness is not to compete with the model. It should provide structure, inputs, and execution control, then step aside so the LLM can reason and do the work without unnecessary abstraction or ceremony.

About

A pioneer in CLI-native agent orchestration

Comanda was introduced in late 2024, when most agent tooling was SDK-first (LangChain, LangGraph) and most CLI tools were still limited to prompt execution rather than structured workflows.

That history matters because Comanda was designed around a simple belief: powerful models should be orchestrated where real work already happens — terminals, repos, scripts, CI, and operational environments — without forcing developers into heavyweight framework abstractions.

What people use it for

  • Declarative data processing workflows
  • Orchestrating local models in air-gapped environments
  • Model comparison and evaluation across providers
  • Complex software engineering and codebase workflows

Why it exists

Comanda exists to give frontier models a clean execution surface. The harness should provide structure and repeatability, not become the main event. The model should do the work. The workflow should stay legible.

Agentic workflows should be predictable, declarative, and repeatable, like a Terraform plan, not a bag of scripts or a Python program drowning in opaque dependencies. Let the frontier models do the heavy lifting. Comanda gives them a clean, composable execution surface.

Why comanda?

🔀

Multi-Agent Orchestration

Run Claude Code, Codex, and Gemini CLI in parallel. Synthesize diverse AI perspectives into unified recommendations.
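A parallel fan-out followed by a synthesis step could be sketched as below. The `parallel` grouping, step keys, and agent names are assumptions for illustration, not comanda's documented schema.

```yaml
# Hypothetical sketch: two CLI agents run in parallel, then a third
# step merges their outputs. All keys here are illustrative assumptions.
parallel:
  claude_review:
    model: claude-code
    action: "Review this diff for correctness and risk"
  codex_review:
    model: codex
    action: "Review this diff for correctness and risk"

synthesize:
  input: [claude_review, codex_review]  # consume both reviews
  model: gpt-4o
  action: "Merge the two reviews into one ranked list of recommendations"
  output: STDOUT
```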

📄

YAML Workflows

Define pipelines in version-controllable YAML. Share workflows with your team and run them in CI/CD.

🔁

Agentic Loops

Iterative refinement until the LLM decides work is complete. Quality gates, retries, and state management built in.
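An agentic loop might be declared roughly as follows. The `loop`, `max_iterations`, and `until` fields are hypothetical placeholders for whatever comanda's actual loop configuration looks like; they illustrate the shape of the feature, not its real field names.

```yaml
# Hypothetical loop configuration: field names are assumptions,
# not comanda's documented schema.
refine:
  model: claude-code
  action: "Fix the failing tests and re-run them"
  loop:
    max_iterations: 5                    # hard cap on retries
    until: "model reports work complete" # quality gate deciding exit
```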

Unix Philosophy

Works with pipes, redirects, and scripts. Process files, URLs, databases. Batch operations with wildcards.
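In practice that means comanda composes with ordinary shell plumbing. The exact subcommand and flags below are assumptions based on the project's CLI style rather than verified documentation.

```shell
# Illustrative usage only: `comanda process <workflow.yaml>` is assumed
# from the project's CLI conventions, not confirmed from its docs.
cat notes/*.md | comanda process summarize.yaml > summary.txt
```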

📚

Codebase Indexing

Generate persistent code indexes. Multi-repo context for AI workflows. Compare and aggregate codebases.

Ready to Build Smarter Workflows?

Join developers using comanda to orchestrate AI pipelines.