Multi-Agent Orchestration
Run Claude Code, OpenAI Codex, and Gemini CLI in parallel. Get diverse perspectives from multiple AI models and synthesize them into unified recommendations.
parallel-process:
  claude-analysis:
    input: STDIN
    model: claude-code
    action: "Analyze architecture and trade-offs"
    output: $CLAUDE_RESULT
  gemini-analysis:
    input: STDIN
    model: gemini-cli
    action: "Identify patterns and best practices"
    output: $GEMINI_RESULT
  codex-analysis:
    input: STDIN
    model: openai-codex
    action: "Focus on implementation structure"
    output: $CODEX_RESULT

synthesize:
  input: |
    Claude: $CLAUDE_RESULT
    Gemini: $GEMINI_RESULT
    Codex: $CODEX_RESULT
  model: claude-code
  action: "Combine into unified recommendation"
  output: STDOUT
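To run this workflow, pipe the material to review over STDIN. The file names below are illustrative, and the invocation assumes comanda's standard process command:
$ cat design-doc.md | comanda process multi-agent-review.yaml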
Agentic Loops
Iterative refinement until the LLM decides work is complete. Perfect for code generation, document writing, and any task that benefits from self-improvement.
implement:
  agentic_loop:
    max_iterations: 5
    exit_condition: llm_decides
    allowed_paths: [./src, ./tests]
    tools: [Read, Write, Edit, Bash]
  input: STDIN
  model: claude-code
  action: |
    Iteration {{ loop.iteration }}.
    Previous: {{ loop.previous_output }}
    Implement, test, and refine. Say DONE when complete.
  output: STDOUT
Parallel Processing
Run independent steps concurrently for faster workflows. comanda automatically waits for every parallel step to finish before moving on to the next step.
parallel-process:
  gpt4:
    input: NA
    model: gpt-4o
    action: "Write a function to parse JSON"
    output: gpt4-solution.py
  claude:
    input: NA
    model: claude-3-5-sonnet-latest
    action: "Write a function to parse JSON"
    output: claude-solution.py

compare:
  input: [gpt4-solution.py, claude-solution.py]
  model: gpt-4o-mini
  action: "Compare these implementations"
  output: STDOUT
Tool Execution
Run shell commands, scripts, and CLIs within your workflows. Integrate with grep, jq, git, or any command-line tool.
get-diff:
  tool: bash
  input: "git diff HEAD~1"
  output: $DIFF

review:
  input: $DIFF
  model: claude-code
  action: "Review these changes for issues"
  output: STDOUT
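Tool steps can also shape data before a model sees it. The sketch below reuses the same step syntax as above; the jq filter and file name are illustrative:
extract-errors:
  tool: bash
  input: "jq '.errors' test-results.json"
  output: $ERRORS

summarize:
  input: $ERRORS
  model: gpt-4o-mini
  action: "Summarize the root causes of these test failures"
  output: STDOUT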
Workflow Visualization
See the structure of any workflow with ASCII charts. Understand parallel branches, sequential steps, and data flow at a glance.
+================================================+
| WORKFLOW: multi-agent-review.yaml              |
+================================================+
+------------------------------------------------+
| INPUT: STDIN                                   |
+------------------------------------------------+
                        |
                        v
+================================================+
| PARALLEL: parallel-process (3 agents)          |
+------------------------------------------------+
| ├── claude-review                              |
| ├── gemini-review                              |
| └── codex-review                               |
+================================================+
                        |
                        v
+------------------------------------------------+
| synthesize (agentic_loop: max 3 iterations)    |
+------------------------------------------------+
                        |
                        v
+------------------------------------------------+
| OUTPUT: STDOUT                                 |
+------------------------------------------------+
Multi-Provider Support
Connect to any LLM provider. Cloud APIs, local models via Ollama, or agentic coding tools, all in the same workflow.
Cloud APIs: OpenAI, Anthropic, Google, X.AI, DeepSeek, Moonshot
Local Models: Ollama, vLLM, any OpenAI-compatible endpoint
Agentic Tools: Claude Code, OpenAI Codex, Gemini CLI
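For example, a single workflow can hand work from a cloud model to a local one. This is a minimal sketch reusing the step syntax shown above; the local model name (llama3.2) is illustrative and assumes an Ollama model configured under that name:
draft:
  input: STDIN
  model: gpt-4o
  action: "Draft a summary of this document"
  output: $DRAFT

local-review:
  input: $DRAFT
  model: llama3.2
  action: "Check the draft for unsupported claims and suggest fixes"
  output: STDOUT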
Advanced I/O
Process files, URLs, databases, and images. Batch operations with wildcards. Chunking for large files.
File Processing: wildcards (*.go), chunking, multi-file input
Web Scraping: fetch URLs, take screenshots, extract content
Database: read from and write to PostgreSQL
Vision: analyze images and screenshots with vision models
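A minimal sketch of batch file input using the wildcard support mentioned above (the source paths and output file are illustrative):
audit:
  input: ./src/*.go
  model: claude-3-5-sonnet-latest
  action: "Flag functions with missing error handling"
  output: audit-report.md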
Server Mode
Turn any workflow into an HTTP API. Perfect for integrating comanda into your existing services and CI/CD pipelines.
$ comanda server
$ curl -X POST "http://localhost:8080/process?filename=review.yaml" \
    -d '{"input": "code to review"}'