processor

package
v0.0.147 Latest
Published: Mar 4, 2026 License: MIT Imports: 34 Imported by: 0

Documentation

Index

Constants

View Source
const (
	DefaultMaxIterations      = 10
	DefaultTimeoutSeconds     = 0 // 0 = no timeout (rely on max_iterations)
	DefaultContextWindow      = 5
	DefaultCheckpointInterval = 5 // Save state every N iterations
)

Default values for agentic loop safety

View Source
const (
	OutputSTDOUT = "STDOUT"
	InputSTDIN   = "STDIN"
	InputNA      = "NA"
)

Common I/O constants

View Source
const (
	// MaxMemorySizeBytes is the maximum size for memory file content (500KB)
	// This ensures memory can fit in model context windows along with prompts
	// Typical context windows: GPT-4: ~128K tokens (~512KB), Claude: ~200K tokens (~800KB)
	// We use 500KB to leave room for prompts and other content
	MaxMemorySizeBytes = 500 * 1024

	// MaxMemorySizeWarningBytes is the size at which we warn users (400KB)
	MaxMemorySizeWarningBytes = 400 * 1024
)
View Source
const EmbeddedLLMGuide = `# Comanda YAML DSL Guide (for LLM Consumption)

This guide specifies the YAML-based Domain Specific Language (DSL) for Comanda workflows, enabling LLMs to generate valid workflow files.

## ⚠️ CRITICAL RULES - READ BEFORE GENERATING ⚠️

**RULE 1 - execute_loops IGNORES top-level steps:**
When a workflow has ` + "`execute_loops:`" + `, ONLY the loops in the ` + "`loops:`" + ` block run. Any steps defined outside ` + "`loops:`" + ` are COMPLETELY IGNORED.

**RULE 2 - codebase-index with multi-loop workflows:**
If you need codebase-index AND multiple loops, the codebase-index MUST be inside a loop:
` + "```yaml" + `
loops:
  indexer:
    max_iterations: 1
    allowed_paths: [~/myproject, .]
    steps:
      index:
        step_type: codebase-index
        codebase_index:
          root: ~/myproject
          output:
            path: .comanda/INDEX.md
            store: repo

  analyzer:
    depends_on: [indexer]
    steps:
      analyze:
        input: .comanda/INDEX.md
        model: claude-code
        action: "Analyze the codebase"
        output: STDOUT

execute_loops:
  - indexer
  - analyzer
` + "```" + `

**RULE 3 - Output to files, not STDOUT, in multi-step/multi-loop workflows:**
When a workflow has multiple steps or loops that build on each other, use file outputs (e.g., ` + "`output: ./results.md`" + `) instead of ` + "`output: STDOUT`" + `. STDOUT is only useful when the next step uses ` + "`input: STDIN`" + `. For agentic loops that create documentation or artifacts, always write to files so later loops can read them.

**RULE 4 - Never put codebase-index as a top-level step with execute_loops:**
` + "```yaml" + `
# ❌ WRONG - index_step is IGNORED, $PROJECT_INDEX is never set!
index_step:
  step_type: codebase-index
  codebase_index:
    root: ~/myproject

loops:
  analyze:
    steps:
      step1:
        input: $PROJECT_INDEX  # ❌ This variable doesn't exist!
        ...

execute_loops:
  - analyze
` + "```" + `

## Overview

Comanda workflows consist of one or more named steps. Each step performs an operation. There are seven main types of steps:
1.  **Standard Processing Step:** Involves LLMs, file processing, data operations.
2.  **Generate Step:** Uses an LLM to dynamically create a new Comanda workflow YAML file.
3.  **Process Step:** Executes another Comanda workflow file (static or dynamically generated).
4.  **Agentic Loop Step:** Iteratively processes until an exit condition is met (for refinement, planning, autonomous tasks).
5.  **Multi-Loop Orchestration:** Coordinates multiple interdependent agentic loops with variable passing and creator/checker patterns.
6.  **Codebase Index Step:** Scans a repository and generates a compact Markdown index for LLM consumption.
7.  **qmd Search Step:** Searches local knowledge bases using qmd (BM25, vector, or hybrid search).

## Core Workflow Structure

A Comanda workflow is a YAML map where each key is a ` + "`step_name`" + ` (string, user-defined), mapping to a dictionary defining the step.

` + "```yaml" + `
# Example of a workflow structure
workflow_step_1:
  # ... step definition ...
another_step_name:
  # ... step definition ...
` + "```" + `

## 1. Standard Processing Step Definition

This is the most common step type.

**Basic Structure:**
` + "```yaml" + `
step_name:
  input: [input source]
  model: [model name]
  action: [action to perform / prompt provided]
  output: [output destination]
  type: [optional, e.g., "openai-responses"] # Specifies specialized handling
  batch_mode: [individual|combined] # Optional, for multi-file inputs
  skip_errors: [true|false] # Optional, for multi-file inputs
  # ... other type-specific fields for "openai-responses" like 'instructions', 'tools', etc.
` + "```" + `

**Key Elements:**
- ` + "`input`" + `: (Required for most, can be ` + "`NA`" + `) Source of data. See "Input Types".
- ` + "`model`" + `: (Required, can be ` + "`NA`" + `) LLM model to use. See "Models".
- ` + "`action`" + `: (Required for most) Instructions or operations. See "Actions".
- ` + "`output`" + `: (Required) Destination for results. See "Outputs".
- ` + "`type`" + `: (Optional) Specifies a specialized handler for the step, e.g., ` + "`openai-responses`" + `. If omitted, it's a general-purpose LLM or NA step.
- ` + "`batch_mode`" + `: (Optional, default: ` + "`combined`" + `) For steps with multiple file inputs, determines whether the files are sent ` + "`combined`" + ` in a single LLM call or processed ` + "`individual`" + `ly, one call per file.
- ` + "`skip_errors`" + `: (Optional, default: ` + "`false`" + `) With ` + "`batch_mode: individual`" + `, determines whether processing continues when one file fails.
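
For illustration, a step that summarizes several input files one at a time and keeps going past failures might look like this sketch (the list form of ` + "`input`" + ` and the file names are assumptions; see "Input Types" for the accepted forms):

` + "```yaml" + `
summarize_each:
  input: [notes1.txt, notes2.txt, notes3.txt]  # multiple file inputs
  model: gpt-4o-mini
  action: "Summarize this file in three bullet points"
  output: STDOUT
  batch_mode: individual   # one LLM call per file
  skip_errors: true        # continue even if a single file fails
` + "```" + `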

**OpenAI Responses API Specific Fields (used when ` + "`type: openai-responses`" + `):**
- ` + "`instructions`" + `: (string) System message for the LLM.
- ` + "`tools`" + `: (list of maps) Configuration for tools/functions the LLM can call.
- ` + "`previous_response_id`" + `: (string) ID of a previous response for maintaining conversation state.
- ` + "`max_output_tokens`" + `: (int) Token limit for the LLM response.
- ` + "`temperature`" + `: (float) Sampling temperature.
- ` + "`top_p`" + `: (float) Nucleus sampling (top-p).
- ` + "`stream`" + `: (bool) Whether to stream the response.
- ` + "`response_format`" + `: (map) Specifies response format, e.g., ` + "`{ type: \"json_object\" }`" + `.
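
A minimal sketch combining several of these fields (the model name, prompt, and output file are illustrative, not prescribed):

` + "```yaml" + `
extract_entities:
  type: openai-responses
  input: report.txt
  model: gpt-4o
  instructions: "You are a precise information-extraction assistant."
  action: "Extract all company names and dates from the report as JSON."
  max_output_tokens: 1024
  temperature: 0.2
  response_format: { type: "json_object" }
  output: entities.json
` + "```" + `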


## 2. Generate Step Definition (` + "`generate`" + `)

This step uses an LLM to dynamically create a new Comanda workflow YAML file.

**Structure:**
` + "```yaml" + `
step_name_for_generation:
  input: [optional_input_source_for_context, or NA] # e.g., STDIN, a file with requirements
  generate:
    model: [llm_model_for_generation, optional] # e.g., gpt-4o-mini. Uses default if omitted.
    action: [prompt_for_workflow_generation] # Natural language instruction for the LLM.
    output: [filename_for_generated_yaml] # e.g., new_workflow.yaml
    context_files: [list_of_files_for_additional_context] # Optional, e.g., [schema.txt, examples.yaml]
` + "```" + `
**` + "`generate`" + ` Block Attributes:**
- ` + "`model`" + `: (string, optional) Specifies the LLM to use for generation. If omitted, uses the ` + "`default_generation_model`" + ` configured in Comanda. You can set or update this default model by running ` + "`comanda configure`" + ` and following the prompts for setting a default generation model.
- ` + "`action`" + `: (string, required) The natural language instruction given to the LLM to guide the workflow generation.
- ` + "`output`" + `: (string, required) The filename where the generated Comanda workflow YAML file will be saved.
- ` + "`context_files`" + `: (list of strings, optional) A list of file paths to provide as additional context to the LLM, beyond the standard Comanda DSL guide (which is implicitly included).
- **Note:** The ` + "`input`" + ` field for a ` + "`generate`" + ` step is optional. If provided (e.g., ` + "`STDIN`" + ` or a file path), its content will be added to the context for the LLM generating the workflow. If not needed, use ` + "`input: NA`" + `.
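
Putting these attributes together, a sketch of a ` + "`generate`" + ` step that turns requirements from a file into a new workflow (file names and model are illustrative):

` + "```yaml" + `
create_workflow:
  input: requirements.txt            # Context for the generating LLM
  generate:
    model: gpt-4o-mini
    action: "Create a workflow that summarizes each file in ./docs into a combined report."
    output: generated_workflow.yaml
    context_files: [examples.yaml]
` + "```" + `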

## 3. Process Step Definition (` + "`process`" + `)

This step executes another Comanda workflow file.

**Structure:**
` + "```yaml" + `
step_name_for_processing:
  input: [optional_input_source_for_sub_workflow, or NA] # e.g., STDIN to pass data to the sub-workflow
  process:
    workflow_file: [path_to_comanda_yaml_to_execute] # e.g., generated_workflow.yaml or existing_flow.yaml
    inputs: {key1: value1, key2: value2} # Optional: map of inputs to pass to the sub-workflow.
    # capture_outputs: [list_of_outputs_to_capture, optional] # Future: Define how to capture specific outputs.
` + "```" + `
**` + "`process`" + ` Block Attributes:**
- ` + "`workflow_file`" + `: (string, required) The path to the Comanda workflow YAML file to be executed. This can be a statically defined path or the output of a ` + "`generate`" + ` step.
- ` + "`inputs`" + `: (map, optional) A map of key-value pairs to pass as initial variables to the sub-workflow. These can be accessed within the sub-workflow (e.g., as ` + "`$parent.key1`" + `).
- **Note:** The ` + "`input`" + ` field for a ` + "`process`" + ` step is optional. If ` + "`input: STDIN`" + ` is used, the output of the previous step in the parent workflow will be available as the initial ` + "`STDIN`" + ` for the *first* step of the sub-workflow if that first step expects ` + "`STDIN`" + `.
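
A common pattern pairs a ` + "`generate`" + ` step with a ` + "`process`" + ` step that executes the freshly generated file. A sketch (file name and input keys are illustrative):

` + "```yaml" + `
run_generated:
  input: NA
  process:
    workflow_file: generated_workflow.yaml
    inputs: {report_title: "Q3 Summary"}  # Accessible in the sub-workflow, e.g. $parent.report_title
` + "```" + `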

## 4. Agentic Loop Step Definition (` + "`agentic_loop`" + ` / ` + "`agentic-loop`" + `)

Agentic loops enable iterative LLM processing until an exit condition is met. This is powerful for tasks that require refinement, multi-step reasoning, or autonomous decision-making.

**When to use agentic loops:**
- Iterative code improvement (analyze → fix → verify cycles)
- Multi-step planning and execution
- Tasks where the LLM decides when work is complete
- Refinement workflows (draft → improve → finalize)

### Inline Syntax (Single-Step Loop)

For simple iterative tasks with a single step:

` + "```yaml" + `
step_name:
  agentic_loop:
    max_iterations: 5           # Safety limit (default: 10)
    exit_condition: pattern_match  # or "llm_decides"
    exit_pattern: "COMPLETE"    # Regex pattern for pattern_match
  input: STDIN
  model: claude-code
  action: |
    Iteration {{ loop.iteration }}.
    Previous work: {{ loop.previous_output }}

    Continue improving. Say COMPLETE when done.
  output: STDOUT
` + "```" + `

### Block Syntax (Multi-Step Loop)

For complex loops with multiple sub-steps per iteration:

` + "```yaml" + `
agentic-loop:
  config:
    max_iterations: 5           # Safety limit (default: 10)
    timeout_seconds: 300        # Total timeout in seconds (default: 0 = no timeout)
    exit_condition: llm_decides # or "pattern_match"
    exit_pattern: "DONE"        # For pattern_match
    context_window: 3           # Past iterations to include (default: 5)

  steps:
    plan:
      input: STDIN
      model: claude-code
      action: |
        Iteration {{ loop.iteration }}.
        Previous: {{ loop.previous_output }}

        Plan next steps. Say DONE if complete.
      output: $PLAN

    execute:
      input: $PLAN
      model: claude-code
      action: "Execute the plan"
      output: STDOUT
` + "```" + `

**` + "`agentic_loop`" + ` Configuration:**

**Core Settings:**
- ` + "`max_iterations`" + `: (int, default: 10) Maximum iterations before stopping.
- ` + "`timeout_seconds`" + `: (int, default: 0) Total time limit in seconds. **0 = no timeout** (loop runs until max_iterations or exit condition).
- ` + "`exit_condition`" + `: (string) How to detect completion:
  - ` + "`llm_decides`" + `: Exits when output ends with "DONE", "COMPLETE", or "FINISHED" (case-insensitive). Works when these words appear at the very end of the output, at the end of any line, or as the entire output.
  - ` + "`pattern_match`" + `: Exits when output matches ` + "`exit_pattern`" + ` regex
- ` + "`exit_pattern`" + `: (string) Regex pattern for ` + "`pattern_match`" + ` condition.
- ` + "`context_window`" + `: (int, default: 5) Number of past iterations to include in context.

**File Access (REQUIRED for Claude Code):**
- ` + "`allowed_paths`" + `: (list, **REQUIRED for file operations**) Directories where Claude Code can use tools (Read, Write, Edit, Bash, etc.). **Without this, Claude Code runs in print-only mode and CANNOT read/write files.** When generating workflows, infer paths from: the ` + "`codebase_index.root`" + ` if present, file paths mentioned in the action, or use ` + "`[.]`" + ` (current directory) as fallback.
- ` + "`tools`" + `: (list, optional) Restrict available tools (e.g., ` + "`[Read, Glob, Grep]`" + ` for read-only access). If omitted, all tools are available.

**State Persistence (for Long-Running Loops):**
- ` + "`name`" + `: (string, **REQUIRED for stateful loops**) Unique identifier for the loop. Enables state persistence and resume capability.
- ` + "`stateful`" + `: (bool, default: false) Enable state persistence to ` + "`~/.comanda/loop-states/{name}.json`" + `. Allows resuming after interruption.
- ` + "`checkpoint_interval`" + `: (int, default: 5) Save state every N iterations. Lower values = more frequent saves = safer resume.

**Quality Gates (Automated Validation):**
- ` + "`quality_gates`" + `: (list, optional) Automated checks to run after each iteration. Each gate validates loop output and can trigger retry/abort/skip actions.

**Quality Gate Configuration:**
` + "```yaml" + `
quality_gates:
  - name: typecheck           # Gate name
    command: "npm run typecheck"  # Shell command to execute
    on_fail: retry            # Action: retry, abort, or skip
    timeout: 60               # Timeout in seconds
    retry:                    # Retry configuration (optional)
      max_attempts: 3         # Maximum retry attempts
      backoff_type: exponential  # linear or exponential
      initial_delay: 5        # Initial delay in seconds

  - name: security
    type: security            # Built-in gate type (syntax, security, test)
    on_fail: abort

  - name: tests
    command: "npm test"
    on_fail: skip             # Continue even if this fails
` + "```" + `

**Built-in Quality Gate Types:**
- ` + "`syntax`" + `: Checks for syntax errors (Python, JS, Go, etc.)
- ` + "`security`" + `: Scans for hardcoded secrets, security issues
- ` + "`test`" + `: Runs test commands with coverage reporting

**Quality Gate Actions (` + "`on_fail`" + `):**
- ` + "`retry`" + `: Retry the gate with backoff (exponential or linear)
- ` + "`abort`" + `: Stop loop immediately and save state as "failed"
- ` + "`skip`" + `: Log warning and continue to next iteration

**Template Variables in Actions:**
- ` + "`{{ loop.iteration }}`" + `: Current iteration number (1-based)
- ` + "`{{ loop.previous_output }}`" + `: Output from previous iteration
- ` + "`{{ loop.total_iterations }}`" + `: Maximum allowed iterations
- ` + "`{{ loop.elapsed_seconds }}`" + `: Seconds since loop started

**Example: Agentic Code Exploration**
` + "```yaml" + `
explore_codebase:
  agentic_loop:
    max_iterations: 3
    exit_condition: llm_decides
    allowed_paths: [./src, ./tests]
    tools: [Read, Glob, Grep]  # Read-only access
  input: STDIN
  model: claude-code
  action: |
    Explore the codebase and answer: {{ loop.previous_output }}
    Say DONE when you have the answer.
  output: STDOUT
` + "```" + `

**Example: Iterative Code Implementation**
` + "```yaml" + `
implement:
  agentic_loop:
    max_iterations: 3
    exit_condition: pattern_match
    exit_pattern: "SATISFIED"
    allowed_paths: [.]  # Required for file operations
  input: STDIN
  model: claude-code
  action: |
    Iteration {{ loop.iteration }}. Implement and improve the code.
    Previous: {{ loop.previous_output }}

    Add error handling, edge cases, tests.
    Say SATISFIED when production-ready.
  output: STDOUT
` + "```" + `

**Example: Plan and Build Loop**
` + "```yaml" + `
agentic-loop:
  config:
    max_iterations: 5
    exit_condition: llm_decides
    allowed_paths: [.]  # Required for file operations

  steps:
    plan:
      input: STDIN
      model: claude-code
      action: |
        Iteration {{ loop.iteration }}.
        Create/refine the implementation plan.
        Say DONE when ready to implement.
      output: $PLAN

    build:
      input: $PLAN
      model: claude-code
      action: "Generate code based on the plan"
      output: STDOUT
` + "```" + `

**Example: Long-Running Loop with State Persistence**
` + "```yaml" + `
agentic-loop:
  config:
    name: code-refactor-loop      # Required for stateful loops
    stateful: true                 # Enable state persistence
    max_iterations: 50
    timeout_seconds: 0             # No timeout - run until complete
    checkpoint_interval: 5         # Save every 5 iterations
    exit_condition: llm_decides
    allowed_paths: [./src]

    quality_gates:
      - name: syntax-check
        type: syntax
        on_fail: retry
        retry:
          max_attempts: 3
          backoff_type: exponential
          initial_delay: 2

      - name: tests
        command: "npm test"
        on_fail: abort              # Stop if tests fail
        timeout: 300

  steps:
    analyze:
      input: STDIN
      model: claude-code
      action: |
        Iteration {{ loop.iteration }} of {{ loop.total_iterations }}.
        Analyze code and refactor one module.
        Say DONE when all modules are refactored.
      output: STDOUT
` + "```" + `

**Example: Quality Gates with Retry**
` + "```yaml" + `
code_improvement:
  agentic_loop:
    name: improve-code
    stateful: true
    max_iterations: 10
    allowed_paths: [.]

    quality_gates:
      - name: typecheck
        command: "npm run typecheck"
        on_fail: retry
        timeout: 60
        retry:
          max_attempts: 3
          backoff_type: exponential
          initial_delay: 5

      - name: lint
        command: "npm run lint"
        on_fail: skip               # Non-critical, continue

  input: STDIN
  model: claude-code
  action: "Improve code quality. Say COMPLETE when done."
  output: STDOUT
` + "```" + `

**CRITICAL: When generating agentic_loop workflows with Claude Code:**
- **ALWAYS include ` + "`allowed_paths`" + `** - Claude Code CANNOT read/write files without it
- If the workflow uses ` + "`codebase_index`" + `, use the same ` + "`root`" + ` directory in ` + "`allowed_paths`" + `
- If the action mentions specific file paths or directories, include those directories
- When in doubt, use ` + "`allowed_paths: [.]`" + ` for current directory access
- Forgetting ` + "`allowed_paths`" + ` is a common error that causes "permission denied" failures
- For long-running tasks (hours/days), use ` + "`stateful: true`" + ` and ` + "`timeout_seconds: 0`" + `
- **Default to ` + "`claude-code`" + ` model for agentic workflows** - it provides the best tool use and autonomous capabilities

## 5. Multi-Loop Orchestration

Multi-loop orchestration enables complex autonomous workflows with multiple interdependent agentic loops. This is essential for creator/checker patterns, sequential processing pipelines, and complex task decomposition.

**When to use multi-loop orchestration:**
- Creator/checker validation workflows (loop A creates, loop B validates, rerun A if validation fails)
- Sequential data processing (collect → analyze → report)
- Complex task decomposition (break large task into specialized sub-loops)
- Workflows requiring variable passing between autonomous agents

### Named Loops Syntax

Define multiple named loops with dependencies and variable passing:

` + "```yaml" + `
loops:
  data-collector:
    name: data-collector
    stateful: true
    max_iterations: 10
    timeout_seconds: 0
    exit_condition: llm_decides
    allowed_paths: [.]
    output_state: $RAW_DATA         # Export result to variable

    steps:
      collect:
        input: STDIN
        model: claude-code
        action: "Collect data and say DONE when complete"
        output: STDOUT

  data-analyzer:
    name: data-analyzer
    depends_on: [data-collector]    # Wait for collector to complete
    input_state: $RAW_DATA           # Read collector's output
    stateful: true
    max_iterations: 5
    exit_condition: llm_decides
    allowed_paths: [.]
    output_state: $ANALYSIS          # Export analysis result

    steps:
      analyze:
        input: STDIN
        model: claude-code
        action: |
          Analyze this data: {{ loop.previous_output }}
          Say DONE when analysis is complete.
        output: STDOUT

  report-generator:
    name: report-generator
    depends_on: [data-analyzer]     # Wait for analyzer
    input_state: $ANALYSIS           # Read analyzer's output
    max_iterations: 1
    allowed_paths: [.]

    steps:
      generate:
        input: STDIN
        model: claude-code
        action: "Create final report based on analysis"
        output: STDOUT

# Execute loops in dependency order (topological sort)
execute_loops:
  - data-collector
  - data-analyzer
  - report-generator
` + "```" + `

**Key Configuration Fields:**
- ` + "`loops`" + `: (map) Named loop definitions
- ` + "`depends_on`" + `: (list) Loops that must complete before this one starts
- ` + "`input_state`" + `: (string) Variable to read as input (e.g., ` + "`$RAW_DATA`" + `)
- ` + "`output_state`" + `: (string) Variable to export result to (e.g., ` + "`$ANALYSIS`" + `)
- ` + "`execute_loops`" + `: (list) Simple execution order (dependencies override this)

### Creator/Checker Pattern

Advanced workflow pattern where a creator loop implements features and a checker loop validates them, with automatic rerun on failure:

` + "```yaml" + `
loops:
  # Creator loop: implements features
  feature-creator:
    name: feature-creator
    stateful: true
    max_iterations: 3
    timeout_seconds: 0
    exit_condition: llm_decides
    allowed_paths: [.]
    output_state: $CODE

    quality_gates:
      - name: syntax
        type: syntax
        on_fail: abort

    steps:
      implement:
        input: STDIN
        model: claude-code
        action: |
          Iteration {{ loop.iteration }}.
          Implement the requested feature.
          Say DONE when implementation is complete.
        output: STDOUT

  # Checker loop: validates implementation
  code-checker:
    name: code-checker
    depends_on: [feature-creator]   # Wait for creator
    input_state: $CODE               # Read creator's output
    max_iterations: 1
    exit_condition: pattern_match
    exit_pattern: "^PASS"            # Exit when output starts with PASS

    steps:
      review:
        input: STDIN
        model: claude-code
        action: |
          Review this implementation: {{ loop.previous_output }}

          Check for:
          - Correctness
          - Edge cases
          - Code quality
          - Test coverage

          Output: PASS if acceptable, or FAIL with specific issues.
        output: STDOUT

# Advanced workflow with creator/checker relationship
workflow:
  creator:
    type: loop
    loop: feature-creator
    role: creator

  checker:
    type: loop
    loop: code-checker
    role: checker
    validates: creator               # This loop validates the creator
    on_fail: rerun_creator          # Auto-rerun creator if validation fails
` + "```" + `

**Workflow Node Configuration:**
- ` + "`type`" + `: Always ` + "`loop`" + ` for loop nodes
- ` + "`loop`" + `: Name of the loop to execute
- ` + "`role`" + `: Node role (` + "`creator`" + `, ` + "`checker`" + `, ` + "`finalizer`" + `)
- ` + "`validates`" + `: Name of the node this checker validates
- ` + "`on_fail`" + `: Action on validation failure:
  - ` + "`rerun_creator`" + `: Automatically rerun the creator loop (max 3 attempts)
  - ` + "`abort`" + `: Stop workflow immediately
  - ` + "`manual`" + `: Return for manual review

**Validation Logic:**
- Checker loop output is scanned for "PASS" or "FAIL"
- If "PASS" found → workflow continues
- If "FAIL" found and ` + "`on_fail: rerun_creator`" + ` → creator reruns with checker feedback
- Maximum 3 rerun attempts (prevents infinite loops)

### Dependency Graph Execution

Comanda automatically executes loops in correct order using topological sort:

1. **Build dependency graph** from ` + "`depends_on`" + ` relationships
2. **Detect cycles** - fail fast if circular dependencies exist
3. **Topological sort** using Kahn's algorithm
4. **Execute in order** - each loop waits for its dependencies

**Cycle Detection Example:**
` + "```yaml" + `
loops:
  loop-a:
    depends_on: [loop-b]

  loop-b:
    depends_on: [loop-a]

execute_loops:
  - loop-a
  - loop-b

# Error: dependency cycle detected: loop-a -> loop-b -> loop-a
` + "```" + `

### Variable Passing

Variables flow between loops via ` + "`input_state`" + ` and ` + "`output_state`" + `:

` + "```yaml" + `
loops:
  producer:
    output_state: $MY_DATA          # Write to variable

  consumer:
    depends_on: [producer]
    input_state: $MY_DATA            # Read from variable
` + "```" + `

**Variable Rules:**
- Variables are stored in memory during workflow execution
- ` + "`output_state`" + ` exports loop's final output to a variable
- ` + "`input_state`" + ` reads variable as loop input
- Variables persist across loop boundaries
- Missing variable causes error (check dependencies)

### Complete Example: Codebase Analysis Pipeline

` + "```yaml" + `
# Multi-loop workflow: analyze codebase → identify issues → generate report
loops:
  codebase-analyzer:
    name: codebase-analyzer
    stateful: true
    max_iterations: 20
    timeout_seconds: 0
    exit_condition: llm_decides
    allowed_paths: [/tmp/code]
    output_state: $ANALYSIS

    steps:
      analyze:
        input: NA
        model: claude-code
        action: |
          Analyze the codebase at /tmp/code.
          Find patterns, architecture issues, dependencies.
          Say DONE when analysis is complete.
        output: STDOUT

  tech-debt-finder:
    name: tech-debt-finder
    depends_on: [codebase-analyzer]
    input_state: $ANALYSIS
    stateful: true
    max_iterations: 10
    timeout_seconds: 0
    exit_condition: llm_decides
    allowed_paths: [/tmp/code]
    output_state: $TECH_DEBT

    steps:
      identify:
        input: STDIN
        model: claude-code
        action: |
          Based on this analysis: {{ loop.previous_output }}

          Identify tech debt and anti-patterns:
          - Code smells
          - Security issues
          - Performance bottlenecks
          - Maintainability problems

          Say DONE when complete.
        output: STDOUT

  report-writer:
    name: report-writer
    depends_on: [tech-debt-finder]
    input_state: $TECH_DEBT
    max_iterations: 1
    allowed_paths: [.]

    steps:
      write:
        input: STDIN
        model: claude-code
        action: |
          Create a comprehensive markdown report:
          {{ loop.previous_output }}

          Save to ./tech-debt-report.md
        output: STDOUT

execute_loops:
  - codebase-analyzer
  - tech-debt-finder
  - report-writer
` + "```" + `

**CRITICAL: When generating multi-loop workflows:**
- **Use ` + "`claude-code`" + ` as default model** for agentic capabilities
- **Always include ` + "`allowed_paths`" + `** for each loop that needs file access
- **Use ` + "`stateful: true`" + `** for long-running loops
- **Set ` + "`timeout_seconds: 0`" + `** for unlimited runtime
- **Use meaningful variable names** (` + "`$RAW_DATA`" + `, not ` + "`$OUTPUT`" + `)
- **Infer ` + "`allowed_paths`" + ` from user prompt** (e.g., "/tmp/code" → ` + "`allowed_paths: [/tmp/code]`" + `)
- **Default ` + "`checkpoint_interval: 5`" + `** for safety

### Using Codebase Index with Multi-Loop Workflows

**⚠️ CRITICAL:** When using ` + "`execute_loops:`" + `, **ONLY the loops are executed**. Top-level steps (outside the ` + "`loops:`" + ` block) are IGNORED.

❌ **WRONG** - Top-level codebase-index step will NOT run:
` + "```yaml" + `
# This step is IGNORED when using execute_loops!
index_core:
  step_type: codebase-index
  codebase_index:
    root: ~/my-project

loops:
  analyze:
    steps:
      step1:
        input: $MY_PROJECT_INDEX  # ❌ Variable never set!
        model: claude-code
        action: "Analyze the codebase"
        output: STDOUT

execute_loops:
  - analyze
` + "```" + `

✅ **CORRECT** - Reference the output file path directly:
` + "```yaml" + `
loops:
  analyze:
    allowed_paths: [~/my-project, .]
    steps:
      # First step: Generate the index
      index:
        step_type: codebase-index
        codebase_index:
          root: ~/my-project
          output:
            path: .comanda/MY_PROJECT_INDEX.md
            store: repo

      # Subsequent steps: Read the index file directly
      analyze:
        input: .comanda/MY_PROJECT_INDEX.md
        model: claude-code
        action: "Analyze this codebase index"
        output: STDOUT

execute_loops:
  - analyze
` + "```" + `

✅ **ALSO CORRECT** - Use input_state to pass index between loops:
` + "```yaml" + `
loops:
  indexer:
    max_iterations: 1
    allowed_paths: [~/my-project]
    output_state: $CODEBASE_INDEX
    steps:
      index:
        step_type: codebase-index
        codebase_index:
          root: ~/my-project
        output: STDOUT

  analyzer:
    depends_on: [indexer]
    input_state: $CODEBASE_INDEX
    allowed_paths: [~/my-project, .]
    steps:
      analyze:
        input: STDIN
        model: claude-code
        action: |
          Codebase index:
          {{ loop.previous_output }}

          Analyze the architecture.
        output: STDOUT

execute_loops:
  - indexer
  - analyzer
` + "```" + `


## 6. Codebase Index Step Definition (` + "`codebase_index`" + `)

This step scans a repository and generates a compact Markdown index optimized for LLM consumption. It supports multiple programming languages and exposes workflow variables for downstream steps.

**When to use codebase-index:**
- When you need to give an LLM context about a codebase structure
- Before code analysis, refactoring, or documentation tasks
- When building workflows that operate on unfamiliar repositories

**Structure:**
` + "```yaml" + `
step_name:
  step_type: codebase-index  # Alternative: use codebase_index block
  codebase_index:
    root: .                   # Repository path (default: current directory)
    output:
      path: .comanda/INDEX.md # Custom output path (optional)
      format: structured      # Output format: summary, structured, full
      store: repo             # Where to store: repo, config, or both
      encrypt: false          # Enable AES-256 encryption
    expose:
      workflow_variable: true # Export as workflow variables
      memory:
        enabled: true         # Register as memory source
        key: repo.index       # Memory key name
    adapters:                 # Per-language configuration (optional)
      go:
        ignore_dirs: [vendor, testdata]
        priority_files: ["cmd/**/*.go"]
    max_output_kb: 100        # Maximum output size in KB
    qmd:                      # qmd integration (optional)
      collection: myproject   # Register index as qmd collection
      context: "Project source code"  # Description for search relevance
      embed: false            # Run qmd embed after (slow, enables semantic search)
` + "```" + `

**` + "`codebase_index`" + ` Block Attributes:**
- ` + "`root`" + `: (string, default: ` + "`.`" + `) Repository path to scan.
- ` + "`output.path`" + `: (string, optional) Custom output file path. Default: ` + "`.comanda/<repo>_INDEX.md`" + `
- ` + "`output.format`" + `: (string, default: ` + "`structured`" + `) Output format: ` + "`summary`" + ` (compact 1-2KB for system prompts), ` + "`structured`" + ` (balanced with sections), or ` + "`full`" + ` (detailed with all symbols).
- ` + "`output.store`" + `: (string, default: ` + "`repo`" + `) Where to save: ` + "`repo`" + ` (in repository), ` + "`config`" + ` (~/.comanda/), or ` + "`both`" + `.
- ` + "`output.encrypt`" + `: (bool, default: false) Encrypt output with AES-256 GCM. Saves as ` + "`.enc`" + ` file. Requires ` + "`COMANDA_INDEX_KEY`" + ` environment variable.
- ` + "`expose.workflow_variable`" + `: (bool, default: true) Export index as workflow variables.
- ` + "`expose.memory.enabled`" + `: (bool, default: false) Register as a named memory source.
- ` + "`expose.memory.key`" + `: (string) Key name for memory access.
- ` + "`adapters`" + `: (map, optional) Per-language configuration overrides.
- ` + "`max_output_kb`" + `: (int, default: 100) Maximum size of generated index.
- ` + "`qmd.collection`" + `: (string, optional) Register index as a qmd collection with this name.
- ` + "`qmd.context`" + `: (string, optional) Description for the collection (improves search relevance).
- ` + "`qmd.embed`" + `: (bool, default: false) Run ` + "`qmd embed`" + ` after indexing (enables semantic search, slow).

**Workflow Variables Exported:**

After the step runs, these variables are available (where ` + "`<REPO>`" + ` is the uppercase repository name, e.g., ` + "`src`" + ` becomes ` + "`SRC`" + `):
- ` + "`$<REPO>_INDEX`" + `: Full Markdown content of the index
- ` + "`$<REPO>_INDEX_PATH`" + `: Absolute path to the saved index file
- ` + "`$<REPO>_INDEX_SHA`" + `: Hash of the index content
- ` + "`$<REPO>_INDEX_UPDATED`" + `: ` + "`true`" + ` if index was regenerated

**⚠️ CRITICAL: Referencing the Index in Subsequent Steps**

When a later step needs the codebase index content, you MUST use the exported variable — NOT the file path.

✅ **CORRECT** - Use the exported variable:
` + "```yaml" + `
index_codebase:
  step_type: codebase-index
  codebase_index:
    root: ./src

analyze_codebase:
  input: $SRC_INDEX          # ✅ Use the variable!
  model: claude-code
  action: "Analyze this codebase structure"
  output: STDOUT
` + "```" + `

❌ **WRONG** - Do NOT use the file path directly:
` + "```yaml" + `
index_codebase:
  step_type: codebase-index
  codebase_index:
    root: ./src
    output:
      path: .comanda/INDEX.md

analyze_codebase:
  input: .comanda/INDEX.md    # ❌ WRONG! Path may not resolve correctly
  model: claude-code
  action: "Analyze this codebase structure"
  output: STDOUT
` + "```" + `

**Why?** The ` + "`output.path`" + ` is relative to the repository root (when ` + "`store: repo`" + `), not the current working directory. The exported variable always contains the correct content regardless of where you run the workflow.

**Supported Languages:**
- **Go**: Uses AST parsing. Detection: ` + "`go.mod`" + `, ` + "`go.sum`" + `
- **Python**: Uses regex. Detection: ` + "`pyproject.toml`" + `, ` + "`requirements.txt`" + `, ` + "`setup.py`" + `
- **TypeScript/JavaScript**: Uses regex. Detection: ` + "`tsconfig.json`" + `, ` + "`package.json`" + `
- **Flutter/Dart**: Uses regex. Detection: ` + "`pubspec.yaml`" + `

**Example: Index and Analyze a Codebase**
` + "```yaml" + `
# Step 1: Generate codebase index
index_repo:
  step_type: codebase-index
  codebase_index:
    root: ./my-project
    expose:
      workflow_variable: true

# Step 2: Use the index for analysis
analyze_architecture:
  input: STDIN
  model: claude-code
  action: |
    Here is the codebase index:
    $MY_PROJECT_INDEX

    Analyze the architecture and suggest improvements.
  output: STDOUT
` + "```" + `

**Example: Minimal Usage**
` + "```yaml" + `
index_repo:
  step_type: codebase-index
  codebase_index:
    root: .
` + "```" + `

### Using the Index Registry

Indexes can be pre-captured using ` + "`comanda index capture`" + `, then loaded without regenerating:

**Loading from Registry:**
` + "```yaml" + `
load_index:
  codebase_index:
    use: myproject              # Load from registry
    max_age: 24h                # Warn if stale

load_multiple:
  codebase_index:
    use: [project1, project2]   # Load multiple indexes
    aggregate: true             # Create $AGGREGATED_INDEX
` + "```" + `

**Inline Index References (` + "`${INDEX:name}`" + `):**
` + "```yaml" + `
analyze:
  input: |
    Context: ${INDEX:myproject}
    Review the architecture.
  model: claude
  output: STDOUT
` + "```" + `


## 7. qmd Search Step Definition (` + "`qmd_search`" + `)

This step searches local knowledge bases using qmd, providing BM25, vector, or hybrid search capabilities.

**When to use qmd-search:**
- Retrieval-Augmented Generation (RAG) workflows
- Searching indexed codebases or documentation
- Finding relevant context before LLM processing

**Prerequisites:**
- Install qmd: ` + "`bun install -g @tobilu/qmd`" + `
- Create a collection: ` + "`qmd collection add ./docs --name docs`" + `

**Structure:**
` + "```yaml" + `
step_name:
  type: qmd-search
  qmd_search:
    query: "${QUESTION}"      # Search query (supports variable substitution)
    collection: docs          # Optional: restrict to specific collection
    mode: search              # search (BM25), vsearch (vector), query (hybrid)
    limit: 5                  # Number of results (default: 5)
    min_score: 0.3            # Minimum relevance score (0.0-1.0)
    format: text              # Output format: text (default), json, files
  output: CONTEXT             # Store results in variable
` + "```" + `

**` + "`qmd_search`" + ` Block Attributes:**
- ` + "`query`" + `: (string, required) Search query. Supports variable substitution (e.g., ` + "`${QUESTION}`" + `).
- ` + "`collection`" + `: (string, optional) Restrict search to a specific qmd collection.
- ` + "`mode`" + `: (string, default: ` + "`search`" + `) Search mode:
  - ` + "`search`" + `: BM25 keyword search (fastest, recommended default)
  - ` + "`vsearch`" + `: Vector/semantic search (slower, requires embeddings)
  - ` + "`query`" + `: Hybrid search with LLM reranking (slowest, best quality)
- ` + "`limit`" + `: (int, default: 5) Maximum number of results to return.
- ` + "`min_score`" + `: (float, optional) Minimum relevance score threshold (0.0-1.0).
- ` + "`format`" + `: (string, default: ` + "`text`" + `) Output format:
  - ` + "`text`" + `: Human-readable text output
  - ` + "`json`" + `: Structured JSON output
  - ` + "`files`" + `: List of matching file paths only
- ` + "`full`" + `: (bool, default: false) Return full document content instead of snippets.

**Example: RAG Workflow**
` + "```yaml" + `
# Search for relevant context
retrieve_context:
  type: qmd-search
  qmd_search:
    query: "${USER_QUESTION}"
    collection: docs
    mode: search
    limit: 5
  output: CONTEXT

# Generate answer with context
generate_answer:
  input: |
    Context:
    ${CONTEXT}
    
    Question: ${USER_QUESTION}
  model: claude-sonnet
  action: "Answer the question using only the provided context."
  output: STDOUT
` + "```" + `

**Example: Code Search with codebase-index**
` + "```yaml" + `
# Index codebase with qmd registration
index_code:
  type: codebase-index
  codebase_index:
    root: ./src
    qmd:
      collection: mycode
      context: "Application source code"

# Search the indexed code
find_relevant_code:
  type: qmd-search
  qmd_search:
    query: "authentication middleware"
    collection: mycode
    limit: 10
  output: RELEVANT_CODE

# Analyze with LLM
analyze_code:
  input: ${RELEVANT_CODE}
  model: claude-code
  action: "Analyze these code snippets and suggest improvements."
  output: STDOUT
` + "```" + `

## Common Elements (for Standard Steps)

### Input Types
- File path: ` + "`input: path/to/file.txt`" + `
- Previous step output: ` + "`input: STDIN`" + `
- Multiple file paths: ` + "`input: [file1.txt, file2.txt]`" + `
- Web scraping: ` + "`input: { url: \"https://example.com\" }`" + ` (additional scrape settings go in a ` + "`scrape_config`" + ` map if needed)
- Database query: ` + "`input: { database: { type: \"postgres\", query: \"SELECT * FROM users\" } }`" + `
- No input: ` + "`input: NA`" + `
- Input with alias for variable: ` + "`input: path/to/file.txt as $my_var`" + `
- List with aliases: ` + "`input: [file1.txt as $file1_content, file2.txt as $file2_content]`" + `

### Chunking
For processing large files, you can use the ` + "`chunk`" + ` configuration to split the input into manageable pieces:

**Basic Structure:**
` + "```yaml" + `
step_name:
  input: "large_file.txt"
  chunk:
    by: lines  # or "tokens"
    size: 1000  # number of lines or tokens per chunk
    overlap: 50  # optional: number of lines or tokens to overlap between chunks
    max_chunks: 10  # optional: limit the total number of chunks processed
  batch_mode: individual  # required for chunking to process each chunk separately
  model: gpt-4o-mini
  action: "Process this chunk of text: {{ current_chunk }}"
  output: "chunk_{{ chunk_index }}_result.txt"  # can use chunk_index in output path
` + "```" + `

**Key Elements:**
- ` + "`chunk`" + `: (Optional) Configuration block for chunking a large input file.
  - ` + "`by`" + `: (Required) Chunking method - either ` + "`lines`" + ` or ` + "`tokens`" + `.
  - ` + "`size`" + `: (Required) Number of lines or tokens per chunk.
  - ` + "`overlap`" + `: (Optional) Number of lines or tokens to include from the previous chunk, providing context continuity.
  - ` + "`max_chunks`" + `: (Optional) Maximum number of chunks to process, useful for testing or limiting processing.
- ` + "`batch_mode: individual`" + `: Required when using chunking to process each chunk as a separate LLM call.
- ` + "`{{ current_chunk }}`" + `: Template variable that gets replaced with the current chunk content in the action.
- ` + "`{{ chunk_index }}`" + `: Template variable for the current chunk number (0-based), useful in output paths.

**Consolidation Pattern:**
A common pattern is to process chunks individually and then consolidate the results:

` + "```yaml" + `
# Step 1: Process chunks
process_chunks:
  input: "large_document.txt"
  chunk:
    by: lines
    size: 1000
  batch_mode: individual
  model: gpt-4o-mini
  action: "Extract key points from: {{ current_chunk }}"
  output: "chunk_{{ chunk_index }}_summary.txt"

# Step 2: Consolidate results
consolidate_results:
  input: "chunk_*.txt"  # Use wildcard to collect all chunk outputs
  model: gpt-4o-mini
  action: "Combine these summaries into one coherent document."
  output: "final_summary.txt"
` + "```" + `

### Models
- Single model: ` + "`model: gpt-4o-mini`" + `
- No model (for non-LLM operations): ` + "`model: NA`" + `
- Multiple models (for comparison): ` + "`model: [gpt-4o-mini, claude-3-opus-20240229]`" + `

### Actions
- Single instruction: ` + "`action: \"Summarize this text.\"`" + `
- Multiple sequential instructions: ` + "`action: [\"Action 1\", \"Action 2\"]`" + `
- Reference variable: ` + "`action: \"Compare with $previous_data.\"`" + `
- Reference markdown file: ` + "`action: path/to/prompt.md`" + `

### Outputs
- Console: ` + "`output: STDOUT`" + `
- File: ` + "`output: results.txt`" + ` or ` + "`output: ./path/to/output.md`" + `
- Database: ` + "`output: { database: { type: \"postgres\", table: \"results_table\" } }`" + `
- Output with alias (if supported for variable creation from output): ` + "`output: STDOUT as $step_output_var`" + `

**⚠️ IMPORTANT: Writing to files**
When the result should be saved to a file, use the ` + "`output:`" + ` field directly. Do NOT instruct the LLM to "write to a file" in the action.

✅ **CORRECT** - Use ` + "`output:`" + ` for file writing:
` + "```yaml" + `
summarize_document:
  input: report.txt
  model: claude-code
  action: "Summarize this document"
  output: ./summary.md
` + "```" + `

❌ **WRONG** - Do NOT tell the LLM to write the file:
` + "```yaml" + `
summarize_document:
  input: report.txt
  model: claude-code
  action: "Summarize this document and write it to ./summary.md"
  output: STDOUT
` + "```" + `

## Variables
- Definition: ` + "`input: data.txt as $initial_data`" + `
- Reference: ` + "`action: \"Compare this analysis with $initial_data\"`" + `
- Scope: Variables are typically scoped to the workflow. For ` + "`process`" + ` steps, parent variables are not directly accessible by default; use the ` + "`process.inputs`" + ` map to pass data.
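**Example: Passing Data to a Sub-Workflow**

A minimal sketch of the pattern above (the sub-workflow filename and input key are illustrative):

` + "```yaml" + `
analyze:
  input: data.txt as $initial_data
  model: gpt-4o-mini
  action: "Summarize the key points."
  output: analysis.txt

run_sub:
  input: NA
  process:
    workflow_file: sub_workflow.yaml
    inputs:
      source_file: analysis.txt  # $initial_data is NOT visible inside the sub-workflow
  output: STDOUT
` + "```" + `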

## Validation Rules Summary (for LLM)

1.  A step definition must clearly be one of: Standard, Generate, or Process.
    *   A step cannot mix top-level keys from different types (e.g., a ` + "`generate`" + ` step should not have a top-level ` + "`model`" + ` or ` + "`output`" + ` key; these belong inside the ` + "`generate`" + ` block).
2.  **Standard Step:**
    *   Must contain ` + "`input`" + `, ` + "`model`" + `, ` + "`action`" + `, ` + "`output`" + ` (unless ` + "`type: openai-responses`" + `, where ` + "`action`" + ` might be replaced by ` + "`instructions`" + `).
    *   ` + "`input`" + ` can be ` + "`NA`" + `. ` + "`model`" + ` can be ` + "`NA`" + `.
3.  **Generate Step:**
    *   Must contain a ` + "`generate`" + ` block.
    *   ` + "`generate`" + ` block must contain ` + "`action`" + ` (string prompt) and ` + "`output`" + ` (string filename).
    *   ` + "`generate.model`" + ` is optional (uses default if omitted).
    *   Top-level ` + "`input`" + ` for the step is optional (can be ` + "`NA`" + ` or provide context).
4.  **Process Step:**
    *   Must contain a ` + "`process`" + ` block.
    *   ` + "`process`" + ` block must contain ` + "`workflow_file`" + ` (string path).
    *   ` + "`process.inputs`" + ` is optional.
    *   Top-level ` + "`input`" + ` for the step is optional (can be ` + "`NA`" + ` or ` + "`STDIN`" + ` to pipe to sub-workflow).
5.  **Agentic Loop Step (Inline):**
    *   Must contain an ` + "`agentic_loop`" + ` block with loop configuration.
    *   Must also contain ` + "`input`" + `, ` + "`model`" + `, ` + "`action`" + `, ` + "`output`" + ` at the step level.
    *   ` + "`agentic_loop.max_iterations`" + ` defaults to 10 if not specified.
    *   ` + "`agentic_loop.exit_condition`" + ` can be ` + "`llm_decides`" + ` or ` + "`pattern_match`" + `.
6.  **Agentic Loop Block (Top-level):**
    *   Uses ` + "`agentic-loop:`" + ` as a top-level key (like ` + "`parallel-process:`" + `).
    *   Must contain ` + "`config`" + ` block with loop settings.
    *   Must contain ` + "`steps`" + ` block with one or more sub-steps.
    *   Each sub-step follows standard step structure (` + "`input`" + `, ` + "`model`" + `, ` + "`action`" + `, ` + "`output`" + `).
7.  **Codebase Index Step:**
    *   Must have ` + "`step_type: codebase-index`" + ` OR contain a ` + "`codebase_index`" + ` block.
    *   ` + "`codebase_index.root`" + ` defaults to ` + "`.`" + ` (current directory).
    *   Exports workflow variables: ` + "`<REPO>_INDEX`" + `, ` + "`<REPO>_INDEX_PATH`" + `, ` + "`<REPO>_INDEX_SHA`" + `, ` + "`<REPO>_INDEX_UPDATED`" + `.
    *   Does not require ` + "`input`" + `, ` + "`model`" + `, ` + "`action`" + `, or ` + "`output`" + ` fields.

8.  **qmd Search Step:**
    *   Must have ` + "`type: qmd-search`" + ` OR contain a ` + "`qmd_search`" + ` block.
    *   ` + "`qmd_search.query`" + ` is required.
    *   ` + "`qmd_search.mode`" + ` defaults to ` + "`search`" + ` (BM25).
    *   Does not require ` + "`input`" + `, ` + "`model`" + `, or ` + "`action`" + ` fields.

## Chaining and Examples

Steps can be chained together either by passing STDOUT from one step to STDIN of the next, or by writing output to a file that subsequent steps take as input.

**Meta-Processing Example:**
` + "```yaml" + `
gather_requirements:
  input: requirements_document.txt
  model: claude-3-opus-20240229
  action: "Based on the input document, define the core tasks for a data processing workflow. Output as a concise list."
  output: STDOUT

generate_data_workflow:
  input: STDIN # Using output from previous step as context
  generate:
    model: gpt-4o-mini # LLM to generate the workflow
    action: "Generate a Comanda workflow YAML to perform the tasks described in the input. The workflow should read 'raw_data.csv', perform transformations, and save to 'processed_data.csv'."
    output: dynamic_data_processor.yaml # Filename for the generated workflow

execute_data_workflow:
  input: NA # Or potentially STDIN if dynamic_data_processor.yaml's first step expects it
  process:
    workflow_file: dynamic_data_processor.yaml # Execute the generated workflow
    # inputs: { source_file: "override_data.csv" } # Optional: override inputs for the sub-workflow
  output: STDOUT # Log output of the process step itself (e.g., success/failure)
` + "```" + `

### Advanced Chaining: Enabling Independent Analysis with Files

The standard ` + "`STDIN`" + `/` + "`STDOUT`" + ` chain is designed for sequential processing, where each step receives the output of the one immediately before it. However, many workflows require a downstream step to **independently analyze outputs from multiple, potentially non-sequential, upstream steps.**

To enable this, you must use files to store intermediate results. This pattern ensures that each output is preserved and can be accessed directly by any subsequent step, rather than being lost in a pipeline.

**The recommended pattern is:**
1.  Each upstream step saves its result to a distinct file (e.g., ` + "`step1_output.txt`" + `, ` + "`step2_output.txt`" + `).
2.  The downstream step that needs to perform the independent analysis lists these files as its ` + "`input`" + `.

**Example: A 3-Step Workflow with a Final Review**

In this scenario, the third step needs to review the outputs of both the first and second steps independently.

` + "```yaml" + `
# Step 1: Initial analysis
analyze_introductions:
  input: introductions.md
  model: gpt-4o-mini
  action: "Perform a detailed analysis of the introductions document. Focus on key themes, writing style, and effectiveness."
  output: step1_analysis.txt

# Step 2: Quality assessment of the original document
quality_assessment:
  input: introductions.md
  model: gpt-4o-mini
  action: "Perform a quality assessment on the original document. Identify strengths and potential gaps."
  output: step2_qa.txt

# Step 3: Final summary based on both outputs
final_summary:
  input: [step1_analysis.txt, step2_qa.txt]
  model: gpt-4o-mini
  action: "Review the results from the analysis (step1_analysis.txt) and the QA (step2_qa.txt). Provide a comprehensive summary that synthesizes the findings from both."
  output: final_summary.md
` + "```" + `

This file-based approach is the correct way to handle any workflow where a step's logic depends on having discrete access to multiple prior outputs.

## CRITICAL: Workflow Simplicity Guidelines

**ALWAYS prefer the simplest possible workflow.** Over-engineered workflows are harder to debug, maintain, and understand.

**Key principles:**
1. **Minimize steps**: If a task can be done in 1 step, don't use 3. Most tasks need 1-2 steps.
2. **Avoid unnecessary chaining**: Don't chain steps unless the output of one is genuinely needed by the next.
3. **Use direct file I/O**: If you need to read a file and process it, that's ONE step, not three.
4. **Prefer STDIN/STDOUT**: Use simple STDIN/STDOUT chaining over complex file intermediates when sequential processing suffices.
5. **One model per workflow when possible**: Don't use multiple models unless comparing outputs or the task genuinely requires different capabilities.

**Examples of OVER-ENGINEERED workflows (AVOID):**
` + "```yaml" + `
# BAD: Too many steps for a simple task
read_file:
  input: document.txt
  model: NA
  action: NA
  output: temp_content.txt

analyze_content:
  input: temp_content.txt
  model: gpt-4o-mini
  action: "Analyze this"
  output: temp_analysis.txt

format_output:
  input: temp_analysis.txt
  model: gpt-4o-mini
  action: "Format nicely"
  output: STDOUT
` + "```" + `

**GOOD: Simple and direct:**
` + "```yaml" + `
# GOOD: One step does the job
analyze_document:
  input: document.txt
  model: gpt-4o-mini
  action: "Analyze this document and format the output nicely"
  output: STDOUT
` + "```" + `

**When multiple steps ARE appropriate:**
- Processing different source files independently, then combining results
- Using tool commands to pre-process data before LLM analysis
- Generating a workflow dynamically, then executing it
- Tasks that genuinely require different models for different capabilities
- **Agentic loops** for iterative refinement, planning, or autonomous decision-making

**When to use Agentic Loops:**
- Code improvement cycles (analyze → fix → verify)
- Planning and execution workflows
- Tasks where quality depends on iteration
- When the LLM should decide when work is complete

This guide covers the core concepts and syntax of Comanda's YAML DSL, including meta-processing capabilities. LLMs should use this structure to generate valid workflow files.`

EmbeddedLLMGuide contains the Comanda YAML DSL Guide for LLM consumption. It is embedded directly in the binary to avoid file path issues. For backward compatibility, we keep this constant.

View Source
const PromptPrefix = "" /* 234-byte string literal not displayed */

PromptPrefix is used to instruct providers to output only the requested content without metadata.

Variables

View Source
var DefaultAllowlist = []string{

	"ls",
	"cat",
	"head",
	"tail",
	"wc",
	"sort",
	"uniq",
	"grep",
	"awk",
	"sed",
	"cut",
	"tr",
	"diff",
	"comm",
	"join",
	"paste",
	"column",
	"fold",
	"fmt",
	"nl",
	"pr",
	"tee",
	"xargs",

	"jq",
	"yq",

	"date",
	"cal",

	"echo",
	"printf",
	"tac",
	"rev",

	"file",
	"stat",
	"du",
	"df",
	"which",
	"whereis",
	"type",
	"basename",
	"dirname",
	"realpath",
	"readlink",

	"env",
	"printenv",
	"pwd",
	"id",
	"whoami",
	"hostname",
	"uname",

	"bd",

	"base64",
	"md5sum",
	"sha256sum",
	"sha1sum",
	"xxd",
	"od",
	"hexdump",

	"find",
	"locate",
	"updatedb",

	"ps",
	"top",
	"htop",
	"pgrep",

	"ping",
	"host",
	"dig",
	"nslookup",
	"ifconfig",
	"ip",
	"netstat",
	"ss",
}

DefaultAllowlist contains safe commands commonly used in workflows

View Source
var DefaultDenylist = []string{

	"rm",
	"rmdir",
	"mv",
	"dd",
	"shred",
	"mkfs",
	"fdisk",
	"parted",

	"sudo",
	"su",
	"doas",
	"pkexec",

	"chmod",
	"chown",
	"chgrp",
	"chattr",
	"setfacl",

	"nc",
	"netcat",
	"ncat",
	"nmap",
	"masscan",
	"hping3",

	"kill",
	"killall",
	"pkill",

	"bash",
	"sh",
	"zsh",
	"fish",
	"csh",
	"tcsh",
	"ksh",
	"dash",

	"apt",
	"apt-get",
	"yum",
	"dnf",
	"pacman",
	"brew",
	"pip",
	"npm",
	"yarn",
	"gem",
	"cargo",
	"go",

	"wget",
	"curl",
	"ssh",
	"scp",
	"sftp",
	"rsync",
	"ftp",
	"telnet",
	"rsh",

	"passwd",
	"shadow",

	"tar",
	"zip",
	"unzip",
	"gzip",
	"gunzip",

	"mount",
	"umount",
	"losetup",

	"crontab",
	"at",

	"systemctl",
	"service",
	"init",
	"reboot",
	"shutdown",
	"halt",
	"poweroff",
}

DefaultDenylist contains dangerous commands that should never be executed. These commands can cause system damage, security issues, or data loss.

Functions

func ComputeWorkflowChecksum added in v0.0.102

func ComputeWorkflowChecksum(workflowPath string) (string, error)

ComputeWorkflowChecksum computes a SHA256 checksum of a workflow file

func GenerateFileManifest added in v0.0.111

func GenerateFileManifest(paths []string) (*filescan.ScanResult, error)

GenerateFileManifest scans paths and generates a token-aware manifest. This is a convenience wrapper around filescan.ScanPaths.

func GetEmbeddedLLMGuide added in v0.0.58

func GetEmbeddedLLMGuide() string

GetEmbeddedLLMGuide returns the Comanda YAML DSL Guide for LLM consumption with the current supported models injected from the registry

func GetEmbeddedLLMGuideWithModels added in v0.0.82

func GetEmbeddedLLMGuideWithModels(availableModels []string) string

GetEmbeddedLLMGuideWithModels returns the Comanda YAML DSL Guide for LLM consumption with a specific list of available models injected. Use this when you have a known list of configured/available models (e.g., from envConfig).

func IsToolInput added in v0.0.78

func IsToolInput(input string) bool

IsToolInput checks if an input string is a tool input specification

func IsToolOutput added in v0.0.78

func IsToolOutput(output string) bool

IsToolOutput checks if an output string is a tool output specification

func ParseToolInput added in v0.0.78

func ParseToolInput(input string) (command string, usesStdin bool, err error)

ParseToolInput parses a tool input specification and returns the command and any STDIN handling. Supported formats:

- "tool: ls -la" - simple command
- "tool: STDIN|grep -i 'pattern'" - pipe STDIN to the command
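As an illustration of those formats, a self-contained sketch of the parsing (the package's actual ParseToolInput may differ in details):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// parseToolInput is an illustrative sketch of tool-input parsing,
// not the package's real implementation.
func parseToolInput(input string) (command string, usesStdin bool, err error) {
	rest := strings.TrimSpace(strings.TrimPrefix(input, "tool:"))
	if rest == "" {
		return "", false, errors.New("empty tool command")
	}
	// "STDIN|cmd" means: pipe the previous step's output into cmd.
	if strings.HasPrefix(rest, "STDIN|") {
		return strings.TrimSpace(strings.TrimPrefix(rest, "STDIN|")), true, nil
	}
	return rest, false, nil
}

func main() {
	cmd, stdin, _ := parseToolInput("tool: STDIN|grep -i 'pattern'")
	fmt.Println(cmd, stdin) // grep -i 'pattern' true
}
```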

func ParseToolOutput added in v0.0.78

func ParseToolOutput(output string) (command string, pipesStdout bool, err error)

ParseToolOutput parses a tool output specification. Supported formats:

- "tool: jq '.data'" - pipe output through the command
- "STDOUT|grep 'pattern'" - pipe STDOUT through the command

func SecurityWarning added in v0.0.78

func SecurityWarning() string

SecurityWarning returns a warning message about tool use

func ValidateWorkflowChecksum added in v0.0.102

func ValidateWorkflowChecksum(workflowPath string, expectedChecksum string) error

ValidateWorkflowChecksum verifies that a workflow file hasn't changed

func ValidateWorkflowModels added in v0.0.82

func ValidateWorkflowModels(yamlContent string, availableModels []string) []string

ValidateWorkflowModels parses a workflow YAML and validates that all model references are in the list of available models. Returns a list of invalid model names found, or nil if all models are valid.
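A rough standalone sketch of the validation idea. The real ValidateWorkflowModels parses the YAML structurally and handles model lists; this regex version is for illustration only:

```go
package main

import (
	"fmt"
	"regexp"
)

// invalidModels scans single-value `model:` lines in workflow YAML and
// reports any model name not in the available list. Illustrative only.
func invalidModels(yamlContent string, available []string) []string {
	ok := map[string]bool{"NA": true} // model: NA is always valid
	for _, m := range available {
		ok[m] = true
	}
	re := regexp.MustCompile(`(?m)^\s*model:\s*(\S+)\s*$`)
	var bad []string
	for _, match := range re.FindAllStringSubmatch(yamlContent, -1) {
		if !ok[match[1]] {
			bad = append(bad, match[1])
		}
	}
	return bad
}

func main() {
	wf := "summarize:\n  model: gpt-4o-mini\n"
	fmt.Println(invalidModels(wf, []string{"claude-code"})) // [gpt-4o-mini]
}
```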

Types

type ActionResult added in v0.0.75

type ActionResult struct {
	// Single combined result (used when not chunking or when batch_mode is "combined")
	CombinedResult string

	// Individual results (used when chunking with batch_mode "individual")
	IndividualResults []string

	// Corresponding input paths for each individual result (for chunk identification)
	InputPaths []string

	// Whether this contains individual results
	HasIndividualResults bool
}

ActionResult holds the results from processing actions. It can contain either a single combined result or multiple individual results (for chunking).

type AdapterOverride added in v0.0.95

type AdapterOverride struct {
	IgnoreDirs      []string `yaml:"ignore_dirs,omitempty"`
	IgnoreGlobs     []string `yaml:"ignore_globs,omitempty"`
	PriorityFiles   []string `yaml:"priority_files,omitempty"`
	ReplaceDefaults bool     `yaml:"replace_defaults,omitempty"`
}

AdapterOverride allows customization of adapter behavior

type AgenticLoopConfig added in v0.0.94

type AgenticLoopConfig struct {
	MaxIterations  int      `yaml:"max_iterations"`          // Maximum iterations before stopping (default: 10)
	TimeoutSeconds int      `yaml:"timeout_seconds"`         // Total timeout in seconds (default: 0 = no timeout)
	ExitCondition  string   `yaml:"exit_condition"`          // Exit condition: llm_decides, pattern_match
	ExitPattern    string   `yaml:"exit_pattern"`            // Regex pattern for pattern_match exit condition
	ContextWindow  int      `yaml:"context_window"`          // Number of past iterations to include in context (default: 5)
	Steps          []Step   `yaml:"steps,omitempty"`         // Sub-steps to execute within each iteration
	AllowedPaths   []string `yaml:"allowed_paths,omitempty"` // Directories for agentic tool access
	Tools          []string `yaml:"tools,omitempty"`         // Optional tool whitelist (Read, Write, Edit, Bash, etc.)

	// State persistence & quality gates
	Name               string              `yaml:"name,omitempty"`                // Loop name (required for stateful loops)
	Stateful           bool                `yaml:"stateful,omitempty"`            // Enable state persistence
	CheckpointInterval int                 `yaml:"checkpoint_interval,omitempty"` // Save state every N iterations (default: 5)
	QualityGates       []QualityGateConfig `yaml:"quality_gates,omitempty"`       // Quality gates to run after each iteration

	// Multi-loop orchestration
	DependsOn   []string `yaml:"depends_on,omitempty"`   // Wait for these loops to complete
	InputState  string   `yaml:"input_state,omitempty"`  // Variable to read from dependent loop
	OutputState string   `yaml:"output_state,omitempty"` // Variable to export for dependent loops
}

AgenticLoopConfig represents the configuration for an agentic loop

func (*AgenticLoopConfig) UnmarshalYAML added in v0.0.116

func (c *AgenticLoopConfig) UnmarshalYAML(node *yaml.Node) error

UnmarshalYAML implements custom unmarshaling for AgenticLoopConfig to support both map and list syntax for steps

type ChunkConfig added in v0.0.60

type ChunkConfig struct {
	By        string `yaml:"by"`         // How to split the file: "lines", "bytes", or "tokens"
	Size      int    `yaml:"size"`       // Chunk size (e.g., 10000 lines)
	Overlap   int    `yaml:"overlap"`    // Lines/bytes to overlap between chunks for context
	MaxChunks int    `yaml:"max_chunks"` // Limit total chunks to prevent overload
}

ChunkConfig represents the configuration for chunking a large file

type CodebaseIndexConfig added in v0.0.95

type CodebaseIndexConfig struct {
	Root        string                      `yaml:"root"`                    // Repository path (defaults to current directory)
	Output      *CodebaseIndexOutputConfig  `yaml:"output,omitempty"`        // Output configuration
	Expose      *CodebaseIndexExposeConfig  `yaml:"expose,omitempty"`        // Variable/memory exposure configuration
	Adapters    map[string]*AdapterOverride `yaml:"adapters,omitempty"`      // Per-adapter overrides
	MaxOutputKB int                         `yaml:"max_output_kb,omitempty"` // Maximum output size in KB
	Qmd         *QmdIntegrationConfig       `yaml:"qmd,omitempty"`           // qmd integration configuration

	// Registry integration
	Use       interface{} `yaml:"use,omitempty"`       // Load from registry: string or []string
	MaxAge    string      `yaml:"max_age,omitempty"`   // Warn if index is older than this duration (e.g., "24h")
	Aggregate bool        `yaml:"aggregate,omitempty"` // Combine multiple indexes into single context
}

CodebaseIndexConfig represents the configuration for codebase-index step

type CodebaseIndexExposeConfig added in v0.0.95

type CodebaseIndexExposeConfig struct {
	WorkflowVariable bool                       `yaml:"workflow_variable,omitempty"` // Expose as workflow variable
	Memory           *CodebaseIndexMemoryConfig `yaml:"memory,omitempty"`            // Memory integration
}

CodebaseIndexExposeConfig configures how index is exposed

type CodebaseIndexMemoryConfig added in v0.0.95

type CodebaseIndexMemoryConfig struct {
	Enabled bool   `yaml:"enabled,omitempty"` // Enable memory integration
	Key     string `yaml:"key,omitempty"`     // Memory key name
}

CodebaseIndexMemoryConfig configures memory integration

type CodebaseIndexOutputConfig added in v0.0.95

type CodebaseIndexOutputConfig struct {
	Path    string `yaml:"path,omitempty"`    // Custom output path
	Format  string `yaml:"format,omitempty"`  // Output format: summary, structured, full (default: structured)
	Store   string `yaml:"store,omitempty"`   // Where to store: repo, config, both
	Encrypt bool   `yaml:"encrypt,omitempty"` // Whether to encrypt the output
}

CodebaseIndexOutputConfig configures index output

type CommandGate added in v0.0.102

type CommandGate struct {
	// contains filtered or unexported fields
}

CommandGate executes a shell command and checks the exit code

func NewCommandGate added in v0.0.102

func NewCommandGate(name, command string, timeout int) *CommandGate

NewCommandGate creates a new command-based quality gate

func (*CommandGate) Check added in v0.0.102

func (g *CommandGate) Check(ctx context.Context, workDir string) (*QualityGateResult, error)

func (*CommandGate) Name added in v0.0.102

func (g *CommandGate) Name() string

type DSLConfig

type DSLConfig struct {
	Steps         []Step
	ParallelSteps map[string][]Step             // Steps that can be executed in parallel
	Defer         map[string]StepConfig         `yaml:"defer,omitempty"`
	AgenticLoops  map[string]*AgenticLoopConfig // Block-style agentic loops (legacy)

	// Multi-loop orchestration
	Loops        map[string]*AgenticLoopConfig `yaml:"loops,omitempty"`         // Named loops for orchestration
	ExecuteLoops []string                      `yaml:"execute_loops,omitempty"` // Simple execution order
	Workflow     map[string]*WorkflowNode      `yaml:"workflow,omitempty"`      // Complex workflow definition

	// Worktree support for parallel Claude Code execution
	Worktrees *WorktreeConfig `yaml:"worktrees,omitempty"` // Git worktree configuration
}

DSLConfig represents the structure of the DSL configuration
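
The multi-loop fields above map directly onto workflow YAML. A minimal illustrative fragment (loop names, step names, and the model are placeholders, not part of the package):

```yaml
loops:
  builder:
    max_iterations: 5
    steps:
      implement:
        model: gpt-4o
        action: "Implement the next item from the plan"
        output: STDOUT

  reviewer:
    max_iterations: 3
    steps:
      review:
        model: gpt-4o
        action: "Review the builder loop's output"
        output: STDOUT

execute_loops:
  - builder
  - reviewer
```

Per the DSL rules, once `execute_loops` is present only the named loops run; any top-level steps are ignored.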

func (*DSLConfig) UnmarshalYAML added in v0.0.55

func (c *DSLConfig) UnmarshalYAML(node *yaml.Node) error

UnmarshalYAML is a custom unmarshaler for DSLConfig to handle mixed types at the root level

type DebugWatcher added in v0.0.110

type DebugWatcher struct {
	// contains filtered or unexported fields
}

DebugWatcher monitors a claude-code debug file for context usage and other metrics

func NewDebugWatcher added in v0.0.110

func NewDebugWatcher(debugFilePath string, streamLog *StreamLogger) *DebugWatcher

NewDebugWatcher creates a watcher for the specified debug file

func (*DebugWatcher) Start added in v0.0.110

func (w *DebugWatcher) Start()

Start begins watching the debug file

func (*DebugWatcher) Stop added in v0.0.110

func (w *DebugWatcher) Stop()

Stop stops the watcher

type DependencyGraph added in v0.0.102

type DependencyGraph struct {
	// contains filtered or unexported fields
}

DependencyGraph represents a directed acyclic graph of loop dependencies

func (*DependencyGraph) TopologicalSort added in v0.0.102

func (g *DependencyGraph) TopologicalSort() ([]string, error)

TopologicalSort performs a topological sort using Kahn's algorithm. It returns the execution order, or an error if a cycle is detected.
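
The graph internals are unexported, but the algorithm itself is standard. A self-contained sketch of Kahn's algorithm over a name-to-dependencies map (illustrative, not the package's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// kahnSort returns an order in which every loop runs after its dependencies.
// deps maps a loop name to the names of the loops it depends on.
func kahnSort(deps map[string][]string) ([]string, error) {
	indegree := make(map[string]int)
	dependents := make(map[string][]string)
	for name, ds := range deps {
		if _, ok := indegree[name]; !ok {
			indegree[name] = 0
		}
		for _, d := range ds {
			indegree[name]++
			dependents[d] = append(dependents[d], name)
			if _, ok := indegree[d]; !ok {
				indegree[d] = 0
			}
		}
	}
	var queue, order []string
	for name, n := range indegree {
		if n == 0 {
			queue = append(queue, name) // no unmet dependencies
		}
	}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		order = append(order, n)
		for _, m := range dependents[n] {
			indegree[m]--
			if indegree[m] == 0 {
				queue = append(queue, m)
			}
		}
	}
	if len(order) != len(indegree) {
		return nil, errors.New("cycle detected in loop dependencies")
	}
	return order, nil
}

func main() {
	order, err := kahnSort(map[string][]string{
		"analyzer": {"indexer"},
		"indexer":  {},
	})
	fmt.Println(order, err)
}
```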

type GenerateStepConfig added in v0.0.35

type GenerateStepConfig struct {
	Model        interface{} `yaml:"model"`
	Action       interface{} `yaml:"action"`
	Output       string      `yaml:"output"`
	ContextFiles []string    `yaml:"context_files"`
}

GenerateStepConfig defines the configuration for a generate step

type GraphNode added in v0.0.102

type GraphNode struct {
	// contains filtered or unexported fields
}

GraphNode represents a node in the dependency graph

type LoopContext added in v0.0.94

type LoopContext struct {
	Iteration      int             // Current iteration number (1-based)
	PreviousOutput string          // Output from previous iteration
	History        []LoopIteration // History of all iterations
	StartTime      time.Time       // When the loop started
}

LoopContext holds runtime state for an agentic loop

type LoopIteration added in v0.0.94

type LoopIteration struct {
	Index     int       // Iteration index
	Output    string    // Output from this iteration
	Timestamp time.Time // When this iteration completed
}

LoopIteration represents a single iteration's state

type LoopOrchestrator added in v0.0.102

type LoopOrchestrator struct {
	// contains filtered or unexported fields
}

LoopOrchestrator manages execution of multiple interdependent loops

func NewLoopOrchestrator added in v0.0.102

func NewLoopOrchestrator(processor *Processor, loops map[string]*AgenticLoopConfig, workflowFile string) *LoopOrchestrator

NewLoopOrchestrator creates a new loop orchestrator

func (*LoopOrchestrator) Execute added in v0.0.102

func (o *LoopOrchestrator) Execute() error

Execute runs all loops in dependency order

func (*LoopOrchestrator) ExecuteWithOrder added in v0.0.132

func (o *LoopOrchestrator) ExecuteWithOrder(executionOrder []string) error

ExecuteWithOrder runs loops in the specified order with progress display

func (*LoopOrchestrator) GetAllOutputs added in v0.0.102

func (o *LoopOrchestrator) GetAllOutputs() map[string]*LoopOutput

GetAllOutputs returns outputs of all completed loops

func (*LoopOrchestrator) GetLoopOutput added in v0.0.102

func (o *LoopOrchestrator) GetLoopOutput(loopName string) (*LoopOutput, error)

GetLoopOutput returns the output of a completed loop

func (*LoopOrchestrator) SetProgressDisplay added in v0.0.132

func (o *LoopOrchestrator) SetProgressDisplay(pd *ProgressDisplay)

SetProgressDisplay sets the progress display for visual output

type LoopOutput added in v0.0.102

type LoopOutput struct {
	LoopName     string              `json:"loop_name"`
	Status       string              `json:"status"`        // completed, failed
	Result       string              `json:"result"`        // Final output
	Variables    map[string]string   `json:"variables"`     // Variables to pass to dependent loops
	QualityGates []QualityGateResult `json:"quality_gates"` // Quality gate results
	StartTime    time.Time           `json:"start_time"`
	EndTime      time.Time           `json:"end_time"`
}

LoopOutput represents the output of a completed loop

type LoopState added in v0.0.102

type LoopState struct {
	LoopName           string              `json:"loop_name"`
	Iteration          int                 `json:"iteration"`
	MaxIterations      int                 `json:"max_iterations"`
	StartTime          time.Time           `json:"start_time"`
	LastUpdateTime     time.Time           `json:"last_update_time"`
	PreviousOutput     string              `json:"previous_output"`
	History            []LoopIteration     `json:"history"`
	Variables          map[string]string   `json:"variables"`
	Status             string              `json:"status"` // running, paused, completed, failed
	ExitCondition      string              `json:"exit_condition"`
	ExitPattern        string              `json:"exit_pattern,omitempty"`
	WorkflowFile       string              `json:"workflow_file"`
	WorkflowChecksum   string              `json:"workflow_checksum"` // Detect workflow changes
	QualityGateResults []QualityGateResult `json:"quality_gate_results,omitempty"`
}

LoopState represents the persistent state of an agentic loop

type LoopStateManager added in v0.0.102

type LoopStateManager struct {
	// contains filtered or unexported fields
}

LoopStateManager handles persistence of loop states

func NewLoopStateManager added in v0.0.102

func NewLoopStateManager(stateDir string) *LoopStateManager

NewLoopStateManager creates a new state manager

func (*LoopStateManager) DeleteState added in v0.0.102

func (m *LoopStateManager) DeleteState(loopName string) error

DeleteState removes a loop's state file and backups

func (*LoopStateManager) ListStates added in v0.0.102

func (m *LoopStateManager) ListStates() ([]*LoopState, error)

ListStates returns all saved loop states

func (*LoopStateManager) LoadState added in v0.0.102

func (m *LoopStateManager) LoadState(loopName string) (*LoopState, error)

LoadState loads a loop state from disk

func (*LoopStateManager) SaveState added in v0.0.102

func (m *LoopStateManager) SaveState(state *LoopState) error

SaveState persists a loop state to disk with backup rotation

type MemoryManager added in v0.0.73

type MemoryManager struct {
	// contains filtered or unexported fields
}

MemoryManager handles reading from and writing to the COMANDA.md memory file

func NewMemoryManager added in v0.0.73

func NewMemoryManager(filePath string) (*MemoryManager, error)

NewMemoryManager creates a new memory manager

func (*MemoryManager) AppendMemory added in v0.0.73

func (m *MemoryManager) AppendMemory(content string) error

AppendMemory appends content to the memory file

func (*MemoryManager) GetFilePath added in v0.0.73

func (m *MemoryManager) GetFilePath() string

GetFilePath returns the path to the memory file

func (*MemoryManager) GetMemory added in v0.0.73

func (m *MemoryManager) GetMemory() string

GetMemory returns the full memory content

func (*MemoryManager) GetMemorySection added in v0.0.73

func (m *MemoryManager) GetMemorySection(sectionName string) string

GetMemorySection returns content from a specific markdown section. The section name "section_name" extracts the content under the "## section_name" heading.
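
The heading convention can be sketched with a standalone helper (illustrative only, not the package's parser):

```go
package main

import (
	"fmt"
	"strings"
)

// sectionContent returns the text under "## name" up to the next "## " heading.
func sectionContent(memory, name string) string {
	var body []string
	in := false
	for _, line := range strings.Split(memory, "\n") {
		if strings.HasPrefix(line, "## ") {
			// Entering a new section: match it against the requested name.
			in = strings.TrimSpace(strings.TrimPrefix(line, "## ")) == name
			continue
		}
		if in {
			body = append(body, line)
		}
	}
	return strings.TrimSpace(strings.Join(body, "\n"))
}

func main() {
	memory := "## decisions\nUse SQLite.\n\n## todo\nAdd tests.\n"
	fmt.Println(sectionContent(memory, "decisions")) // Use SQLite.
}
```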

func (*MemoryManager) HasMemory added in v0.0.73

func (m *MemoryManager) HasMemory() bool

HasMemory returns true if a memory file is configured

func (*MemoryManager) Load added in v0.0.73

func (m *MemoryManager) Load() error

Load reads the memory file content

func (*MemoryManager) WriteMemorySection added in v0.0.73

func (m *MemoryManager) WriteMemorySection(sectionName, content string) error

WriteMemorySection writes or updates a specific section in the memory file

type NormalizeOptions

type NormalizeOptions struct {
	AllowEmpty bool // Whether to allow empty strings in the result
}

NormalizeOptions represents options for string slice normalization

type OllamaModelTag added in v0.0.25

type OllamaModelTag struct {
	Name string `json:"name"`
}

OllamaModelTag represents the details of a single model tag from /api/tags

type OllamaTagsResponse added in v0.0.25

type OllamaTagsResponse struct {
	Models []OllamaModelTag `json:"models"`
}

OllamaTagsResponse represents the top-level structure of Ollama's /api/tags response

type PerformanceMetrics added in v0.0.20

type PerformanceMetrics struct {
	InputProcessingTime  int64 // Time in milliseconds to process inputs
	ModelProcessingTime  int64 // Time in milliseconds for model processing
	ActionProcessingTime int64 // Time in milliseconds for action processing
	OutputProcessingTime int64 // Time in milliseconds for output processing
	TotalProcessingTime  int64 // Total time in milliseconds for the step
}

PerformanceMetrics tracks timing information for processing steps

type ProcessStepConfig added in v0.0.35

type ProcessStepConfig struct {
	WorkflowFile   string                 `yaml:"workflow_file"`
	Inputs         map[string]interface{} `yaml:"inputs"`
	CaptureOutputs []string               `yaml:"capture_outputs"`
}

ProcessStepConfig defines the configuration for a process step

type Processor

type Processor struct {
	// contains filtered or unexported fields
}

Processor handles the DSL processing pipeline

func NewProcessor

func NewProcessor(dslConfig *DSLConfig, envConfig *config.EnvConfig, serverConfig *config.ServerConfig, verbose bool, runtimeDir string, cliVariables ...map[string]string) *Processor

NewProcessor creates a new DSL processor

func (*Processor) CloseStreamLog added in v0.0.106

func (p *Processor) CloseStreamLog()

CloseStreamLog closes the stream log file

func (*Processor) GetMemoryFilePath added in v0.0.73

func (p *Processor) GetMemoryFilePath() string

GetMemoryFilePath returns the path to the memory file, or empty string if not configured

func (*Processor) GetModelProvider

func (p *Processor) GetModelProvider(modelName string) models.Provider

GetModelProvider returns the provider for the specified model

func (*Processor) GetProcessedInputs

func (p *Processor) GetProcessedInputs() []*input.Input

GetProcessedInputs returns all processed input contents

func (*Processor) GetStreamLogPath added in v0.0.108

func (p *Processor) GetStreamLogPath() string

GetStreamLogPath returns the path to the stream log file

func (*Processor) LastOutput

func (p *Processor) LastOutput() string

LastOutput returns the last output value

func (*Processor) NormalizeStringSlice

func (p *Processor) NormalizeStringSlice(val interface{}) []string

NormalizeStringSlice converts interface{} to []string
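
Fields such as Model and Action are declared interface{} because workflow YAML allows either a scalar or a list. A standalone sketch of that normalization (illustrative; the package's exact behavior, including NormalizeOptions handling, may differ):

```go
package main

import "fmt"

// normalizeStringSlice accepts a string, a []string, or a []interface{} of
// strings (what a YAML sequence typically unmarshals into) and returns []string.
func normalizeStringSlice(val interface{}) []string {
	switch v := val.(type) {
	case string:
		return []string{v}
	case []string:
		return v
	case []interface{}:
		out := make([]string, 0, len(v))
		for _, item := range v {
			if s, ok := item.(string); ok {
				out = append(out, s)
			}
		}
		return out
	default:
		return nil
	}
}

func main() {
	fmt.Println(normalizeStringSlice("gpt-4o"))
	fmt.Println(normalizeStringSlice([]interface{}{"step-one", "step-two"}))
}
```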

func (*Processor) Process

func (p *Processor) Process() error

Process executes the DSL processing pipeline

func (*Processor) SetLastOutput

func (p *Processor) SetLastOutput(output string)

SetLastOutput sets the last output value, useful for initializing with STDIN data

func (*Processor) SetMemoryContext added in v0.0.84

func (p *Processor) SetMemoryContext(context string)

SetMemoryContext sets external memory context (e.g., from OpenAI chat messages). This context is used alongside or instead of file-based memory.

func (*Processor) SetProgressWriter added in v0.0.14

func (p *Processor) SetProgressWriter(w ProgressWriter)

SetProgressWriter sets the progress writer for streaming updates

func (*Processor) SetStreamLog added in v0.0.106

func (p *Processor) SetStreamLog(path string) error

SetStreamLog sets up stream logging to a file for real-time monitoring

func (*Processor) SubstituteCLIVariables added in v0.0.91

func (p *Processor) SubstituteCLIVariables(text string) string

SubstituteCLIVariables replaces {{varname}} with CLI-provided values
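
The {{varname}} convention can be sketched as a simple replacement over a variables map (illustrative; the package may treat missing or malformed variables differently):

```go
package main

import (
	"fmt"
	"strings"
)

// substituteVariables replaces every {{name}} occurrence in text with
// vars[name]. Placeholders with no matching variable are left untouched.
func substituteVariables(text string, vars map[string]string) string {
	for name, value := range vars {
		text = strings.ReplaceAll(text, "{{"+name+"}}", value)
	}
	return text
}

func main() {
	out := substituteVariables("Summarize {{file}} in {{lang}}", map[string]string{
		"file": "README.md",
		"lang": "French",
	})
	fmt.Println(out) // Summarize README.md in French
}
```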

type ProgressDisplay added in v0.0.132

type ProgressDisplay struct {
	// contains filtered or unexported fields
}

ProgressDisplay manages visual output for workflow execution

func NewProgressDisplay added in v0.0.132

func NewProgressDisplay(enabled bool) *ProgressDisplay

NewProgressDisplay creates a new progress display

func (*ProgressDisplay) CompleteLoop added in v0.0.132

func (p *ProgressDisplay) CompleteLoop(name string, iterations int, duration time.Duration)

CompleteLoop marks a loop as complete

func (*ProgressDisplay) CompletePreLoopSteps added in v0.0.132

func (p *ProgressDisplay) CompletePreLoopSteps(duration time.Duration)

CompletePreLoopSteps marks pre-loop steps as complete

func (*ProgressDisplay) CompleteStep added in v0.0.132

func (p *ProgressDisplay) CompleteStep(name string, duration time.Duration)

CompleteStep marks a step as complete

func (*ProgressDisplay) CompleteWorkflow added in v0.0.132

func (p *ProgressDisplay) CompleteWorkflow(loopResults map[string]*LoopOutput)

CompleteWorkflow displays the workflow completion summary

func (*ProgressDisplay) FailLoop added in v0.0.132

func (p *ProgressDisplay) FailLoop(name string, iteration int, err error)

FailLoop marks a loop as failed

func (*ProgressDisplay) FailStep added in v0.0.132

func (p *ProgressDisplay) FailStep(name string, err error)

FailStep marks a step as failed

func (*ProgressDisplay) FailWorkflow added in v0.0.132

func (p *ProgressDisplay) FailWorkflow(err error)

FailWorkflow displays the workflow failure

func (*ProgressDisplay) LoopProgress added in v0.0.132

func (p *ProgressDisplay) LoopProgress(name string, iteration, maxIter int, step string, elapsed time.Duration) string

LoopProgress shows a compact progress line for a loop

func (*ProgressDisplay) SetEnabled added in v0.0.132

func (p *ProgressDisplay) SetEnabled(enabled bool)

SetEnabled enables or disables the progress display

func (*ProgressDisplay) ShowDependencyGraph added in v0.0.132

func (p *ProgressDisplay) ShowDependencyGraph(order []string)

ShowDependencyGraph displays the loop execution order

func (*ProgressDisplay) StartLoop added in v0.0.132

func (p *ProgressDisplay) StartLoop(name string, loopIndex, totalLoops, maxIterations int)

StartLoop displays the loop start header

func (*ProgressDisplay) StartPreLoopSteps added in v0.0.132

func (p *ProgressDisplay) StartPreLoopSteps(stepCount int)

StartPreLoopSteps displays the pre-loop steps header

func (*ProgressDisplay) StartStep added in v0.0.132

func (p *ProgressDisplay) StartStep(name string, model string)

StartStep displays a step starting within a loop

func (*ProgressDisplay) StartWorkflow added in v0.0.132

func (p *ProgressDisplay) StartWorkflow(name string, loopCount int)

StartWorkflow displays the workflow header

func (*ProgressDisplay) UpdateIteration added in v0.0.132

func (p *ProgressDisplay) UpdateIteration(iteration, maxIterations int)

UpdateIteration updates the current iteration display

type ProgressType added in v0.0.14

type ProgressType int

ProgressType represents different types of progress updates

const (
	ProgressSpinner ProgressType = iota
	ProgressStep
	ProgressComplete
	ProgressError
	ProgressOutput       // New type for output events
	ProgressParallelStep // New type for parallel step updates
)

type ProgressUpdate added in v0.0.14

type ProgressUpdate struct {
	Type               ProgressType
	Message            string
	Error              error
	Step               *StepInfo           // Optional step information
	Stdout             string              // Content from STDOUT when Type is ProgressOutput
	IsParallel         bool                // Whether this update is from a parallel step
	ParallelID         string              // Identifier for the parallel step group
	PerformanceMetrics *PerformanceMetrics // Performance metrics for the step
}

ProgressUpdate represents a progress update from the processor

type ProgressWriter added in v0.0.14

type ProgressWriter interface {
	WriteProgress(update ProgressUpdate) error
}

ProgressWriter is an interface for handling progress updates

func NewChannelProgressWriter added in v0.0.14

func NewChannelProgressWriter(ch chan<- ProgressUpdate) ProgressWriter

type QmdIntegrationConfig added in v0.0.129

type QmdIntegrationConfig struct {
	Collection string `yaml:"collection"`        // Collection name to register with qmd
	Embed      bool   `yaml:"embed,omitempty"`   // Run qmd embed after registration
	Context    string `yaml:"context,omitempty"` // Context description for the collection
	Mask       string `yaml:"mask,omitempty"`    // File mask for indexing (default: index file)
}

QmdIntegrationConfig configures qmd integration for codebase indexing

type QmdSearchConfig added in v0.0.129

type QmdSearchConfig struct {
	Query      string  `yaml:"query"`                // Search query (supports variable substitution)
	Collection string  `yaml:"collection,omitempty"` // Restrict to a specific collection
	Mode       string  `yaml:"mode,omitempty"`       // Search mode: search (BM25), vsearch (vector), query (hybrid)
	Limit      int     `yaml:"limit,omitempty"`      // Number of results (default: 5)
	MinScore   float64 `yaml:"min_score,omitempty"`  // Minimum score threshold (0.0-1.0)
	Format     string  `yaml:"format,omitempty"`     // Output format: text (default), json, files
	Full       bool    `yaml:"full,omitempty"`       // Return full document content
}

QmdSearchConfig configures qmd search step

type QualityGate added in v0.0.102

type QualityGate interface {
	Name() string
	Check(ctx context.Context, workDir string) (*QualityGateResult, error)
}

QualityGate is the interface that all quality gates must implement

type QualityGateConfig added in v0.0.102

type QualityGateConfig struct {
	Name    string       `yaml:"name"`              // Gate name
	Command string       `yaml:"command,omitempty"` // Shell command to execute
	Type    string       `yaml:"type,omitempty"`    // Built-in type: syntax, security, test
	OnFail  string       `yaml:"on_fail"`           // Action on failure: retry, skip, abort
	Timeout int          `yaml:"timeout,omitempty"` // Timeout in seconds
	Retry   *RetryConfig `yaml:"retry,omitempty"`   // Retry configuration
}

QualityGateConfig represents the configuration for a quality gate
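
As workflow YAML, a list of gates using these fields might look like the following (the gate names, commands, and the top-level key are illustrative assumptions):

```yaml
quality_gates:
  - name: unit-tests
    type: test
    command: go test ./...
    timeout: 120
    on_fail: retry
    retry:
      max_attempts: 3
      backoff_type: exponential
      initial_delay: 2
  - name: secrets-scan
    type: security
    on_fail: abort
```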

type QualityGateResult added in v0.0.102

type QualityGateResult struct {
	GateName string                 `json:"gate_name"`
	Passed   bool                   `json:"passed"`
	Message  string                 `json:"message"`
	Details  map[string]interface{} `json:"details,omitempty"`
	Duration time.Duration          `json:"duration"`
	Attempts int                    `json:"attempts"` // Number of attempts made
}

QualityGateResult represents the result of running a quality gate

func RunQualityGates added in v0.0.102

func RunQualityGates(configs []QualityGateConfig, workDir string) ([]QualityGateResult, error)

RunQualityGates executes a list of quality gates with retry logic

type RetryConfig added in v0.0.102

type RetryConfig struct {
	MaxAttempts  int    `yaml:"max_attempts"`  // Maximum retry attempts (default: 3)
	BackoffType  string `yaml:"backoff_type"`  // Backoff strategy: linear, exponential
	InitialDelay int    `yaml:"initial_delay"` // Initial delay in seconds
}

RetryConfig configures retry behavior for quality gates

type SecurityGate added in v0.0.102

type SecurityGate struct {
	// contains filtered or unexported fields
}

SecurityGate scans for common security issues

func NewSecurityGate added in v0.0.102

func NewSecurityGate(name string, timeout int) *SecurityGate

NewSecurityGate creates a new security scanning gate

func (*SecurityGate) Check added in v0.0.102

func (g *SecurityGate) Check(ctx context.Context, workDir string) (*QualityGateResult, error)

func (*SecurityGate) Name added in v0.0.102

func (g *SecurityGate) Name() string

type Spinner

type Spinner struct {
	// contains filtered or unexported fields
}

func NewSpinner

func NewSpinner() *Spinner

func (*Spinner) Disable

func (s *Spinner) Disable()

Disable prevents the spinner from showing any output

func (*Spinner) SetProgressWriter added in v0.0.14

func (s *Spinner) SetProgressWriter(w ProgressWriter)

func (*Spinner) Start

func (s *Spinner) Start(message string)

func (*Spinner) Stop

func (s *Spinner) Stop()

type Step

type Step struct {
	Name   string
	Config StepConfig
}

Step represents a named step in the DSL

type StepConfig

type StepConfig struct {
	Type       string          `yaml:"type"`            // Step type (default is standard LLM step)
	Input      interface{}     `yaml:"input"`           // Can be string, map, or "tool: command"
	Model      interface{}     `yaml:"model"`           // Can be string or []string
	Action     interface{}     `yaml:"action"`          // Can be string or []string
	Output     interface{}     `yaml:"output"`          // Can be string, []string, or "tool: command" / "STDOUT|command"
	NextAction interface{}     `yaml:"next-action"`     // Can be string or []string
	BatchMode  string          `yaml:"batch_mode"`      // How to process multiple files: "combined" (default) or "individual"
	SkipErrors bool            `yaml:"skip_errors"`     // Whether to continue processing if some files fail
	Chunk      *ChunkConfig    `yaml:"chunk,omitempty"` // Configuration for chunking large files
	Memory     bool            `yaml:"memory"`          // Whether to include memory context in this step
	ToolConfig *ToolListConfig `yaml:"tool,omitempty"`  // Tool execution configuration for this step

	// OpenAI Responses API specific fields
	Instructions       string                   `yaml:"instructions"`         // System message
	Tools              []map[string]interface{} `yaml:"tools"`                // Tools configuration
	PreviousResponseID string                   `yaml:"previous_response_id"` // For conversation state
	MaxOutputTokens    int                      `yaml:"max_output_tokens"`    // Token limit
	Temperature        float64                  `yaml:"temperature"`          // Temperature setting
	TopP               float64                  `yaml:"top_p"`                // Top-p sampling
	Stream             bool                     `yaml:"stream"`               // Whether to stream the response
	ResponseFormat     map[string]interface{}   `yaml:"response_format"`      // Format specification (e.g., JSON)

	// Meta-processing fields
	Generate      *GenerateStepConfig  `yaml:"generate,omitempty"`       // Configuration for generating a workflow
	Process       *ProcessStepConfig   `yaml:"process,omitempty"`        // Configuration for processing a sub-workflow
	AgenticLoop   *AgenticLoopConfig   `yaml:"agentic_loop,omitempty"`   // Inline agentic loop configuration
	CodebaseIndex *CodebaseIndexConfig `yaml:"codebase_index,omitempty"` // Codebase indexing configuration
	QmdSearch     *QmdSearchConfig     `yaml:"qmd_search,omitempty"`     // qmd search configuration

	// Worktree fields
	Worktree     string `yaml:"worktree,omitempty"`      // Run step in a specific worktree by name
	EachWorktree bool   `yaml:"each_worktree,omitempty"` // Run step once per worktree (combine with parallel for concurrent)
}

StepConfig represents the configuration for a single step
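
A single step exercising the common fields might look like this in workflow YAML (the step name, file paths, and model are placeholders):

```yaml
summarize:
  input: notes/*.md
  batch_mode: individual
  skip_errors: true
  model: gpt-4o
  action: "Summarize this file in three bullet points"
  output: summaries.md
  memory: true
```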

type StepDependency added in v0.0.20

type StepDependency struct {
	Name      string
	DependsOn []string
}

StepDependency represents a dependency between steps

type StepInfo added in v0.0.14

type StepInfo struct {
	Name         string
	Model        string
	Action       string
	Instructions string // For openai-responses steps
}

StepInfo contains detailed information about a processing step

type StreamLogger added in v0.0.106

type StreamLogger struct {
	// contains filtered or unexported fields
}

StreamLogger handles real-time logging to a file for monitoring long-running operations

func NewStreamLogger added in v0.0.106

func NewStreamLogger(path string) (*StreamLogger, error)

NewStreamLogger creates a new stream logger that writes to the specified file

func (*StreamLogger) Close added in v0.0.106

func (s *StreamLogger) Close() error

Close closes the stream log file

func (*StreamLogger) IsEnabled added in v0.0.106

func (s *StreamLogger) IsEnabled() bool

IsEnabled returns whether stream logging is enabled

func (*StreamLogger) Log added in v0.0.106

func (s *StreamLogger) Log(format string, args ...interface{})

Log writes a message to the stream log with timestamp

func (*StreamLogger) LogContextUsage added in v0.0.110

func (s *StreamLogger) LogContextUsage(usedTokens, thresholdTokens, windowTokens int, percentage float64)

LogContextUsage writes context usage info

func (*StreamLogger) LogError added in v0.0.106

func (s *StreamLogger) LogError(err error)

LogError writes an error message

func (*StreamLogger) LogExit added in v0.0.106

func (s *StreamLogger) LogExit(reason string)

LogExit writes exit reason

func (*StreamLogger) LogIteration added in v0.0.106

func (s *StreamLogger) LogIteration(current, total int, loopName string)

LogIteration writes iteration start info

func (*StreamLogger) LogOutput added in v0.0.106

func (s *StreamLogger) LogOutput(output string, maxLines int)

LogOutput writes the output from an iteration (truncated if too long)

func (*StreamLogger) LogSection added in v0.0.106

func (s *StreamLogger) LogSection(title string)

LogSection writes a section header to the stream log

func (*StreamLogger) LogThinking added in v0.0.106

func (s *StreamLogger) LogThinking(thinking string)

LogThinking writes model thinking/reasoning content

type StyleConfig added in v0.0.132

type StyleConfig struct {
	UseColors   bool
	UseUnicode  bool
	CompactMode bool
}

StyleConfig controls output styling behavior

func DefaultStyleConfig added in v0.0.132

func DefaultStyleConfig() *StyleConfig

DefaultStyleConfig returns the default style configuration

type Styler added in v0.0.132

type Styler struct {
	// contains filtered or unexported fields
}

Styler provides methods for styled terminal output

func NewStyler added in v0.0.132

func NewStyler(config *StyleConfig) *Styler

NewStyler creates a new Styler with the given configuration

func (*Styler) Bold added in v0.0.132

func (s *Styler) Bold(text string) string

Bold returns bold text

func (*Styler) Box added in v0.0.132

func (s *Styler) Box(title string, width int) string

Box draws a box around content

func (*Styler) Dim added in v0.0.132

func (s *Styler) Dim(text string) string

Dim returns dimmed text

func (*Styler) Divider added in v0.0.132

func (s *Styler) Divider(width int) string

Divider returns a horizontal line

func (*Styler) Duration added in v0.0.132

func (s *Styler) Duration(d string) string

Duration returns styled duration

func (*Styler) Error added in v0.0.132

func (s *Styler) Error(text string) string

Error returns red error text

func (*Styler) ErrorIcon added in v0.0.132

func (s *Styler) ErrorIcon() string

ErrorIcon returns a red X

func (*Styler) Highlight added in v0.0.132

func (s *Styler) Highlight(text string) string

Highlight returns magenta highlighted text

func (*Styler) Info added in v0.0.132

func (s *Styler) Info(text string) string

Info returns cyan info text

func (*Styler) Iteration added in v0.0.132

func (s *Styler) Iteration(current, max int) string

Iteration returns styled iteration counter

func (*Styler) LoopIcon added in v0.0.132

func (s *Styler) LoopIcon() string

LoopIcon returns a loop indicator

func (*Styler) LoopName added in v0.0.132

func (s *Styler) LoopName(name string) string

LoopName returns styled loop name (magenta + bold)

func (*Styler) Model added in v0.0.132

func (s *Styler) Model(name string) string

Model returns styled model name (cyan + bold)

func (*Styler) Muted added in v0.0.132

func (s *Styler) Muted(text string) string

Muted returns dim gray text

func (*Styler) ProgressBar added in v0.0.132

func (s *Styler) ProgressBar(current, total, width int) string

ProgressBar returns a progress bar

func (*Styler) RunningIcon added in v0.0.132

func (s *Styler) RunningIcon() string

RunningIcon returns a running indicator

func (*Styler) StepIcon added in v0.0.132

func (s *Styler) StepIcon() string

StepIcon returns a step indicator

func (*Styler) StepName added in v0.0.132

func (s *Styler) StepName(name string) string

StepName returns styled step name (blue)

func (*Styler) Success added in v0.0.132

func (s *Styler) Success(text string) string

Success returns green success text

func (*Styler) SuccessIcon added in v0.0.132

func (s *Styler) SuccessIcon() string

SuccessIcon returns a green checkmark

func (*Styler) TreeBranch added in v0.0.132

func (s *Styler) TreeBranch(isLast bool) string

TreeBranch returns tree-drawing characters for hierarchical output

func (*Styler) TreePipe added in v0.0.132

func (s *Styler) TreePipe() string

TreePipe returns the vertical continuation line for trees

func (*Styler) Warning added in v0.0.132

func (s *Styler) Warning(text string) string

Warning returns yellow warning text

type SyntaxGate added in v0.0.102

type SyntaxGate struct {
	// contains filtered or unexported fields
}

SyntaxGate checks for common syntax errors in various languages

func NewSyntaxGate added in v0.0.102

func NewSyntaxGate(name string, timeout int) *SyntaxGate

NewSyntaxGate creates a new syntax checking gate

func (*SyntaxGate) Check added in v0.0.102

func (g *SyntaxGate) Check(ctx context.Context, workDir string) (*QualityGateResult, error)

func (*SyntaxGate) Name added in v0.0.102

func (g *SyntaxGate) Name() string

type TestGate added in v0.0.102

type TestGate struct {
	// contains filtered or unexported fields
}

TestGate runs tests and checks for failures

func NewTestGate added in v0.0.102

func NewTestGate(name, command string, timeout int) *TestGate

NewTestGate creates a new test execution gate

func (*TestGate) Check added in v0.0.102

func (g *TestGate) Check(ctx context.Context, workDir string) (*QualityGateResult, error)

func (*TestGate) Name added in v0.0.102

func (g *TestGate) Name() string

type ToolConfig added in v0.0.78

type ToolConfig struct {
	// Allowlist of command names that are explicitly allowed (e.g., "ls", "bd", "jq")
	// If empty, all non-denied commands are allowed (use with caution)
	Allowlist []string `yaml:"allowlist"`

	// Denylist of command names that are explicitly denied
	// These take precedence over the allowlist
	Denylist []string `yaml:"denylist"`

	// Timeout for tool execution in seconds (default: 30)
	Timeout int `yaml:"timeout"`

	// WorkingDir for command execution (optional, defaults to current directory)
	WorkingDir string `yaml:"working_dir"`
}

ToolConfig holds configuration for shell tool execution in workflows

func MergeToolConfigs added in v0.0.92

func MergeToolConfigs(globalConfig *ToolConfig, stepConfig *ToolConfig) *ToolConfig

MergeToolConfigs merges a global tool config with a step-level config. The step-level config takes precedence: if the step specifies an allowlist, it is used. The global allowlist is additive to the defaults. Denylists are always merged (additive). A step timeout overrides the global timeout if specified.
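
The precedence rules read naturally as a small merge function. A sketch of the described semantics over a local stand-in type (not the actual implementation, which also folds in built-in defaults):

```go
package main

import "fmt"

type toolConfig struct {
	Allowlist  []string
	Denylist   []string
	Timeout    int
	WorkingDir string
}

// mergeToolConfigs applies the documented precedence: a step allowlist replaces
// the global one, denylists are merged, and step timeout/workdir override global.
func mergeToolConfigs(global, step *toolConfig) *toolConfig {
	merged := &toolConfig{}
	if global != nil {
		*merged = *global
	}
	if step == nil {
		return merged
	}
	if len(step.Allowlist) > 0 {
		merged.Allowlist = step.Allowlist
	}
	merged.Denylist = append(merged.Denylist, step.Denylist...)
	if step.Timeout > 0 {
		merged.Timeout = step.Timeout
	}
	if step.WorkingDir != "" {
		merged.WorkingDir = step.WorkingDir
	}
	return merged
}

func main() {
	global := &toolConfig{Allowlist: []string{"ls"}, Denylist: []string{"rm"}, Timeout: 30}
	step := &toolConfig{Allowlist: []string{"jq"}, Denylist: []string{"curl"}, Timeout: 10}
	fmt.Printf("%+v\n", mergeToolConfigs(global, step))
}
```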

type ToolExecutor added in v0.0.78

type ToolExecutor struct {
	// contains filtered or unexported fields
}

ToolExecutor handles safe execution of shell tools

func NewToolExecutor added in v0.0.78

func NewToolExecutor(config *ToolConfig, verbose bool, debugFunc func(format string, args ...interface{})) *ToolExecutor

NewToolExecutor creates a new tool executor with the given configuration

func (*ToolExecutor) Execute added in v0.0.78

func (te *ToolExecutor) Execute(command string, stdin string) (stdout string, stderr string, err error)

Execute runs a shell command and returns its output

func (*ToolExecutor) IsAllowed added in v0.0.78

func (te *ToolExecutor) IsAllowed(command string) (bool, string)

IsAllowed checks if a command is allowed to be executed
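The allow/deny policy described by ToolConfig (denylist takes precedence; an empty allowlist permits all non-denied commands) can be sketched as follows. This is an illustrative reimplementation, not the package's actual IsAllowed logic:

```go
package main

import "fmt"

// isAllowed sketches the documented policy: the denylist takes
// precedence over the allowlist, and an empty allowlist permits
// any command that is not explicitly denied.
func isAllowed(command string, allowlist, denylist []string) (bool, string) {
	for _, d := range denylist {
		if command == d {
			return false, "command is on the denylist"
		}
	}
	if len(allowlist) == 0 {
		return true, "" // no allowlist: all non-denied commands allowed
	}
	for _, a := range allowlist {
		if command == a {
			return true, ""
		}
	}
	return false, "command is not on the allowlist"
}

func main() {
	// The denylist wins even when the command is also allowlisted.
	ok, reason := isAllowed("rm", []string{"ls", "rm"}, []string{"rm"})
	fmt.Println(ok, reason)
}
```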

type ToolListConfig added in v0.0.78

type ToolListConfig struct {
	Allowlist []string `yaml:"allowlist"` // Commands explicitly allowed
	Denylist  []string `yaml:"denylist"`  // Commands explicitly denied (takes precedence)
	Timeout   int      `yaml:"timeout"`   // Timeout in seconds for tool execution
}

ToolListConfig allows specifying tool allowlist/denylist at the step level

type ValidationError added in v0.0.120

type ValidationError struct {
	Line    int    // Line number (0 if unknown)
	Field   string // Field or step name involved
	Message string // Human-readable error message
	Fix     string // Suggested fix
}

ValidationError represents a single validation error with actionable feedback

func (ValidationError) String added in v0.0.120

func (e ValidationError) String() string

type ValidationResult added in v0.0.120

type ValidationResult struct {
	Valid  bool
	Errors []ValidationError
}

ValidationResult contains all validation errors

func ValidateWorkflowStructure added in v0.0.120

func ValidateWorkflowStructure(yamlContent string) ValidationResult

ValidateWorkflowStructure performs comprehensive DSL validation on workflow YAML. Returns actionable errors that can be fed back to an LLM for correction.

func (ValidationResult) ErrorSummary added in v0.0.120

func (r ValidationResult) ErrorSummary() string

ErrorSummary returns a formatted string of all errors for LLM feedback

type WorkflowNode added in v0.0.102

type WorkflowNode struct {
	Type      string `yaml:"type"`                // Node type: loop, step, parallel
	Loop      string `yaml:"loop,omitempty"`      // Loop name to execute
	Role      string `yaml:"role,omitempty"`      // Role: creator, checker, finalizer
	Validates string `yaml:"validates,omitempty"` // Loop name this node validates
	OnFail    string `yaml:"on_fail,omitempty"`   // Action on validation failure: rerun_creator, abort, manual
}

WorkflowNode represents a node in a multi-loop workflow
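Based on the struct's yaml tags, a node in a multi-loop workflow might be written as below. The enclosing key and loop names are assumptions for illustration, not confirmed by this documentation:

```yaml
# Hypothetical fragment: key name and loop names are assumptions.
workflow:
  - type: loop
    loop: creator_loop
    role: creator
  - type: loop
    loop: checker_loop
    role: checker
    validates: creator_loop
    on_fail: rerun_creator
```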

type WorktreeConfig added in v0.0.142

type WorktreeConfig struct {
	Repo    string         `yaml:"repo,omitempty"`     // Repository path (default: ".")
	BaseDir string         `yaml:"base_dir,omitempty"` // Directory for worktrees (default: ".comanda-worktrees")
	Trees   []WorktreeSpec `yaml:"trees,omitempty"`    // Explicit worktree specifications
	Cleanup bool           `yaml:"cleanup,omitempty"`  // Cleanup worktrees after workflow (default: true)
}

WorktreeConfig defines Git worktrees for parallel Claude Code execution

type WorktreeHandler added in v0.0.142

type WorktreeHandler struct {
	// contains filtered or unexported fields
}

WorktreeHandler manages worktrees for a processor

func NewWorktreeHandler added in v0.0.142

func NewWorktreeHandler(config *WorktreeConfig, repoPath string, verbose bool) (*WorktreeHandler, error)

NewWorktreeHandler creates a new worktree handler

func (*WorktreeHandler) Cleanup added in v0.0.142

func (h *WorktreeHandler) Cleanup() error

Cleanup removes all worktrees if cleanup is enabled

func (*WorktreeHandler) ExpandWorktreeVariables added in v0.0.142

func (h *WorktreeHandler) ExpandWorktreeVariables(input string) string

ExpandWorktreeVariables expands ${worktrees.name.path} and ${worktrees.name.branch} in a string
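The substitution it performs can be sketched with a self-contained helper; the maps below stand in for the handler's internal worktree state and are hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// expand replaces ${worktrees.<name>.path} and ${worktrees.<name>.branch}
// placeholders using maps of known worktrees (illustrative stand-ins for
// the handler's internal state).
func expand(input string, paths, branches map[string]string) string {
	for name, p := range paths {
		input = strings.ReplaceAll(input, "${worktrees."+name+".path}", p)
	}
	for name, b := range branches {
		input = strings.ReplaceAll(input, "${worktrees."+name+".branch}", b)
	}
	return input
}

func main() {
	paths := map[string]string{"feature": ".comanda-worktrees/feature"}
	branches := map[string]string{"feature": "worktree-feature"}
	fmt.Println(expand("cd ${worktrees.feature.path} && git log ${worktrees.feature.branch}", paths, branches))
}
```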

func (*WorktreeHandler) GetDiffs added in v0.0.142

func (h *WorktreeHandler) GetDiffs() (map[string]string, error)

GetDiffs returns diffs for all worktrees

func (*WorktreeHandler) GetWorkDir added in v0.0.142

func (h *WorktreeHandler) GetWorkDir(worktreeName, defaultWorkDir string) string

GetWorkDir returns the appropriate working directory for a step. If worktreeName is specified and exists, it returns the worktree path; otherwise it returns the default workDir.

func (*WorktreeHandler) GetWorktree added in v0.0.142

func (h *WorktreeHandler) GetWorktree(name string) *worktree.Worktree

GetWorktree returns the worktree object by name

func (*WorktreeHandler) GetWorktreePath added in v0.0.142

func (h *WorktreeHandler) GetWorktreePath(name string) (string, error)

GetWorktreePath returns the path for a named worktree

func (*WorktreeHandler) HasWorktrees added in v0.0.142

func (h *WorktreeHandler) HasWorktrees() bool

HasWorktrees returns true if any worktrees are configured

func (*WorktreeHandler) ListWorktreeNames added in v0.0.142

func (h *WorktreeHandler) ListWorktreeNames() []string

ListWorktreeNames returns all worktree names

func (*WorktreeHandler) ListWorktrees added in v0.0.142

func (h *WorktreeHandler) ListWorktrees() []*worktree.Worktree

ListWorktrees returns all worktrees

func (*WorktreeHandler) Setup added in v0.0.142

func (h *WorktreeHandler) Setup() error

Setup creates all worktrees defined in the config

type WorktreeSpec added in v0.0.142

type WorktreeSpec struct {
	Name       string `yaml:"name"`                 // Worktree identifier (required)
	Branch     string `yaml:"branch,omitempty"`     // Existing branch to checkout
	NewBranch  bool   `yaml:"new_branch,omitempty"` // Create a new branch (named worktree-<name>)
	BaseBranch string `yaml:"base,omitempty"`       // Base branch for new branch (default: HEAD)
}

WorktreeSpec defines a single worktree
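Combining the WorktreeConfig and WorktreeSpec fields above, a worktree block might look like the fragment below. The top-level key name is an assumption based on the struct's purpose, not confirmed by this documentation:

```yaml
# Hypothetical fragment: the enclosing "worktrees" key is an assumption.
worktrees:
  repo: .
  base_dir: .comanda-worktrees
  cleanup: true
  trees:
    - name: feature
      new_branch: true   # creates a branch named worktree-feature
      base: main
    - name: hotfix
      branch: existing-hotfix-branch
```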
