Lacquer

Ship Internal Tools with AI Superpowers

Lacquer is an open-source AI workflow engine that turns repeatable engineering tasks into reliable YAML workflows that never skip a step. Think GitHub Actions, but for AI-powered internal tools.

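For example, here's a workflow that pulls recent error logs from a pod and asks Claude to analyze them:
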
inputs:
  pod_name:
    type: string
    required: true

agents:
  assistant:
    provider: anthropic
    model: claude-sonnet-4
    system_prompt: |
      You are a Kubernetes SRE expert. Analyze logs for: root causes, error patterns, 
      service impact, and specific remediation steps.
    
workflow:
  steps:
    - id: get_logs
      run: "kubectl logs '${{ inputs.pod_name }}' --tail=50 | grep -E 'ERROR|WARN|Exception'"

    - id: analyze_logs
      agent: assistant
      prompt: |
        Analyze these recent error logs and identify root causes and recommended fixes:
        ${{ steps.get_logs.output }}
      
  outputs:
    issues: ${{ steps.analyze_logs.output }}

Stop Building Workflows No One Can Debug

Built for engineers who prefer terminals over drag-and-drop builders

GitOps Native

Your workflows are just YAML files. Commit them, review them, version them like any other code.

Zero Dependencies

No Python environments, no package conflicts, just a lightweight Go binary.

Local-First Development

Test everything on your laptop before deploying. No cloud account needed.

Familiar DSL

If you've used GitHub Actions, you'll feel right at home.

Declarative > Imperative

Describe what you want, not how to get it. Let Lacquer handle the orchestration complexity while you focus on business logic.
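
For example, here is a minimal sketch that reuses the assistant agent and syntax from the workflow above (step names are illustrative). You declare the steps and wire one step's output into the next; Lacquer handles the sequencing:

steps:
  - id: fetch_events
    run: "kubectl get events --sort-by=.lastTimestamp | tail -n 20"

  - id: summarize
    agent: assistant
    prompt: |
      Summarize these cluster events and flag anything unusual:
      ${{ steps.fetch_events.output }}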

Production Ready

Built-in HTTP server, health checks, metrics, and observability. Deploy to Kubernetes, serverless, or regular VMs.

All the Features Engineering Teams Need

Everything you need to build production-ready AI workflows for internal tooling

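Gate steps on typed agent outputs, and group sub-steps into loops that run until a condition is met: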

steps:
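  # Agent steps can return typed, structured outputs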
  - id: check_health
    agent: monitor
    prompt: "Check health status of service: ${{ inputs.service_name }}"
    outputs:
      healthy: 
        type: boolean
        description: "Whether the service is healthy"
      error_rate:
        type: float
        description: "The error rate of the service"

  # Conditionally execute steps
  - id: scale_up
    condition: ${{ steps.check_health.outputs.error_rate > 0.05 }}
    run: "kubectl scale deployment ${{ inputs.service_name }} --replicas=5"

  # Break out steps into sub steps and run until a condition is met
  - id: rolling_restart
    while: ${{ steps.rolling_restart.iteration < 3 && !steps.rolling_restart.outputs.healthy }}
    steps:
      - id: restart_pod
        run: |
          kubectl rollout restart deployment/${{ inputs.service_name }}
          kubectl rollout status deployment/${{ inputs.service_name }} --timeout=300s

      - id: verify_health
        agent: monitor
        prompt: |
          Verify service health after restart:
          - Check HTTP endpoints return 200
          - Verify error rate < 1%
          - Confirm all pods are ready

          Service: ${{ inputs.service_name }}
        outputs:
          healthy: 
            type: boolean
            description: "Whether the service is healthy"
          metrics: 
            type: object
            description: "The metrics of the service"

agents:
  incident_responder:
    provider: anthropic
    model: claude-sonnet-4
    system_prompt: |
      You are an SRE expert who:

      - Analyzes production incidents
      - Identifies root causes from logs and metrics
      - Creates runbooks for remediation
      - Documents post-mortems
    tools:
      - name: filesystem
        description: Access runbooks and configuration files
        mcp_server:
          type: local
          command: npx
          args:
            - "-y"
            - "@modelcontextprotocol/server-filesystem"
            - "/etc/kubernetes/manifests"

requirements:
  runtimes:
    - name: python
      version: "3.9"

workflow:
  steps:
    - id: create_fix
      agent: fixer
      prompt: |
        We've encountered the following error in production
        ${{ inputs.error }}

        Please create a fix for the error in the following code:
        ${{ inputs.code }}
      outputs:
        patch:
          type: string
          description: The patch to apply to the code to fix the error

    # Use 'run' steps when you need to execute custom logic that goes beyond
    # simple agent interactions. These are bash scripts that are executed
    # directly on the host system.
    - id: validate_fix
      run: "python3 scripts/validate.py"
      with:
        patch: ${{ steps.create_fix.outputs.patch }}
        code: ${{ inputs.code }}

    # Or use `container` steps when you want to execute custom logic in a more
    # isolated environment. This is useful when you have complex dependencies.
    - id: validate_fix_container
      container: ./validate/Dockerfile
      command:
        - scripts/validate.py
        - ${{ steps.create_fix.outputs.patch }}
        - ${{ inputs.code }}

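Keep shared state across the workflow and update it as steps complete:
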
state:
  rollback_count: 0
  deployment_status: "pending"

workflow:
  steps:
    - id: deploy_service
      run: "helm upgrade --install ${{ inputs.service }} ./charts/${{ inputs.service }}"
      updates:
        deployment_status: "${{ steps.deploy_service.output ? 'deployed' : 'failed' }}"
        
    - id: rollback_if_needed
      condition: ${{ state.deployment_status == 'failed' }}
      run: "helm rollback ${{ inputs.service }}"
      updates:
        rollback_count: "${{ state.rollback_count + 1 }}"

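Define custom tools backed by your own scripts, described with JSON Schema parameters:
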
agents:
  ops_assistant:
    provider: openai
    model: gpt-4
    temperature: 0.2
    system_prompt: You investigate production issues and query infrastructure state.
    tools:
      - name: query_metrics
        script: "python ./tools/prometheus_query.py"
        description: "Query Prometheus for system metrics"
        parameters:
          type: object
          properties:
            query:
              type: string
              description: "PromQL query to execute"
            timerange:
              type: string
              description: "Time range (e.g., '5m', '1h', '24h')"

Get Started in 60 Seconds

From install to first workflow in under a minute

1. Install

curl -sSL https://lacquer.ai/install.sh | sh

Single binary, zero dependencies

2. Create

laq init

Get AI to scaffold your first workflow

3. Run

laq run workflow.laq.yml

Execute and see the magic