lobster
by
openclaw

Description: Lobster is an OpenClaw-native workflow shell: a typed, local-first “macro engine” that turns skills/tools into composable pipelines and safe automations—and lets OpenClaw call those workflows in one step.

View openclaw/lobster on GitHub ↗

Summary Information

Updated 25 minutes ago
Added to GitGenius on April 28th, 2026
Created on January 18th, 2026
Open Issues & Pull Requests: 2 (+0)
Number of forks: 266
Total Stargazers: 1,177 (+0)
Total Subscribers: 19 (+0)

Issue Activity (beta)

Open issues: 2
New in 7 days: 1
Closed in 7 days: 0
Avg open age: 6 days
Stale 30+ days: 0
Stale 90+ days: 0

Recent activity

Opened in 7 days: 1
Closed in 7 days: 0
Comments in 7 days: 0
Events in 7 days: 0

Top labels

No label distribution available yet.

Most active issues this week

No issue events were indexed in the last 7 days.

Detailed Description

Lobster is a workflow shell designed as a native component of the OpenClaw ecosystem, though it is built to be adaptable to any AI agent. Its primary function is to turn skills and tools into composable, typed pipelines, enabling safe and deterministic automation. The core idea is the "macro engine": users define a complex workflow once and execute it with a single command, which saves time and resources in scenarios where an AI agent would otherwise need to re-plan each step.

Lobster's key features center on its typed, JSON-first approach to pipelines: the data flowing between stages is structured JSON rather than unstructured text, and workflows themselves are declared as structured data (YAML in the provided examples). This design promotes clarity, maintainability, and easier integration with other systems. The system is local-first, meaning execution happens primarily within the user's environment, minimizing reliance on external services and enhancing privacy. A further design goal is to avoid introducing new authentication surfaces: Lobster leverages existing authentication mechanisms, which keeps the security model simple and eases integration.

The repository illustrates this with a concrete example: monitoring GitHub pull requests. Lobster detects changes to a PR and returns detailed metadata about them, including the PR's state, title, and other relevant fields, in a structured output that is easy to parse and feed into subsequent steps. The same example shows Lobster tracking a PR through to approval, demonstrating how the system can follow state changes and trigger actions based on them.
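The structured output described above can be pictured as a JSON record along these lines. The field names and values here are purely illustrative; the repository's actual output schema may differ:

```json
{
  "number": 42,
  "state": "approved",
  "title": "Add retry support",
  "changed": true
}
```

Because each stage emits records like this rather than free-form text, downstream steps can filter or project on specific fields without fragile string parsing.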

Lobster's architecture is built around several core commands. The `exec` command allows users to run operating system commands within the workflow. Data shaping commands like `where`, `pick`, and `head` enable users to manipulate and filter data within the pipeline. The `json` and `table` commands provide rendering capabilities for displaying results in a user-friendly format. The `approve` command introduces approval gates, allowing for human intervention and control within the workflow.
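The semantics of the data-shaping stages can be pictured as simple operations over lists of JSON records. The following Python sketch mirrors the command names for illustration only; it is not Lobster's implementation:

```python
# Illustrative semantics of Lobster-style data-shaping stages over JSON
# records. A conceptual sketch, not Lobster's actual code.

def where(rows, predicate):
    """Keep only the records matching a predicate (like `where`)."""
    return [r for r in rows if predicate(r)]

def pick(rows, *fields):
    """Project each record down to the named fields (like `pick`)."""
    return [{f: r[f] for f in fields if f in r} for r in rows]

def head(rows, n):
    """Take the first n records (like `head`)."""
    return rows[:n]

# Hypothetical PR records, as a `json`-rendering stage might receive them.
prs = [
    {"number": 12, "title": "Fix retry logic", "state": "open"},
    {"number": 11, "title": "Add graph output", "state": "merged"},
    {"number": 10, "title": "Docs pass", "state": "open"},
]

open_prs = head(pick(where(prs, lambda r: r["state"] == "open"),
                     "number", "title"), 1)
print(open_prs)  # → [{'number': 12, 'title': 'Fix retry logic'}]
```

Composing typed stages like this is what lets a `table` or `json` renderer at the end of the pipeline display exactly the fields that survived the projection.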

The repository also emphasizes the use of workflow files, which are designed to be readable and script-like. These files use a YAML-based syntax to define steps, including `run:` or `command:` for shell commands, `pipeline:` for native Lobster stages (like calling LLMs), and `approval:` for human approval gates. The use of `stdin` allows for seamless data transfer between steps, eliminating the need for temporary files. The system supports conditional execution using `when` and `condition` expressions, and provides features for handling transient failures through `retry`, `timeout_ms`, and `on_error` settings.
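Putting those keys together, a workflow file might look roughly like the sketch below. This is a hypothetical example assembled from the keys named above (`run:`, `pipeline:`, `approval:`, `when`, `retry`, `timeout_ms`, `on_error`); the exact field names, nesting, and expression syntax may differ from Lobster's real schema:

```yaml
# Hypothetical workflow sketch; exact schema may differ from Lobster's.
steps:
  - name: fetch-prs
    run: gh pr list --json number,title,state   # shell command step
    retry: 2              # retry transient failures
    timeout_ms: 10000
    on_error: stop
  - name: summarize
    when: steps.fetch-prs.ok    # conditional execution (illustrative expression)
    pipeline: llm.invoke        # native Lobster stage
  - name: gate
    approval: "Post summary to the PR?"   # human approval gate
```

Because step output flows over `stdin`, the `summarize` step would receive the PR list directly, with no temporary files in between.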

Lobster also offers visualization capabilities through the `lobster graph` command, which allows users to inspect the structure of their workflows before execution. This command generates visual representations of the workflow, including nodes for each step and edges representing data flow and dependencies. The visualization can be output in various formats, including Mermaid, DOT, and ASCII, making it easy to understand and debug complex workflows.
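As an illustration of what such a rendering might look like, a three-step fetch → summarize → approve workflow could come out as a Mermaid diagram along these lines (hypothetical output; actual node labels and layout depend on Lobster's implementation):

```mermaid
graph LR
  fetch[fetch-prs] --> summarize[summarize]
  summarize --> gate{approval}
```

Inspecting the graph before execution makes it easy to spot missing dependencies or steps wired in the wrong order.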

Finally, Lobster provides built-in support for calling Large Language Models (LLMs) through the `llm.invoke` command within a `pipeline:` step. It supports multiple LLM providers, including OpenClaw, Pi, and HTTP-based services. The repository provides clear instructions on how to configure and use these providers, as well as guidance on passing data between steps and ensuring shell safety. The use of environment variables and the availability of workflow arguments as environment variables further enhance the flexibility and security of the system.
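A `pipeline:` step invoking an LLM might be sketched as follows. Only `pipeline:` and `llm.invoke` come from the repository's description; the other field names are illustrative assumptions, not Lobster's real schema:

```yaml
# Hypothetical LLM step; fields other than `pipeline: llm.invoke`
# are illustrative, not Lobster's actual syntax.
steps:
  - name: summarize-diff
    pipeline: llm.invoke
    provider: openclaw      # or pi, or an HTTP-based service
    prompt: "Summarize this diff for reviewers."
    # The previous step's output arrives on stdin, and workflow
    # arguments are exposed to the step as environment variables.
```

Keeping provider selection in the workflow file means the same pipeline can be pointed at OpenClaw, Pi, or an HTTP endpoint without rewriting the surrounding steps.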

