Coding

An LLM agent that runs on any Linux box

A lightweight, open-source LLM agent named Claw has been released that runs on any Linux-based system, packing streaming chat, shell tool calls, and rolling memory into a single shell script. Its minimal dependencies make it practical to add LLM capabilities to a wide range of applications, from chatbots to content generators, anywhere a shell is available. AI-assisted, human-reviewed.

Claw is a lightweight, open-source LLM agent that runs on any Linux system with only POSIX sh, curl, and jq installed. It is a single shell script — no Node, no npm, no Python, no package managers. The project is released under the MIT license and is currently at version 0.

What it does

Claw provides a full LLM agent in a terminal: streaming chat, shell tool calls, rolling memory, and a mentor mode. It works with OpenAI or Anthropic models, and you can switch providers with a single flag. Configurable base URLs let you point at any OpenAI-compatible endpoint.

Key features:

  • Shell tool calls: The model emits <shell>…</shell> blocks. Claw runs them and feeds results back. By default it runs commands without confirmation (YOLO mode); pass --confirm to be prompted before execution.
  • Rolling memory: Prompts and replies are journaled to JSONL. When the context window overflows, an LLM compacts them into durable session rules and a per-session markdown journal.
  • Mentor mode: A second assistant pass critiques the first answer against your original request, then revises. Toggle with --mentor or /mentor on.
  • Named sessions: Run isolated sessions with claw -s myproject. Each has its own history, compacted rules, and journal — switch instantly with /load.
  • Skills: Drop any .md file into ~/.config/clawlite/instructions/ and it becomes part of the system prompt — project context, tool guides, domain knowledge. No restart needed.
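The shell tool-call mechanism described above can be sketched in a few lines of POSIX sh. This is an illustrative reconstruction, not Claw's actual source: it pulls the command out of a <shell>…</shell> block with sed, runs it, and captures the output that an agent like Claw would feed back to the model.

```shell
#!/bin/sh
# Illustrative sketch of a <shell>...</shell> tool-call round trip.
# The reply below stands in for a streamed model response.
reply='Here is the command: <shell>echo hello from the tool</shell> Done.'

# Extract the text between the tags with POSIX sed.
cmd=$(printf '%s\n' "$reply" | sed -n 's|.*<shell>\(.*\)</shell>.*|\1|p')

# Run the extracted command and capture its stdout; the agent would
# append this result to the conversation and send it back to the model.
result=$(sh -c "$cmd")
printf 'tool result: %s\n' "$result"
```

In YOLO mode this execution happens unconditionally; with --confirm, a read prompt would sit between extraction and the sh -c call.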

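The rolling-memory design can be sketched with the same tools Claw depends on. The journal schema below is hypothetical, not Claw's actual format: each turn is appended as one compact JSON object per line (JSONL) with jq, and later passes can count or filter turns when deciding what to compact.

```shell
#!/bin/sh
# Hypothetical JSONL journal: one {role, text} object per line.
journal=$(mktemp)

# Append two turns; jq -n -c builds a compact JSON object per call.
jq -n -c --arg role user --arg text "what is my IP?" \
  '{role: $role, text: $text}' >> "$journal"
jq -n -c --arg role assistant --arg text "Run: curl ifconfig.me" \
  '{role: $role, text: $text}' >> "$journal"

# A compaction pass could summarize old lines into durable rules;
# here we just count turns and pull the most recent user message.
turns=$(wc -l < "$journal")
last_user=$(jq -r 'select(.role == "user") | .text' "$journal" | tail -n 1)
echo "turns=$turns last_user=$last_user"
rm -f "$journal"
```

Because jq reads JSONL as a stream of objects, the same filter works whether the journal holds two turns or two thousand.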
How to install

Three steps, no build system, no package manager:

  1. Download the script:
    wget -qO /usr/local/bin/claw https://getclaw.site/claw && chmod +x /usr/local/bin/claw
    
  2. Set your API key in ~/.profile or ~/.bashrc:
    export OPENAI_API_KEY=sk-...
    # or for Anthropic:
    export ANTHROPIC_API_KEY=sk-ant-...
    
  3. Run:
    $ claw
    # one-shot:
    $ claw "what is my IP?"
    # pipe stdin:
    $ cat error.log | claw
    
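Before step 1, it can be worth confirming the three runtime dependencies are actually on PATH. This pre-flight check is an illustrative sketch, not part of Claw itself:

```shell
#!/bin/sh
# Pre-flight check: verify the dependencies Claw needs are installed.
missing=""
for dep in sh curl jq; do
  command -v "$dep" >/dev/null 2>&1 || missing="$missing $dep"
done
if [ -z "$missing" ]; then
  echo "all dependencies present"
else
  echo "missing:$missing"
fi
```

command -v is the POSIX-portable way to test for a program, so the check itself runs anywhere the agent would.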

You can also try it live in the browser at linuxontab.com — open a terminal, run claw, done.
