NanoClaw

NanoClaw - Secure personal AI agent with container isolation

Launched on Mar 26, 2026

NanoClaw is a lightweight open-source personal AI agent running on your own machine. It connects to messaging apps like WhatsApp and Telegram, executes AI tasks in isolated containers, and features just 15 source files for complete auditability. Designed for privacy-conscious users who want full control over their AI assistant.

AI Agents · Freemium · Privacy Focused · AI Agent Framework · Workflow Automation · Self-hosted · Open Source

What is NanoClaw

NanoClaw addresses a fundamental technical pain point in the modern AI landscape: the overwhelming complexity of enterprise-grade agent frameworks. While powerful, frameworks like OpenClaw—with its 434,453 lines of code across thousands of files and 70+ dependencies—are effectively black boxes for individual developers and privacy-conscious users. They are difficult to audit, understand, and customize, creating a trust deficit for personal use cases.

NanoClaw is a minimalist, open-source personal AI agent designed from the ground up for transparency and user sovereignty. It is built on a radically simplified architecture: a single Node.js process comprising just 15 source files and approximately 3,900 lines of code. This is less than 1% the size of OpenClaw's codebase. Its core philosophy is to provide a fully auditable, comprehensible, and customizable AI assistant that runs on your own hardware.

Technically, NanoClaw leverages the Claude Agent SDK for its core AI capabilities but replaces application-level permission checks with real OS-level container isolation. Each agent session executes within an ephemeral Linux container (or Apple Container on macOS), providing true process, filesystem, and IPC namespace isolation. This architectural choice eliminates the need for complex microservices or message brokers, resulting in a dependency count of less than 10.

The project has gained significant traction in the developer community, evidenced by 25.3k+ GitHub Stars and 8.5k+ Forks, and has been featured in technical publications like VentureBeat, Fortune, and The New Stack. It is positioned not as a team collaboration platform but as a personal tool for developers, tech enthusiasts, and privacy-focused individuals who demand complete control over their AI interactions.

Core Value Proposition
  • Minimalist Codebase: 15 source files, ~3,900 lines of code (vs. OpenClaw's 434k+).
  • Real Container Isolation: OS-level sandboxing (Linux containers/Apple Container) vs. application-level checks.
  • Full Auditability: The entire system can be understood in under 8 minutes.
  • AI-Native Setup: Zero manual configuration; guided by Claude Code via the /setup skill.
  • Multi-Channel Messaging: Connects to WhatsApp, Telegram, Slack, and more via a modular skill system.

NanoClaw's Core Technical Features

NanoClaw's power lies in its deliberate technical choices, which prioritize security, simplicity, and extensibility over monolithic feature bloat.

Container Isolation Architecture

The primary security boundary is a real container. On each invocation, a fresh, ephemeral container is spun up with the --rm flag. The container process runs as a non-privileged user (UID 1000), with strict filesystem isolation: only explicitly mounted directories from a user-managed allowlist are visible. This provides guaranteed isolation that application-level sandboxes cannot match.
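As a hedged illustration of that invocation, the sketch below builds Docker-style arguments for an ephemeral run. The image name, mount layout, and helper function are assumptions for this example, not NanoClaw's actual code.

```typescript
// Illustrative sketch (not NanoClaw's actual code): build arguments
// for an ephemeral container run with the properties described above.
function buildContainerArgs(image: string, mounts: string[]): string[] {
  const args = [
    "run",
    "--rm",                // ephemeral: container is removed on exit
    "--user", "1000:1000", // non-privileged UID inside the container
  ];
  for (const dir of mounts) {
    // Only explicitly allowlisted host directories become visible.
    args.push("-v", `${dir}:${dir}`);
  }
  args.push(image);
  return args;
}

// e.g. docker run --rm --user 1000:1000 -v /home/me/proj:/home/me/proj agent-image
const example = buildContainerArgs("agent-image", ["/home/me/proj"]);
```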

Security Boundary & Mount Allowlist

A critical security component is the mount allowlist, stored at ~/.config/nanoclaw/mount-allowlist.json. This file, which is never mounted into the container, defines which host directories an agent can access. By default, sensitive paths like .ssh, .aws, .kube, .docker, and files containing credentials or .env are blocked. The system performs symbolic link resolution to prevent path traversal attacks and validates all container paths before mounting.
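A minimal sketch of that validation logic, assuming a substring-based block list and symlink resolution via Node's realpath; the function name and exact matching rules are illustrative, not NanoClaw's API.

```typescript
// Illustrative allowlist check with symlink resolution.
import { realpathSync } from "node:fs";

// Default-blocked patterns, per the security model described above.
const BLOCKED = [".ssh", ".aws", ".kube", ".docker", "credentials", ".env"];

function isMountAllowed(requested: string, allowlist: string[]): boolean {
  let resolved: string;
  try {
    // Resolve symlinks so a link pointing into ~/.ssh cannot bypass the check.
    resolved = realpathSync(requested);
  } catch {
    return false; // nonexistent paths are rejected outright
  }
  if (BLOCKED.some((pattern) => resolved.includes(pattern))) return false;
  // The resolved path must sit inside an explicitly allowlisted directory.
  return allowlist.some(
    (root) => resolved === root || resolved.startsWith(root + "/"),
  );
}
```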

Credential Proxy System

API keys and other secrets never enter the container environment. Instead, the host runs a lightweight HTTP credential proxy that transparently injects authentication headers (like x-api-key) into outgoing requests from the container. The agent inside the container only sees placeholder keys, ensuring that even a compromised agent cannot exfiltrate real credentials.
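The header-injection step at the heart of such a proxy can be sketched as below. The key-store shape and function name are assumptions for illustration; a real proxy would wrap this in an HTTP server.

```typescript
// Illustrative sketch of credential injection: the container's
// placeholder header is stripped and replaced with the real key,
// which lives only on the host.
type Headers = Record<string, string>;

const REAL_KEYS: Record<string, string> = {
  "api.anthropic.com": process.env.ANTHROPIC_API_KEY ?? "",
};

function injectCredentials(host: string, headers: Headers): Headers {
  const real = REAL_KEYS[host];
  if (!real) return headers; // unknown host: forward untouched
  // Drop whatever placeholder the container sent; inject the real key.
  const { ["x-api-key"]: _placeholder, ...rest } = headers;
  return { ...rest, "x-api-key": real };
}
```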

Group Isolation Mechanism

NanoClaw implements a multi-tenant architecture for personal use. Each chat group (e.g., family, work project) gets an isolated environment:

  • Independent Memory: A separate CLAUDE.md file and Claude session stored in data/sessions/{group}/.claude/.
  • Isolated Filesystem: Each group has its own mounted workspace.
  • Separate Container Sandbox: IPC and process isolation between groups.

This allows a single NanoClaw instance to serve multiple contexts without data leakage.
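The per-group layout above can be sketched as a small path helper. The helper name and the workspace subdirectory are assumptions for illustration; only the data/sessions/{group}/.claude/ layout comes from the description.

```typescript
// Illustrative per-group path derivation for the layout described above.
import { join } from "node:path";

function groupPaths(dataDir: string, groupId: string) {
  // Reject path-traversal attempts in group IDs before building any path.
  if (!/^[a-z0-9-]+$/i.test(groupId)) {
    throw new Error(`invalid group id: ${groupId}`);
  }
  const root = join(dataDir, "sessions", groupId);
  return {
    session: join(root, ".claude"),             // isolated Claude session state
    memory: join(root, ".claude", "CLAUDE.md"), // per-group memory file
    workspace: join(root, "workspace"),         // directory mounted into the container
  };
}
```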

Agent Swarms Implementation

NanoClaw is the first personal AI agent to support Agent Swarms. Using the /add-parallel skill, users can spawn multiple specialized agents to collaborate on complex tasks, enabling parallel analysis and division of labor previously seen only in enterprise frameworks.

Skill System Architecture

To avoid codebase inflation, functionality is added via a git-based skill system. Skills are maintained as separate git branches (categorized as Feature, Utility, Operational, or Container skills). Users merge desired skills into their fork. This keeps the core lean while enabling endless customization, such as adding Telegram support or a PDF reader.

  • Minimalist & Auditable: ~3,900 lines of code across 15 files enables full comprehension and trust.
  • True Security Isolation: Real OS containers provide stronger guarantees than application-level sandboxing.
  • Complete User Sovereignty: No black boxes; users own and can modify every aspect of the system.
  • Lightweight & Efficient: Single Node.js process with <10 dependencies minimizes resource footprint and attack surface.
  • Highly Extensible: Git-based skill system allows for modular feature addition without core bloat.
  • Requires Claude API Key: Underlying AI usage is billed by Anthropic based on token consumption.
  • Container Runtime Dependency: Requires Docker (Linux/macOS) or Apple Container (macOS), adding setup complexity.
  • Higher Technical Barrier: Users must be comfortable with terminal, git, and basic DevOps concepts.
  • Limited to Supported Platforms: Currently runs on macOS and Linux, not Windows or cloud PaaS out-of-the-box.

Technical Application Scenarios

NanoClaw excels in specific technical scenarios where its architecture provides distinct advantages over cloud-based or monolithic alternatives.

Privacy-Sensitive Task Processing

For developers handling proprietary code, financial data, or personal information, sending data to a cloud AI API is a non-starter. NanoClaw's local execution combined with container isolation provides a dual guarantee: AI capabilities are applied locally, and the agent is confined to a sandbox. This is ideal for tasks like analyzing private logs, refactoring internal codebases, or summarizing confidential documents.

Fully Auditable AI Operations

In regulated industries or for security researchers, understanding exactly what an AI system is doing is paramount. NanoClaw's entire codebase can be read and understood in under 8 minutes. This transparency allows for rigorous security audits, compliance verification, and the elimination of unexpected behaviors that plague larger, opaque frameworks.

Scheduled Automation with Cron-like Precision

The built-in scheduler supports three trigger types: cron expressions, millisecond intervals, and one-time execution. It uses an atomic declaration mechanism to prevent duplicate execution of the same job. A practical example is automating a daily 9 AM digest of specific RSS feeds or GitHub notifications, with the AI summarizing and sending results back to a designated chat group.
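The three trigger types and the duplicate-execution guard can be sketched as follows. The cron check is simplified to hour/minute matching and the claim set is in-memory; both are illustrative assumptions, not NanoClaw's actual scheduler code.

```typescript
// Illustrative scheduler primitives: three trigger kinds plus an
// atomic claim step so a job fires at most once per due window.
type Trigger =
  | { kind: "cron"; hour: number; minute: number } // simplified cron match
  | { kind: "interval"; everyMs: number }
  | { kind: "once"; at: number };                  // epoch milliseconds

function isDue(t: Trigger, now: Date, lastRun: number | null): boolean {
  switch (t.kind) {
    case "cron":
      return now.getHours() === t.hour && now.getMinutes() === t.minute;
    case "interval":
      return lastRun === null || now.getTime() - lastRun >= t.everyMs;
    case "once":
      return lastRun === null && now.getTime() >= t.at;
  }
}

// Atomic claim: a job runs only if no other poll cycle has claimed it.
const claimed = new Set<string>();
function claim(jobId: string): boolean {
  if (claimed.has(jobId)) return false;
  claimed.add(jobId);
  return true;
}
```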

Multi-Group Collaboration with Hard Isolation

A developer can use a single NanoClaw instance for both a family group and an open-source project group. The family group's memory, file access, and AI conversations are completely isolated from the project group's. This allows for resource sharing while maintaining strict context and data separation, mimicking secure multi-tenancy on a personal scale.

Custom Extension via Natural Language

The AI-native design extends to customization. Instead of writing code, users describe a need to Claude Code (e.g., "add a skill to fetch my calendar events"). Claude Code can directly modify the NanoClaw codebase to implement the feature. This dramatically lowers the barrier to creating a truly personalized AI assistant.

Multi-Channel Messaging Integration

The channel system is self-registering and modular. Starting with WhatsApp Web support, adding Telegram involves running the /add-telegram skill. Channels operate in parallel, allowing the AI assistant to be available simultaneously on WhatsApp for quick queries and Slack for team coordination, with context maintained per group across channels.

💡 Who is NanoClaw For?

NanoClaw is ideally suited for technically proficient individuals and developers who prioritize data privacy, require complete control and auditability of their AI tools, and are willing to manage a local runtime. It is less suitable for non-technical users seeking a purely cloud-based, point-and-click SaaS solution.

Technical Architecture & Ecosystem

NanoClaw's architecture is defined by its conscious choice of a simple, modern tech stack and its position within a broader ecosystem of AI tools and runtimes.

Core Technology Stack

  • Language: Primarily TypeScript (95.6%), with minor Python (2.7%) for specific utilities.
  • Runtime: Node.js 20+.
  • Agent SDK: @anthropic-ai/claude-agent-sdk (v0.2.29) for core AI interaction.
  • Local Storage: SQLite via better-sqlite3 for lightweight, persistent message and state storage.
  • Testing: Vitest for unit and integration tests.
  • Container Runtimes: Docker (Linux/macOS) and Apple Container (macOS, optimized for Apple Silicon).

Third-Party Model & Service Integration

While optimized for Claude, the architecture supports alternative AI backends. Users can configure endpoints for Ollama (local LLMs), Together AI, Fireworks, and other providers compatible with the Claude Agent SDK's interface, offering flexibility in model choice and cost management.

Community & Development Ecosystem

The project is maintained by a core team led by @gavrielc and supported by an active community of 57 contributors with over 424 commits. Development is facilitated through:

  • Discord Community: For real-time discussion and support.
  • GitHub Discussions & Issues: For structured planning and bug tracking.
  • Comprehensive Docs: Including SPEC.md (architecture) and SECURITY.md (security model).
  • CI/CD Pipeline: Automated testing and quality checks.

Architectural Comparison: NanoClaw vs. OpenClaw

| Metric | NanoClaw | OpenClaw (Representative Large Framework) |
|---|---|---|
| Source Files | 15 | Thousands |
| Lines of Code | ~3,900 | ~434,453 |
| Production Dependencies | <10 | 70+ |
| Configuration Files | 0 (AI-native setup) | 53+ |
| Runtime Code Tokens | ~42.4k (21% of context window) | Millions (un-auditable) |
| Core Architecture | Single Node.js process, real containers | Microservices, message brokers, app-level sandboxing |
| Primary Goal | Personal auditability & control | Enterprise-scale team collaboration |

This comparison highlights NanoClaw's fundamental design divergence: it trades horizontal scalability for vertical simplicity and user trust.

Quick Start & Integration Guide

Getting started with NanoClaw involves a streamlined, AI-guided process that minimizes manual configuration.

System Requirements

  • Operating System: macOS or Linux.
  • Node.js: Version 20 or higher.
  • Claude Code: An active Claude Code subscription (claude.ai/product/claude-code).
  • Container Runtime:
    • macOS: Apple Container (recommended for Apple Silicon) or Docker Desktop.
    • Linux: Docker Engine.

Installation & AI-Native Setup

  1. Clone the Repository: git clone https://github.com/qwibitai/nanoclaw.git && cd nanoclaw
  2. Launch Claude Code in the project directory.
  3. Run the Setup Skill: Within Claude Code, execute /setup. This interactive skill will:
    • Install Node.js dependencies.
    • Guide you through authenticating your chosen messaging channels (e.g., WhatsApp Web QR scan).
    • Detect and configure the available container runtime (Apple Container or Docker).
    • Start the necessary background services.
  There are no configuration files to edit manually.

Minimal Viable Test

Once setup completes, the agent is typically connected to WhatsApp. Simply send a message to the AI's number from your phone. You should receive an intelligent response, confirming the stack is operational.

Environment Configuration & Best Practices

  • Mount Allowlist: Populate ~/.config/nanoclaw/mount-allowlist.json cautiously. Start with a single, non-critical project directory.
  • Skill Management: Add skills incrementally. Use /add-telegram or browse the skill branches in the repository.
  • Start with the Main Group: Perform initial testing and configuration in the trusted "Main Group" before creating additional groups.
  • Monitoring: Use standard commands like docker ps or ps aux | grep node to monitor container and process health.

💡 Best Practice Recommendation

Begin your journey in the Main Group, which has elevated trust. Test basic commands and file access here first. Only after verifying behavior should you create new groups for specific purposes (work, family). Regularly back up your data/ directory and your mount allowlist file.

Frequently Asked Questions (FAQ)

What is the fundamental architectural difference between NanoClaw and OpenClaw?

The difference is foundational. OpenClaw is an enterprise framework built on a microservices architecture with complex inter-service communication (often via message brokers) and relies on application-level permission checks within a shared OS environment. NanoClaw is a single-process application that delegates security to the OS kernel via containerization. It uses real Linux namespaces (pid, net, ipc, mnt) for isolation, which is more robust and simpler than re-implementing security boundaries in user space. NanoClaw has ~3,900 LOC; OpenClaw has over 434k.

How does container isolation prevent the AI agent from accessing sensitive host files?

Isolation is enforced at multiple levels:

  1. Filesystem Namespace: The container gets its own root filesystem. Host directories are invisible unless explicitly mounted.
  2. Mount Allowlist: The host maintains a strict allowlist (mount-allowlist.json) that is never passed to the container.
  3. Path Blocking: Default rules block paths containing .ssh, .aws, credentials, etc.
  4. Symbolic Link Resolution: The system resolves symlinks before allowing a mount, preventing traversal attacks like ../../../etc/passwd.

An agent cannot read or write anything outside its allowed mounts.

How does the credential proxy system work, and how does it keep API keys safe?

The host runs a local HTTP proxy server. When the containerized agent makes an HTTP request to an API (e.g., api.anthropic.com), the request is routed through this proxy. The proxy intercepts the request, strips any placeholder authentication headers from the container, and injects the real API key stored securely on the host. The request then proceeds to the internet. The real key never enters the container's memory, filesystem, or environment variables. Even if the agent is malicious, it cannot access the genuine credential.

Which messaging protocols and data formats does NanoClaw support?

NanoClaw uses a channel-agnostic internal message format. Integration with external platforms is handled by channel-specific "provider" skills. Officially supported providers use:

  • WhatsApp: The whatsapp-web.js library, leveraging the WhatsApp Web protocol.
  • Telegram: The official Telegram Bot API (HTTPS/JSON).
  • Email (Gmail): SMTP/IMAP protocols.

The skill system can be extended to support any protocol with a Node.js library (e.g., Matrix, Slack RTM API, Discord Gateway).

How does the skill system's extension mechanism work, and how do I develop a custom skill?

Skills are developed as git branches following a naming convention (skill/name). A skill contains its implementation code and a manifest. To use a skill, a user merges that branch into their fork of NanoClaw. To develop a custom skill:

  1. Create a new git branch from main.
  2. Implement your skill logic in the appropriate directory (e.g., src/skills/).
  3. Define the skill's metadata and triggers.
  4. Test it locally. You can then maintain it privately or propose it to the upstream repository via a Pull Request for community adoption.
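The steps above can be sketched as a minimal skill skeleton. The manifest shape, field names, and entry-point signature are all assumptions for illustration; consult the repository's existing skill branches for the real convention.

```typescript
// Hypothetical skill skeleton (shape assumed, not NanoClaw's real API).
interface SkillManifest {
  name: string;
  category: "feature" | "utility" | "operational" | "container";
  triggers: string[]; // slash commands or message patterns that invoke the skill
}

const manifest: SkillManifest = {
  name: "calendar-fetch",
  category: "feature",
  triggers: ["/calendar"],
};

// Entry point invoked when one of the triggers matches an incoming message.
async function run(message: string): Promise<string> {
  return `calendar skill received: ${message}`;
}
```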

How is group isolation implemented technically? Are groups fully isolated from each other?

Isolation is implemented per group via:

  1. Separate Data Directories: Each group has its own data/sessions/{group_id}/ subdirectory containing its SQLite database and Claude session state.
  2. Independent Container Mounts: When a container is launched for a group, only that group's designated workspace directory is mounted.
  3. IPC Authentication: All inter-process communication (IPC) messages are tagged with a group ID. The core process validates that a request from Group A cannot perform actions intended for Group B.

Groups are completely isolated at the filesystem and process level. They share the same host machine and core NanoClaw process but cannot access each other's data, memory, or containers.
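The group-ID validation step can be sketched as below; the message shape and function name are illustrative assumptions.

```typescript
// Illustrative group-tagged IPC authorization check: a request from
// one group may only act on that same group's resources.
interface IpcMessage {
  groupId: string;     // group that originated the request
  action: string;      // requested operation
  targetGroup: string; // group whose data the action would touch
}

function authorize(msg: IpcMessage): boolean {
  return msg.groupId === msg.targetGroup;
}
```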

How does the scheduler perform, and what is the maximum number of concurrent tasks?

The scheduler is designed for personal automation, not high-volume enterprise workloads. It polls for due tasks every 60 seconds. Performance is bound by:

  1. Container Startup Time: The major latency is spawning a new container for each scheduled task execution (typically a few seconds).
  2. Claude API Rate Limits: Concurrent tasks are limited by the throughput of your Claude API key.

There is no hard-coded limit on the number of scheduled jobs, but practical constraints are system resources (CPU, memory) and API quotas. For most personal use (dozens of daily/weekly jobs), performance is more than adequate.

In what concrete ways is NanoClaw's codebase auditable?

Auditability is quantified:

  • Volume: 15 files, ~3,900 lines. A developer can realistically read the entire source in one sitting.
  • Dependencies: <10 production dependencies, minimizing the "trusted computing base."
  • Zero Configuration Files: Logic is in code, not scattered across dozens of config files (OpenClaw has 53+).
  • Transparent Architecture: The SPEC.md document explains the entire data flow, IPC, and security model.
  • Simple Core Loop: The main process loop (src/core/loop.ts) is straightforward, orchestrating messages, skills, and containers without deep abstraction layers.

This allows for genuine security reviews and a deep understanding of all system behaviors.