ClawPad gives you a serious desktop AI workspace that starts in local mode instantly, then scales to cloud only when needed. No account required. No forced subscription. Deterministic local + cache + cloud routing keeps quality high and costs predictable.
Latest 3 uploads plus top 2 most-viewed videos
New Upload: Latest 20-slide walkthrough: free local intelligence, OpenClaw setup, memory, channels, MCP servers, and cloud-only-for-hard-task routing.
New Upload: Step-by-step quick setup showing OpenClaw integration with Gmail and Calendar MCP in a real working flow.
New Upload: Advanced local-model deep dive: private local workflows, stronger memory behavior, and practical cloud escalation only when needed.
Popular: Most-viewed upload on the channel: why local-first routing with OpenClaw + ClawPad can cut recurring AI costs significantly.
Popular: Second most-viewed upload: full product tour covering OpenClaw embedding, setup flow, and daily usage across model providers.
No download needed - try local AI directly in your browser
Discover the 18 key benefits of using ClawPad
ClawPad is your AI workspace and gateway. Instead of being trapped inside one vendor's chat app, you get a secure local brain that works across models and channels, retrieves the right context per project, and extends via an ecosystem of tools. The result: higher output, lower friction, and full control over your data.
Free for personal, business, and commercial use. Remote API usage is included and free. No subscription fees, no hidden costs, no premium tiers. You only pay providers directly when you choose cloud models.
Your long-term context lives on your device, encrypted and portable. Memory is scoped per workspace, not a leaky global soup. Full control: view, edit, delete, export your memories anytime. Your AI, your data, your control.
Switch between OpenAI, Anthropic, Google, and Grok seamlessly. Use the best model for the job: reasoning, coding, speed, cost. Standardized tool + memory layer means your workflow doesn't break when you change models. Model-agnostic. Workflow-consistent.
Bring the same ClawPad brain into Telegram, Discord, Slack, LINE, and WhatsApp. Conversations are not isolated islands; ClawPad keeps continuity across devices and channels. Your assistant follows you everywhere.
Unlike cloud-based solutions, all your conversations stay on YOUR device. API keys are encrypted at rest with machine-specific keys; no cloud, no keychain dependencies.
ClawPad doesn't just stuff the prompt. It retrieves the right context: repo files, docs, decisions, APIs, previous sessions. Workspace isolation prevents cross-project contamination. Always relevant. Never noisy.
Unlike web-based chat interfaces, ClawPad's memory system means your AI remembers your projects, preferences, and past conversations across sessions. Memory survives compaction and works across all channels.
Use your own Anthropic, OpenAI, Google, or Grok API key. Four providers, 27+ models. No middleman. Direct connection means better privacy and you see exactly what you pay for.
Not a slow web wrapper. ClawPad is a proper native application that's fast, responsive, and integrates with your system's features.
Create topics for different projects. Attach files, images, PDFs. Keep your AI conversations as organized as your file system.
10 stunning themes inspired by great artists: Aurora Teal, Starry Night, Klimt Golden, Monet Lilies, Paper Beige, and more. Switch instantly with no restart.
Scale the entire UI from 75% to 200% with Cmd/Ctrl +/-. Every element scales proportionally for comfortable viewing on any screen.
ClawPad's FTS5 knowledge engine indexes every conversation. When you ask a question, it automatically finds relevant context from ALL your topics, so past projects inform current work.
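A minimal sketch of how FTS5 indexing can surface cross-topic context. The table and column names here are illustrative, not ClawPad's actual schema:

```python
import sqlite3

# Illustrative schema only; not ClawPad's actual tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE messages USING fts5(topic, body)")
conn.executemany(
    "INSERT INTO messages VALUES (?, ?)",
    [
        ("auth-service", "Fixed the login token refresh bug"),
        ("billing", "Stripe webhook retries need idempotency keys"),
    ],
)
# A question mentioning "login" pulls context from the auth-service topic.
rows = conn.execute(
    "SELECT topic, body FROM messages WHERE messages MATCH ?", ("login",)
).fetchall()
print(rows)
```

The same MATCH query works across every indexed topic at once, which is what lets past projects feed the current one.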
Fine-tune how much context the AI uses with intuitive sliders. Control token budgets for history, knowledge engine, and memory. Balance intelligence vs. cost; everything is on by default.
ClawPad's architecture uses four core optimization layers: intent classification, context elision, answer-directed query distillation, and progressive conversation summarization. Supporting modules like semantic cache and prompt caching further reduce cloud spend.
Local MiniLM embeddings detect semantically identical questions: "fix login bug" matches "repair auth issue" at 85% similarity. 3-5x better cache hit rates than plain text matching. Zero API cost for repeated queries.
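The similarity-threshold idea can be sketched in plain Python. The tiny 3-d vectors and the cached answer text below are stand-ins for real 384-dimension MiniLM embeddings and real responses:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 3-d vectors stand in for 384-d MiniLM embeddings; the cached
# answer text is made up for illustration.
cache = {
    "fix login bug": ([0.9, 0.1, 0.2], "Check the token refresh path."),
}

def lookup(query_vec, threshold=0.85):
    for _, (vec, answer) in cache.items():
        if cosine(query_vec, vec) >= threshold:
            return answer  # cache hit: zero API cost
    return None  # miss: fall through to local/cloud routing

# A paraphrased query whose embedding lands nearby reuses the cached answer.
print(lookup([0.88, 0.12, 0.25]))
```

Plain text matching would miss a paraphrase entirely; embedding similarity is what buys the higher hit rate.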
Each topic remembers its own model. Use GPT for code generation, Claude for architecture, a cheaper model for quick questions. Switch with one click; context preserved, no configuration needed.
What is OpenClaw? OpenClaw is a local AI gateway that runs on your computer. It provides unified memory across all your AI interactions, whether from ClawPad desktop, Telegram, Discord, Slack, LINE, WhatsApp, or any connected channel. Think of it as your personal AI brain that never forgets.
OpenClaw is your personal AI gateway that keeps everything in sync
Start a conversation on your desktop. Organize by topics, attach files, get work done.
Your AI gateway stores context, memories, and conversation history locally.
Pick up where you left off on Telegram, Discord, Slack, LINE, or WhatsApp. Your AI knows what you discussed.
✨ Same conversation, different devices. Your AI remembers everything because OpenClaw keeps your context unified.
Access your AI assistant wherever you chat: same memory, any platform
2B+ users worldwide
Available Now: 2B+ users worldwide
Available Now: 1B+ users worldwide
Coming Soon: 200M+ users in Asia
Available Now: 1.3B+ users in China
Coming Soon: 150M+ active users
Available Now: 30M+ daily users
Coming Soon: 40M+ privacy users
No matter which messenger you use, OpenClaw keeps your AI context unified. Start a conversation on your desktop, continue on Telegram during your commute, switch to Discord at home, or pick up on LINE; your AI remembers everything.
Switch between the world's best AI models with one click
Anthropic's most capable AI (Recommended)
OpenAI's powerful model (Available)
Google's multimodal AI (Available)
xAI's conversational model (Available)
Per-topic model selection: Each topic remembers its model. Use GPT-5.2 Codex for code generation, Claude Opus for architecture decisions, a cheaper model for quick questions. Switch with one click; context preserved across all channels.
4 architecture-backed optimization layers, plus semantic cache and prompt caching for lower cloud-token usage.
Anthropic cache control reuses system prompts across turns
MiniLM semantic embeddings catch similar questions
Simple tasks stay local; cloud is used when complexity or tool needs exceed local limits
Intent classification, context elision, query distillation, and progressive summarization
Each message is classified as LIGHT/MEDIUM/HEAVY so context depth and budgets match task complexity.
Follow-up turns avoid re-injecting unchanged context, reducing repetitive token spend.
Local assist creates answer-directed directives that steer cloud responses toward concise, task-shaped outputs.
Long conversations are compacted into rolling summaries to preserve continuity without runaway token growth.
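The rolling-summary compaction can be sketched as follows. The window size, the summary format, and the stubbed summarize() are assumptions for illustration; a real implementation would ask a model to compact the overflow turns:

```python
# WINDOW and the summary format are assumptions for illustration.
WINDOW = 4  # recent turns kept verbatim

def summarize(turns, prior_summary):
    # Stub: a real implementation would ask a local model to compact turns.
    joined = "; ".join(t["text"] for t in turns)
    return f"{prior_summary} | {joined}".strip(" |")

def compact(history, summary=""):
    # Fold overflow turns into the rolling summary; keep the recent window.
    if len(history) <= WINDOW:
        return history, summary
    overflow, recent = history[:-WINDOW], history[-WINDOW:]
    return recent, summarize(overflow, summary)

history = [{"text": f"turn {i}"} for i in range(6)]
history, summary = compact(history)
print(len(history), summary)
```

Because each compaction folds into the prior summary, token usage stays bounded no matter how long the conversation runs.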
MiniLM embeddings detect similar questions and reuse prior answers with zero API cost on cache hits.
Provider prompt-caching features reduce repeated system/context token charges on long threads.
Per-model context/output limits and token counting keep payloads within safe, efficient bounds.
Context headers and history are injected only when needed, reducing repeated prompt overhead.
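A toy illustration of the per-model budget enforcement described above. The word-count tokenizer and drop-oldest policy are stand-ins, not ClawPad's actual token accounting:

```python
def count_tokens(text):
    # Rough stand-in for a real tokenizer: ~1 token per whitespace word.
    return len(text.split())

def fit_to_budget(messages, budget):
    # Drop the oldest messages until the payload fits the model's limit.
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > budget:
        kept.pop(0)
    return kept

msgs = ["a b c", "d e", "f g h i"]
print(fit_to_budget(msgs, 7))
```

In practice the oldest turns are the ones already covered by the rolling summary, so dropping them first loses the least information.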
Default local runtime (`local_openai` + qwen3-4b-q4), gate-scored local/cloud routing, and semantic cache orchestration built directly into ClawPad.
Local models run in-process via llama-cpp-python (no external server dependency). This reduces moving parts and improves stability for daily use.
Each query is classified (LIGHT/MEDIUM/HEAVY), scored for context-fit/tool-need confidence, then routed to local or cloud. Topic-level model overrides always win over global defaults.
Simple requests use local models for speed and privacy. Hard or tool-intensive requests auto-escalate to cloud models for quality and capability continuity.
Local-first handling + semantic cache reduce paid API calls while preserving cloud quality for complex work.
In-process local inference avoids network round trips for lightweight tasks and routine follow-up questions.
More requests can stay on-device when routed locally, keeping sensitive project context under user control.
If a request exceeds local capability, ClawPad escalates to cloud and preserves model-interface consistency instead of producing unstable output.
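The gate-scored routing described above can be sketched like this. The keyword heuristics, word-count cutoff, and tier names map to the LIGHT/MEDIUM/HEAVY classification, but the specific rules here are illustrative, not ClawPad's actual classifier:

```python
# Illustrative heuristics only; not ClawPad's actual gate scoring.
HEAVY_HINTS = ("refactor", "architecture", "analyze repo")
TOOL_HINTS = ("search the web", "read file", "run")

def classify(query):
    q = query.lower()
    if any(h in q for h in HEAVY_HINTS):
        return "HEAVY"
    if len(q.split()) > 30 or any(h in q for h in TOOL_HINTS):
        return "MEDIUM"
    return "LIGHT"

def route(query, topic_override=None):
    if topic_override:  # per-topic model overrides always win
        return topic_override
    tier = classify(query)
    return "local" if tier == "LIGHT" else "cloud"

print(route("what's 2+2?"))                     # stays local
print(route("refactor the auth architecture"))  # escalates to cloud
print(route("anything", topic_override="gpt"))  # override wins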
Min RAM: 24GB • Effective Context: 6,144 • Size: ~18.3GB
Min RAM: 16GB • Effective Context: 6,144 • Size: ~11.6GB
Min RAM: 16GB • Effective Context: 6,144 • Size: ~8.4GB
Min RAM: 10GB • Effective Context: 6,144 • Size: ~4.7GB
Min RAM: 10GB • Effective Context: 6,144 • Size: ~4.4GB
Min RAM: 10GB • Effective Context: 6,144 • Size: ~4.4GB
Min RAM: 10GB • Effective Context: 6,144 • Size: ~4.1GB
Min RAM: 6GB • Effective Context: 3,072 • Size: ~2.5GB
Min RAM: 6GB • Effective Context: 3,072 • Size: ~2.2GB
Min RAM: 6GB • Effective Context: 6,144 • Size: ~2.0GB
Min RAM: 6GB • Effective Context: 6,144 • Size: ~2.5GB
Min RAM: 4GB • Effective Context: 3,072 • Size: ~0.9GB
Min RAM: 4GB • Effective Context: 1,536 • Size: ~1.0GB
All local models run with compact-context optimization and deterministic handoff: local-first, semantic cache second, cloud path when complexity/tool requirements exceed local limits. Tool-capable local models include GLM-4.7 Flash, GPT-OSS 20B, Qwen3 14B, and Qwen3 8B; reliable MCP local execution is currently validated on Qwen3 14B.
Run Phi-3.5, Qwen2.5, Llama 3.2, or Gemma 2 directly in your browser via WebGPU. No download, no API key, completely free.
Agent Jobs is ClawPad's always-on execution layer: scheduled work, manual run-now dispatch, run history, structured outputs, and health-aware recovery built into the product core.
Two execution loops run together: full proactive analysis on a 5-minute cadence and a lightweight 30-second heartbeat pulse for fast due-job detection and health transitions.
Due-job execution honors explicit job_id to prevent wrong-job dispatch, and Run Now triggers real manual engine execution (not detail-only navigation).
Job definitions are separated from run instances. Each run tracks status, trigger source, timeline updates, and canonical structured_report payloads for dashboard/report rendering.
Manage jobs through active, archived, and trash states with restore/clone/delete flows, plus run-aware timelines and analytics surfaces in dashboard views.
Required MCP dependencies are checked before execution. Jobs can skip safely when dependencies are unhealthy, retry on later pulses, and avoid silent degradation.
HTTP hooks support wake, agent turn, proactive cycle, pulse, manual job run, and status audit, enabling CI/cron/monitoring systems to trigger and observe Agent Jobs externally.
Remote control: drive Agent Jobs via /api/proactive/* and /api/hooks/job/run, monitor with /api/hooks/status and /health, and keep desktop + channel execution context in sync.
Mapped to `REQUIREMENTS.md` + `ARCHITECTURE.md`, then validated by ClawPad's RRG go/no-go gate before shipping.
Global setting inheritance with per-topic overrides and runtime model validation avoids broken provider/model combinations in channel flows.
`local_openai` + qwen3-4b-q4 runs out of the box. Cloud models are optional upgrades for heavy reasoning and tool-intensive tasks.
Desktop + Telegram + Discord + Slack + LINE + WhatsApp use one shared memory and channel identity mapping, so context remains coherent across surfaces.
Fast/API/full tier matrices, release regression suite, and artifact preflight checks run as the official ship/no-ship decision path.
Every release DMG is Developer ID signed, notarized, stapled, checksummed, and aligned with updater asset naming conventions.
Website download links, GitHub release assets, and updater resolution are preflight-validated to prevent broken update prompts.
25 powerful features for serious AI work, 100% free
Intent classification, context elision, query distillation, and progressive summarization are the core architecture layers. Semantic cache and prompt caching further lower cloud-token usage.
ClawPad routes each query through a token-efficient pipeline: local model first for fast/private tasks, semantic cache layer for repeat or similar questions, then cloud model for heavy/tool-intensive work. Per-topic model overrides are still respected.
One-click install MCP servers from the MCP Server Store. Filesystem, Git, Memory, Web Fetch, YouTube, Playwright, SQLite, Notion, Slack, Home Assistant, and more. Every server gives your AI new real-world tools.
ClawPad's superpower. Every message includes topic name, recent history, and relevant files, so the AI always knows exactly what you're working on.
Switch between Claude, GPT, Gemini, and Grok in one click. Global defaults propagate across channels, while per-topic model overrides remain isolated and enforced.
Each topic is an isolated workspace with its own history, files, and context. Work on multiple projects simultaneously without mixing context.
Click any message to copy it. Click again to deselect. Select multiple messages and they concatenate on your clipboard. Code blocks get their own copy button too.
Start a chat, switch topics, and both continue streaming. Work on multiple things at once with visual indicators for active responses.
Attach files to topics. Track inputs and AI-generated outputs with version history. Drag & drop to add, click header to browse.
FTS5 full-text search indexes every conversation. The AI finds relevant context from ALL topics; your past projects automatically inform current work.
Your AI remembers facts, preferences, and decisions across sessions. Survives conversation compaction, works across all channels. Auto-enabled with zero setup.
GLM-4.7 Flash, GPT-OSS 20B, Qwen3 14B/8B/4B, Qwen2.5 7B, Qwen2.5 Coder 7B, Qwen2.5 1.5B, Mistral 7B v0.3, Phi-3.5 Mini, Llama 3.2 3B, Gemma 3 4B, and SmolLM2 1.7B.
Seamless sync across all channels via OpenClaw Gateway. Start on desktop, continue on any connected channel. Your AI remembers everything.
Local SQLite database, encrypted API keys, no cloud storage. Your data stays on your device. Works offline for browsing history.
Aurora Teal, Starry Night, Klimt Golden, Frozen Steel, Monet Lilies, Paper Beige, and more. Real-time switching with desktop-grade color clarity.
Built with PyQt6, not Electron. Fast startup, smooth scrolling, proper macOS integration. Feels like a real app because it is one.
Database backup/restore, connection management, API key configuration, device info, and automatic updates. You're in charge.
Full syntax highlighting, markdown rendering with native table support, per-block copy buttons, and toggle copy on messages. Markdown tables render with themed colors and alignment.
AI responses render as rich interactive HTML: dashboards, styled data tables, color-coded charts, and SVG graphics right inside chat bubbles. Not screenshots; real, live rendering.
Never truncated again. When responses hit the token limit, ClawPad auto-continues in a new bubble. Up to 5 continuations for 80K+ token responses, zero manual intervention.
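The auto-continuation loop might look roughly like this. The stubbed generate() and the "length"/"stop" finish-reason convention are assumptions for illustration, not ClawPad's internals:

```python
MAX_CONTINUATIONS = 5  # matches the documented continuation cap

def generate(prompt, continuation=False):
    # Stub model call: returns a text chunk plus a finish reason, where
    # "length" means the provider stopped at its output-token limit.
    generate.calls += 1
    finish = "length" if generate.calls < 3 else "stop"
    return f"chunk{generate.calls}", finish
generate.calls = 0

def complete(prompt):
    # Keep requesting continuations while the model stops at its limit,
    # up to MAX_CONTINUATIONS extra bubbles beyond the first.
    chunks, finish = [], "length"
    while finish == "length" and len(chunks) <= MAX_CONTINUATIONS:
        text, finish = generate(prompt, continuation=bool(chunks))
        chunks.append(text)
    return chunks

result = complete("write a long report")
print(result)
```

Each chunk would render as its own bubble; the loop exits as soon as the model finishes naturally.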
OpenClaw's agentic loop gives your AI real tools: file search, code reading, structured analysis. Every tool call shows as a visible card. Visual control with permission presets and per-tool allow/ask/block.
Full internationalization with 12 languages: English, Korean, Japanese, Chinese (Simplified), Chinese (Traditional), Spanish, French, German, Portuguese, Italian, Russian, and Arabic. Switch instantly in Settings.
Full parity with Claude Code's tool suite: Edit, MultiEdit, LS, NotebookEdit, TodoWrite, and TodoRead. Your AI agent has the same capabilities as a professional coding assistant.
Comprehensive keyboard shortcuts in the menu bar for power users. Navigate topics, send messages, toggle panels, and manage conversations without touching the mouse.
Terminal-style input history with Up/Down arrow keys. Recall and re-send previous messages instantly, just like in a Unix shell or IDE terminal.
Everything you need to start now: local-first AI, serious desktop workflow controls, and zero lock-in licensing.
Install, open, and chat immediately in local mode. No API key, no account setup, no cloud dependency required.
Use shell-style shortcuts, live similar-prompt suggestions, and cross-channel prompt history search from one input system.
Run scheduled and manual AI jobs with lifecycle control, progress visibility, health checks, and channel-connected execution.
REST + WebSocket APIs, webhooks, and job endpoints let you integrate ClawPad into your own internal and external systems.
Build on top of ClawPad: 180 endpoints across 15 categories for conversations, files, agent jobs, UI automation, MCP servers, and gateway control.
Full API reference is available at /api/.
Full HTTP API on port 18790 with real-time WebSocket events. The API auto-starts by default (remote_api_autostart=true), and can also run in explicit API/headless modes for automation.
Trigger wake/agent/proactive flows via /api/hooks/* and audit delivery in webhook status logs for production automation.
Create/filter/select/schedule agent jobs through /api/proactive/* while keeping desktop + channel execution context aligned.
# Standard launch (API auto-starts by default)
./ClawPad.app/Contents/MacOS/ClawPad
# Headless automation runtime
./ClawPad.app/Contents/MacOS/ClawPad --stealth --api --api-port 18790
# Health check
curl http://127.0.0.1:18790/health
import requests
BASE = "http://127.0.0.1:18790"
# Ask ClawPad to run a proactive cycle now
hook = requests.post(f"{BASE}/api/hooks/proactive").json()
print(hook["success"], hook["data"]["run_id"])
# Read dashboard state
state = requests.get(f"{BASE}/api/proactive/dashboard/state").json()
print("cards:", len(state["data"]["cards"]))
# Inspect webhook execution log
status = requests.get(f"{BASE}/api/hooks/status").json()
print("recent hooks:", len(status["data"]["recent"]))
Track AI usage, token consumption, and conversation patterns across all your projects.
Build CI/CD pipelines that use AI for code review, documentation, and testing.
Connect ClawPad to your existing tools β Notion, Jira, Slack, or your own apps.
Chain multiple AI calls with different models, contexts, and file inputs for complex tasks.
ClawPad installs OpenClaw automatically: just download and go. Or try it free in your browser first →
Choose your platform and download ClawPad. Free for all use: personal, business, and commercial.
On first launch, ClawPad starts with its embedded local model by default. Add cloud API keys only when you want higher-end reasoning or provider-specific models.
Local default: qwen3-4b-q4 • Optional: sk-ant-... / sk-... / AIza...
Optional cloud keys →
Use ClawPad on desktop and mobile PWA with one shared brain. Desktop stays passive by default and serves remote sessions only when you request a ticket.
Send /connect to your linked bot.
Send /ticket to your ClawPad bot.
Open /app/ and paste the ticket.
That's it. Desktop + mobile PWA + channels now share the same AI memory and topic context, including voice input and TTS read-aloud.
Now you can:
Having issues? Here are quick solutions
Try these steps:
Verify your key:
Connection checklist:
Check sync status:
Your data location:
~/ClawPad/
Contains: database, projects, settings. API keys are encrypted with your machine's unique ID.
Fresh start:
~/ClawPad/ folder
Switch themes:
Scale the entire UI:
Cmd/Ctrl + to zoom in (up to 200%)
Cmd/Ctrl - to zoom out (down to 75%)
Cmd/Ctrl 0 to reset to 100%
Speak, transcribe, and read aloud:
Ctrl+Shift+Space to toggle playback and Ctrl+Shift+S to toggle auto-read mode
Still stuck? For feedback, questions, and inquiries, post in the ClawPad YouTube Community.
Free for personal, business, and commercial use. Remote APIs are included at no extra cost. No account required.
v0.7.5 - macOS (Apple Silicon + Intel via Rosetta)
~352 MB
Platform-focused release: stronger local-first routing, Prompt Intelligence, Agent Jobs maturity, and production-grade API workflows.
Input-level command shortcuts, live similar-prompt suggestions, and searchable prompt history now work as a single productivity layer.
Routing and model inheritance were tightened so local-first behavior remains predictable while still escalating cleanly for complex tasks.
Dashboard lifecycle behavior, scheduling reliability, and job state observability were upgraded for real daily operational workflows.
The extension catalog experience now reflects broader official/community coverage for practical integrations across work and personal tasks.
Core workflows and views were normalized to respect theme colors and improve contrast/clarity across diverse visual themes.
Back navigation and prompt suggestion behavior were refined to reduce friction and make fast context switching reliable.
API and webhook surfaces were further aligned with production automation use cases and release validation standards.
Packaging, signing, and release checks were reinforced so downloadable builds remain stable and trustable for end users.
Full-featured AI workspace with topics, files, themes
One-click install: files, git, browser, database, web fetch & more
Local AI gateway that syncs everything
Telegram, Discord, Slack, LINE, and WhatsApp, all set up inside ClawPad
Your AI remembers across all platforms
Local STT mic input, Edge TTS voices, auto-read mode, and adaptive character-level highlighting
FTS5 search indexes all conversations across topics
Persistent facts and preferences across sessions
Art-inspired themes including Aurora Teal, Starry Night, Klimt Golden, and Paper Beige
Scale UI 75%-200% with Cmd/Ctrl +/-
Charts, tables, and rich content rendered natively in chat
Unlimited AI response length with automatic continuation
Beautiful themed tables with colors and alignment
Real tools with visible tool cards and permission controls
Full i18n: English, Korean, Japanese, Chinese (Simplified/Traditional), Spanish & more
Edit, MultiEdit, LS, NotebookEdit, TodoWrite & TodoRead