Common questions about security, permissions, and multi-model setups.
The Desktop app is designed as a “ready-to-use” product: it focuses on graphical install/upgrade, a visual settings panel, one-click integrations for common platforms, and safer defaults (tightened permissions, operation prompts, logging and alerts) so non-developers can get productive quickly. The open-source core is more like an “extensible foundation”: it focuses on a unified agent runtime, plugin/skill architecture, abstract interfaces for models and tools, task orchestration, and memory—making it ideal for secondary development and deep customization. A simple way to think about it: the Desktop app answers “how do I use it with less hassle”, while the open-source core answers “how do I extend it freely”. They can be combined: validate the workflow with the Desktop app first, then add your business capabilities by writing plugins on top of the core.
Yes—and it’s best to design restrictions as layered guardrails, not a single warning message. Layer 1 is command-level authorization: maintain an allowlist of permitted commands (or a denylist for dangerous ones), and separate read/write/execute permissions by scenario—for example, allow log fetching and read-only queries while forbidding high-risk operations like shutdown or deletion. Layer 2 is directory access control: restrict readable/writable paths to the project workspace or a designated sandbox to prevent accidental access to private files or system directories; hard-deny sensitive paths when needed. Layer 3 is interactive confirmation and auditing: require a second confirmation for side-effect actions (deletes, payments, emails, group messages) and record who triggered what and when for traceability. Layer 4 is least privilege and isolation: use a dedicated user, containers/VMs, and egress network policies to reduce blast radius. With these guardrails, even a model mistake is more likely to be blocked safely.
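The first three layers can be sketched as a single policy check that runs before any command executes. This is a minimal illustration under assumed values, not the app's actual mechanism: the allowlist, workspace path, and confirmation list are hypothetical.

```python
import shlex
from pathlib import Path

# Hypothetical policy values for illustration; a real deployment would load
# these from configuration rather than hard-coding them.
ALLOWED_COMMANDS = {"cat", "tail", "grep", "ls"}      # layer 1: command allowlist
WORKSPACE = Path("/srv/agent/workspace").resolve()    # layer 2: path scope
NEEDS_CONFIRMATION = {"rm", "shutdown", "mail"}       # layer 3: side-effect actions

def check_command(cmdline: str) -> str:
    """Return 'allow', 'confirm', or 'deny' for a proposed shell command."""
    parts = shlex.split(cmdline)
    if not parts:
        return "deny"
    binary = parts[0]
    if binary in NEEDS_CONFIRMATION:
        return "confirm"        # layer 3: require human sign-off before running
    if binary not in ALLOWED_COMMANDS:
        return "deny"           # layer 1: not on the allowlist
    # Layer 2: any absolute path argument must stay inside the workspace.
    for arg in parts[1:]:
        if arg.startswith("-"):
            continue            # skip flags
        p = Path(arg)
        if p.is_absolute() and not p.resolve().is_relative_to(WORKSPACE):
            return "deny"       # escapes the sandbox
    return "allow"
```

Layer 4 (dedicated users, containers, egress policies) lives outside the process, so it is not shown here; the point of the sketch is that each layer can veto independently.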
Yes—multiple models and providers are supported, and it’s recommended to treat them as a “capability pool” rather than locking into a single vendor. A common approach is task-based routing: route writing and summarization to models with long-form strengths, development and debugging to stronger coding models, and latency-sensitive interactions to faster models. You can also tier by cost: reserve expensive models for key steps and use cheaper ones for rough screening, outlining, or batch processing. Additionally, implement failover: when a provider rate-limits, times out, or becomes unstable, automatically switch to a fallback model so work continues. For consistency, centralize “provider / model / API key / quota” in configuration, and set different defaults and limits per environment (local, server, corporate network). This reduces cost and avoids vendor lock-in.
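Task-based routing with failover can be sketched in a few lines. The model names are placeholders and `call_model` stands in for a real provider SDK call (here it simulates one provider being down), so this is a shape to copy, not a working integration.

```python
import time

# Hypothetical routing table: each task maps to an ordered fallback chain.
MODELS = {
    "writing": ["long-context-model", "general-model"],
    "coding":  ["strong-code-model", "general-model"],
    "chat":    ["fast-model", "general-model"],
}

def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real provider SDK call; raises on failure."""
    if model == "strong-code-model":
        raise TimeoutError("provider timed out")   # simulate an outage
    return f"{model}: ok"

def route(task: str, prompt: str, retries_per_model: int = 2) -> str:
    """Try each model in the task's tier order, failing over on errors."""
    last_error = None
    for model in MODELS[task]:
        for _attempt in range(retries_per_model):
            try:
                return call_model(model, prompt)
            except (TimeoutError, ConnectionError) as exc:
                last_error = exc
                time.sleep(0)   # real code would back off exponentially
    raise RuntimeError(f"all models failed for task {task!r}") from last_error
```

In this sketch a `coding` request silently falls through to the fallback model when the preferred one times out, which is exactly the "work continues" behavior described above.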
This page is a bilingual static-site template. It mainly reproduces the color palette, layout, and interaction components of a reference site so you can preview locally and customize quickly. Content like download buttons, integration lists, and testimonials can be replaced or extended to match your actual product/project. If you plan to publish it as an official website, it’s recommended to: 1) verify brand information and link destinations (downloads, docs, community, GitHub) all point to your real maintained entry points; 2) add clearer boundaries for security and permission claims (default privileges, sandboxing, whether actions require confirmation) to avoid misleading users; 3) improve critical buttons with availability and fallback (mirrors, backup links, status hints). In short, it can serve as the “frontend shell” of an official site, but final content and navigation should reflect your real release channels.
On Windows, it’s recommended to use a signed installer or a portable ZIP build. Installers are best for everyday users: one-click install, Start Menu shortcuts, optional auto-start, and a clean uninstall entry. Portable builds work well inside a workspace and can move with a project. If Windows prompts for network/firewall access on first launch, allow only the current app and prefer “Private network” to avoid exposing ports on public networks. For updates, prefer an in-app update check and incremental replacement, and avoid overwriting user data directories. Keep model keys, connector credentials, and workspace indexes in user config locations (not the program directory) to make rollback safer. If the app won’t start or crashes immediately, check: 1) whether Windows Defender or another antivirus quarantined the executable; 2) whether runtime dependencies are missing or policy restrictions apply; 3) whether the workspace contains very long paths, permission-protected folders, or locked files; 4) whether proxy settings are causing first-time initialization to time out. Use application logs and Windows Event Viewer for diagnosis, and consider running as a standard user to surface real permission issues rather than masking them with admin rights.
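The advice to keep credentials and indexes in per-user config locations rather than the program directory can be illustrated with a small resolver. The app name and fallback paths are assumptions; `%APPDATA%` on Windows and `XDG_CONFIG_HOME` (with a `~/.config` fallback) elsewhere are the conventional per-user locations.

```python
import os
from pathlib import Path

def user_config_dir(app_name: str = "OpenClaw") -> Path:
    """Resolve a per-user config directory outside the program directory.

    On Windows this prefers %APPDATA%; on other platforms it follows the
    XDG convention with a ~/.config fallback. The app name is illustrative.
    """
    if os.name == "nt":
        base = Path(os.environ.get("APPDATA",
                                   Path.home() / "AppData" / "Roaming"))
    else:
        base = Path(os.environ.get("XDG_CONFIG_HOME",
                                   Path.home() / ".config"))
    return base / app_name
```

Because the resolved directory survives reinstalls and version rollbacks, replacing the program directory never touches keys or workspace indexes.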
Linux is often best treated as a “service” deployment. For a single machine, run it via systemd under a dedicated user; on servers, containerization (Docker/Podman) is recommended for clearer permission boundaries and reproducible environments. In all cases, follow least privilege: use a dedicated account, constrain read/write paths to mounted workspaces and cache directories, and avoid direct access to sensitive locations like /etc or other users’ home directories. Apply outbound network allowlists or proxy policies to reduce the risk of key leakage or unintended connections. For reliability, centralize provider settings, fallback routing, concurrency, and timeouts in configuration, and enable health checks with auto-restart. Prefer logging to stdout or a centralized logging system instead of writing sensitive data to world-readable files. If SELinux/AppArmor is enabled, ensure the right profiles/contexts are applied; otherwise you may see denials even when permissions look correct. If exposed beyond localhost, put it behind a reverse proxy with authentication (terminate TLS and handle login at the gateway), and do not expose internal ports directly to the Internet.
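A least-privilege systemd unit along the lines described above might look like the following sketch. The service name, binary path, and directories are illustrative assumptions, while the sandboxing directives (`ProtectSystem`, `ProtectHome`, `ReadWritePaths`, `NoNewPrivileges`, `PrivateTmp`) are standard systemd options.

```ini
# /etc/systemd/system/openclaw.service — illustrative unit; the service
# name, ExecStart path, and workspace paths are assumptions, not official.
[Unit]
Description=OpenClaw agent runtime
After=network-online.target
Wants=network-online.target

[Service]
User=openclaw
Group=openclaw
ExecStart=/opt/openclaw/bin/openclaw serve
WorkingDirectory=/srv/openclaw/workspace
Restart=on-failure
RestartSec=5

# Least privilege: read-only system view, writable only where needed.
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/srv/openclaw/workspace /var/cache/openclaw
NoNewPrivileges=true
PrivateTmp=true

[Install]
WantedBy=multi-user.target
```

With `ProtectSystem=strict`, everything outside `ReadWritePaths` is read-only to the service, which enforces the "constrain read/write paths" rule at the OS level rather than in application code.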
There are typically two ways to use it on Android. The first is “remote use”: run the main service/desktop app on a home PC or server, and use your phone as a client via a mobile browser or lightweight Web UI to open the console, monitor task progress and logs, start conversations, and approve sensitive actions. This is the most battery-friendly and stable option, and it keeps keys, plugins, and file capabilities in a controlled environment. The second is “local use”: start a slim runtime or proxy component on the phone using environments like Termux, but you must handle background restrictions, network switching, storage permissions, and battery policies—experience varies across devices and Android versions. It’s also not recommended to store high-privilege keys on a phone long-term; prefer short-lived tokens or QR-code login and keep sensitive credentials server-side. In both approaches, prioritize: 1) sessions and confirmation steps (to prevent accidental side effects); 2) encrypted transport and certificate validation; 3) offline and weak-network UX; 4) notifications and status subscriptions (notify on completion, failure, or when human approval is required). This makes your phone a portable control panel rather than the execution environment that carries all the risk.
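The idea of keeping long-lived keys server-side and handing the phone only short-lived tokens can be sketched with HMAC-signed expiring tokens. The token layout and field separator here are assumptions for illustration; the signing secret never leaves the server.

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"   # stays on the server; never shipped to the phone

def issue_token(user: str, ttl_seconds: int = 900, now=None) -> str:
    """Mint a token that identifies the user and expires after ttl_seconds."""
    expires = int((now if now is not None else time.time()) + ttl_seconds)
    payload = f"{user}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token: str, now=None) -> bool:
    """Reject tampered, malformed, or expired tokens."""
    try:
        user, expires_s, sig = token.rsplit(":", 2)
        expires = int(expires_s)
    except ValueError:
        return False
    payload = f"{user}:{expires_s}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False    # constant-time comparison against tampering
    return (now if now is not None else time.time()) < expires
```

A QR-code login flow would deliver such a token to the phone; when it expires, the phone re-authenticates instead of ever holding the provider API keys themselves.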
Ready to get started?
Choose a platform to install locally, then connect OpenClaw to your favorite chat app.