#43

Global ranking · of 600 skills

openclaw-secure-linux-cloud AI Agent Skill

View source code: xixu-me/skills

Critical

Installation

npx skills add xixu-me/skills --skill openclaw-secure-linux-cloud

56.9K

Installs

Overview

Use this skill for the conservative "deploy first, expose later" pattern for
OpenClaw on a cloud server.

Default to a private control plane:

  • Harden the Linux host before exposing anything.
  • Keep the gateway bound to 127.0.0.1.
  • Reach the Control UI through an SSH tunnel first.
  • Keep token authentication, pairing, and sandboxing enabled.
  • Start with a narrow tool profile and loosen only with an explicit need.
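As a sketch, the "tunnel first" step above comes down to a single SSH local forward from the laptop to the loopback-bound gateway. The hostname and user below are illustrative placeholders; the gateway port 18789 matches the one named in the red-flag list later in this document.

```shell
# Local machine: forward local port 18789 to the gateway, which listens
# only on 127.0.0.1:18789 on the server. -N opens no remote shell.
ssh -N -L 18789:127.0.0.1:18789 admin@cloud-host.example.com

# While the tunnel is up, open the Control UI in a local browser:
#   http://127.0.0.1:18789
# No inbound port on the server is ever opened to the public internet.
```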

This skill is for secure Linux cloud hosting. If the user only wants the
fastest generic OpenClaw install on a local machine, prefer the official
OpenClaw onboarding docs instead of forcing this flow.

Open references/REFERENCE.md when you need the
command matrix, baseline config shape, checklist, or access-path comparison.

When To Use

Use this skill when the user mentions any of the following:

  • OpenClaw on a cloud server, VM, or other Linux host
  • Secure self-hosting, hardening, or "run it privately"
  • Podman, loopback binding, SSH tunneling, or remote Control UI access
  • Tailscale vs reverse proxy for OpenClaw
  • Pairing, sandboxing, token auth, or locked-down tool permissions
  • Reviewing whether an existing OpenClaw host is too exposed

Do not use this skill for:

  • General Linux hardening with no OpenClaw component
  • Purely local, single-machine onboarding where remote access and
    remote-host hardening are irrelevant
  • Non-Linux hosting, unless the user explicitly wants this Linux-first
    pattern adapted

Workflow

1. Classify the request

Put the task in one of these buckets before giving detailed guidance:

  1. Fresh deploy: the user wants to stand up OpenClaw securely on a Linux
    cloud host from scratch.
  2. Hardening review: the user already has OpenClaw running and wants to
    reduce exposure or audit risky defaults.
  3. Access-model decision: the user is choosing between SSH tunneling,
    Tailscale, or a reverse proxy.

2. Start from the secure baseline

Unless the user clearly asks for something else, recommend this baseline:

  • Harden the Linux host first: updates, SSH keys, SSH lock-down, and a
    default-deny inbound firewall matched to the distro.
  • Run OpenClaw under rootless Podman rather than as a root-owned long-lived
    process.
  • Keep the gateway on loopback only.
  • Keep the Control UI private and access it through an SSH tunnel.
  • Require token authentication.
  • Keep pairing enabled for inbound messaging channels.
  • Start with a minimal tool set and sandbox sessions by default.

Treat these as explicit red flags:

  • Binding the gateway to 0.0.0.0
  • Opening port 18789 to the public internet
  • Turning on broad runtime, filesystem, automation, or browser access by
    default
  • Leaving ~/.openclaw readable by other local users
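A quick server-side check for these red flags, assuming the gateway port is 18789 and the config directory is ~/.openclaw as above:

```shell
# The gateway must appear bound to 127.0.0.1, never 0.0.0.0 or a
# public address. No output here means it is not listening at all.
ss -tlnp | grep 18789

# The config directory should be readable by the service user only.
chmod 700 ~/.openclaw
ls -ld ~/.openclaw    # expect permissions drwx------
```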

3. Separate local and server actions

Always distinguish between:

  • Local machine actions: SSH key generation, tunnel setup, browser access
  • Server actions: Linux hardening, Podman install path, OpenClaw service
    setup, config permissions, service restarts

Do not blur the two execution contexts together. The user should be able to
tell which commands run on their laptop and which run on the Linux host.
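For example, SSH key setup splits cleanly across the two contexts. Hostname and user are placeholders; the sshd service name differs by distro.

```shell
# --- Local machine ---
ssh-keygen -t ed25519 -f ~/.ssh/openclaw_host          # generate a key pair
ssh-copy-id -i ~/.ssh/openclaw_host.pub admin@cloud-host.example.com

# --- Server ---
# After key login works, disable password authentication.
sudo sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' \
  /etc/ssh/sshd_config
sudo systemctl reload sshd   # service may be named "ssh" on Debian/Ubuntu
```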

4. Ask only for blocking facts

Only stop for missing facts that change the safe path, such as:

  • Linux distro and host access details when package-manager or firewall
    commands matter
  • Whether OpenClaw is already installed
  • Whether the user truly needs repeated remote private access or public access
  • Whether an existing deployment is already reachable from the internet

If a detail is not safety-critical, make the reasonable secure assumption and
state it.

5. Use the access escalation ladder

Recommend remote access in this order:

  1. SSH tunnel: default for first deployment and personal use
  2. Tailscale: next step when the user needs repeated private access across
    trusted devices
  3. Reverse proxy: only when the user explicitly needs public exposure and
    accepts the extra hardening burden

If the user asks for Tailscale or reverse proxy, still explain why the loopback
binding and private-first model remain the baseline.
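If the user does move to step 2 of the ladder, a Tailscale sketch can keep the loopback binding intact by proxying the loopback port to the tailnet only. The `tailscale serve` syntax has changed across versions, so treat this as an assumption and verify with `tailscale serve --help` on the installed version.

```shell
# Server: join the tailnet, then expose the loopback-bound gateway
# to trusted tailnet devices only. The public firewall stays closed.
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
sudo tailscale serve 18789   # proxies 127.0.0.1:18789 over the tailnet
```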

Output Expectations

For a fresh deployment, provide:

  • A short architecture summary
  • Local-vs-server steps
  • A conservative config baseline
  • A pre-launch checklist
  • A short "what not to expose" warning

For a hardening review, provide:

  • The likely risks in the current setup
  • A prioritized remediation sequence
  • Any immediate exposure concerns to fix before anything else

For an access-path decision, provide:

  • A recommendation
  • Why it is the lowest-risk fit
  • What extra safeguards are required if the user chooses a broader exposure
    model

Common Mistakes

  • Treating OpenClaw like a normal public web app on day one
  • Assuming auth alone replaces network boundaries
  • Turning on more tool power before the user has a clear workflow that needs it
  • Disabling pairing just to save time during early setup
  • Skipping follow-up audits after changing config or sandbox settings

Reference Usage

Use references/REFERENCE.md when you need:

  • The cross-distro hardening flow and Debian/Ubuntu example commands
  • The Podman-based OpenClaw setup outline
  • The baseline config skeleton
  • The pre-launch checklist
  • The day-to-day audit commands
  • The SSH tunnel vs Tailscale vs reverse-proxy comparison


Security Audit

ath: High
socket: Safe · Warnings: 0 · Score: 90
snyk: Medium
zeroleaks: Safe · Score: 93

How To Use This Skill

1

Install openclaw-secure-linux-cloud by running npx skills add xixu-me/skills --skill openclaw-secure-linux-cloud in your project directory. The skill file is downloaded from GitHub and placed in your project.

2

No configuration required. Your AI agent (Claude Code, Cursor, Windsurf, etc.) automatically detects installed skills and uses them as context during code generation.

3

The skill improves your agent's understanding of openclaw-secure-linux-cloud, helping it follow established patterns, avoid common mistakes, and produce production-ready output.

What You Get

Skills are plain-text instruction files, not executable code. They encode expert knowledge about frameworks, languages, or tools that your AI agent reads to improve its output. That means zero runtime overhead, no dependency conflicts, and full transparency: you can read and review every instruction before installing.

Compatibility

This skill works with any AI coding agent that supports the skills.sh format, including Claude Code (Anthropic), Cursor, Windsurf, Cline, Aider, and other tools that read project-scoped context files. Skills are framework-agnostic at the transport level; the content determines which language or framework they apply to.

Data sourced from the skills.sh registry and GitHub. Install counts and security audits are updated regularly.
