Ollama Memory Embeddings OpenClaw Skill
Configure OpenClaw memory search to use Ollama as the embeddings server (OpenAI-compatible /v1/embeddings) instead of the built-in node-llama-cpp local GGUF loading. Includes interactive model selection and optional import of an existing local embedding GGUF into Ollama.
Installation

```shell
clawhub install ollama-memory-embeddings
```

Requires `npm i -g clawhub`.

1.5k downloads · 4 stars · 4 current installs (5 all-time) · 5 versions
Ollama Memory Embeddings
This skill configures OpenClaw memory search to use Ollama as the embeddings
server via its OpenAI-compatible /v1/embeddings endpoint.
Embeddings only. This skill does not affect chat/completions routing —
it only changes how memory-search embedding vectors are generated.
What it does
- Installs this skill under `~/.openclaw/skills/ollama-memory-embeddings`
- Verifies Ollama is installed and reachable
- Lets the user choose an embedding model:
  - `embeddinggemma` (default; closest to the OpenClaw built-in)
  - `nomic-embed-text` (strong quality, efficient)
  - `all-minilm` (smallest/fastest)
  - `mxbai-embed-large` (highest quality, larger)
- Optionally imports an existing local embedding GGUF into Ollama via `ollama create` (currently detects embeddinggemma, nomic-embed, all-minilm, and mxbai-embed GGUFs in known cache directories)
- Normalizes model names (handles the `:latest` tag automatically)
- Updates `agents.defaults.memorySearch` in the OpenClaw config (surgical: only touches keys this skill owns):
  - `provider = "openai"`
  - `model = <selected model>:latest`
  - `remote.baseUrl = "http://127.0.0.1:11434/v1/"`
  - `remote.apiKey = "ollama"` (required by the client, ignored by Ollama)
- Performs a post-write config sanity check (reads the file back and validates the JSON)
- Optionally restarts the OpenClaw gateway (detects available restart methods: `openclaw gateway restart`, systemd, launchd)
- Optionally reindexes memory during install (`openclaw memory index --force --verbose`)
- Runs a two-step verification:
  - Checks the model exists in `ollama list`
  - Calls the embeddings endpoint and validates the response
- Adds an idempotent drift-enforcement command (`enforce.sh`)
- Adds an optional config-drift auto-healing watchdog (`watchdog.sh`)
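Concretely, the config write amounts to a scoped merge under `agents.defaults.memorySearch`. A minimal sketch of that step with `jq`, run against a throwaway file for illustration (the real installer targets `~/.openclaw/openclaw.json` or the `--openclaw-config` path, backs the file up first, and its exact write mechanism may differ):

```shell
# Throwaway config file standing in for ~/.openclaw/openclaw.json.
cfg=$(mktemp)
echo '{"agents":{"defaults":{}}}' > "$cfg"

# Surgical write: only the keys this skill owns are set.
jq '.agents.defaults.memorySearch = {
      provider: "openai",
      model: "embeddinggemma:latest",
      remote: {
        baseUrl: "http://127.0.0.1:11434/v1/",
        apiKey: "ollama"
      }
    }' "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"

# Post-write sanity check: read back and validate.
jq -e '.agents.defaults.memorySearch.provider == "openai"' "$cfg" >/dev/null \
  && echo "config ok"
```

Because the merge only assigns the `memorySearch` subtree, any sibling keys under `agents.defaults` are left untouched.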
Install

```shell
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh
```

From this repository:

```shell
bash skills/ollama-memory-embeddings/install.sh
```
Non-interactive usage

```shell
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \
  --non-interactive \
  --model embeddinggemma \
  --reindex-memory auto
```

Bulletproof setup (install watchdog):

```shell
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \
  --non-interactive \
  --model embeddinggemma \
  --reindex-memory auto \
  --install-watchdog \
  --watchdog-interval 60
```

Note: In non-interactive mode, `--import-local-gguf auto` is treated as `no` (safe default). Use `--import-local-gguf yes` to explicitly opt in.
Options:

- `--model <id>`: one of `embeddinggemma`, `nomic-embed-text`, `all-minilm`, `mxbai-embed-large`
- `--import-local-gguf <auto|yes|no>`: default `no` (safer default; opt in with `yes`)
- `--import-model-name <name>`: default `embeddinggemma-local`
- `--restart-gateway <yes|no>`: default `no` (restart only when explicitly requested)
- `--skip-restart`: deprecated alias for `--restart-gateway no`
- `--openclaw-config <path>`: config file path override
- `--install-watchdog`: install launchd drift auto-heal watchdog (macOS)
- `--watchdog-interval <sec>`: watchdog interval (default 60)
- `--reindex-memory <auto|yes|no>`: memory rebuild mode (default `auto`)
- `--dry-run`: print planned changes and commands; make no modifications
Verify

```shell
~/.openclaw/skills/ollama-memory-embeddings/verify.sh
```

Use `--verbose` to dump the raw API response on failure:

```shell
~/.openclaw/skills/ollama-memory-embeddings/verify.sh --verbose
```
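The second verification step boils down to checking that the endpoint returns an OpenAI-shaped embeddings payload. Here is that check sketched against a canned response (a live run would `curl` `http://127.0.0.1:11434/v1/embeddings` instead, and the exact predicate inside `verify.sh` may differ):

```shell
# Canned response in the OpenAI-compatible shape returned by
# /v1/embeddings (vector truncated for brevity).
resp='{"object":"list","data":[{"object":"embedding","index":0,"embedding":[0.01,-0.02,0.03]}],"model":"embeddinggemma:latest"}'

# Validation: the first embedding must be a non-empty numeric vector.
echo "$resp" \
  | jq -e '.data[0].embedding | length > 0 and all(type == "number")' >/dev/null \
  && echo "embedding ok"
```

`jq -e` makes the exit status reflect the predicate, so the same one-liner works as a pass/fail check in a script.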
Drift enforcement and auto-heal
Manually enforce the desired state (safe to run repeatedly):

```shell
~/.openclaw/skills/ollama-memory-embeddings/enforce.sh \
  --model embeddinggemma \
  --openclaw-config ~/.openclaw/openclaw.json
```

Check for drift only:

```shell
~/.openclaw/skills/ollama-memory-embeddings/enforce.sh \
  --check-only \
  --model embeddinggemma
```

Run the watchdog once (check + heal):

```shell
~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh \
  --once \
  --model embeddinggemma
```

Install the watchdog via launchd (macOS):

```shell
~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh \
  --install-launchd \
  --model embeddinggemma \
  --interval-sec 60
```
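In spirit, a check-only drift test reduces to comparing the live config value against the desired one and reporting, without writing. A hand-rolled sketch against a throwaway file (`enforce.sh`'s actual checks cover more keys than the model name):

```shell
# Throwaway config standing in for ~/.openclaw/openclaw.json.
cfg=$(mktemp)
echo '{"agents":{"defaults":{"memorySearch":{"provider":"openai","model":"embeddinggemma:latest"}}}}' > "$cfg"

want="embeddinggemma:latest"
# `// empty` yields an empty string when the key is missing entirely.
have=$(jq -r '.agents.defaults.memorySearch.model // empty' "$cfg")

if [ "$have" = "$want" ]; then
  echo "in sync"
else
  echo "drift detected: have '$have', want '$want'" >&2
fi
```

Because the check only reads, it is safe to run from a watchdog on any interval.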
GGUF detection scope
The installer searches for embedding GGUFs matching these patterns in known cache directories (`~/.node-llama-cpp/models`, `~/.cache/node-llama-cpp/models`, `~/.cache/openclaw/models`):

- `*embeddinggemma*.gguf`
- `*nomic-embed*.gguf`
- `*all-minilm*.gguf`
- `*mxbai-embed*.gguf`

Other embedding GGUFs are not auto-detected. You can always import manually:

```shell
ollama create my-model -f /path/to/Modelfile
```
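For a manual import, the Modelfile can be as small as a single `FROM` line pointing at the GGUF. A sketch (the GGUF filename below is a placeholder; point it at your own file):

```shell
# A one-line Modelfile is enough to import an embedding GGUF.
dir=$(mktemp -d)
cat > "$dir/Modelfile" <<'EOF'
FROM /path/to/embeddinggemma.Q8_0.gguf
EOF

# Then register it under a name of your choosing (requires Ollama):
#   ollama create my-model -f "$dir/Modelfile"
cat "$dir/Modelfile"
```

After `ollama create`, the model shows up in `ollama list` like any pulled model and can be selected via `--model`.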
Notes
- This does not modify OpenClaw package code; it only updates user config.
- A timestamped backup of the config is written before changes.
- If no local GGUF exists, install proceeds by pulling the selected model from Ollama.
- Model names are normalized with the `:latest` tag for consistent Ollama interaction.
- If the embedding model changes, rebuild/re-embed existing memory vectors to avoid retrieval mismatch across incompatible vector spaces.
- With `--reindex-memory auto`, the installer reindexes only when the effective embedding fingerprint changed (provider, model, baseUrl, apiKey presence).
- Drift checks require a non-empty apiKey but do not require the literal `"ollama"` value.
- Config backups are created only when a write is needed.
- Legacy schema fallback is supported: if `agents.defaults.memorySearch` is absent, the enforcer reads known legacy paths and mirrors writes to preserve compatibility.
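The "effective embedding fingerprint" mentioned in the notes can be pictured as a value derived from exactly those fields; the installer's actual scheme may differ, but a sketch of the idea:

```shell
# Throwaway config with the fields the fingerprint covers.
cfg=$(mktemp)
echo '{"agents":{"defaults":{"memorySearch":{"provider":"openai","model":"embeddinggemma:latest","remote":{"baseUrl":"http://127.0.0.1:11434/v1/","apiKey":"ollama"}}}}}' > "$cfg"

# provider | model | baseUrl | apiKey presence (the key's value is excluded).
fingerprint=$(jq -r '.agents.defaults.memorySearch
    | [.provider, .model, .remote.baseUrl,
       (.remote.apiKey != null and .remote.apiKey != "")]
    | map(tostring) | join("|")' "$cfg")

echo "$fingerprint"
# → openai|embeddinggemma:latest|http://127.0.0.1:11434/v1/|true
```

Comparing the fingerprint before and after a config write tells the installer whether the embedding space changed, i.e. whether a reindex is actually needed under `--reindex-memory auto`.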
Author

vidarbrekke (@vidarbrekke)
Latest Changes
v1.0.4 · Feb 13, 2026
No changes detected in this version (1.0.4): no file changes between the previous and latest versions, and no updates to features, documentation, or behavior.