Liu Longterm Memory OpenClaw Skill

Ultimate AI agent memory system for Cursor, Claude, ChatGPT & Copilot. WAL protocol + vector search + git-notes + cloud backup. Never lose context again. Vib...

v1.0.4 · Updated 4 days ago

Installation

clawhub install liu-longterm-memory

Requires npm i -g clawhub




Elite Longterm Memory 🧠

The ultimate memory system for AI agents. Combines 6 layers into one bulletproof architecture.

Never lose context. Never forget decisions. Never repeat mistakes.

Architecture Overview

┌─────────────────────────────────────────────────────────────────┐
│                    ELITE LONGTERM MEMORY                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐              │
│  │   HOT RAM   │  │  WARM STORE │  │  COLD STORE │              │
│  │             │  │             │  │             │              │
│  │ SESSION-    │  │  LanceDB    │  │  Git-Notes  │              │
│  │ STATE.md    │  │  Vectors    │  │  Knowledge  │              │
│  │             │  │             │  │  Graph      │              │
│  │ (survives   │  │ (semantic   │  │ (permanent  │              │
│  │  compaction)│  │  search)    │  │  decisions) │              │
│  └─────────────┘  └─────────────┘  └─────────────┘              │
│         │                │                │                     │
│         └────────────────┼────────────────┘                     │
│                          ▼                                      │
│                  ┌─────────────┐                                │
│                  │  MEMORY.md  │  ← Curated long-term           │
│                  │  + daily/   │    (human-readable)            │
│                  └─────────────┘                                │
│                          │                                      │
│                          ▼                                      │
│                  ┌─────────────┐                                │
│                  │   Backup    │  ← zip / Git remote (optional) │
│                  │ zip / Gitee │                                │
│                  └─────────────┘                                │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

The 6 Memory Layers

Layer 1: HOT RAM (SESSION-STATE.md)

From: bulletproof-memory

Active working memory that survives compaction. Write-Ahead Log protocol.

# SESSION-STATE.md — Active Working Memory

## Current Task
[What we're working on RIGHT NOW]

## Key Context
- User preference: ...
- Decision made: ...
- Blocker: ...

## Pending Actions
- [ ] ...

Rule: Write BEFORE responding. The write is triggered by the user's input, not by the agent remembering to do it.
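The WAL rule can be sketched in Python. The `wal_append` helper and the comment format below are illustrative assumptions, not part of the skill's API; only the "write to disk before composing the reply" ordering is the point:

```python
from datetime import datetime, timezone
from pathlib import Path

STATE_FILE = Path("SESSION-STATE.md")  # assumed location: workspace root

def wal_append(section: str, entry: str) -> None:
    """Write the fact to disk BEFORE composing a reply (write-ahead log)."""
    text = STATE_FILE.read_text() if STATE_FILE.exists() else "# SESSION-STATE.md\n"
    header = f"## {section}"
    if header in text:
        head, _, tail = text.partition(header)   # insert right under the header
        text = f"{head}{header}\n- {entry}{tail}"
    else:
        text = f"{text}\n{header}\n- {entry}\n"
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    STATE_FILE.write_text(f"{text}\n<!-- last WAL write: {stamp} -->\n")

# Durability first, reply second:
wal_append("Key Context", "User preference: dark mode")
reply = "Noted: dark mode from here on."  # produced only AFTER the state is on disk
```

If the process crashes between the write and the reply, the fact survives; the reverse ordering is what loses context.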

Layer 2: WARM STORE (LanceDB Vectors)

From: lancedb-memory

Semantic search across all memories. Auto-recall injects relevant context.

# Auto-recall (happens automatically)
memory_recall query="project status" limit=5

# Manual store
memory_store text="User prefers dark mode" category="preference" importance=0.9

Layer 3: COLD STORE (Git-Notes Knowledge Graph)

From: git-notes-memory

Structured decisions, learnings, and context. Branch-aware.

# Store a decision (SILENT - never announce)
python3 memory.py -p $DIR remember '{"type":"decision","content":"Use React for frontend"}' -t tech -i h

# Retrieve context
python3 memory.py -p $DIR get "frontend"

Layer 4: CURATED ARCHIVE (MEMORY.md + daily/)

From: OpenClaw native

Human-readable long-term memory. Daily logs + distilled wisdom.

workspace/
├── MEMORY.md              # Curated long-term (the good stuff)
└── memory/
    ├── 2026-01-30.md      # Daily log
    ├── 2026-01-29.md
    └── topics/            # Topic-specific files

Layer 5: BACKUP (zip / Git Remote) — Optional

Cross-device sync and disaster recovery. Use the CLI commands:

zip Backup (simple and fast)

npx liu-longterm-memory backup
# → Creates memory-backup-20260404-153022.zip

npx liu-longterm-memory restore memory-backup-20260404-153022.zip
# → Restores from backup

Git Remote Backup (recommended; keeps version history)

npx liu-longterm-memory backup --git
# → Commits and pushes memory files to your Git remote

# Tip: Users in China may prefer Gitee (no proxy needed)
# git remote add origin https://gitee.com/your-username/my-memory

Benefits:

  • Version history: Track how decisions evolved over time
  • Cross-device sync: Pull on any machine
  • Free: GitHub and Gitee both offer free private repos
  • ๅ›ฝๅ†…็›ด่ฟž: Gitee ๆ— ้œ€ไปฃ็†

Layer 6: AUTO-EXTRACTION (LLM-Powered)

Automatic fact extraction from conversations using LLM. Two modes:

Mode A: Agent-Driven Extraction (zero dependencies, default)

No external service needed. The agent follows these rules to auto-extract facts:

| Detected Pattern | Auto-Action |
| --- | --- |
| User states a preference | Write to MEMORY.md ## Preferences + memory_store (importance=0.9) |
| User makes a decision | Write to MEMORY.md ## Decisions Log + Git-Notes |
| User gives a deadline/date | Write to SESSION-STATE.md ## Key Context |
| User mentions a tech stack | Write to MEMORY.md ## Projects |
| User corrects the agent | Update SESSION-STATE.md + memory/lessons.md |
| Session ends | Distill key facts into memory/YYYY-MM-DD.md |
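Mode A lives in the agent's instructions, not in code, but the trigger logic can be illustrated with a small Python sketch. The regex patterns below are my own rough approximations of the detection rules, not something shipped with the skill:

```python
import re

# Illustrative trigger patterns for Mode A; the real agent applies these
# rules from its system prompt. The patterns here are assumptions.
RULES = [
    ("preference", re.compile(r"\bI (?:prefer|like|always use|hate)\b", re.I)),
    ("decision",   re.compile(r"\b(?:let's use|we'll go with|decided on)\b", re.I)),
    ("deadline",   re.compile(r"\b(?:by|due|deadline)\b.*\b(?:monday|tomorrow|\d{4}-\d{2}-\d{2})\b", re.I)),
    ("correction", re.compile(r"\b(?:no,|that's wrong|actually,)\b", re.I)),
]

def classify(message: str) -> list[str]:
    """Return the fact types a message would trigger (empty list = nothing to store)."""
    return [kind for kind, pat in RULES if pat.search(message)]

print(classify("Let's use Tailwind, not vanilla CSS"))  # → ['decision']
print(classify("I prefer dark mode"))                   # → ['preference']
```

Each triggered type then maps to the write target in the table above (preference → MEMORY.md ## Preferences, and so on).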

Mode B: LLM Batch Extraction (ZhipuAI's free model, recommended)

Use ZhipuAI's free GLM-4-Flash model to batch-extract facts from conversation history. Zero cost.

Call the GLM-4-Flash chat completions endpoint with a system prompt:

"Extract structured facts from the conversation. Return JSON array: [{type, content, importance}]. Types: preference, decision, fact, deadline, correction."

Then write each extracted fact to the appropriate memory layer.

  • Free: GLM-4-Flash ๅฎŒๅ…จๅ…่ดน๏ผŒๅœจ https://bigmodel.cn/ ๆณจๅ†Œ่Žทๅ–ๅฏ†้’ฅ
  • Automatic: Extracts preferences, decisions, facts, deadlines
  • ๅ›ฝๅ†…็›ด่ฟž: No proxy needed
  • 80% token reduction vs raw conversation history

Quick Setup

1. Create SESSION-STATE.md (Hot RAM)

cat > SESSION-STATE.md << 'EOF'
# SESSION-STATE.md — Active Working Memory

This file is the agent's "RAM" — survives compaction, restarts, distractions.

## Current Task
[None]

## Key Context
[None yet]

## Pending Actions
- [ ] None

## Recent Decisions
[None yet]

---
*Last updated: [timestamp]*
EOF

2. Enable LanceDB (Warm Store) โ€” Optional

No API key required for core memory. Layers 1/3/4 (SESSION-STATE.md, Git-Notes, MEMORY.md) work without any key. LanceDB vector search is an optional enhancement.

Choose your embedding provider in your config file (~/.openclaw/openclaw.json or ~/.clawdbot/clawdbot.json):

Option A: ZhipuAI (recommended in China; generous free quota)

{
  "memorySearch": {
    "enabled": true,
    "provider": "openai-compatible",
    "baseURL": "https://open.bigmodel.cn/api/paas/v4",
    "model": "embedding-3",
    "apiKeyEnv": "ZHIPUAI_API_KEY",
    "sources": ["memory"],
    "minScore": 0.3,
    "maxResults": 10
  },
  "plugins": {
    "entries": {
      "memory-lancedb": {
        "enabled": true,
        "config": {
          "autoCapture": false,
          "autoRecall": true,
          "captureCategories": ["preference", "decision", "fact"],
          "minImportance": 0.7
        }
      }
    }
  }
}

Register at https://bigmodel.cn/ to get your free key, then set the ZHIPUAI_API_KEY environment variable.

Option B: Local Ollama (completely free, works offline)

{
  "memorySearch": {
    "enabled": true,
    "provider": "openai-compatible",
    "baseURL": "http://localhost:11434/v1",
    "model": "nomic-embed-text",
    "apiKeyEnv": "",
    "sources": ["memory"],
    "minScore": 0.3,
    "maxResults": 10
  }
}
# Install and pull embedding model
ollama pull nomic-embed-text

Option C: Any OpenAI-Compatible API (universal option)

Works with OpenAI, DeepSeek, Moonshot, Tongyi Qianwen (Qwen), or any service with an OpenAI-compatible /v1/embeddings endpoint.

{
  "memorySearch": {
    "enabled": true,
    "provider": "openai-compatible",
    "baseURL": "https://api.openai.com/v1",
    "model": "text-embedding-3-small",
    "apiKeyEnv": "OPENAI_API_KEY",
    "sources": ["memory"],
    "minScore": 0.3,
    "maxResults": 10
  }
}

Set the environment variable matching your apiKeyEnv config (e.g. OPENAI_API_KEY, DEEPSEEK_API_KEY, or DASHSCOPE_API_KEY).
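To sanity-check whichever provider you configure, you can call the /v1/embeddings endpoint directly. The sketch below assumes only the standard OpenAI-compatible request/response shape; base URL, model, and key come from your own config:

```python
import json, urllib.request

def embedding_request(base_url: str, model: str, text: str,
                      api_key: str = "") -> urllib.request.Request:
    """Build a standard OpenAI-compatible /embeddings request."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/embeddings",
        data=json.dumps({"model": model, "input": text}).encode(),
        headers=headers)

def parse_embedding(body: str) -> list[float]:
    """Extract the first embedding vector from the response JSON."""
    return json.loads(body)["data"][0]["embedding"]

# Example against a local Ollama (no key needed); swap baseURL/model per config:
# req = embedding_request("http://localhost:11434/v1", "nomic-embed-text", "project status")
# with urllib.request.urlopen(req) as resp:
#     vec = parse_embedding(resp.read().decode())
#     print(len(vec))  # embedding dimension
```

If this round-trip works from the command line, the same baseURL/model/key values should work in memorySearch.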

Option D: Disabled (file-only memory, no key required)

{
  "memorySearch": {
    "enabled": false
  }
}

Memory still works via SESSION-STATE.md, MEMORY.md, Git-Notes, and daily logs — just without vector semantic search.

3. Initialize Git-Notes (Cold Store)

cd ~/clawd
git init  # if not already
python3 skills/git-notes-memory/memory.py -p . sync --start

4. Verify MEMORY.md Structure

# Ensure you have:
# - MEMORY.md in workspace root
# - memory/ folder for daily logs
mkdir -p memory

5. (Optional) Setup Backup

# Option 1: zip backup (one command)
npx liu-longterm-memory backup

# Option 2: Git remote backup (recommended; keeps version history)
npx liu-longterm-memory backup --git

# Restore from backup
npx liu-longterm-memory restore memory-backup-20260404.zip

Agent Instructions

On Session Start

  1. Read SESSION-STATE.md — this is your hot context
  2. Run memory_search for relevant prior context
  3. Check memory/YYYY-MM-DD.md for recent activity

During Conversation (Auto-Extraction)

For every user message, scan for extractable facts and act BEFORE responding (WAL):

  1. Preference detected? → Write to SESSION-STATE.md + append to MEMORY.md ## Preferences + memory_store (importance=0.9)
  2. Decision made? → Write to SESSION-STATE.md + append to MEMORY.md ## Decisions Log + Git-Notes (SILENTLY)
  3. Deadline/date given? → Write to SESSION-STATE.md ## Key Context
  4. Tech stack mentioned? → Append to MEMORY.md ## Projects
  5. Correction received? → Update SESSION-STATE.md + log to memory/lessons.md
  6. Other concrete detail? → Write to SESSION-STATE.md ## Key Context
  7. Backup requested? (user says "backup", "备份", or "save memory") → Run npx liu-longterm-memory backup (zip) or npx liu-longterm-memory backup --git (Git)

On Session End

  1. Update SESSION-STATE.md with final state
  2. Distill session facts into MEMORY.md (if worth keeping long-term)
  3. Create/update daily log in memory/YYYY-MM-DD.md with:
    • Tasks completed
    • Decisions made
    • Lessons learned
    • Action items for next session
  4. (Optional) If significant changes were made, suggest: npx liu-longterm-memory backup
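Steps 1-3 can be sketched as a small helper. The function and its signature are hypothetical; the file path and section layout follow the daily-log format described above:

```python
from datetime import date
from pathlib import Path

# Hypothetical session-end distillation helper; section names follow the
# daily-log checklist above, the function itself is not part of the skill.
def write_daily_log(completed, decisions, lessons, next_items, root="memory"):
    day = date.today().isoformat()            # memory/YYYY-MM-DD.md
    path = Path(root) / f"{day}.md"
    path.parent.mkdir(parents=True, exist_ok=True)

    def sec(title, items):
        return f"## {title}\n" + "".join(f"- {i}\n" for i in items) + "\n"

    path.write_text(f"# {day}\n\n"
                    + sec("Tasks completed", completed)
                    + sec("Decisions", decisions)
                    + sec("Lessons", lessons)
                    + sec("Next session", next_items))
    return path
```

Anything worth keeping beyond the day then gets distilled from this file into MEMORY.md.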

Memory Hygiene (Weekly)

  1. Review SESSION-STATE.md — archive completed tasks
  2. Check LanceDB for junk: memory_recall query="*" limit=50
  3. Clear irrelevant vectors: memory_forget id=<id>
  4. Consolidate daily logs into MEMORY.md
  5. Run backup: npx liu-longterm-memory backup or npx liu-longterm-memory backup --git

The WAL Protocol (Critical)

Write-Ahead Log: Write state BEFORE responding, not after.

| Trigger | Action |
| --- | --- |
| User states preference | Write to SESSION-STATE.md → then respond |
| User makes decision | Write to SESSION-STATE.md → then respond |
| User gives deadline | Write to SESSION-STATE.md → then respond |
| User corrects you | Write to SESSION-STATE.md → then respond |

Why? If you respond first and crash/compact before saving, context is lost. WAL ensures durability.

Example Workflow

User: "Let's use Tailwind for this project, not vanilla CSS"

Agent (internal):
1. Write to SESSION-STATE.md: "Decision: Use Tailwind, not vanilla CSS"
2. Store in Git-Notes: decision about CSS framework
3. memory_store: "User prefers Tailwind over vanilla CSS" importance=0.9
4. THEN respond: "Got it — Tailwind it is..."

Supported Embedding Providers

Any service with an OpenAI-compatible /v1/embeddings endpoint works. Tested providers:

| Provider | baseURL | Model | Free Tier |
| --- | --- | --- | --- |
| ZhipuAI (Zhipu) | https://open.bigmodel.cn/api/paas/v4 | embedding-3 | 25M tokens free |
| Ollama (local) | http://localhost:11434/v1 | nomic-embed-text | Completely free, offline |
| OpenAI | https://api.openai.com/v1 | text-embedding-3-small | Paid |
| DeepSeek | https://api.deepseek.com/v1 | deepseek-embedding | Free tier available |
| Tongyi Qianwen (Qwen) | https://dashscope.aliyuncs.com/compatible-mode/v1 | text-embedding-v3 | Free tier available |

Maintenance Commands

# Check memory health
npx liu-longterm-memory status

# Create zip backup
npx liu-longterm-memory backup

# Git backup (commit + push)
npx liu-longterm-memory backup --git

# Restore from backup
npx liu-longterm-memory restore memory-backup-20260404.zip

# Audit vector memory
memory_recall query="*" limit=50

# Clear all vectors (nuclear option)
rm -rf ~/.openclaw/memory/lancedb/   # or ~/.clawdbot/memory/lancedb/
openclaw gateway restart

# Export Git-Notes
python3 memory.py -p . export --format json > memories.json

# Check disk usage
du -sh ~/.openclaw/memory/       # or ~/.clawdbot/memory/
wc -l MEMORY.md
ls -la memory/

Why Memory Fails

Understanding the root causes helps you fix them:

| Failure Mode | Cause | Fix |
| --- | --- | --- |
| Forgets everything | memory_search disabled | Enable memorySearch + configure embedding provider (see Setup) |
| Files not loaded | Agent skips reading memory | Add to AGENTS.md rules |
| Facts not captured | No auto-extraction | Ensure the agent follows the Auto-Extraction rules (Layer 6) |
| Sub-agents isolated | Don't inherit context | Pass context in the task prompt |
| Repeats mistakes | Lessons not logged | Write to memory/lessons.md |

Solutions (Ranked by Effort)

1. Quick Win: Enable memory_search

Enable semantic search with any OpenAI-compatible embedding provider:

openclaw configure --section web

This enables vector search over MEMORY.md + memory/*.md files. See the Enable LanceDB section above for provider configuration (ZhipuAI, Ollama, OpenAI, etc.).

2. LLM-Powered Auto-Extraction (Recommended)

Use the built-in auto-extraction rules (Layer 6) + optional LLM batch extraction with ZhipuAI's free GLM-4-Flash model. The agent scans each message for preferences, decisions, deadlines, and corrections, then writes them to the appropriate memory layer before responding. See Layer 6 for setup details.

3. Better File Structure (No Dependencies)

memory/
├── projects/
│   ├── strykr.md
│   └── taska.md
├── people/
│   └── contacts.md
├── decisions/
│   └── 2026-01.md
├── lessons/
│   └── mistakes.md
└── preferences.md

Keep MEMORY.md as a summary (<5KB), link to detailed files.
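One way to keep that size rule honest is a tiny check script; the helper below is illustrative, not part of the skill's CLI:

```python
from pathlib import Path

# Illustrative check for the "<5KB summary" guideline above.
def memory_md_ok(path: str = "MEMORY.md", limit: int = 5 * 1024) -> bool:
    """True if MEMORY.md is missing or within the size budget."""
    p = Path(path)
    return (not p.exists()) or p.stat().st_size <= limit

if memory_md_ok():
    print("MEMORY.md within budget")
else:
    print("MEMORY.md too large: consolidate details into topic files")
```

Run it as part of the weekly hygiene pass to catch a summary that has quietly turned into an archive.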

Immediate Fixes Checklist

| Problem | Fix |
| --- | --- |
| Forgets preferences | Add a ## Preferences section to MEMORY.md |
| Repeats mistakes | Log every mistake to memory/lessons.md |
| Sub-agents lack context | Include key context in the spawn task prompt |
| Forgets recent work | Strict daily file discipline |
| Memory search not working | Check that your configured env var is set |

Troubleshooting

Agent keeps forgetting mid-conversation:
→ SESSION-STATE.md not being updated. Check the WAL protocol.

Irrelevant memories injected:
→ Disable autoCapture, increase the minImportance threshold.

Memory too large, slow recall:
→ Run hygiene: clear old vectors, archive daily logs.

Git-Notes not persisting:
→ Run git notes push to sync with the remote.

memory_search returns nothing:
→ Verify the env var named by apiKeyEnv in your config is set
→ Verify memorySearch is enabled in openclaw.json (or clawdbot.json)
→ Verify baseURL and model are correct for your provider


🇨🇳 Guide for Users in China

Faster Installation

# Install via the npmmirror registry
npx --registry https://registry.npmmirror.com liu-longterm-memory init

# Or set the mirror globally
npm config set registry https://registry.npmmirror.com

Service Availability

| Service | Availability in China | Notes |
| --- | --- | --- |
| Core memory (SESSION-STATE.md, MEMORY.md, daily logs) | ✅ Fully available | Local files only, no network dependency |
| LanceDB + ZhipuAI | ✅ Fully available | Directly reachable from China, generous free quota |
| LanceDB + Ollama | ✅ Fully available | Runs locally, no network needed |
| LanceDB + DeepSeek | ✅ Fully available | DeepSeek API directly reachable from China |
| Git-Notes | ✅ Fully available | Local git operations |
| LLM fact extraction (GLM-4-Flash) | ✅ Fully available | Zhipu's free model, directly reachable |
| Backup (zip / Gitee) | ✅ Fully available | Local zip backup or Gitee remote sync |
| ClawdHub | ✅ China mirror available | Use mirror-cn.clawhub.com |

Recommended Configuration (best practices for China)

  1. Use ZhipuAI or Ollama as the embedding provider (see the Setup section)
  2. Use the built-in Auto-Extraction + GLM-4-Flash (free, directly reachable from China)
  3. Back up memory files with zip or a Gitee remote repository
  4. Install npm packages through a China mirror

Built by @NextXFrontier — Part of the Next Frontier AI toolkit

Statistics

Downloads 39
Stars 0
Current installs 0
All-time installs 0
Versions 5
Comments 0
Created Apr 4, 2026
Updated Apr 4, 2026

Latest Changes

v1.0.4 · Apr 4, 2026

v1.0.4: Rename all references from elite-longterm-memory to liu-longterm-memory
