upstash-vector-db-skills AI Agent Skill
View Source: gocallum/nextjs16-agent-skills
Safe Installation

```bash
npx skills add gocallum/nextjs16-agent-skills --skill upstash-vector-db-skills
```
Links
- Docs: https://upstash.com/docs/vector
- Getting Started: https://upstash.com/docs/vector/overall/getstarted
- Semantic Search Tutorial: https://upstash.com/docs/vector/tutorials/semantic_search
- Namespaces: https://upstash.com/docs/vector/features/namespaces
- Embedding Models: https://upstash.com/docs/vector/features/embeddingmodels
- MixBread AI: https://www.mixbread.ai/ (preferred embedding provider)
Quick Setup
1. Create Vector Index (Upstash Console)
- Go to Upstash Console
- Create Vector Index: name, region (closest to app), type (Dense for semantic search)
- Select embedding model: MixBread AI recommended (or use Upstash built-in models)
- Copy `UPSTASH_VECTOR_REST_URL` and `UPSTASH_VECTOR_REST_TOKEN` to `.env`
2. Install SDK

```bash
pnpm add @upstash/vector
```

3. Environment

```bash
UPSTASH_VECTOR_REST_URL=your_url
UPSTASH_VECTOR_REST_TOKEN=your_token
```

Code Examples
Initialize Client (Node.js / TypeScript)

```typescript
import { Index } from "@upstash/vector";

const index = new Index({
  url: process.env.UPSTASH_VECTOR_REST_URL,
  token: process.env.UPSTASH_VECTOR_REST_TOKEN,
});
```

Upsert Documents (Auto-Embed)
When using an embedding model in the index, text is embedded automatically:

```typescript
// Single document
await index.upsert({
  id: "doc-1",
  data: "Upstash provides serverless vector database solutions.",
  metadata: { source: "docs", category: "intro" },
});

// Batch
await index.upsert([
  { id: "doc-2", data: "Vector search powers semantic similarity.", metadata: { source: "docs" } },
  { id: "doc-3", data: "MixBread AI provides high-quality embeddings.", metadata: { source: "blog" } },
]);
```

Query / Semantic Search
```typescript
// Semantic search with auto-embedding
const results = await index.query({
  data: "What is semantic search?",
  topK: 5,
  includeMetadata: true,
});

results.forEach((result) => {
  console.log(`ID: ${result.id}, Score: ${result.score}, Metadata:`, result.metadata);
});
```

Using Namespaces (Data Isolation)
Namespaces partition a single index into isolated subsets. Useful for multi-tenant or multi-domain apps.
```typescript
// Upsert in namespace "blog"
await index.namespace("blog").upsert({
  id: "post-1",
  data: "Next.js tutorial for Vercel deployment",
  metadata: { author: "user-123" },
});

// Query only the "blog" namespace
const blogResults = await index.namespace("blog").query({
  data: "Vercel deployment",
  topK: 3,
  includeMetadata: true,
});

// List all namespaces
const namespaces = await index.listNamespaces();
console.log(namespaces);

// Delete a namespace
await index.deleteNamespace("blog");
```

Full Semantic Search Example (Vercel Function)
```typescript
// api/search.ts (Vercel Serverless Function)
import { Index } from "@upstash/vector";

export const config = {
  runtime: "nodejs", // "edge" is also possible, but Edge Functions use the Request/Response API instead of (req, res)
};

const index = new Index({
  url: process.env.UPSTASH_VECTOR_REST_URL,
  token: process.env.UPSTASH_VECTOR_REST_TOKEN,
});

export default async function handler(req, res) {
  if (req.method !== "POST") {
    return res.status(405).json({ error: "Method not allowed" });
  }

  const { query, namespace = "", topK = 5 } = req.body;

  try {
    const searchIndex = namespace ? index.namespace(namespace) : index;
    const results = await searchIndex.query({
      data: query,
      topK,
      includeMetadata: true,
    });
    return res.status(200).json({ results });
  } catch (error) {
    console.error("Search error:", error);
    return res.status(500).json({ error: "Search failed" });
  }
}
```

Index Operations
```typescript
// Reset (clear all vectors in the index)
await index.reset();

// Or reset a specific namespace
await index.namespace("old-data").reset();

// Delete a single vector
await index.delete("doc-1");

// Delete multiple vectors
await index.delete(["doc-1", "doc-2", "doc-3"]);
```

Embedding Models
Available in Upstash
- `BAAI/bge-large-en-v1.5` (1024 dim, best performance, ~64.23 MTEB score)
- `BAAI/bge-base-en-v1.5` (768 dim, good balance)
- `BAAI/bge-small-en-v1.5` (384 dim, lightweight)
- `BAAI/bge-m3` (1024 dim, sparse + dense hybrid)
Recommended: MixBread AI
If using MixBread as your embedding provider:
- Create a MixBread API key at https://www.mixbread.ai/
- When creating your Upstash index, select MixBread as the embedding model.
- MixBread handles tokenization and semantic quality automatically.
- No extra setup needed in your code; use `index.upsert()` / `index.query()` with text directly.
Best Practices
For Vercel Deployment
- Store credentials in Vercel Environment Variables (project settings or `.env.local`).
- Use Edge Functions or Serverless Functions for low-latency access.
- Implement request rate limiting to stay within Upstash quotas.
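The rate-limiting advice above can be sketched with a minimal in-memory sliding-window limiter. This is illustrative only: serverless instances on Vercel do not share memory, so a production deployment would need a shared store such as Upstash Redis (for example via the `@upstash/ratelimit` package); the limit and window values below are arbitrary.

```typescript
// Minimal in-memory sliding-window rate limiter (sketch, not production-ready:
// state lives in a single process and is lost between serverless invocations).
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if a request from `id` is allowed at time `now`.
  allow(id: string, now: number = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    // Keep only timestamps still inside the window.
    const recent = (this.hits.get(id) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(id, recent);
      return false; // over the limit for this window
    }
    recent.push(now);
    this.hits.set(id, recent);
    return true;
  }
}

// Example: at most 10 search requests per 10 seconds per client.
const limiter = new SlidingWindowLimiter(10, 10_000);
```

A handler would call `limiter.allow(clientId)` before hitting the vector index and return a 429 response when it yields `false`.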
Namespace Strategy
- Use namespaces to isolate data by tenant, domain, or use case.
- Example: `namespace("user-123")` for per-user search.
- Clean up old namespaces to avoid storage bloat.
Query Performance
- Keep `topK` reasonable (5–10 is typically sufficient).
- Use metadata filtering to pre-filter results if possible.
- Upstash is eventually consistent; expect slight delays after upserts.
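Because of that eventual consistency, one hedged pattern is to poll until a freshly upserted id becomes visible before serving queries against it. `fetchById` below is a placeholder for your own existence check (for example, a wrapper around the index's fetch call); the retry count and delay are illustrative defaults, not recommendations from Upstash.

```typescript
// Poll until an upserted vector id becomes visible, smoothing over
// eventual consistency after writes. `fetchById` is a stand-in for
// an existence check against the index.
async function waitUntilVisible(
  fetchById: (id: string) => Promise<boolean>,
  id: string,
  { retries = 5, delayMs = 300 } = {}
): Promise<boolean> {
  for (let attempt = 0; attempt < retries; attempt++) {
    if (await fetchById(id)) return true; // visible: safe to query
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false; // still not visible after all retries
}
```

In practice you would only do this when a read must immediately follow a write; most applications can simply tolerate the short delay.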
Error Handling
```typescript
try {
  const results = await index.query({
    data: userQuery,
    topK: 5,
    includeMetadata: true,
  });
} catch (error) {
  if (error.status === 401) {
    console.error("Invalid credentials");
  } else if (error.status === 429) {
    console.error("Rate limited");
  } else {
    console.error("Query error:", error);
  }
}
```

Common Patterns
RAG (Retrieval Augmented Generation)
- Upsert documents / knowledge base into Upstash.
- On user query, retrieve top-k similar docs via semantic search.
- Pass retrieved docs + user query to LLM for better context.
```typescript
const docs = await index.query({ data: userQuestion, topK: 3, includeMetadata: true });
const context = docs.map((d) => d.metadata?.text).join("\n");
// Pass context to the LLM
```

Multi-Tenant Search
Use namespaces to isolate each tenant's vectors:
```typescript
const userNamespace = `tenant-${userId}`;
await index.namespace(userNamespace).upsert({ id, data, metadata });
// Queries against this namespace only see that tenant's data
```

Batch Indexing
For bulk imports, upsert in batches:
```typescript
const batchSize = 100;
for (let i = 0; i < documents.length; i += batchSize) {
  const batch = documents.slice(i, i + batchSize);
  await index.upsert(batch);
  console.log(`Indexed batch ${i / batchSize + 1}`);
}
```

Troubleshooting
- No results returned: Ensure documents are indexed and embedding model is active.
- Slow queries: Check quota limits; consider upgrading plan or reducing dataset size.
- Stale data: Upstash is eventually consistent; wait 1–2 seconds before querying new inserts.
- Namespace not working: Ensure the namespace exists (it is created on the first upsert) or use the default namespace `""`.
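The first two checks above can be made mechanical by interpreting the index statistics. The `vectorCount` / `pendingVectorCount` fields assumed here are modeled on the shape returned by the SDK's `index.info()`; treat that shape as an assumption to verify against the Upstash docs.

```typescript
// Interpret index statistics to explain why a query returned no results.
// The IndexInfo shape is an assumption modeled on index.info().
interface IndexInfo {
  vectorCount: number;        // vectors already searchable
  pendingVectorCount: number; // vectors upserted but not yet indexed
}

function diagnoseEmptyResults(info: IndexInfo): string {
  if (info.vectorCount === 0 && info.pendingVectorCount === 0) {
    return "Index is empty: upsert documents first.";
  }
  if (info.pendingVectorCount > 0) {
    return "Vectors are still pending: wait briefly and retry.";
  }
  return "Index has data: check namespace, filters, and embedding model.";
}
```

Calling this with the live stats after a failed search narrows the problem to either missing data, indexing lag, or a query-side issue.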
How to use this skill
Install upstash-vector-db-skills by running `npx skills add gocallum/nextjs16-agent-skills --skill upstash-vector-db-skills` in your project directory. The skill file will be downloaded from GitHub and placed in your project.
No configuration needed. Your AI agent (Claude Code, Cursor, Windsurf, etc.) automatically detects installed skills and uses them as context when generating code.
The skill enhances your agent's understanding of upstash-vector-db-skills, helping it follow established patterns, avoid common mistakes, and produce production-ready output.
What you get
Skills are plain-text instruction files — not executable code. They encode expert knowledge about frameworks, languages, or tools that your AI agent reads to improve its output. This means zero runtime overhead, no dependency conflicts, and full transparency: you can read and review every instruction before installing.
Compatibility
This skill works with any AI coding agent that supports the skills.sh format, including Claude Code (Anthropic), Cursor, Windsurf, Cline, Aider, and other tools that read project-level context files. Skills are framework-agnostic at the transport level — the content inside determines which language or framework it applies to.