Globally Distributed • Serverless • Production-Ready

Coming Soon: Vector Database Built for Modern AI

Power your AI applications with enterprise-grade vector search. Semantic similarity, intelligent recommendations, and anomaly detection at global scale with sub-50ms latency.


Enterprise Features

Everything You Need for Production AI

From semantic search to recommendation engines, our vector database handles billions of embeddings with consistent sub-50ms query performance.

Lightning Fast Queries

Sub-50ms p99 latency for vector similarity search across billions of embeddings. Optimized HNSW indexing ensures consistent performance at any scale.

Global Distribution

Deployed across 300+ data centers worldwide. Your vectors are automatically replicated to serve users from the nearest edge location.

Zero Infrastructure

Fully serverless and managed. No servers to provision, no indices to tune. Scale from zero to billions of vectors automatically.

Flexible Dimensions

Support for 128 to 1536 dimensions. Works seamlessly with OpenAI, Cohere, Hugging Face, and custom embedding models.

Advanced Filtering

Combine vector similarity with metadata filtering. Query by tags, categories, timestamps, and custom attributes for precise results.

Built-in Analytics

Real-time insights into query performance, index utilization, and embedding distributions. Optimize your AI applications with data-driven decisions.

Cutting-edge vector search for enterprise-grade performance

Our production-ready infrastructure delivers lightning-fast similarity search at global scale with HNSW indexing, automatic optimization, and support for all major embedding models.

<50 ms

Query latency (p99) at any scale.

300+

Edge locations serving globally.

1536

Max dimensions for large models.

99.99%

Uptime SLA guarantee.

Simple Pricing

Pay Only for What You Use

No upfront costs. No infrastructure management. Scale from zero to billions of vectors with transparent, usage-based pricing.

Free

$0

Forever free tier

  • Up to 10M vectors
  • 100k queries/month
  • Global distribution
  • Community support
Coming Soon
MOST POPULAR

Pro

$49

per month + usage

  • Up to 100M vectors
  • 5M queries/month included
  • Advanced analytics
  • Priority support
Coming Soon

Enterprise

Custom

Tailored to your needs

  • Unlimited vectors
  • Custom query limits
  • 99.99% SLA
  • Dedicated support
Contact Sales

Additional usage: $0.40 per million queries · $5 per billion stored vectors per month

Questions & Answers

Frequently Asked Questions

Everything you need to know about our vector database

What is a vector database and why do I need one?

A vector database stores and queries high-dimensional vectors (embeddings) that represent data like text, images, and audio. Unlike traditional databases that match exact values, vector databases find similar items based on mathematical distance.

You need a vector database if you're building:

  • Semantic search that understands meaning, not just keywords
  • Recommendation engines for products, content, or users
  • RAG (Retrieval-Augmented Generation) applications with LLMs
  • Image or audio similarity search
  • Anomaly detection and classification systems

Example: Instead of searching for exact keywords, a vector database can find "Paris travel tips" when you search for "French vacation advice" because it understands semantic meaning.
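To make that concrete in code, here is a minimal sketch using the open-source sentence-transformers library (the all-MiniLM-L6-v2 model mentioned below) rather than our SDK; a vector database runs the same comparison across billions of stored embeddings:

# Minimal illustration of semantic similarity with embeddings.
# Uses the open-source sentence-transformers library (all-MiniLM-L6-v2, 384d);
# a vector database performs the same comparison at scale, against an index.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode(["Paris travel tips", "French vacation advice"])

# Cosine similarity: close to 1.0 means the phrases point the same way in meaning.
a, b = vectors[0], vectors[1]
score = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"similarity: {score:.2f}")  # semantically related phrases score high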

How does pricing work? Are there hidden costs?

100% transparent pricing with no hidden fees. You pay for two things:

  1. Storage: $5 per billion vectors per month (prorated daily)
  2. Queries: $0.40 per million similarity searches

Example Calculation:

  • 50M vectors stored: $0.25/month
  • 500k queries/month: $0.20/month
  • Total: $0.45/month
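If you want to sanity-check your own numbers, the math is simple enough to script. The snippet below is an illustrative estimator built from the two published rates, not an official billing tool:

# Illustrative monthly cost estimate from the published rates:
# $5 per billion stored vectors per month, $0.40 per million queries.
STORAGE_RATE = 5.00 / 1_000_000_000   # dollars per stored vector per month
QUERY_RATE = 0.40 / 1_000_000         # dollars per query

def estimate_monthly_cost(vectors_stored: int, queries_per_month: int) -> float:
    return vectors_stored * STORAGE_RATE + queries_per_month * QUERY_RATE

print(estimate_monthly_cost(50_000_000, 500_000))  # 0.45, matching the example above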

What's included at no extra cost:

  • Global edge distribution
  • HTTPS API access
  • Automatic backups & replication
  • Web dashboard & analytics
  • Community support

Which embedding models and dimensions are supported?

Our vector database supports 128 to 1536 dimensions, working with virtually any embedding model:

Popular Models Supported:

  • OpenAI (text-embedding-3-small: 1536d)
  • OpenAI (text-embedding-3-large: 3072d, reducible to ≤1536d via the dimensions parameter)
  • Cohere (embed-english-v3.0: 1024d)
  • Hugging Face (all-MiniLM-L6-v2: 384d)
  • Custom models via API

Distance Metrics:

  • Cosine Similarity (recommended)
  • Euclidean Distance (L2)
  • Dot Product (inner product)

Best practice: Choose cosine similarity for most text embeddings, as it's normalized and works well with models like OpenAI and Cohere. Use Euclidean for absolute distance measurements.
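For intuition, here is what each metric actually computes, shown with plain NumPy on a pair of toy vectors:

# The three supported metrics, computed directly with NumPy.
import numpy as np

a = np.array([0.1, 0.3, 0.5])
b = np.array([0.2, 0.1, 0.4])

cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))  # higher = more similar
euclidean = np.linalg.norm(a - b)                                # lower = more similar
dot_product = np.dot(a, b)                                       # higher = more similar

# For unit-normalized embeddings (which many models produce), cosine similarity
# and dot product rank results identically.
print(cosine, euclidean, dot_product)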

How fast are queries and what's the latency?

Our vector database delivers sub-50ms p99 latency for similarity searches, regardless of database size:

Performance Benchmarks:

  • 10M vectors: ~15ms average, 35ms p99
  • 100M vectors: ~25ms average, 45ms p99
  • 1B+ vectors: ~35ms average, 50ms p99

Why so fast?

  • HNSW (Hierarchical Navigable Small World) indexing algorithm
  • Automatic query optimization and index tuning
  • Global edge network with 300+ locations
  • Queries served from the nearest data center to your users
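If you want a feel for why HNSW stays fast as the corpus grows, you can reproduce the core idea locally with the open-source hnswlib package. This is a local illustration only, not our hosted API, which builds and tunes these indexes for you:

# Local illustration of approximate nearest-neighbor search with HNSW,
# using the open-source hnswlib package on random data.
import hnswlib
import numpy as np

dim, num_vectors = 384, 100_000
data = np.float32(np.random.random((num_vectors, dim)))

index = hnswlib.Index(space="cosine", dim=dim)
index.init_index(max_elements=num_vectors, ef_construction=200, M=16)
index.add_items(data, np.arange(num_vectors))

index.set_ef(50)  # query-time accuracy/speed trade-off
labels, distances = index.knn_query(data[:1], k=10)  # top-10 neighbors in milliseconds
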
Can I filter results by metadata?

Yes! Advanced metadata filtering is fully supported. You can attach custom metadata to each vector and filter results during queries:

Example metadata:

{
  "category": "electronics",
  "price": 299,
  "brand": "Apple",
  "in_stock": true,
  "tags": ["smartphone", "5G"],
  "published_date": "2024-01-15"
}

Supported filter operations:

  • Equality: category == "electronics"
  • Comparison: price < 500
  • Boolean: in_stock == true
  • Array contains: tags CONTAINS "5G"
  • Date range: published_date > "2024-01-01"

Combine multiple filters with AND/OR operators to create complex queries while maintaining sub-50ms performance.
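To make the combination concrete, here is a small self-contained sketch of the filter-then-rank idea on toy data; in a real deployment this happens server-side against the index rather than in application code:

# Conceptual sketch: filter on metadata, then rank the survivors by similarity.
import numpy as np

items = [
    {"id": "a", "vector": np.array([0.1, 0.9]), "category": "electronics", "price": 299},
    {"id": "b", "vector": np.array([0.8, 0.2]), "category": "electronics", "price": 899},
    {"id": "c", "vector": np.array([0.2, 0.8]), "category": "furniture",   "price": 120},
]
query = np.array([0.15, 0.85])

def cosine(x, y):
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Metadata filter: category == "electronics" AND price < 500
candidates = [i for i in items if i["category"] == "electronics" and i["price"] < 500]

# Vector similarity ranking on the filtered candidates
ranked = sorted(candidates, key=lambda i: cosine(query, i["vector"]), reverse=True)
print([i["id"] for i in ranked])  # -> ['a']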

How do I migrate my existing vector data?

We provide multiple migration options to make switching to our vector database seamless:

Bulk Upload API:

Upload millions of vectors in batches of up to 1,000 per request. Automatic retry logic and progress tracking are included.
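A batched upload loop is mostly chunking plus retry logic. In the sketch below, upload_batch() is a hypothetical placeholder for the real bulk-upload call; the batching and backoff pattern is the part that carries over:

# Sketch of a batched bulk upload with simple retry and progress reporting.
import time

MAX_BATCH = 1000  # vectors per request, per the bulk upload limit

def upload_batch(batch):
    """Hypothetical placeholder: send one batch via the bulk upload API."""
    raise NotImplementedError("replace with the real API call")

def bulk_upload(vectors, max_retries=3):
    for start in range(0, len(vectors), MAX_BATCH):
        batch = vectors[start:start + MAX_BATCH]
        for attempt in range(max_retries):
            try:
                upload_batch(batch)
                break
            except Exception:
                if attempt == max_retries - 1:
                    raise
                time.sleep(2 ** attempt)  # exponential backoff before retrying
        print(f"uploaded {min(start + MAX_BATCH, len(vectors))}/{len(vectors)} vectors")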

CSV/JSON Import:

Upload vector data from CSV or JSON files via our web dashboard. Supports automatic schema detection and validation.

Migration Scripts:

We provide pre-built migration scripts for Pinecone, Weaviate, Milvus, and Qdrant. Contact support for assistance.

White-Glove Migration (Enterprise):

Our team will handle the entire migration process for you, including data validation and testing.

Zero-downtime migrations: Run our service in parallel with your existing database, gradually shift traffic, then switch over when ready.
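One simple way to implement the gradual shift in application code is a percentage-based router; old_search() and new_search() below are placeholders for your existing client and the new one:

# Sketch of a gradual traffic shift for a zero-downtime migration.
import random

ROLLOUT_FRACTION = 0.10  # start by sending 10% of queries to the new database

def old_search(query_vector, top_k):
    """Placeholder for the existing database's query call."""

def new_search(query_vector, top_k):
    """Placeholder for the new vector database's query call."""

def search(query_vector, top_k=10):
    if random.random() < ROLLOUT_FRACTION:
        return new_search(query_vector, top_k)  # new vector database
    return old_search(query_vector, top_k)      # existing database

# Raise ROLLOUT_FRACTION as you validate results, then remove old_search entirely.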

What SLA and support do you provide?

We offer enterprise-grade reliability with comprehensive support options:

Free & Pro Tiers

  • 99.9% uptime SLA
  • Community support (Discord)
  • Email support (24-48hr response)
  • Comprehensive documentation
  • Public status page

Enterprise Tier

  • 99.99% uptime SLA (with credits)
  • 24/7 priority support
  • Dedicated Slack channel
  • Quarterly business reviews
  • Custom SLAs available

Automatic failover: Your vectors are replicated across multiple data centers. If one location fails, traffic automatically routes to healthy nodes with zero downtime.

We also provide detailed API documentation, integration guides, and code examples for all major programming languages.

Still have questions?

Our team is here to help you build amazing AI applications with vector search.