GTE-Large
by Thenlper
The gte-large embedding model maps English sentences, paragraphs, and moderate-length documents into a 1024-dimensional dense vector space, producing high-quality semantic embeddings for information retrieval, semantic textual similarity, reranking, and clustering. Trained via multi-stage contrastive learning on a large, domain-diverse relevance corpus, it performs well across general-purpose embedding use cases.
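A minimal sketch of how such embeddings are typically compared for retrieval or similarity tasks. The random 1024-dimensional vectors below are stand-ins so the example is self-contained; in a real pipeline they would come from the model itself (for instance via the sentence-transformers library's `SentenceTransformer("thenlper/gte-large")` and `.encode(...)`).

```python
import numpy as np

# Stand-in vectors: in practice these would be produced by gte-large, e.g.
#   model = SentenceTransformer("thenlper/gte-large")
#   query_emb, doc_emb = model.encode(["query text", "document text"])
# Each embedding has the model's output dimensionality of 1024.
rng = np.random.default_rng(0)
query_emb = rng.standard_normal(1024)
doc_emb = rng.standard_normal(1024)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity, the usual relevance score for dense retrieval."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

score = cosine_similarity(query_emb, doc_emb)
print(f"similarity: {score:.4f}")  # values near 1.0 mean semantically close
```

Ranking a candidate set by this score against a query embedding is the core operation behind the retrieval and reranking use cases listed above.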
Pricing
Input Tokens
Per 1M tokens
Free
Output Tokens
Per 1M tokens
Free
Image Processing
Per 1M tokens
Free
Supported Modalities
Input
text
Output
embeddings
Specifications
- Context Length
- 512 tokens
- Provider
- Thenlper
- Released
- Nov 18, 2025
- Model ID
- thenlper/gte-large
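Because the context length is capped at 512 tokens, documents longer than that are usually split into overlapping chunks before embedding. A hypothetical sketch of such a chunker; a real pipeline would count tokens with the model's own tokenizer, but whitespace words are used here as a rough proxy so the example stays self-contained (the `max_words` and `overlap` values are illustrative, not from the model card).

```python
def chunk_words(text: str, max_words: int = 384, overlap: int = 32) -> list[str]:
    """Split text into overlapping word windows sized to stay under the
    512-token context limit (word count used as a crude token proxy)."""
    words = text.split()
    if not words:
        return []
    chunks = []
    step = max_words - overlap  # overlap keeps context across chunk boundaries
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks

doc = ("word " * 1000).strip()
chunks = chunk_words(doc)
print(len(chunks), "chunks")  # → 3 chunks
```

Each chunk is then embedded separately, and chunk-level scores are aggregated (e.g. max over chunks) to score the whole document.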