Quickstart
Make your first LLMBase API call in under two minutes.
LLMBase exposes an OpenAI-compatible REST API at https://api.llmbase.ai.
Any client or SDK that works with OpenAI will work with LLMBase — just swap the base URL and your API key.
Base URL
https://api.llmbase.ai
Your first request
curl
curl https://api.llmbase.ai/v1/chat/completions \
  -H "Authorization: Bearer $LLMBASE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "zai-org/glm-5",
    "messages": [
      { "role": "user", "content": "Hello! What can you do?" }
    ]
  }'
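The API responds with a JSON body in the OpenAI chat-completion shape; the assistant's reply is at choices[0].message.content. A minimal Python sketch of pulling it out of a response body (the body below is illustrative, not real API output):

```python
import json

# Illustrative response body in the OpenAI chat-completion shape;
# field values here are made up for the example.
body = '''
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "model": "zai-org/glm-5",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello! I can answer questions." },
      "finish_reason": "stop"
    }
  ]
}
'''

data = json.loads(body)
reply = data["choices"][0]["message"]["content"]
print(reply)
```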
Node.js — OpenAI SDK
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.llmbase.ai/v1",
  apiKey: process.env.LLMBASE_API_KEY,
});

const response = await client.chat.completions.create({
  model: "zai-org/glm-5",
  messages: [{ role: "user", content: "Hello! What can you do?" }],
});

console.log(response.choices[0].message.content);
Python — OpenAI SDK
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.llmbase.ai/v1",
    api_key=os.environ["LLMBASE_API_KEY"],
)

response = client.chat.completions.create(
    model="zai-org/glm-5",
    messages=[{"role": "user", "content": "Hello! What can you do?"}],
)

print(response.choices[0].message.content)
Streaming
Add "stream": true to the request body to receive tokens as they are generated, delivered as Server-Sent Events.
const stream = await client.chat.completions.create({
  model: "zai-org/glm-5",
  messages: [{ role: "user", content: "Write a haiku about inference." }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
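Each streamed event carries a JSON chunk whose choices[0].delta may include a fragment of content; concatenating the fragments reconstructs the full message. A minimal Python sketch of that accumulation (the chunk payloads below are illustrative, not real API output):

```python
import json

# Illustrative SSE data payloads in the OpenAI streaming-chunk shape;
# a real stream ends with the sentinel line "data: [DONE]".
events = [
    '{"choices": [{"delta": {"role": "assistant"}}]}',
    '{"choices": [{"delta": {"content": "Tokens fall"}}]}',
    '{"choices": [{"delta": {"content": " like rain"}}]}',
    '{"choices": [{"delta": {}, "finish_reason": "stop"}]}',
]

parts = []
for payload in events:
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta", {})
    if "content" in delta:
        parts.append(delta["content"])

message = "".join(parts)
print(message)
```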
Next steps
- Authentication — learn how API keys work
- Models — browse available models
- Chat completions — full parameter reference