Quickstart
Make your first API call to Berget AI Serverless Inference
Berget AI Serverless Inference is a drop-in replacement for the OpenAI API. Point any OpenAI-compatible SDK or tool at https://api.berget.ai/v1 and it works.
By the end of this tutorial, you'll have made your first chat completion request using the OpenAI SDK.
Before you start
To complete this tutorial, you'll need:
- A Berget AI account
- An API key from the Berget AI Console
All inference runs on Berget AI's infrastructure in Stockholm. Your data never leaves Europe.
Make your first request
You'll use the OpenAI SDK, which works with Berget AI out of the box.
Install the SDK
Install the official OpenAI SDK for your language.
Python:

```shell
pip install openai
```

JavaScript:

```shell
npm install openai
```

Make a request
Use the client to send a chat completion request to Berget AI Serverless Inference. Replace <your-api-key> with the key from your Berget AI Console.
Python:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.berget.ai/v1",
    api_key="<your-api-key>",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
```

JavaScript:

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.berget.ai/v1",
  apiKey: "<your-api-key>",
});

const response = await client.chat.completions.create({
  model: "meta-llama/Llama-3.3-70B-Instruct",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);
```

cURL:

```shell
curl https://api.berget.ai/v1/chat/completions \
  -H "Authorization: Bearer <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.3-70B-Instruct",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
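Whichever client you use, the response follows the OpenAI chat completions schema: the reply text lives at `choices[0].message.content`. If you call the endpoint with raw HTTP (as in the cURL example) rather than an SDK, you can pull the text out of the JSON yourself. A minimal sketch; the payload below is illustrative, not a recorded response:

```python
import json

# Illustrative response body following the OpenAI chat completions
# schema that Berget AI mirrors. The content string is made up.
raw = """
{
  "choices": [
    {"message": {"role": "assistant", "content": "Hello! How can I help?"}}
  ]
}
"""

data = json.loads(raw)
# The assistant's reply is at choices[0].message.content.
reply = data["choices"][0]["message"]["content"]
print(reply)  # prints: Hello! How can I help?
```

Real responses carry additional fields (such as `model` and token `usage`), but this path to the reply text is the same one the SDK examples above use.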