# Model capabilities

Capability matrix for Berget AI language models, including tool use, JSON output, streaming, and multimodal input.
The table below lists the supported capabilities for each language model available on Berget AI. You can also verify capabilities against our API using our Test matrix.
## Language models
| Model | Basic chat | Tool use | JSON mode | JSON schema | Streaming | Multimodal | Long context + JSON |
|---|---|---|---|---|---|---|---|
| meta-llama/Llama-3.1-8B-Instruct | ✓ | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ |
| meta-llama/Llama-3.3-70B-Instruct | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ |
| mistralai/Mistral-Small-3.2-24B-Instruct-2506 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| openai/gpt-oss-120b | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| zai-org/GLM-4.7-FP8 | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ |
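As a starting point, every model in the matrix supports basic chat via `/chat/completions`. Below is a minimal sketch of a request payload for that endpoint, assuming an OpenAI-compatible request shape (the exact base URL and authentication are not shown here; see the API reference for those details).

```python
import json

# Minimal basic-chat request payload for an OpenAI-compatible
# /chat/completions endpoint. The endpoint shape is an assumption;
# the model name comes from the capability matrix above.
payload = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",  # any model with "Basic chat"
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello in one sentence."},
    ],
    "stream": False,  # set to True for server-sent-event streaming
}

print(json.dumps(payload, indent=2))
```

Setting `"stream": True` on the same payload exercises the Streaming column; the rest of the columns add parameters on top of this base request.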
## Capability definitions
Each column in the table above maps to one of the following capabilities.
| Capability | Description |
|---|---|
| Basic chat | Standard chat completion (/chat/completions). |
| Tool use | Function calling via the tools parameter. |
| JSON mode | Structured JSON output via response_format: { type: "json_object" }. |
| JSON schema | Strict schema validation via response_format: { type: "json_schema", json_schema: ... }. |
| Streaming | Real-time token streaming via server-sent events (stream: true). |
| Multimodal | Image input in the content array (vision). |
| Long context + JSON | JSON output on long-context requests (~8k output tokens). |
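The definitions above map onto request parameters. The sketch below shows, under the assumption of an OpenAI-compatible request shape, what each parameter fragment looks like; the `get_weather` function, the schema fields, and the image URL are hypothetical examples, not part of any real API.

```python
import json

# Request fragments behind each capability column, following the
# OpenAI-compatible chat completions parameter shapes. Model names
# come from the capability matrix; everything else is illustrative.

# Tool use: declare callable functions via the `tools` parameter.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical example function
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# JSON mode: output is any syntactically valid JSON object.
json_mode = {"type": "json_object"}

# JSON schema: output is validated against a strict schema.
json_schema = {
    "type": "json_schema",
    "json_schema": {
        "name": "weather_report",
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "temp_c": {"type": "number"},
            },
            "required": ["city", "temp_c"],
        },
    },
}

# Multimodal: image input in the content array (vision-capable models only).
vision_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
    ],
}

# A request combining tool use and strict schema output on a model
# whose row has ✓ in both columns.
payload = {
    "model": "meta-llama/Llama-3.3-70B-Instruct",
    "messages": [{"role": "user", "content": "Weather in Oslo as JSON."}],
    "tools": tools,
    "response_format": json_schema,  # or json_mode for plain JSON mode
}

print(json.dumps(payload, indent=2))
```

Before combining parameters like this, check the matrix: a model with ✗ in a column will typically reject or silently ignore the corresponding parameter.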
## Related pages
- Models overview: pricing and context window sizes
- Choose a language model: a decision process for picking the right model