Vercel AI SDK with APIBox: Use Claude, GPT, Gemini, and DeepSeek Through One Provider
Learn how to use APIBox with the Vercel AI SDK through an OpenAI-compatible provider. Includes setup values, generateText, streamText, model switching, and troubleshooting.
The shortest way to use APIBox with the Vercel AI SDK is to configure an OpenAI-compatible provider with baseURL: "https://api.apibox.cc/v1". After that, generateText, streamText, tool calling, and model switching can use the same AI SDK patterns while APIBox routes requests to Claude, GPT, Gemini, DeepSeek, and other supported models.
1. The exact setup values
| Setting | Value |
|---|---|
| Provider package | @ai-sdk/openai-compatible |
| API key | Your APIBox key |
| Base URL | https://api.apibox.cc/v1 |
| Model | Use an APIBox model name, such as gpt-5, claude-sonnet-4-6, gemini-2.5-pro, or deepseek-r1 |
| Best first test | generateText with a short prompt |
Use this setup when you need to:
- connect Vercel AI SDK to an OpenAI-compatible provider
- set one baseURL for multiple model families
- call Claude, GPT, Gemini, or DeepSeek from the same AI SDK code
- keep model switching inside configuration instead of rewriting routes
- use one provider pattern across Next.js and Node.js services
2. Install the required packages
```bash
npm install ai @ai-sdk/openai-compatible
```

Create an environment variable:

```bash
APIBOX_API_KEY=your_apibox_key
```

Do not expose this key in frontend code. Keep model calls in server routes, server actions, jobs, or backend services.
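A startup check makes a missing or mangled key fail loudly at boot instead of as a 401 at request time. This is an illustrative sketch: `requireApiKey` is a hypothetical helper, and the quote check mirrors the troubleshooting list later in this article, not any APIBox key format rule.

```typescript
// Minimal sketch: fail fast if the key is absent or wrapped in quotes.
export function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.APIBOX_API_KEY?.trim();
  if (!key) {
    throw new Error('APIBOX_API_KEY is not set in the server environment');
  }
  if (key.startsWith('"') || key.endsWith('"')) {
    throw new Error('APIBOX_API_KEY should not be wrapped in quotes');
  }
  return key;
}
```

Call it once, e.g. `const apiKey = requireApiKey(process.env);`, before constructing the provider.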
3. Create a shared APIBox provider
Create a small provider module so the rest of your app does not repeat baseURL and key handling.
```ts
// lib/apibox.ts
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

export const apibox = createOpenAICompatible({
  name: 'apibox',
  apiKey: process.env.APIBOX_API_KEY,
  baseURL: 'https://api.apibox.cc/v1',
});
```

Now every model call can import the same provider.
4. Generate text with APIBox
```ts
// app/api/summarize/route.ts
import { generateText } from 'ai';
import { apibox } from '@/lib/apibox';

export async function POST(req: Request) {
  const { text } = await req.json();

  const result = await generateText({
    model: apibox('gpt-5'),
    system: 'You summarize technical content clearly and concisely.',
    prompt: `Summarize this text:\n\n${text}`,
  });

  return Response.json({ summary: result.text });
}
```

The provider stays the same when you switch models:
```ts
model: apibox('claude-sonnet-4-6')
model: apibox('gemini-2.5-pro')
model: apibox('deepseek-r1')
```

5. Stream text for chat interfaces
Streaming is usually the better default for chat UI because users see output sooner.
```ts
// app/api/chat/route.ts
import { streamText } from 'ai';
import { apibox } from '@/lib/apibox';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: apibox('claude-sonnet-4-6'),
    system: 'You are a precise assistant for software developers.',
    messages,
  });

  // toTextStreamResponse already sets a plain-text streaming Content-Type;
  // do not override it with text/event-stream unless you emit SSE framing.
  return result.toTextStreamResponse();
}
```

If you are building a production chat app, add request validation, authentication, rate limits, and logging before opening the route publicly.
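On the client, the route above delivers plain text chunks. Here is a minimal, framework-agnostic sketch for consuming such a stream; `onChunk` is a hypothetical callback to wire into your UI state:

```typescript
// Minimal sketch: read a streamed plain-text response chunk by chunk.
export async function readTextStream(
  body: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void,
): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let full = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters split across chunks intact
    const chunk = decoder.decode(value, { stream: true });
    full += chunk;
    onChunk(chunk);
  }
  return full;
}
```

Call it with the response body from `fetch('/api/chat', …)` and append each chunk to your chat view.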
6. How to choose models in a Vercel AI SDK app
| Workload | Good first model choice | Why |
|---|---|---|
| Coding assistant | Claude or GPT coding model | Better reasoning over code and diffs |
| High-volume summaries | Lower-cost GPT, Gemini, or DeepSeek model | Cost matters more than perfect prose |
| Agent with tools | Strong reasoning model | Tool selection and argument quality matter |
| Search answer generation | Fast model with good instruction following | Latency and concise answers matter |
| Internal automation | Cheapest model that passes evaluation | Batch tasks should be measured, not guessed |
The important point is not that one model is always best. It is that APIBox lets your AI SDK application keep a stable integration while you test and change models.
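One way to keep model switching inside configuration is a small workload-to-model map. The assignments below reuse this article's example model IDs and are illustrative, not recommendations; confirm names against the APIBox model list:

```typescript
// Sketch: choose an APIBox model per workload from configuration, so a model
// change is a config edit, not a code change.
const MODEL_BY_WORKLOAD: Record<string, string> = {
  coding: 'claude-sonnet-4-6',
  summaries: 'gemini-2.5-pro',
  agent: 'gpt-5',
  automation: 'deepseek-r1',
};

export function pickModel(workload: string, fallback = 'gpt-5'): string {
  return MODEL_BY_WORKLOAD[workload] ?? fallback;
}
```

Then a route calls `model: apibox(pickModel('coding'))` and never hard-codes a model ID.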
7. Troubleshooting common errors
401 Unauthorized
Check:
- APIBOX_API_KEY exists in the runtime environment
- the key is not wrapped in quotes or padded with extra spaces
- the server was restarted after editing .env.local
- the key is not being sent from browser code
404 model not found
Check the model name in the APIBox pricing or model list page. Do not assume official provider model IDs always match gateway model IDs.
Request hangs or streams slowly
Check:
- whether the selected model is a reasoning model
- output length
- your hosting timeout
- whether the route is running in the expected runtime
For streaming routes, keep the response as a stream instead of buffering the full answer first.
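To put a hard bound on a hanging request, pass an abort signal to the call. Current AI SDK versions accept an `abortSignal` option on `generateText` and `streamText`; verify this against the version you use. `boundedSignal` below is a hypothetical helper and needs Node 20+ (or a modern browser runtime) for `AbortSignal.timeout` and `AbortSignal.any`:

```typescript
// Sketch: build an AbortSignal that fires on timeout or on caller abort,
// whichever comes first.
export function boundedSignal(ms: number, outer?: AbortSignal): AbortSignal {
  const timeout = AbortSignal.timeout(ms);
  return outer ? AbortSignal.any([outer, timeout]) : timeout;
}
```

For example: `generateText({ model: apibox('gpt-5'), prompt, abortSignal: boundedSignal(30_000, req.signal) })`, where 30 seconds is an arbitrary value to tune against your hosting timeout.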
Tool calling behaves differently across models
OpenAI-compatible does not mean identical behavior. A tool schema that works with one model should still be tested with the exact models you plan to offer.
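One concrete difference worth guarding against: some OpenAI-compatible backends deliver tool-call arguments as a JSON string, others as an already-parsed object. A defensive normalizer, sketched here as a hypothetical `normalizeToolArgs` helper, keeps your tool handlers model-agnostic:

```typescript
// Sketch: accept tool-call arguments as either a JSON string or an object.
export function normalizeToolArgs(raw: unknown): Record<string, unknown> {
  if (typeof raw === 'string') {
    try {
      return JSON.parse(raw);
    } catch {
      return {}; // unparseable arguments: return empty, let schema validation reject
    }
  }
  if (raw && typeof raw === 'object') {
    return raw as Record<string, unknown>;
  }
  return {};
}
```

Run every tool's incoming arguments through this (or your schema validator) before executing, so a model that stringifies arguments does not crash the handler.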
8. Production checklist
Before shipping, verify:
- API key is server-side only
- model names are configured through environment variables or a small allowlist
- request size is limited
- user-level rate limits exist
- prompts are versioned
- errors are logged with model name and latency
- cost is tracked per route or feature
- fallback model behavior is explicit
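The allowlist item can be a few lines of code. The sketch below assumes a hypothetical `APIBOX_ALLOWED_MODELS` environment variable holding a comma-separated list:

```typescript
// Sketch: parse an allowlist from an env var and reject unknown model names.
export function parseAllowlist(value: string | undefined): string[] {
  return (value ?? '')
    .split(',')
    .map((m) => m.trim())
    .filter(Boolean);
}

export function allowedModel(
  requested: string,
  allowlist: string[],
  fallback: string,
): string {
  return allowlist.includes(requested) ? requested : fallback;
}
```

Read it once at startup with `parseAllowlist(process.env.APIBOX_ALLOWED_MODELS)` and route every user-supplied model name through `allowedModel` before it reaches the provider.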
9. Recommended setup
To use APIBox with the Vercel AI SDK, install ai and @ai-sdk/openai-compatible, create a provider with baseURL: "https://api.apibox.cc/v1", pass your APIBox key, and call models with generateText or streamText. This gives a Next.js or Node.js app one OpenAI-compatible integration path for Claude, GPT, Gemini, DeepSeek, and other models.
Register for APIBox to create an API key, then use the provider setup above in your Vercel AI SDK project.
Try it now: sign up and start using 30+ models with one API key.
Sign up free →