Add Azure OpenAI and Foundry models to OpenCode
GitHub Copilot is great, but somehow I get better results and a nicer UX/DX with OpenCode. It integrates with all the models I have available via GitHub Copilot and more.
There’s also a great ecosystem around this software, and the documentation is quite good as well. The Awesome OpenCode repository lists quite a few useful tools, plugins and agents. Currently, I use the Smart Title Plugin and Open Agents Control, but Oh My OpenCode has also caught my interest, even though it overlaps a bit with the others.
All of this is awesome and greatly enhances my joy in creating solutions during my day job and side projects. However, it does cost quite a few (premium) requests, especially when using the rather expensive (and good) models.
What is great about OpenCode is you can connect it to your models deployed in Azure too!
This way, when your premium requests are all used up, you can use a GPT-Codex or Kimi model deployed in your own environment. Or, even while you still have premium requests available, you can offload your questions to a model of your choice.
When I was searching for the correct configuration, I couldn’t find any documentation for this, so I decided to post it here.
If you have OpenCode installed, there should be a folder with an opencode.json file on your system. On a Mac it’s at ~/.config/opencode/opencode.json. In this file you can add a property called provider, if it doesn’t exist already, and configure your models.
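If you want to check quickly whether the file is already there, a small sketch like this does the trick on macOS/Linux (the path on Windows differs; this is just my convenience snippet, not something OpenCode ships):

```python
from pathlib import Path

# Default OpenCode config location on macOS/Linux (assumption: Windows users
# will need to adjust this path).
config_path = Path.home() / ".config" / "opencode" / "opencode.json"

print(config_path, "found" if config_path.exists() else "not found yet")
```

If the file is reported as missing, starting OpenCode once usually creates the folder for you.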
For OpenAI models, like GPT, you should specify them in the azure object.
Non-OpenAI models, like Kimi, go into the azure-foundry-base-models object.
Below is a sample of my configuration.
```jsonc
{
  // The rest of the configuration
  "provider": {
    "azure": {
      "options": {
        "apiKey": "[my-key]",
        "baseURL": "https://my-coding-resource.openai.azure.com/openai",
        "resourceName": "my-coding-resource"
      },
      "models": {
        "gpt-5.2-chat": {
          "name": "Self - gpt-5.2",
          "modalities": {
            "input": ["text"],
            "output": ["text"]
          }
        }
      }
    },
    "azure-foundry-base-models": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "apiKey": "[my-other-key]",
        "baseURL": "https://my-coding-resource.cognitiveservices.azure.com/openai/v1"
      },
      "models": {
        "Kimi-K2.5": {
          "name": "Self - Kimi-K2.5",
          "modalities": {
            "input": ["text"],
            "output": ["text"]
          }
        }
      }
    }
  }
}
```
As you probably noticed, resourceName is not required for the Foundry models. Adding it there will actually cause issues.
Also notice the difference in the baseURL. The azure object uses the openai.azure.com domain, whereas azure-foundry-base-models uses the cognitiveservices.azure.com domain with /v1 in the route.
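To make that difference concrete, here’s how both base URLs relate to a single resource name. The resource name is my example, not anything Azure generates for you:

```python
resource = "my-coding-resource"  # example Azure resource name

# Azure OpenAI provider: openai.azure.com domain, no /v1 suffix.
azure_base_url = f"https://{resource}.openai.azure.com/openai"

# Foundry base models: cognitiveservices.azure.com domain, with /v1 in the route.
foundry_base_url = f"https://{resource}.cognitiveservices.azure.com/openai/v1"

print(azure_base_url)
print(foundry_base_url)
```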
Small but very important differences when setting this up.
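If you want to sanity-check your edits before restarting OpenCode, something along these lines catches the two mistakes above. This is my own sketch, not an official validator, and the comment-stripping helper is a workaround I wrote so the //-style comments in my sample parse with the standard json module:

```python
import json
import re
from pathlib import Path

def load_jsonc(text: str) -> dict:
    # Hypothetical helper, not part of OpenCode: drop lines that are only
    # // comments so the file parses as plain JSON.
    stripped = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)
    return json.loads(stripped)

def check_providers(config: dict) -> list:
    problems = []
    providers = config.get("provider", {})
    azure = providers.get("azure", {})
    foundry = providers.get("azure-foundry-base-models", {})
    # Azure OpenAI needs resourceName; the Foundry object must NOT have it.
    if "resourceName" not in azure.get("options", {}):
        problems.append("azure: missing options.resourceName")
    if "resourceName" in foundry.get("options", {}):
        problems.append("azure-foundry-base-models: remove options.resourceName")
    return problems

config_file = Path.home() / ".config" / "opencode" / "opencode.json"
if config_file.exists():
    for problem in check_providers(load_jsonc(config_file.read_text())):
        print("WARN:", problem)
```

It only checks the two pitfalls from this post; a clean run simply prints nothing.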
