Commit 6a3eb78

[aisdk] Set model config for CUA models (#576)
## Summary

CUA models require auto-truncation. I also set `isReasoning` and `systemMessageMode` to match what Vercel's AISDK sets for them, but honestly I'm not sure how those fields are used: the inference calls worked just fine whether the roles were "system" or "developer".

Also, it seems that CUA is indeed a "reasoning model", because we do get reasoning output from it, even though it's not listed as a reasoning model in OpenAI's docs. But OpenAI's docs also say that reasoning models do _not_ support temperature, and we do set temperature (after loop detection) and it does seem to work, so I'm a bit confused here. I might have to change some of the AISDK code that currently enforces that temperature is NOT set for reasoning models.

## How was it tested?

Ran `testpilot test` while pulling these local changes.

## Community Contribution License

All community contributions in this pull request are licensed to the project maintainers under the terms of the [Apache 2 License](https://www.apache.org/licenses/LICENSE-2.0). By creating this pull request I represent that I have the right to license the contributions to the project maintainers under the Apache 2 License as stated in the [Community Contribution License](https://github.com/jetify-com/opensource/blob/main/CONTRIBUTING.md#community-contribution-license).
1 parent 98e6ea9 commit 6a3eb78
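The fields discussed in the summary can be sketched as a standalone Go program. The `modelConfig` struct and its field names follow the diff in this commit, but everything else here is illustrative: the `cuaPrefix` constant stands in for `shared.ResponsesModelComputerUsePreview`, and the fallback branch is a placeholder, not the real default logic.

```go
package main

import (
	"fmt"
	"strings"
)

// modelConfig mirrors the per-model settings this commit configures.
type modelConfig struct {
	IsReasoningModel       bool
	SystemMessageMode      string // "system" or "developer"
	RequiredAutoTruncation bool
}

// cuaPrefix is a stand-in for shared.ResponsesModelComputerUsePreview.
const cuaPrefix = "computer_use_preview"

// getModelConfig returns the CUA config when the model ID starts with the
// computer-use prefix, covering both "computer_use_preview" and dated
// snapshots like "computer_use_preview-2025-03-11".
func getModelConfig(modelID string) modelConfig {
	if strings.HasPrefix(modelID, cuaPrefix) {
		return modelConfig{
			IsReasoningModel:       true,
			SystemMessageMode:      "developer",
			RequiredAutoTruncation: true,
		}
	}
	// Placeholder fallback for all other models.
	return modelConfig{SystemMessageMode: "system"}
}

func main() {
	cfg := getModelConfig("computer_use_preview-2025-03-11")
	fmt.Println(cfg.IsReasoningModel, cfg.SystemMessageMode, cfg.RequiredAutoTruncation)
	// prints: true developer true
}
```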

File tree

2 files changed: +13 −3 lines


aisdk/ai/provider/openai/constants.go

Lines changed: 7 additions & 3 deletions

```diff
@@ -1,13 +1,14 @@
 package openai

-import "go.jetify.com/ai/provider/openai/internal/codec"
+import (
+	"github.com/openai/openai-go/v2/shared"
+	"go.jetify.com/ai/provider/openai/internal/codec"
+)

 var ProviderName = codec.ProviderName

 // Models
 const (
-	ChatModelComputerUsePreview           = "computer_use_preview"
-	ChatModelComputerUsePreview2025_03_11 = "computer_use_preview-2025-03-11"
 	ChatModelGPT5     = "gpt-5"
 	ChatModelGPT5Mini = "gpt-5-mini"
 	ChatModelGPT5Nano = "gpt-5-nano"
@@ -70,4 +71,7 @@ const (
 	ChatModelGPT3_5Turbo1106    = "gpt-3.5-turbo-1106"
 	ChatModelGPT3_5Turbo0125    = "gpt-3.5-turbo-0125"
 	ChatModelGPT3_5Turbo16k0613 = "gpt-3.5-turbo-16k-0613"
+
+	ResponsesModelComputerUsePreview           = shared.ResponsesModelComputerUsePreview
+	ResponsesModelComputerUsePreview2025_03_11 = shared.ResponsesModelComputerUsePreview2025_03_11
 )
```

aisdk/ai/provider/openai/internal/codec/encode.go

Lines changed: 6 additions & 0 deletions
```diff
@@ -77,6 +77,12 @@ func getModelConfig(modelID string) modelConfig {
 			SystemMessageMode:      "developer",
 			RequiredAutoTruncation: false,
 		}
+	} else if len(modelID) > 0 && strings.HasPrefix(modelID, shared.ResponsesModelComputerUsePreview) {
+		return modelConfig{
+			IsReasoningModel:       true,
+			SystemMessageMode:      "developer",
+			RequiredAutoTruncation: true,
+		}
 	}

 	// gpt models (non o-series)
```
