# Release History

## 2.2.0-beta.1 (2025-02-07)

### Features added
- OpenAI.Audio:
  - Added explicit support for new values of `GeneratedSpeechVoice`. ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
- OpenAI.Chat:
  - Enabled support for input and output audio in chat completions using the `gpt-4o-audio-preview` model.
    - Added a `ResponseModalities` property to `ChatCompletionOptions` ([`modalities` in the REST API](https://platform.openai.com/docs/api-reference/chat/create)). ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
      - Set this flags enum property to `ChatResponseModalities.Text | ChatResponseModalities.Audio` to request output audio. Note that you also need to set the new `AudioOptions` property.
    - Added an `AudioOptions` property to `ChatCompletionOptions` ([`audio` in the REST API](https://platform.openai.com/docs/api-reference/chat/create)). ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
      - Use this property to configure the output audio voice and format.
    - Added a `CreateInputAudioPart(BinaryData, ChatInputAudioFormat)` static method to `ChatMessageContentPart`. ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
      - Use this method to send input audio as a `ChatMessageContentPart` in the `Content` property of a `UserChatMessage`.
    - Added an `OutputAudio` property to `ChatCompletion`. ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
      - Use this property to retrieve the output audio generated by the model.
    - Added an `OutputAudioReference` property to `AssistantChatMessage`. ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
      - Use this property to reference the output audio of a prior `ChatCompletion` when continuing an existing conversation.
    - For more information, see the audio sketch after this list and the example in the [README](README.md).
  - Enabled support for [Predicted Outputs](https://platform.openai.com/docs/guides/predicted-outputs) in chat completions using the `gpt-4o` and `gpt-4o-mini` models. Predicted Outputs can greatly improve response times when large parts of the model response are known ahead of time, which is most common when regenerating a file with only minor changes to its content. A Predicted Outputs sketch follows this list.
    - Added an `OutputPrediction` property to `ChatCompletionOptions` ([`prediction` in the REST API](https://platform.openai.com/docs/api-reference/chat/create)). ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
      - Use this property to configure a Predicted Output. The property uses the new `ChatOutputPrediction` type, which you can create via the `ChatOutputPrediction.CreateStaticContentPrediction(string)` or `ChatOutputPrediction.CreateStaticContentPrediction(IEnumerable<ChatMessageContentPart>)` static methods.
    - Added an `AcceptedPredictionTokenCount` property to `ChatOutputTokenUsageDetails`. ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
      - When using Predicted Outputs, use this property to track the number of tokens in the prediction that appeared in the completion.
    - Added a `RejectedPredictionTokenCount` property to `ChatOutputTokenUsageDetails`. ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
      - When using Predicted Outputs, use this property to track the number of tokens in the prediction that did not appear in the completion. Note that these tokens are still counted toward the total completion tokens for purposes of billing, output limits, and context window limits.
  - Added a `ReasoningEffortLevel` property to `ChatCompletionOptions` ([`reasoning_effort` in the REST API](https://platform.openai.com/docs/api-reference/chat/create)). ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
    - Use this property to constrain the reasoning effort of models with reasoning capabilities (such as `o3-mini` and `o1`). Reducing the reasoning effort can result in faster responses and fewer tokens used on reasoning (see the reasoning sketch after this list).
  - Added a `DeveloperChatMessage` class. ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
    - Use this type of message to provide instructions that the model should follow regardless of messages sent by the user. It replaces the existing `SystemChatMessage` class when using `o1` and newer models.
- OpenAI.RealtimeConversation:
  - Added explicit support for new values of `ConversationVoice`. ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
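
A minimal sketch of audio input and output with `gpt-4o-audio-preview`, assuming an `OPENAI_API_KEY` environment variable and a local `question.mp3` file. Names not listed above, such as the `ChatAudioOptions` constructor shape, `ChatOutputAudioVoice.Alloy`, `ChatOutputAudioFormat.Mp3`, and the `ChatOutputAudio.AudioBytes` and `Transcript` members, reflect the current beta surface and may change:

```csharp
using OpenAI.Chat;

ChatClient client = new(
    model: "gpt-4o-audio-preview",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

ChatCompletionOptions options = new()
{
    // Request both text and audio output; AudioOptions configures voice and format.
    ResponseModalities = ChatResponseModalities.Text | ChatResponseModalities.Audio,
    AudioOptions = new ChatAudioOptions(ChatOutputAudioVoice.Alloy, ChatOutputAudioFormat.Mp3),
};

// Send input audio as a content part of a user message.
ChatMessageContentPart audioPart = ChatMessageContentPart.CreateInputAudioPart(
    BinaryData.FromBytes(File.ReadAllBytes("question.mp3")),
    ChatInputAudioFormat.Mp3);

List<ChatMessage> messages = [new UserChatMessage(audioPart)];
ChatCompletion completion = client.CompleteChat(messages, options);

// Retrieve the generated audio and its transcript from the completion.
ChatOutputAudio outputAudio = completion.OutputAudio;
File.WriteAllBytes("answer.mp3", outputAudio.AudioBytes.ToArray());
Console.WriteLine(outputAudio.Transcript);
```

When continuing the conversation, constructing an `AssistantChatMessage` from the completion carries the corresponding `OutputAudioReference` forward for you.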
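
A hedged sketch of Predicted Outputs that regenerates a file with one small change; the file name and prompt are illustrative, and reading the counters through `ChatCompletion.Usage.OutputTokenDetails` is an assumption about where the `ChatOutputTokenUsageDetails` values surface:

```csharp
using OpenAI.Chat;

ChatClient client = new(
    model: "gpt-4o-mini",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

// Most of the regenerated file is expected to match the existing content,
// so supply that content as a static prediction.
string existingCode = File.ReadAllText("Widget.cs");

ChatCompletionOptions options = new()
{
    OutputPrediction = ChatOutputPrediction.CreateStaticContentPrediction(existingCode),
};

List<ChatMessage> messages =
[
    new UserChatMessage(
        "Rename the 'Username' property to 'Email' and reply with the full updated file only:\n\n"
        + existingCode),
];

ChatCompletion completion = client.CompleteChat(messages, options);
Console.WriteLine(completion.Content[0].Text);

// Rejected prediction tokens are still billed as completion tokens.
ChatOutputTokenUsageDetails details = completion.Usage.OutputTokenDetails;
Console.WriteLine($"Accepted: {details.AcceptedPredictionTokenCount}, Rejected: {details.RejectedPredictionTokenCount}");
```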
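
A small sketch combining `DeveloperChatMessage` and `ReasoningEffortLevel` with a reasoning model; `ChatReasoningEffortLevel.Low` is an assumed enum name mirroring the REST API's `low`/`medium`/`high` settings:

```csharp
using OpenAI.Chat;

ChatClient client = new(
    model: "o3-mini",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

ChatCompletionOptions options = new()
{
    // Lower effort trades reasoning depth for faster responses and fewer reasoning tokens.
    ReasoningEffortLevel = ChatReasoningEffortLevel.Low,
};

List<ChatMessage> messages =
[
    // For o1 and newer reasoning models, use DeveloperChatMessage where you
    // previously used SystemChatMessage.
    new DeveloperChatMessage("Answer in one concise sentence."),
    new UserChatMessage("Why does the Moon always show the same face to Earth?"),
];

ChatCompletion completion = client.CompleteChat(messages, options);
Console.WriteLine(completion.Content[0].Text);
```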
### Breaking changes in Preview APIs

- OpenAI.Assistants:
  - Removed the setters of the `IDictionary<string, string> Metadata` properties of the "options" classes (e.g., `AssistantCreationOptions`) to guarantee that the collections are always initialized. ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
    - The dictionaries remain writeable, and you can add elements to them using collection initializer syntax or the `Add(TKey, TValue)` method (see the sketch after this list).
- OpenAI.RealtimeConversation:
  - Renamed the `InputTokens`, `OutputTokens`, and `TotalTokens` properties in `ConversationTokenUsage` to `InputTokenCount`, `OutputTokenCount`, and `TotalTokenCount`, respectively. ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
  - Renamed the `AudioTokens`, `CachedTokens`, and `TextTokens` properties in `ConversationInputTokenUsageDetails` to `AudioTokenCount`, `CachedTokenCount`, and `TextTokenCount`, respectively. ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
  - Renamed the `AudioTokens` and `TextTokens` properties in `ConversationOutputTokenUsageDetails` to `AudioTokenCount` and `TextTokenCount`, respectively. ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
- OpenAI.VectorStores:
  - Removed the setters of the `IDictionary<string, string> Metadata` properties of the "options" classes (e.g., `VectorStoreCreationOptions`) to guarantee that the collections are always initialized. ([0e0c460](https://github.com/openai/openai-dotnet/commit/0e0c460c88424fc2241956ed5ead6dd5ed7638ec))
    - The dictionaries remain writeable, and you can add elements to them using collection initializer syntax or the `Add(TKey, TValue)` method (see the sketch after this list).
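
A minimal sketch of populating the now get-only `Metadata` dictionaries; the keys and values are arbitrary, and `OPENAI001` is assumed to be the experimental diagnostic ID for the Assistants area:

```csharp
#pragma warning disable OPENAI001 // The Assistants area is experimental.
using OpenAI.Assistants;

AssistantCreationOptions options = new()
{
    // The Metadata property is get-only, but collection initializer syntax
    // still works because the dictionary is always initialized.
    Metadata =
    {
        ["team"] = "docs",
        ["environment"] = "test",
    },
};

// Entries can also be added one at a time.
options.Metadata.Add("reviewed", "true");
```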
### Other changes
- Added .NET 8 as a target framework. ([6203354](https://github.com/openai/openai-dotnet/commit/6203354b36395fa79a34e3b4fd7b97b43b9720b9))
- Enabled support for trimming. ([4cd8529](https://github.com/openai/openai-dotnet/commit/4cd85298d110afcbef40f07746d6502a73493e16))
- Enabled support for native AOT compilation. ([4cd8529](https://github.com/openai/openai-dotnet/commit/4cd85298d110afcbef40f07746d6502a73493e16))
## 2.1.0 (2024-12-04)