Commit 48fc27c (1 parent: 30d653f)

Manual merge PR #1879: Fix indent inconsistency in ApiProxy templates and update docs, preserving v4.1.16

4 files changed: 53 additions & 61 deletions

README.md

Lines changed: 1 addition & 5 deletions

@@ -8,7 +8,7 @@
 
 <p>
   <a href="https://github.com/lbjlaq/Antigravity-Manager">
-    <img src="https://img.shields.io/badge/Version-4.1.16-blue?style=flat-square" alt="Version">
+    <img src="https://img.shields.io/badge/Version-4.1.15-blue?style=flat-square" alt="Version">
   </a>
   <img src="https://img.shields.io/badge/Tauri-v2-orange?style=flat-square" alt="Tauri">
   <img src="https://img.shields.io/badge/Backend-Rust-red?style=flat-square" alt="Rust">

@@ -413,10 +413,6 @@ response = client.chat.completions.create(
 ## 📝 Developers & Community
 
 * **Version History (Changelog)**:
-  * **v4.1.16 (2026-02-12)**:
-    - **[Core Fix] Resolve the effortLevel conflict in Gemini image generation caused by keyword matching (PR #1873)**:
-      - **Logic Conflict Fix**: Fully fixed the HTTP 400 error where `gemini-3-pro-image` and its 4k/2k variants, because they contain the `gemini-3-pro` keyword, were wrongly judged to support Adaptive Thinking and had `effortLevel` erroneously injected.
-      - **Parameter Scrubbing**: Added special filtering for image-generation models at the proxy request layer, ensuring incompatible generation parameters are no longer injected for non-thinking models.
   * **v4.1.15 (2026-02-11)**:
     - **[Core Feature] Enable native auto-update support for macOS and Windows (PR #1850)**:
       - **End-to-End Auto-Update**: Enabled Tauri's native updater plugin, supporting in-app update checks, downloads, and installation.
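
The removed v4.1.16 entry describes a substring-matching pitfall: an image model id contains the `gemini-3-pro` keyword, so a naive capability check matches it. A minimal sketch of that behavior and the described fix; the function names (`supportsAdaptiveThinking`, `scrubParams`) and the request-body shape are assumptions for illustration, not the project's actual implementation:

```typescript
// Buggy check (hypothetical): a substring match on "gemini-3-pro" also
// matches the image-generation variants, which reject `effortLevel`.
const supportsAdaptiveThinkingBuggy = (model: string): boolean =>
  model.includes("gemini-3-pro");

// Fixed check: exclude image-generation models before testing for
// Adaptive Thinking support, so `effortLevel` is never injected for them.
const isImageModel = (model: string): boolean =>
  model.startsWith("gemini-3-pro-image");

const supportsAdaptiveThinking = (model: string): boolean =>
  !isImageModel(model) && model.includes("gemini-3-pro");

// Parameter scrubbing at the proxy layer (hypothetical helper): drop the
// incompatible field for image models before forwarding the request.
function scrubParams(
  model: string,
  body: Record<string, unknown>
): Record<string, unknown> {
  if (isImageModel(model)) {
    const { effortLevel, ...rest } = body;
    void effortLevel; // discarded: image models do not accept it
    return rest;
  }
  return body;
}
```

With the buggy check, `gemini-3-pro-image-4k` is treated as a thinking model; with the fixed check plus scrubbing, the parameter never reaches the upstream API, which is what the changelog entry says resolved the HTTP 400.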

README_EN.md

Lines changed: 1 addition & 5 deletions

@@ -138,7 +138,7 @@ curl -sSL https://raw.githubusercontent.com/lbjlaq/Antigravity-Manager/main/depl
 
 **Option 2: via Homebrew** (If you have [Linuxbrew](https://sh.brew.sh/) installed)
 ```bash
-brew tap lbjlaq/antigravity-manager https://github.com/lbjlaq/Antigravity-Manager/releases/download/v4.1.16/Antigravity_Tools_4.1.16_x64.dmg
+brew tap lbjlaq/antigravity-manager https://github.com/lbjlaq/Antigravity-Manager/releases/download/v4.1.15/Antigravity_Tools_4.1.15_x64.dmg
 ```
 
 #### Other Linux Distributions

@@ -264,10 +264,6 @@ print(response.choices[0].message.content)
 ## 📝 Developer & Community
 
 * **Changelog**:
-  * **v4.1.16 (2026-02-12)**:
-    - **[Core Fix] Resolve effortLevel conflict in Gemini Image Generation caused by keyword matching (PR #1873)**:
-      - **Logic Conflict Fix**: Completely fixed the HTTP 400 error where `gemini-3-pro-image` and its 4k/2k variants were incorrectly identified as supporting Adaptive Thinking due to the `gemini-3-pro` keyword, leading to the erroneous injection of `effortLevel`.
-      - **Parameter Scrubbing**: Added specific filtering for image generation models at the proxy layer to ensure incompatible generation parameters are no longer injected into non-thinking models.
   * **v4.1.15 (2026-02-11)**:
     - **[Core Feature] Enable Native Auto-Update for macOS and Windows (PR #1850)**:
       - **End-to-End Auto-Update**: Enabled the native Tauri updater plugin, supporting in-app update checks, downloads, and installations.

docker/README.md

Lines changed: 2 additions & 2 deletions

@@ -126,7 +126,7 @@ docker build --build-arg USE_MIRROR=true -t antigravity-manager:latest -f docker
 ```bash
 # Tag the version and push
 docker tag antigravity-manager:latest lbjlaq/antigravity-manager:latest
-docker tag antigravity-manager:latest lbjlaq/antigravity-manager:4.1.16
+docker tag antigravity-manager:latest lbjlaq/antigravity-manager:4.1.15
 docker push lbjlaq/antigravity-manager:latest
-docker push lbjlaq/antigravity-manager:4.1.16
+docker push lbjlaq/antigravity-manager:4.1.15
 ```

src/pages/ApiProxy.tsx

Lines changed: 49 additions & 49 deletions

@@ -959,21 +959,21 @@ export default function ApiProxy() {
     // 1. Anthropic Protocol
     if (selectedProtocol === 'anthropic') {
      return `from anthropic import Anthropic
-
-client = Anthropic(
-    # Recommended: use 127.0.0.1
-    base_url="${`http://127.0.0.1:${port}`}",
-    api_key="${apiKey}"
-)
-
-# Note: Antigravity supports calling any model via the Anthropic SDK
-response = client.messages.create(
-    model="${modelId}",
-    max_tokens=1024,
-    messages=[{"role": "user", "content": "Hello"}]
-)
-
-print(response.content[0].text)`;
+
+client = Anthropic(
+    # Recommended: use 127.0.0.1
+    base_url="${`http://127.0.0.1:${port}`}",
+    api_key="${apiKey}"
+)
+
+# Note: Antigravity supports calling any model via the Anthropic SDK
+response = client.messages.create(
+    model="${modelId}",
+    max_tokens=1024,
+    messages=[{"role": "user", "content": "Hello"}]
+)
+
+print(response.content[0].text)`;
     }
 
     // 2. Gemini Protocol (Native)

@@ -997,43 +997,43 @@ print(response.text)`;
     // 3. OpenAI Protocol
     if (modelId.startsWith('gemini-3-pro-image')) {
      return `from openai import OpenAI
-
-client = OpenAI(
-    base_url="${baseUrl}",
-    api_key="${apiKey}"
-)
-
-response = client.chat.completions.create(
-    model="${modelId}",
-    # Option 1: use the size parameter (recommended)
-    # Supported: "1024x1024" (1:1), "1280x720" (16:9), "720x1280" (9:16), "1216x896" (4:3)
-    extra_body={ "size": "1024x1024" },
-
-    # Option 2: use a model suffix
-    # e.g. gemini-3-pro-image-16-9, gemini-3-pro-image-4-3
-    # model="gemini-3-pro-image-16-9",
-    messages=[{
-        "role": "user",
-        "content": "Draw a futuristic city"
-    }]
-)
-
-print(response.choices[0].message.content)`;
+
+client = OpenAI(
+    base_url="${baseUrl}",
+    api_key="${apiKey}"
+)
+
+response = client.chat.completions.create(
+    model="${modelId}",
+    # Option 1: use the size parameter (recommended)
+    # Supported: "1024x1024" (1:1), "1280x720" (16:9), "720x1280" (9:16), "1216x896" (4:3)
+    extra_body={ "size": "1024x1024" },
+
+    # Option 2: use a model suffix
+    # e.g. gemini-3-pro-image-16-9, gemini-3-pro-image-4-3
+    # model="gemini-3-pro-image-16-9",
+    messages=[{
+        "role": "user",
+        "content": "Draw a futuristic city"
+    }]
+)
+
+print(response.choices[0].message.content)`;
     }
 
     return `from openai import OpenAI
-
-client = OpenAI(
-    base_url="${baseUrl}",
-    api_key="${apiKey}"
-)
-
-response = client.chat.completions.create(
-    model="${modelId}",
-    messages=[{"role": "user", "content": "Hello"}]
-)
-
-print(response.choices[0].message.content)`;
+
+client = OpenAI(
+    base_url="${baseUrl}",
+    api_key="${apiKey}"
+)
+
+response = client.chat.completions.create(
+    model="${modelId}",
+    messages=[{"role": "user", "content": "Hello"}]
+)
+
+print(response.choices[0].message.content)`;
   };
 
   // In the filter logic, show all models when the openai protocol is selected
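
The image-generation template above offers two ways to pick an aspect ratio: a `size` parameter or a model-id suffix. The suffix convention can be sketched as a small lookup; `resolveImageModel` and the fallback behavior for unrecognized suffixes are assumptions for illustration, while the size-to-ratio mapping is taken from the template's own comments:

```typescript
// Suffix-to-size mapping, as listed in the template comments:
// 1:1 -> 1024x1024, 16:9 -> 1280x720, 9:16 -> 720x1280, 4:3 -> 1216x896.
const SUFFIX_TO_SIZE: Record<string, string> = {
  "1-1": "1024x1024",
  "16-9": "1280x720",
  "9-16": "720x1280",
  "4-3": "1216x896",
};

// Hypothetical helper: split a model id like "gemini-3-pro-image-16-9"
// into its base model and the requested size. Ids with no recognized
// suffix fall back to the default square size (an assumption).
function resolveImageModel(modelId: string): { base: string; size: string } {
  const base = "gemini-3-pro-image";
  const suffix = modelId.startsWith(`${base}-`)
    ? modelId.slice(base.length + 1)
    : "";
  return { base, size: SUFFIX_TO_SIZE[suffix] ?? "1024x1024" };
}
```

For example, `resolveImageModel("gemini-3-pro-image-16-9")` yields the base model plus the 1280x720 size, matching what the template's `# model="gemini-3-pro-image-16-9"` comment implies.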
