This repository has been archived by the owner on Jan 17, 2025. It is now read-only.

API error with gemini-1.5-pro #30

Open
wansenlyt opened this issue Mar 12, 2024 · 6 comments

Comments

@wansenlyt

I was granted access to gemini-1.5-pro a few days ago, but when I used ChatGemini today it reported this error: [GoogleGenerativeAI Error]: Error fetching from https://generativelanguage.googleapis.com/v1/models/gemini-pro:streamGenerateContent?alt=sse: [403] Method doesn't allow unregistered callers (callers without established identity). Please use API Key or other form of API consumer identity to call this API.

Checking on aistudio.google.com, I found that at some point the model I use had been switched by default to gemini-1.5-pro, and the API management page on aistudio.google.com now shows the test endpoint as: https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=YOUR_API_KEY.

I deployed ChatGemini with Docker. Can the endpoint address be changed through the REACT_APP_GEMINI_API_URL configuration option? A sketch of what I have in mind follows below.
Also, many thanks to the developer for the generous contribution.
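
Roughly this (a sketch only; I'm assuming the Docker image picks up REACT_APP_GEMINI_API_URL when the container starts, and the image name, port, and other settings here are placeholders):

# Hypothetical deployment: point ChatGemini at the v1beta base URL via the env var
docker run -d \
  -p 8080:8080 \
  -e REACT_APP_GEMINI_API_URL=https://generativelanguage.googleapis.com/v1beta \
  chatgemini   # placeholder image name; API key and other options omitted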

@chunzha1

Is there an API for Gemini 1.5 yet? As far as I can tell it can still only be used on Google's web page.

@wansenlyt
Author

wansenlyt commented Mar 13, 2024

It's still the original API, but the default model has been switched straight to 1.5, and the endpoint path changed from .../v1/... to .../v1beta/...
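
For example, the test call shown on the AI Studio page now goes through v1beta. A sketch of that call (the request body follows the standard generateContent format; YOUR_API_KEY is a placeholder):

# Old path used by ChatGemini: .../v1/models/gemini-pro:streamGenerateContent?alt=sse
# New test endpoint shown by AI Studio: .../v1beta/models/gemini-pro:generateContent
curl -H 'Content-Type: application/json' \
  -d '{"contents": [{"parts": [{"text": "Hello"}]}]}' \
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=YOUR_API_KEY"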

@chunzha1

curl https://generativelanguage.googleapis.com/v1beta/models?key=$API_KEY
{
  "models": [
    {
      "name": "models/chat-bison-001",
      "version": "001",
      "displayName": "PaLM 2 Chat (Legacy)",
      "description": "A legacy text-only model optimized for chat conversations",
      "inputTokenLimit": 4096,
      "outputTokenLimit": 1024,
      "supportedGenerationMethods": [
        "generateMessage",
        "countMessageTokens"
      ],
      "temperature": 0.25,
      "topP": 0.95,
      "topK": 40
    },
    {
      "name": "models/text-bison-001",
      "version": "001",
      "displayName": "PaLM 2 (Legacy)",
      "description": "A legacy model that understands text and generates text as an output",
      "inputTokenLimit": 8196,
      "outputTokenLimit": 1024,
      "supportedGenerationMethods": [
        "generateText",
        "countTextTokens",
        "createTunedTextModel"
      ],
      "temperature": 0.7,
      "topP": 0.95,
      "topK": 40
    },
    {
      "name": "models/embedding-gecko-001",
      "version": "001",
      "displayName": "Embedding Gecko",
      "description": "Obtain a distributed representation of a text.",
      "inputTokenLimit": 1024,
      "outputTokenLimit": 1,
      "supportedGenerationMethods": [
        "embedText",
        "countTextTokens"
      ]
    },
    {
      "name": "models/gemini-1.0-pro",
      "version": "001",
      "displayName": "Gemini 1.0 Pro",
      "description": "The best model for scaling across a wide range of tasks",
      "inputTokenLimit": 30720,
      "outputTokenLimit": 2048,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.9,
      "topP": 1,
      "topK": 1
    },
    {
      "name": "models/gemini-1.0-pro-001",
      "version": "001",
      "displayName": "Gemini 1.0 Pro 001 (Tuning)",
      "description": "The best model for scaling across a wide range of tasks. This is a stable model that supports tuning.",
      "inputTokenLimit": 30720,
      "outputTokenLimit": 2048,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens",
        "createTunedModel"
      ],
      "temperature": 0.9,
      "topP": 1,
      "topK": 1
    },
    {
      "name": "models/gemini-1.0-pro-latest",
      "version": "001",
      "displayName": "Gemini 1.0 Pro Latest",
      "description": "The best model for scaling across a wide range of tasks. This is the latest model.",
      "inputTokenLimit": 30720,
      "outputTokenLimit": 2048,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.9,
      "topP": 1,
      "topK": 1
    },
    {
      "name": "models/gemini-1.0-pro-vision-latest",
      "version": "001",
      "displayName": "Gemini 1.0 Pro Vision",
      "description": "The best image understanding model to handle a broad range of applications",
      "inputTokenLimit": 12288,
      "outputTokenLimit": 4096,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.4,
      "topP": 1,
      "topK": 32
    },
    {
      "name": "models/gemini-pro",
      "version": "001",
      "displayName": "Gemini 1.0 Pro",
      "description": "The best model for scaling across a wide range of tasks",
      "inputTokenLimit": 30720,
      "outputTokenLimit": 2048,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.9,
      "topP": 1,
      "topK": 1
    },
    {
      "name": "models/gemini-pro-vision",
      "version": "001",
      "displayName": "Gemini 1.0 Pro Vision",
      "description": "The best image understanding model to handle a broad range of applications",
      "inputTokenLimit": 12288,
      "outputTokenLimit": 4096,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ],
      "temperature": 0.4,
      "topP": 1,
      "topK": 32
    },
    {
      "name": "models/embedding-001",
      "version": "001",
      "displayName": "Embedding 001",
      "description": "Obtain a distributed representation of a text.",
      "inputTokenLimit": 2048,
      "outputTokenLimit": 1,
      "supportedGenerationMethods": [
        "embedContent"
      ]
    },
    {
      "name": "models/aqa",
      "version": "001",
      "displayName": "Model that performs Attributed Question Answering.",
      "description": "Model trained to return answers to questions that are grounded in provided sources, along with estimating answerable probability.",
      "inputTokenLimit": 7168,
      "outputTokenLimit": 1024,
      "supportedGenerationMethods": [
        "generateAnswer"
      ],
      "temperature": 0.2,
      "topP": 1,
      "topK": 40
    }
  ]
}
It seems v1beta also only has 1.0 Pro.
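
Filtering the listing down to just the Gemini model names makes that easier to see (a sketch, assuming jq is installed; the expected names in the comments are copied from the response above):

curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=$API_KEY" \
  | jq -r '.models[].name' | grep gemini
# models/gemini-1.0-pro
# models/gemini-1.0-pro-001
# models/gemini-1.0-pro-latest
# models/gemini-1.0-pro-vision-latest
# models/gemini-pro
# models/gemini-pro-vision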

@ghost

ghost commented Mar 14, 2024

1.5 isn't usable yet.

@wansenlyt
Author

1.5 isn't usable yet.

Thank you.

@GitHubChrisChen8035

I'm running into the same issue. Can I force the 1.0 model in Gemini? And roughly when will support for 1.5 be available? What I'm hoping for is something like the call sketched below.
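
A sketch only: gemini-1.0-pro is taken from the model listing above, the streamGenerateContent?alt=sse form matches the call in the original error message, and YOUR_API_KEY is a placeholder.

# Pin the request to the 1.0 model instead of relying on whatever the default alias resolves to
curl -H 'Content-Type: application/json' \
  -d '{"contents": [{"parts": [{"text": "ping"}]}]}' \
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.0-pro:streamGenerateContent?alt=sse&key=YOUR_API_KEY"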
