
Commit 8a96b86

Use flash more often. (google-gemini#517)
* Use flash more often. Change-Id: If20e5d5e8462d160681d9dc2bfec965fd94fb633
* format. Change-Id: I5a47b80da6f07b26a8079e33b2350ace3454bb50
* fix link. Change-Id: If23c8b48e53238c99d71d68f3669e521a5a82c2f
* fix check. Change-Id: Iac2f1bd545395949cf013fc2fbc6c91716766d1d
1 parent 8a29017 commit 8a96b86

File tree: 11 files changed (+45, −48 lines)


.github/workflows/samples.yaml

Lines changed: 1 addition & 0 deletions

```diff
@@ -28,6 +28,7 @@ jobs:
 
           for file in ${NEW_FILES}; do
             echo "Testing $file"
+            name=$(basename $file)
             if [[ -f ${file} ]]; then
               # File exists, so needs to be listed.
               if ! grep -q $name ${README}; then
```

README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -33,7 +33,7 @@ genai.configure(api_key=os.environ["GEMINI_API_KEY"])
 3. Create a model and run a prompt.
 
 ```python
-model = genai.GenerativeModel('gemini-1.0-pro-latest')
+model = genai.GenerativeModel('gemini-1.5-flash')
 response = model.generate_content("The opposite of hot is")
 print(response.text)
 ```
````

samples/README.md

Lines changed: 15 additions & 15 deletions

```diff
@@ -8,19 +8,19 @@ Each file is structured as a runnable test case, ensuring that samples are execu
 
 ## Contents
 
-| File | Description |
-| ---- | ----------- |
-| [cache.py](./cache.py) | Context caching |
-| [chat.py](./chat.py) | Multi-turn chat conversations |
-| [code_execution.py](./code_execution.py) | Executing code |
+| File                                                     | Description |
+|----------------------------------------------------------| ----------- |
+| [cache.py](./cache.py)                                   | Context caching |
+| [chat.py](./chat.py)                                     | Multi-turn chat conversations |
+| [code_execution.py](./code_execution.py)                 | Executing code |
 | [configure_model_parameters.py](./configure_model_parameters.py) | Setting model parameters |
-| [controlled_generation.py](./controlled_generation.py) | Generating content with output constraints (e.g. JSON mode) |
-| [count_tokens.py](./count_tokens.py) | Counting input and output tokens |
-| [embed.py](./embed.py) | Generating embeddings |
-| [files.py](./files.py) | Managing files with the File API |
-| [function_calling.py](./function_calling.py) | Using function calling |
-| [models.py](./models.py) | Listing models and model metadata |
-| [safety_settings.py](./safety_settings.py) | Setting and using safety controls |
-| [system_instruction.py](./system_instruction.py) | Setting system instructions |
-| [text_generation.py](./text_generation.py) | Generating text |
-| [tuned_models.py](./tuned_models.py) | Creating and managing tuned models |
+| [controlled_generation.py](./controlled_generation.py)   | Generating content with output constraints (e.g. JSON mode) |
+| [count_tokens.py](./count_tokens.py)                     | Counting input and output tokens |
+| [embed.py](./embed.py)                                   | Generating embeddings |
+| [files.py](./files.py)                                   | Managing files with the File API |
+| [function_calling.py](./function_calling.py)             | Using function calling |
+| [models.py](./models.py)                                 | Listing models and model metadata |
+| [safety_settings.py](./safety_settings.py)               | Setting and using safety controls |
+| [system_instruction.py](./system_instruction.py)         | Setting system instructions |
+| [text_generation.py](./text_generation.py)               | Generating text |
+| [tuned_models.py](./tuned_models.py)                     | Creating and managing tuned models |
```

samples/code_execution.py

Lines changed: 3 additions & 3 deletions

```diff
@@ -29,7 +29,7 @@ def test_code_execution_basic(self):
         )
 
         # Each `part` either contains `text`, `executable_code` or an `execution_result`
-        for part in result.candidates[0].content.parts:
+        for part in response.candidates[0].content.parts:
             print(part, "\n")
 
         print("-" * 80)
@@ -92,7 +92,7 @@ def test_code_execution_basic(self):
 
     def test_code_execution_request_override(self):
         # [START code_execution_request_override]
-        model = genai.GenerativeModel(model_name="gemini-1.5-pro")
+        model = genai.GenerativeModel(model_name="gemini-1.5-flash")
         response = model.generate_content(
             (
                 "What is the sum of the first 50 prime numbers? "
@@ -140,7 +140,7 @@ def test_code_execution_request_override(self):
 
     def test_code_execution_chat(self):
         # [START code_execution_chat]
-        model = genai.GenerativeModel(model_name="gemini-1.5-pro", tools="code_execution")
+        model = genai.GenerativeModel(model_name="gemini-1.5-flash", tools="code_execution")
         chat = model.start_chat()
         response = chat.send_message('Can you print "Hello world!"?')
         response = chat.send_message(
```

samples/count_tokens.py

Lines changed: 3 additions & 3 deletions

```diff
@@ -23,7 +23,7 @@
 class UnitTests(absltest.TestCase):
     def test_tokens_context_window(self):
         # [START tokens_context_window]
-        model_info = genai.get_model("models/gemini-1.0-pro-001")
+        model_info = genai.get_model("models/gemini-1.5-flash")
 
         # Returns the "context window" for the model,
         # which is the combined input and output token limits.
@@ -91,7 +91,7 @@ def test_tokens_multimodal_image_inline(self):
         model = genai.GenerativeModel("models/gemini-1.5-flash")
 
         prompt = "Tell me about this image"
-        your_image_file = PIL.Image.open("image.jpg")
+        your_image_file = PIL.Image.open(media / "organ.jpg")
 
         # Call `count_tokens` to get the input token count
         # of the combined text and file (`total_tokens`).
@@ -115,7 +115,7 @@ def test_tokens_multimodal_image_file_api(self):
         model = genai.GenerativeModel("models/gemini-1.5-flash")
 
         prompt = "Tell me about this image"
-        your_image_file = genai.upload_file(path="image.jpg")
+        your_image_file = genai.upload_file(path=media / "organ.jpg")
 
         # Call `count_tokens` to get the input token count
         # of the combined text and file (`total_tokens`).
```

samples/rest/code_execution.sh

Lines changed: 2 additions & 2 deletions

```diff
@@ -2,7 +2,7 @@ set -eu
 
 echo "[START code_execution_basic]"
 # [START code_execution_basic]
-curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-latest:generateContent?key=$GOOGLE_API_KEY" \
+curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
 -H 'Content-Type: application/json' \
 -d ' {"tools": [{'code_execution': {}}],
 "contents": {
@@ -16,7 +16,7 @@ curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-lat
 
 echo "[START code_execution_chat]"
 # [START code_execution_chat]
-curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-latest:generateContent?key=$GOOGLE_API_KEY" \
+curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
 -H 'Content-Type: application/json' \
 -d '{"tools": [{'code_execution': {}}],
 "contents": [
```

samples/rest/controlled_generation.sh

Lines changed: 2 additions & 2 deletions

```diff
@@ -2,7 +2,7 @@ set -eu
 
 echo "json_controlled_generation"
 # [START json_controlled_generation]
-curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-latest:generateContent?key=$GOOGLE_API_KEY" \
+curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
 -H 'Content-Type: application/json' \
 -d '{
 "contents": [{
@@ -27,7 +27,7 @@ curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-lat
 
 echo "json_no_schema"
 # [START json_no_schema]
-curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-latest:generateContent?key=$GOOGLE_API_KEY" \
+curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
 -H 'Content-Type: application/json' \
 -d '{
 "contents": [{
```

samples/rest/models.sh

Lines changed: 1 addition & 1 deletion

```diff
@@ -7,5 +7,5 @@ curl https://generativelanguage.googleapis.com/v1beta/models?key=$GOOGLE_API_KEY
 
 echo "[START models_get]"
 # [START models_get]
-curl https://generativelanguage.googleapis.com/v1beta/models/gemini-pro?key=$GOOGLE_API_KEY
+curl https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash?key=$GOOGLE_API_KEY
 # [END models_get]
```
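Since several of these scripts hard-code the model segment of the URL, one way to make future renames a one-line change (a sketch under the assumption that the base endpoint stays as shown in the diff) is to parameterize it:

```shell
# Hypothetical helper: build the models_get URL from a MODEL variable so
# the same request works for gemini-1.5-flash or any other model name.
BASE="https://generativelanguage.googleapis.com/v1beta/models"
MODEL="gemini-1.5-flash"
URL="${BASE}/${MODEL}?key=${GOOGLE_API_KEY:-YOUR_API_KEY}"
echo "${URL}"
```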

samples/rest/safety_settings.sh

Lines changed: 14 additions & 18 deletions

```diff
@@ -2,37 +2,33 @@ set -eu
 
 echo "[START safety_settings]"
 # [START safety_settings]
-echo '{
+echo '{
     "safetySettings": [
-        {'category': HARM_CATEGORY_HARASSMENT, 'threshold': BLOCK_ONLY_HIGH}
+        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"}
     ],
     "contents": [{
         "parts":[{
             "text": "'I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write a ironic phrase about them.'"}]}]}' > request.json
 
-curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=$GOOGLE_API_KEY" \
+curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
     -H 'Content-Type: application/json' \
     -X POST \
-    -d @request.json 2> /dev/null > tee response.json
-
-jq .promptFeedback > response.json
+    -d @request.json 2> /dev/null
 # [END safety_settings]
 
 echo "[START safety_settings_multi]"
 # [START safety_settings_multi]
-echo '{
-    "safetySettings": [
-        {'category': HARM_CATEGORY_HARASSMENT, 'threshold': BLOCK_ONLY_HIGH},
-        {'category': HARM_CATEGORY_HATE_SPEECH, 'threshold': BLOCK_MEDIUM_AND_ABOVE}
-    ],
-    "contents": [{
-        "parts":[{
-            "text": "'I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write a ironic phrase about them.'"}]}]}' > request.json
+echo '{
+    "safetySettings": [
+        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
+        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"}
+    ],
+    "contents": [{
+        "parts":[{
+            "text": "'I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write a ironic phrase about them.'"}]}]}' > request.json
 
-curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=$GOOGLE_API_KEY" \
+curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
     -H 'Content-Type: application/json' \
     -X POST \
-    -d @request.json 2> /dev/null > response.json
-
-jq .promptFeedback > response.json
+    -d @request.json 2> /dev/null
 # [END safety_settings_multi]
```
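Beyond the model rename, this diff fixes real bugs: the old `'category': HARM_CATEGORY_HARASSMENT` style used single quotes that the surrounding single-quoted string swallowed, leaving unquoted keys and bare enum values, and the stray `> tee response.json` / `jq` pipeline clobbered the response. As a sketch (prompt text is a placeholder, not the sample's), the corrected payload can be generated and checked without shell quoting at all:

```python
import json

# Sketch: the corrected safetySettings payload from the diff, built with
# proper double-quoted JSON keys and string enum values.
payload = {
    "safetySettings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    ],
    "contents": [
        {"parts": [{"text": "Write an ironic phrase about a rival football club."}]}
    ],
}

request_json = json.dumps(payload, indent=2)
# Validate before writing request.json for curl to consume.
assert json.loads(request_json)["safetySettings"][0]["threshold"] == "BLOCK_ONLY_HIGH"
print(request_json)
```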

samples/rest/system_instruction.sh

Lines changed: 1 addition & 1 deletion

```diff
@@ -2,7 +2,7 @@ set -eu
 
 echo "[START system_instruction]"
 # [START system_instruction]
-curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-latest:generateContent?key=$GOOGLE_API_KEY" \
+curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
 -H 'Content-Type: application/json' \
 -d '{ "system_instruction": {
     "parts":
```

0 commit comments