JavaScript examples for Amazon Nova and Amazon Nova Canvas #7253

Open · wants to merge 2 commits into main
24 changes: 24 additions & 0 deletions .doc_gen/metadata/bedrock-runtime_metadata.yaml
@@ -86,6 +86,14 @@ bedrock-runtime_Converse_AmazonNovaText:
- description: Send a text message to Amazon Nova, using Bedrock's Converse API.
snippet_tags:
- bedrock-runtime.java2.Converse_AmazonNovaText
JavaScript:
versions:
- sdk_version: 3
github: javascriptv3/example_code/bedrock-runtime
excerpts:
- description: Send a text message to Amazon Nova, using Bedrock's Converse API.
snippet_tags:
- javascript.v3.bedrock-runtime.Converse_AmazonNovaText
services:
bedrock-runtime: {Converse}

@@ -335,6 +343,14 @@ bedrock-runtime_ConverseStream_AmazonNovaText:
- description: Send a text message to Amazon Nova using Bedrock's Converse API and process the response stream in real-time.
snippet_tags:
- bedrock-runtime.java2.ConverseStream_AmazonNovaText
JavaScript:
versions:
- sdk_version: 3
github: javascriptv3/example_code/bedrock-runtime
excerpts:
- description: Send a text message to Amazon Nova using Bedrock's Converse API and process the response stream in real-time.
snippet_tags:
- javascript.v3.bedrock-runtime.ConverseStream_AmazonNovaText
services:
bedrock-runtime: {ConverseStream}

@@ -1123,6 +1139,14 @@ bedrock-runtime_InvokeModel_AmazonNovaImageGeneration:
- description: Create an image with Amazon Nova Canvas.
snippet_tags:
- bedrock-runtime.java2.InvokeModel_AmazonNovaImageGeneration
JavaScript:
versions:
- sdk_version: 3
github: javascriptv3/example_code/bedrock-runtime
excerpts:
- description: Create an image with Amazon Nova Canvas.
snippet_tags:
- javascript.v3.bedrock-runtime.InvokeModel_AmazonNovaImageGeneration
services:
bedrock-runtime: {InvokeModel}

1 change: 1 addition & 0 deletions javascriptv3/example_code/bedrock-runtime/.gitignore
@@ -1 +1,2 @@
/tempx/
/output/
9 changes: 9 additions & 0 deletions javascriptv3/example_code/bedrock-runtime/README.md
@@ -50,6 +50,15 @@ functions within the same service.
- [Converse](models/ai21LabsJurassic2/converse.js#L4)
- [InvokeModel](models/ai21LabsJurassic2/invoke_model.js)

### Amazon Nova

- [Converse](models/amazonNovaText/converse.js#L4)
- [ConverseStream](models/amazonNovaText/converseStream.js#L4)

### Amazon Nova Canvas

- [InvokeModel](models/amazonNovaCanvas/invokeModel.js#L4)

### Amazon Titan Text

- [Converse](models/amazonTitanText/converse.js#L4)
93 changes: 93 additions & 0 deletions javascriptv3/example_code/bedrock-runtime/models/amazonNovaCanvas/invokeModel.js
@@ -0,0 +1,93 @@
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

// snippet-start:[javascript.v3.bedrock-runtime.InvokeModel_AmazonNovaImageGeneration]

import {
BedrockRuntimeClient,
InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";
import { saveImage } from "../../utils/image-creation.js";
import { fileURLToPath } from "node:url";

/**
* This example demonstrates how to use Amazon Nova Canvas to generate images.
* It shows how to:
* - Set up the Amazon Bedrock runtime client
* - Configure the image generation parameters
* - Send a request to generate an image
* - Process the response and handle the generated image
*
* @returns {Promise<string>} Base64-encoded image data
*/
export const invokeModel = async () => {
// Step 1: Create the Amazon Bedrock runtime client
// Credentials will be automatically loaded from the environment
const client = new BedrockRuntimeClient({ region: "us-east-1" });

// Step 2: Specify which model to use
// For the latest available models, see:
// https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html
const modelId = "amazon.nova-canvas-v1:0";

// Step 3: Configure the request payload
// First, set the main parameters:
// - prompt: Text description of the image to generate
// - seed: Random number for reproducible generation (0 to 858,993,459)
const prompt = "A stylized picture of a cute old steampunk robot";
const seed = Math.floor(Math.random() * 858993460);

// Then, create the payload using the following structure:
// - taskType: TEXT_IMAGE (specifies text-to-image generation)
// - textToImageParams: Contains the text prompt
// - imageGenerationConfig: Contains optional generation settings (seed, quality, etc.)
// For a list of available request parameters, see:
// https://docs.aws.amazon.com/nova/latest/userguide/image-gen-req-resp-structure.html
const payload = {
taskType: "TEXT_IMAGE",
textToImageParams: {
text: prompt,
},
imageGenerationConfig: {
seed,
quality: "standard",
},
};

// Step 4: Send and process the request
// - Embed the payload in a request object
// - Send the request to the model
// - Extract and return the generated image data from the response
try {
const request = {
modelId,
body: JSON.stringify(payload),
};
const response = await client.send(new InvokeModelCommand(request));

const decodedResponseBody = new TextDecoder().decode(response.body);
// The response includes an array of base64-encoded PNG images
/** @type {{images: string[]}} */
const responseBody = JSON.parse(decodedResponseBody);
return responseBody.images[0]; // Base64-encoded image data
} catch (error) {
console.error(`ERROR: Can't invoke '${modelId}'. Reason: ${error.message}`);
throw error;
}
};

// If run directly, execute the example and save the generated image
if (process.argv[1] === fileURLToPath(import.meta.url)) {
console.log("Generating image. This may take a few seconds...");
invokeModel()
.then(async (imageData) => {
const imagePath = await saveImage(imageData, "nova-canvas");
// Example path: javascriptv3/example_code/bedrock-runtime/output/nova-canvas/image-01.png
console.log(`Image saved to: ${imagePath}`);
})
.catch((error) => {
console.error("Execution failed:", error);
process.exitCode = 1;
});
}
// snippet-end:[javascript.v3.bedrock-runtime.InvokeModel_AmazonNovaImageGeneration]
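The example imports a `saveImage` helper from `../../utils/image-creation.js`, which is outside this diff. For readers following along, here is a minimal sketch of what such a helper could look like; the directory layout and file-naming scheme are assumptions, not the repository's actual implementation.

```js
// Hypothetical sketch of a saveImage helper (the real utils/image-creation.js is not shown in this PR)
import { mkdir, readdir, writeFile } from "node:fs/promises";
import path from "node:path";

export const saveImage = async (base64ImageData, subDirectory) => {
  // Write into output/<subDirectory>/, which the updated .gitignore excludes from version control
  const outputDir = path.join("output", subDirectory);
  await mkdir(outputDir, { recursive: true });

  // Number the file after the images already present, e.g. image-01.png, image-02.png, ...
  const existing = await readdir(outputDir);
  const fileName = `image-${String(existing.length + 1).padStart(2, "0")}.png`;
  const imagePath = path.join(outputDir, fileName);

  // Decode the base64 payload returned by Nova Canvas and write it as a PNG file
  await writeFile(imagePath, Buffer.from(base64ImageData, "base64"));
  return imagePath;
};
```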
68 changes: 68 additions & 0 deletions javascriptv3/example_code/bedrock-runtime/models/amazonNovaText/converse.js
@@ -0,0 +1,68 @@
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

// snippet-start:[javascript.v3.bedrock-runtime.Converse_AmazonNovaText]
// This example demonstrates how to use the Amazon Nova foundation models to generate text.
// It shows how to:
// - Set up the Amazon Bedrock runtime client
// - Create a message
// - Configure and send a request
// - Process the response

import {
BedrockRuntimeClient,
ConversationRole,
ConverseCommand,
} from "@aws-sdk/client-bedrock-runtime";

// Step 1: Create the Amazon Bedrock runtime client
// Credentials will be automatically loaded from the environment
const client = new BedrockRuntimeClient({ region: "us-east-1" });

// Step 2: Specify which model to use:
// Available Amazon Nova models and their characteristics:
// - Amazon Nova Micro: Text-only model optimized for lowest latency and cost
// - Amazon Nova Lite: Fast, low-cost multimodal model for image, video, and text
// - Amazon Nova Pro: Advanced multimodal model balancing accuracy, speed, and cost
//
// For the most current model IDs, see:
// https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html
const modelId = "amazon.nova-lite-v1:0";

// Step 3: Create the message
// The message includes the text prompt and specifies that it comes from the user
const inputText =
"Describe the purpose of a 'hello world' program in one line.";
const message = {
content: [{ text: inputText }],
role: ConversationRole.USER,
};

// Step 4: Configure the request
// Optional parameters to control the model's response:
// - maxTokens: maximum number of tokens to generate
// - temperature: randomness (max: 1.0, default: 0.7)
// OR
// - topP: diversity of word choice (max: 1.0, default: 0.9)
// Note: Use either temperature OR topP, but not both
const request = {
modelId,
messages: [message],
inferenceConfig: {
maxTokens: 500, // The maximum response length
temperature: 0.5, // Using temperature for randomness control
//topP: 0.9, // Alternative: use topP instead of temperature
},
};

// Step 5: Send and process the request
// - Send the request to the model
// - Extract and return the generated text from the response
try {
const response = await client.send(new ConverseCommand(request));
console.log(response.output.message.content[0].text);
} catch (error) {
console.error(`ERROR: Can't invoke '${modelId}'. Reason: ${error.message}`);
throw error;
}
// snippet-end:[javascript.v3.bedrock-runtime.Converse_AmazonNovaText]
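As a side note, the `ConverseCommand` response also reports token usage and a stop reason, which can be handy while experimenting with `maxTokens` and `temperature`. A small optional variation (not part of this PR), reusing the `client` and `request` from the example above:

```js
// Optional: log token usage and the stop reason in addition to the generated text
const response = await client.send(new ConverseCommand(request));
console.log(response.output.message.content[0].text);
console.log(
  `Stop reason: ${response.stopReason} | ` +
    `Tokens in/out: ${response.usage.inputTokens}/${response.usage.outputTokens}`,
);
```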
75 changes: 75 additions & 0 deletions javascriptv3/example_code/bedrock-runtime/models/amazonNovaText/converseStream.js
@@ -0,0 +1,75 @@
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

// snippet-start:[javascript.v3.bedrock-runtime.ConverseStream_AmazonNovaText]
// This example demonstrates how to use the Amazon Nova foundation models
// to generate streaming text responses.
// It shows how to:
// - Set up the Amazon Bedrock runtime client
// - Create a message
// - Configure a streaming request
// - Process the streaming response

import {
BedrockRuntimeClient,
ConversationRole,
ConverseStreamCommand,
} from "@aws-sdk/client-bedrock-runtime";

// Step 1: Create the Amazon Bedrock runtime client
// Credentials will be automatically loaded from the environment
const client = new BedrockRuntimeClient({ region: "us-east-1" });

// Step 2: Specify which model to use
// Available Amazon Nova models and their characteristics:
// - Amazon Nova Micro: Text-only model optimized for lowest latency and cost
// - Amazon Nova Lite: Fast, low-cost multimodal model for image, video, and text
// - Amazon Nova Pro: Advanced multimodal model balancing accuracy, speed, and cost
//
// For the most current model IDs, see:
// https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html
const modelId = "amazon.nova-lite-v1:0";

// Step 3: Create the message
// The message includes the text prompt and specifies that it comes from the user
const inputText =
"Describe the purpose of a 'hello world' program in one paragraph";
const message = {
content: [{ text: inputText }],
role: ConversationRole.USER,
};

// Step 4: Configure the streaming request
// Optional parameters to control the model's response:
// - maxTokens: maximum number of tokens to generate
// - temperature: randomness (max: 1.0, default: 0.7)
// OR
// - topP: diversity of word choice (max: 1.0, default: 0.9)
// Note: Use either temperature OR topP, but not both
const request = {
modelId,
messages: [message],
inferenceConfig: {
maxTokens: 500, // The maximum response length
temperature: 0.5, // Using temperature for randomness control
//topP: 0.9, // Alternative: use topP instead of temperature
},
};

// Step 5: Send and process the streaming request
// - Send the request to the model
// - Process each chunk of the streaming response
try {
const response = await client.send(new ConverseStreamCommand(request));

for await (const chunk of response.stream) {
if (chunk.contentBlockDelta) {
// Print each text chunk as it arrives
process.stdout.write(chunk.contentBlockDelta.delta?.text || "");
}
}
} catch (error) {
console.error(`ERROR: Can't invoke '${modelId}'. Reason: ${error.message}`);
process.exitCode = 1;
}
// snippet-end:[javascript.v3.bedrock-runtime.ConverseStream_AmazonNovaText]
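For cases where the full text is needed after streaming (for example, in a test that inspects the complete output), a small optional variation (not part of this PR) collects the chunks into a single string and reads the stop reason from the final `messageStop` event:

```js
// Optional variation: accumulate the streamed text instead of writing it to stdout
let completeText = "";
const response = await client.send(new ConverseStreamCommand(request));
for await (const chunk of response.stream) {
  if (chunk.contentBlockDelta) {
    completeText += chunk.contentBlockDelta.delta?.text ?? "";
  } else if (chunk.messageStop) {
    // The last event reports why the model stopped generating (e.g. end of turn or max tokens)
    console.log(`Stop reason: ${chunk.messageStop.stopReason}`);
  }
}
console.log(completeText);
```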
4 changes: 2 additions & 2 deletions javascriptv3/example_code/bedrock-runtime/package.json
@@ -8,9 +8,9 @@
"integration-test": "vitest run integration --reporter=junit --outputFile=test_results/bedrock-runtime-test-results.junit.xml"
},
"devDependencies": {
"vitest": "^1.6.0"
"vitest": "^1.6.1"
},
"dependencies": {
"@aws-sdk/client-bedrock-runtime": "^3.658.1"
"@aws-sdk/client-bedrock-runtime": "^3.751.0"
}
}
@@ -8,18 +8,19 @@ describe("Converse with text generation models", () => {
const baseDirectory = path.join(__dirname, "..", "models");
const fileName = "converse.js";

const subdirectories = [
"ai21LabsJurassic2",
"amazonTitanText",
"anthropicClaude",
"cohereCommand",
"metaLlama",
"mistral",
];
const models = {
ai21LabsJurassic2: "AI21 Labs Jurassic-2",
amazonNovaText: "Amazon Nova",
amazonTitanText: "Amazon Titan",
anthropicClaude: "Anthropic Claude",
cohereCommand: "Cohere Command",
metaLlama: "Meta Llama",
mistral: "Mistral",
};

test.each(subdirectories)(
"should invoke the model and return text",
async (subdirectory) => {
test.each(Object.entries(models).map(([sub, name]) => [name, sub]))(
"should invoke %s and return text",
async (_, subdirectory) => {
const script = path.join(baseDirectory, subdirectory, fileName);
const consoleLogSpy = vi.spyOn(console, "log");

@@ -9,17 +9,18 @@ describe("ConverseStream with text generation models", () => {
const fileName = "converseStream.js";
const baseDirectory = path.join(__dirname, "..", "models");

const subdirectories = [
"amazonTitanText",
"anthropicClaude",
"cohereCommand",
"metaLlama",
"mistral",
];
const models = {
amazonNovaText: "Amazon Nova",
amazonTitanText: "Amazon Titan",
anthropicClaude: "Anthropic Claude",
cohereCommand: "Cohere Command",
metaLlama: "Meta Llama",
mistral: "Mistral",
};

test.each(subdirectories)(
"should invoke the model and return text",
async (subdirectory) => {
test.each(Object.entries(models).map(([sub, name]) => [name, sub]))(
"should invoke %s and return text",
async (_, subdirectory) => {
let output = "";
const outputStream = new Writable({
write(/** @type string */ chunk, encoding, callback) {
@@ -0,0 +1,13 @@
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

import { describe, it } from "vitest";
import { invokeModel } from "../models/amazonNovaCanvas/invokeModel.js";
import { expectToBeANonEmptyString } from "./test_tools.js";

describe("Invoking Amazon Nova Canvas", () => {
it("should return a response", async () => {
const response = await invokeModel();
expectToBeANonEmptyString(response);
});
});
@@ -10,5 +10,5 @@ import { expect } from "vitest";
*/
export const expectToBeANonEmptyString = (string) => {
expect(typeof string).toBe("string");
expect(string.length).not.toBe(0);
expect(string).not.toHaveLength(0);
};