samples/contract-compliance-analysis/back-end/README.md
#### Enable access to Bedrock models
Models are not enabled by default on Amazon Bedrock, so if this is the first time you are using Amazon Bedrock, it is recommended to first check whether access is already enabled.

The default model is Anthropic Claude 3 Haiku v1, used in [cross-region inference](https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference.html) mode. Please ensure this model is enabled in the regions listed in the **US Anthropic Claude 3 Haiku** section of the [Supported Regions and models for inference profiles](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html) page.
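Cross-region inference profile IDs follow a simple convention: a geography prefix (`us`, `eu`, or `apac`) prepended to the base model ID, so the US profile for Claude 3 Haiku is `us.anthropic.claude-3-haiku-20240307-v1:0`. As an illustration only (the helper below is hypothetical and not part of this sample's code), the mapping can be sketched as:

```python
# Hypothetical helper illustrating how cross-region inference profile IDs
# relate to base Bedrock model IDs. Not part of this sample's code.

GEO_PREFIXES = ("us", "eu", "apac")


def to_inference_profile_id(model_id: str, geo: str = "us") -> str:
    """Prefix a base model ID with a geography to form a profile ID."""
    if geo not in GEO_PREFIXES:
        raise ValueError(f"unknown geography prefix: {geo}")
    return f"{geo}.{model_id}"


def base_model_id(profile_id: str) -> str:
    """Strip a geography prefix from a cross-region profile ID, if present."""
    head, _, rest = profile_id.partition(".")
    return rest if head in GEO_PREFIXES else profile_id
```

For example, `to_inference_profile_id("anthropic.claude-3-haiku-20240307-v1:0")` yields the profile ID used by this sample.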

Steps:

- Go to the AWS Console, then go to Amazon Bedrock
- Click **Model access** on the left side
- Click the **Enable specific models** button and enable the checkbox for Anthropic Claude models
- Click the **Next** and **Submit** buttons
## How to customize contract analysis according to your use case
By default, the application uses Anthropic Claude 3 Haiku v1. Here are the steps to change the model in use. For this example, we will use [Amazon Nova Pro v1](https://aws.amazon.com/blogs/aws/introducing-amazon-nova-frontier-intelligence-and-industry-leading-price-performance/):
- Open the [app_properties.yaml](./app_properties.yaml) file and update the field `claude_model_id` with the model ID you selected; in this case, we set it to `us.amazon.nova-pro-v1:0`. The list of model IDs available through Amazon Bedrock is in the [documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html). Ensure the model you select is enabled in the console (Amazon Bedrock -> Model access) and available in your region. If you use a predefined inference profile to run a model in a cross-region fashion, consult the [documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html) for all regions that need model access enabled.
- Depending on the model selected, you might need to update some hardcoded values for the maximum number of new tokens generated. For instance, Amazon Nova Pro v1 supports 5,000 output tokens, which requires no modifications. However, some models have a maximum output of 3,000 tokens, which requires some changes in the sample. Update the following lines if required:
  - In file [fn-preprocess-contract/index.py](./stack/sfn/preprocessing/fn-preprocess-contract/index.py), update line 96 to change the chunk size to a value smaller than your model's max output tokens, and line 107 to match your model's max output tokens.
  - In file [scripts/utils/llm.py](./scripts/utils/llm.py), update the max output tokens on line 28.
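The relationship between the chunk size and the model's max output tokens can be sketched as follows. This is a hypothetical helper (not the sample's actual preprocessing code), and it approximates tokens by whitespace-separated words rather than using the model's real tokenizer:

```python
# Sketch: split text into chunks whose (approximate) token count stays
# below the model's max output tokens, so a response that echoes or
# rewrites a chunk cannot be truncated. Whitespace splitting is a crude
# stand-in for a real tokenizer.


def chunk_text(text: str, max_output_tokens: int, safety_margin: float = 0.8) -> list[str]:
    """Split text into word chunks of at most safety_margin * max_output_tokens words."""
    limit = int(max_output_tokens * safety_margin)
    if limit < 1:
        raise ValueError("max_output_tokens too small for the chosen safety margin")
    words = text.split()
    return [" ".join(words[i:i + limit]) for i in range(0, len(words), limit)]
```

Keeping the chunk size below the output limit (with a safety margin) is what the line-96/line-107 edits above are about: if a chunk were allowed to approach the max output tokens, the model's response for that chunk could be cut off mid-generation.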