diff --git a/docs/assets/images/guides/fs/storage_connector/s3_creation.png b/docs/assets/images/guides/fs/storage_connector/s3_creation.png
index 106a3d4f1..e67dafe45 100644
Binary files a/docs/assets/images/guides/fs/storage_connector/s3_creation.png and b/docs/assets/images/guides/fs/storage_connector/s3_creation.png differ
diff --git a/docs/user_guides/fs/storage_connector/creation/s3.md b/docs/user_guides/fs/storage_connector/creation/s3.md
index cf09dbb4a..c8105d744 100644
--- a/docs/user_guides/fs/storage_connector/creation/s3.md
+++ b/docs/user_guides/fs/storage_connector/creation/s3.md
@@ -17,6 +17,8 @@ When you're finished, you'll be able to read files using Spark through HSFS APIs
 
 Before you begin this guide you'll need to retrieve the following information from your AWS S3 account and bucket:
 
 - **Bucket:** You will need a S3 bucket that you have access to. The bucket is identified by its name.
+- **Path (Optional):** If needed, a path can be defined to restrict all connector operations to a specific location within the bucket.
+- **Region (Optional):** Specifying the bucket's region gives you complete control over the data when managing feature groups that rely on this storage connector. The region is identified by its code (for example, `us-east-1`).
 - **Authentication Method:** You can authenticate using Access Key/Secret, or use IAM roles. If you want to use an IAM role it either needs to be attached to the entire Hopsworks cluster or Hopsworks needs to be able to assume the role. See [IAM role documentation](../../../../admin/roleChaining.md) for more information.
 - **Server Side Encryption details:** If your bucket has server side encryption (SSE) enabled, make sure you know which algorithm it is using (AES256 or SSE-KMS). If you are using SSE-KMS, you need the resource ARN of the managed key.
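
A minimal sketch of the optional **Path** setting added above: every read or write through the connector resolves against the bucket plus the configured path prefix. The bucket name, prefix, and connector name below are made-up examples, not part of the patch; the commented HSFS calls assume a reachable Hopsworks cluster and are not definitive API usage.

```python
# Sketch: how an S3 connector's optional path prefix scopes operations.
# All names here ("my-bucket", "landing/2024", "s3_docs") are assumptions.

def object_uri(bucket: str, prefix: str, key: str) -> str:
    """Resolve a key against the connector's bucket and optional path."""
    parts = [p.strip("/") for p in (prefix, key) if p and p.strip("/")]
    return f"s3://{bucket}/" + "/".join(parts)

def within_prefix(prefix: str, key: str) -> bool:
    """True if a requested key stays inside the configured path prefix."""
    norm = prefix.strip("/")
    return not norm or key.strip("/") == norm or key.strip("/").startswith(norm + "/")

print(object_uri("my-bucket", "landing/2024", "day1/data.csv"))
# -> s3://my-bucket/landing/2024/day1/data.csv

# Against a live cluster, reading through the connector could look
# roughly like this (hedged sketch, not verified against the patch):
#
#   import hopsworks
#   project = hopsworks.login()
#   fs = project.get_feature_store()
#   conn = fs.get_storage_connector("s3_docs")
#   df = conn.read(data_format="csv",
#                  path="s3://my-bucket/landing/2024/day1/data.csv")
```

An empty prefix (no path configured) leaves the whole bucket accessible, which matches the bullet's "if needed" phrasing.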