Here you can specify any additional Spark options that you wish to add to the Spark session.
To connect to an S3-compatible storage other than AWS S3, add an option with the key `fs.s3a.endpoint` and the endpoint you want to use as the value. The data source will then be able to read from your specified S3-compatible storage.
You can also add options to configure the S3A client. For example, to disable SSL certificate verification, add an option with the key `fs.s3a.connection.ssl.enabled` and the value `false`. You can also configure other options such as `fs.s3a.path.style.access` if you use S3-compliant storage that does not support virtual-hosted-style addressing.
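Taken together, the options described above are plain key/value pairs. The sketch below collects them into a dictionary; the dictionary name and the endpoint URL are illustrative, and how the options are actually passed depends on the data source creation interface you use.

```python
# Illustrative additional Spark/S3A options for an S3-compatible store.
# The endpoint URL is hypothetical; substitute your provider's endpoint.
spark_options = {
    "fs.s3a.endpoint": "https://minio.example.com:9000",  # non-AWS, S3-compatible endpoint
    "fs.s3a.connection.ssl.enabled": "false",             # disable SSL certificate verification
    "fs.s3a.path.style.access": "true",                   # path-style requests if virtual hosting is unsupported
}
```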
!!! warning "Spark Configuration"
    When using the data source within a Spark application, the credentials are set at the application level. This allows users to access multiple buckets with the same data source within the same application (assuming the credentials allow it).
    You can disable this behaviour by setting the option `fs.s3a.global-conf` to `False`. If the `global-conf` option is disabled, the credentials are set on a per-bucket basis, and users will only be able to use the credentials to access data from the bucket specified in the data source configuration.
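As a sketch, the per-bucket behaviour described in the warning could be requested by adding the option below alongside any other options; the dictionary name is illustrative, and the value is shown as the string `"False"` exactly as the option is written above.

```python
# Illustrative option restricting credentials to the configured bucket only.
spark_options = {
    "fs.s3a.global-conf": "False",  # disable application-level (global) credential configuration
}
```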