Allow using AWS_QUERYSTRING_AUTH & AWS_S3_CUSTOM_DOMAIN at the same time #839
base: master
Conversation
Hi, can someone try to merge this in? #587 only solved the issue if we use CloudFront. If we want to use our own proxy solution like nginx, this is important.
Looks like it's merged now
Is this fixed yet?
@rahulkp220 you can probably use this #587 (comment) from #587
Okay, I will have a look at it.
The suggestion to use #587 doesn't work for us; it would be great if this could be merged in 🙂
@cjnething there are merge conflicts, and if it indeed doesn't work for the author, the author will need to update this PR so it can be considered. (To be clear, I would not be the person considering it; I only suggested that a PR I wrote may fix the issue. If these are independent issues, the author will need to update this PR and the maintainer(s) will need to approve it.)
Based on the more detailed report in #165 (comment), I understand a little more what type of problem this is trying to fix (not sure if it's really the same problem others are running into). If @cpoetter has a chance to update this PR: I added some comments in #165 (comment) about the different states that need to be accounted for.
The merge conflict should be resolved ✅
Just a heads up, there are two
Hey folks, won't replacing the S3 URL with the custom URL after signing cause the signature to be invalid?
The signature does not cover the whole URL, only part of it, and the part that is signed is not changing. I believe this PR still needs an update based on my last comment.
@cpoetter are you still willing to work on this?
Is this something that the maintainers would still want merged in? We have similar use cases, both with S3Mock and with nginx as a reverse proxy, and the fix in #165 (comment) seems to work for now. I might be able to work up a PR to get this into the main library, though, if it is something people still want. Edit: Actually, looking into this a bit more, it looks like this functionality will cause a possible breaking change if others are using a custom domain when
@dvail Yes, I will take something like this, since it seems there is a fair amount of demand and I think it will ease integrating with other S3-compatible services. That comment's approach seems more elegant than this method; mind opening a fix?
Hello, I'd still like to get this in and am thinking of something like the following, which incorporates seemingly the best of the proposed solutions. However, I don't fully understand why

```diff
diff --git a/storages/backends/s3boto3.py b/storages/backends/s3boto3.py
index 7231750..fb79c30 100644
--- a/storages/backends/s3boto3.py
+++ b/storages/backends/s3boto3.py
@@ -562,33 +562,53 @@ class S3Boto3Storage(CompressStorageMixin, BaseStorage):
         split_url = split_url._replace(query='&'.join(joined_qs))
         return split_url.geturl()

+    def _get_custom_domain_url(self, name, params, expire):
+        url = '{}//{}/{}{}'.format(
+            self.url_protocol,
+            self.custom_domain,
+            filepath_to_uri(name),
+            ('?' + urlencode(params)) if params else '',
+        )
+
+        if self.querystring_auth:
+            if self.cloudfront_signer:
+                expiration = datetime.utcnow() + timedelta(seconds=expire)
+                url = self.cloudfront_signer.generate_presigned_url(
+                    url,
+                    date_less_than=expiration
+                )
+            else:
+                # TODO
+                url = self.bucket.meta.client.generate_presigned_url(
+                    'get_object',
+                    Params=params,
+                    ExpiresIn=expire,
+                    HttpMethod=http_method
+                )
+
+        return url
+
     def url(self, name, parameters=None, expire=None, http_method=None):
-        # Preserve the trailing slash after normalizing the path.
         name = self._normalize_name(clean_name(name))
         params = parameters.copy() if parameters else {}
+
         if expire is None:
             expire = self.querystring_expire
         if self.custom_domain:
-            url = '{}//{}/{}{}'.format(
-                self.url_protocol,
-                self.custom_domain,
-                filepath_to_uri(name),
-                '?{}'.format(urlencode(params)) if params else '',
-            )
-
-            if self.querystring_auth and self.cloudfront_signer:
-                expiration = datetime.utcnow() + timedelta(seconds=expire)
-                return self.cloudfront_signer.generate_presigned_url(url, date_less_than=expiration)
-
-            return url
+            return self._get_custom_domain_url(name, params, expire)
         params['Bucket'] = self.bucket.name
         params['Key'] = name
-        url = self.bucket.meta.client.generate_presigned_url('get_object', Params=params,
-                                                             ExpiresIn=expire, HttpMethod=http_method)
+        url = self.bucket.meta.client.generate_presigned_url(
+            'get_object', Params=params,
+            ExpiresIn=expire,
+            HttpMethod=http_method
+        )
+
         if self.querystring_auth:
             return url
+
         return self._strip_signing_parameters(url)

     def get_available_name(self, name, max_length=None):
```
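For readers following the patch, `self._strip_signing_parameters(url)` at the end removes the AWS signing query parameters when `querystring_auth` is off. Below is a rough standalone sketch of that idea using only the standard library; it is not django-storages' actual implementation, and the parameter list is an assumption covering the usual SigV2/SigV4 names:

```python
from urllib.parse import urlsplit, urlencode, parse_qsl

# Hypothetical set of AWS signing query parameters (lowercased for
# case-insensitive matching); SigV2 names first, then SigV4 names.
SIGNING_PARAMS = {
    'awsaccesskeyid', 'signature', 'expires',
    'x-amz-algorithm', 'x-amz-credential', 'x-amz-date',
    'x-amz-expires', 'x-amz-signedheaders', 'x-amz-signature',
    'x-amz-security-token',
}

def strip_signing_parameters(url):
    """Drop signing parameters from a presigned URL, keep everything else."""
    split = urlsplit(url)
    qs = [(k, v) for k, v in parse_qsl(split.query, keep_blank_values=True)
          if k.lower() not in SIGNING_PARAMS]
    return split._replace(query=urlencode(qs)).geturl()

signed = ('https://bucket.s3.amazonaws.com/key.txt'
          '?X-Amz-Signature=abc123&X-Amz-Expires=3600&response-content-type=text/plain')
print(strip_signing_parameters(signed))
# https://bucket.s3.amazonaws.com/key.txt?response-content-type=text%2Fplain
```

Non-signing parameters such as `response-content-type` survive the strip, which is why the patch can still honor caller-supplied `parameters` with `querystring_auth` disabled.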
Sorry folks, this still didn't make it into the most recent version. It would be very helpful if someone could list out the configuration they want to have, INCLUDING the DNS / domain mapping, because the info here conflicts in several ways for me.
This is my use case... given:

```yaml
x-task: &task
  # docker stack deploy restart policy for tasks
  deploy:
    restart_policy:
      condition: on-failure
    replicas: 1
  # docker compose stand-alone restart policy for tasks
  restart: on-failure

x-task-ensure-bucket: &x-task-ensure-bucket
  <<: *task
  image: minio/mc
  entrypoint: |
    sh -c "
    mc config host add s3 http://s3:9000 dev password;
    mc mb --ignore-existing s3/$${BUCKET_NAME}"

volumes:
  s3-data:

services:
  s3:
    image: minio/minio
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - s3-data:/data
    environment:
      MINIO_ROOT_USER: dev
      MINIO_ROOT_PASSWORD: password
    command: server --console-address ":9001" /data

  ensure-s3-bucket:
    <<: *x-task-ensure-bucket
    environment:
      - BUCKET_NAME=example
    depends_on:
      - s3
```

Here is what my storage looks like:

```python
class CustomStorage(S3Boto3Storage):
    bucket_name = 'example'
    custom_domain = 'host.docker.internal:9000/example'
    url_protocol = 'http'
    querystring_auth = True
    default_acl = "private"
```

In my test case I'm using MinIO running locally in Docker. With the configuration above I get URLs in the form. I expect URLs in the form of. If I comment out custom_domain I get URLs in the form.
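The URL shape for a custom domain follows directly from the `'{}//{}/{}{}'` format template in the proposed patch. A small stand-alone sketch (the domain and object key are placeholders, and `urllib.parse.quote` stands in for Django's `filepath_to_uri`); one thing worth double-checking in configurations like the one above is that the format string expects `url_protocol` to carry its trailing colon (django-storages' default is `'https:'`), otherwise the `//` produces `http//...`:

```python
from urllib.parse import quote

# Reproduce the custom-domain URL template from the proposed patch:
# '{}//{}/{}{}'.format(url_protocol, custom_domain, filepath_to_uri(name), ...)
url_protocol = 'http:'  # note the trailing colon
custom_domain = 'host.docker.internal:9000/example'  # domain plus path segment
name = 'uploads/report.pdf'  # hypothetical object key

# quote() keeps '/' unescaped by default, matching path-style keys.
url = '{}//{}/{}'.format(url_protocol, custom_domain, quote(name))
print(url)  # http://host.docker.internal:9000/example/uploads/report.pdf
```

Since the bucket path (`/example`) is baked into `custom_domain` here, the generated URL already points at the MinIO bucket without any bucket logic in the template itself.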
Commenting here to report that there is a current hack of replacing the endpoint URL after it has been generated, but it only works with Signature Version 2. More info here: #165 (comment).
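A minimal sketch of that endpoint-replacement hack, assuming a presigned URL has already been generated (the URLs and domain below are placeholders): generate the URL against the real endpoint, then swap in the custom domain. With Signature Version 2 the host is not part of the signed string, so the signature survives the swap; with SigV4 the host is signed, which is why this hack breaks there.

```python
from urllib.parse import urlsplit

def swap_domain(presigned_url, custom_domain):
    """Replace the host of an already-presigned URL with a custom domain.

    Only valid for Signature Version 2, where the host is not signed.
    """
    split = urlsplit(presigned_url)
    return split._replace(netloc=custom_domain).geturl()

# Placeholder SigV2-style presigned URL; query string is left untouched.
url = 'https://example.s3.amazonaws.com/key.txt?AWSAccessKeyId=AKIA&Signature=abc&Expires=1700000000'
print(swap_domain(url, 'media.example.com'))
# https://media.example.com/key.txt?AWSAccessKeyId=AKIA&Signature=abc&Expires=1700000000
```

The path and query (including the signature parameters) pass through unchanged; only the netloc differs.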
Hi,
I am currently trying to mock S3 in development with the help of https://github.com/adobe/S3Mock. Unfortunately I can't use signed URLs and a custom domain at the same time. This should fix that.
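To make the desired end state concrete, here is a hedged sketch of the Django settings this PR aims to support together (the domain, bucket name, and values are placeholders; the setting names are the standard django-storages ones):

```python
# settings.py -- hypothetical configuration: signed URLs served through a
# custom domain (e.g. S3Mock or an nginx reverse proxy).
AWS_STORAGE_BUCKET_NAME = "example"             # placeholder bucket
AWS_S3_CUSTOM_DOMAIN = "s3mock.localhost:9090"  # placeholder custom domain
AWS_S3_URL_PROTOCOL = "http:"                   # trailing colon required
AWS_QUERYSTRING_AUTH = True                     # emit signed URLs...
AWS_QUERYSTRING_EXPIRE = 3600                   # ...valid for one hour
```

As the diff above shows, the current code honors `querystring_auth` with a custom domain only when a CloudFront signer is configured; with this PR, both settings would take effect for other backends as well.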