Can I connect to S3 using IAM? #131
Hey, thanks for the issue. What does it look like in the S3 JS client to connect using an IAM role (vs. using access keys)? Any sample code for using roles with S3 would be helpful!
hey,
if you just leave the credential keys blank, the SDK should pick up the IAM role on its own; of course that is not going to work from your local dev environment.
Ah okay, so just leaving them blank will let you connect? Yup, I think we can do that.
TBH, I need to try it. Can I create a branch, check it out, and deploy it to AWS to test? Or can we do it some other way?
Yup, it'd be good to test. Go ahead and fork the repo!
Hi all, I saw the associated MR has been closed since it didn't produce the desired result. I'm also looking into how to use this in a K8s environment where the pod itself assumes an IAM role, so I have no credentials to pass into the config of next-s3-upload. I was thinking this library could resolve credentials the same way the AWS SDK does; perhaps it already can and I'm just missing how.

Also, my accounts use MFA, so my AWS credentials come with a session token, but the configure object only accepts a key and secret. How can I set up the token? I believe the credential resolver should default to the AWS SDK's own chain rather than force users into one of the several methods the SDK allows. Thank you!
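For reference, a minimal sketch of how the AWS SDK for JavaScript v3 resolves credentials when none are passed explicitly; `fromNodeProviderChain` from `@aws-sdk/credential-providers` is the SDK's default chain, and it also picks up a session token such as the one MFA-based credentials include. The region value and the list-buckets smoke test are illustrative, not part of next-s3-upload:

```ts
import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";
import { fromNodeProviderChain } from "@aws-sdk/credential-providers";

// Passing the default chain explicitly is equivalent to omitting `credentials`
// altogether: the SDK reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY /
// AWS_SESSION_TOKEN, the shared config and credentials files, web identity
// tokens (e.g. IRSA on K8s), and the ECS/EC2 instance metadata service.
const client = new S3Client({
  region: process.env.AWS_REGION, // illustrative; set your bucket's region
  credentials: fromNodeProviderChain(),
});

// Smoke test: list buckets with whatever credentials were resolved.
async function main() {
  const { Buckets } = await client.send(new ListBucketsCommand({}));
  console.log(Buckets?.map((bucket) => bucket.Name));
}

main().catch(console.error);
```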
To my understanding the solution is very simple. The AWS SDKs pick up EC2 instance roles automatically, as outlined in the AWS documentation.

If the S3Client class is instantiated without credentials it uses the default provider chain. The client is currently created like this in use-s3-upload.tsx (the lines marked with - are the ones that could be dropped):

```diff
 let client = new S3Client({
-  credentials: {
-    accessKeyId: config.accessKeyId,
-    secretAccessKey: config.secretAccessKey,
-  },
   region: config.region,
   ...(config.forcePathStyle ? { forcePathStyle: config.forcePathStyle } : {}),
   ...(config.endpoint ? { endpoint: config.endpoint } : {}),
 });
```

The default chain checks these sources in order: environment variables, the shared credentials and config files, web identity tokens, and finally the ECS container or EC2 instance metadata service.

I suggest just not passing a credentials object to the S3Client class when users of this library don't provide them.
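A minimal sketch of that suggestion, assuming the library's key and secret config fields become optional; the `declare`d `config` below is just a stand-in for the library's real config object from the snippet above:

```ts
import { S3Client } from "@aws-sdk/client-s3";

// Stand-in for the library's config object; key and secret are now optional.
declare const config: {
  accessKeyId?: string;
  secretAccessKey?: string;
  region?: string;
  forcePathStyle?: boolean;
  endpoint?: string;
};

let client = new S3Client({
  // Only pass credentials when both values are supplied; otherwise the SDK
  // falls back to its default provider chain (env vars, shared files, web
  // identity tokens, instance metadata).
  ...(config.accessKeyId && config.secretAccessKey
    ? {
        credentials: {
          accessKeyId: config.accessKeyId,
          secretAccessKey: config.secretAccessKey,
        },
      }
    : {}),
  region: config.region,
  ...(config.forcePathStyle ? { forcePathStyle: config.forcePathStyle } : {}),
  ...(config.endpoint ? { endpoint: config.endpoint } : {}),
});
```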
Hi guys, thanks for taking the time to provide the explanations and suggestions! I appreciate them. For the longest time we had to use custom ENVs because Vercel didn't allow users to set their own AWS_-prefixed variables. This would be a breaking change since we'd be ignoring the current set of ENVs in some configurations. Again, thanks for providing all the info/explanation!
Ok @DriesCruyskens and @manuelsechi, I just published a beta release that should use IAM credentials. Here's how to use it: install the beta version of the package and simply leave the credential ENVs unset; the client will fall back to the SDK's default provider chain.
That's it! The only other thing to note is that you should probably use the usePresignedUpload() hook instead of useS3Upload(). That's because useS3Upload() relies on STS, which I don't believe will work with instance credentials. Feel free to test both though. Let me know if it works and don't hesitate to post any questions or issues you run into.
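To make that concrete, here's a hedged sketch of a presigned-upload setup. The file paths are illustrative, and the APIRoute export and the url field on uploadToS3's result follow the package's usual (non-beta) setup, so they may differ slightly in the beta:

```tsx
// pages/api/s3-upload.ts: illustrative path. With no key/secret ENVs set,
// the S3 client behind this route should fall back to the instance or pod
// IAM role via the SDK's default credential provider chain.
export { APIRoute as default } from "next-s3-upload";
```

```tsx
// components/Uploader.tsx: client side, using the presigned-upload hook
// recommended above (useS3Upload relies on STS, which instance credentials
// may not support).
import type { ChangeEvent } from "react";
import { usePresignedUpload } from "next-s3-upload";

export default function Uploader() {
  let { uploadToS3 } = usePresignedUpload();

  async function handleFileChange(event: ChangeEvent<HTMLInputElement>) {
    let file = event.target.files?.[0];
    if (!file) return;
    let { url } = await uploadToS3(file);
    console.log("Uploaded to", url);
  }

  return <input type="file" onChange={handleFileChange} />;
}
```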
Hey @alexanderluiscampino, hopefully the above solves your issue as well, but let me know if you need something else passed to the client.
@ryanto Works like a charm. I provide AWS credentials as environment variables in development to access an S3 bucket. In production I provide no variables but attach an instance profile with an S3 access policy (using the beta release). Thank you for your work on this awesome package!
Awesome! Thanks so much for testing it. You can continue to use that version, but hopefully I'll get this released as a non-beta in the next week or two. Thanks again for the test!
@ryanto sorry for being late to the party, I tested it as well and it works as expected. I can only suggest changing the usePresignedUpload method to accept an endpoint option, like you already did in useS3Upload, to dynamically decide where to upload the object.
Thanks for your support with this!
Hi @manuelsechi, glad to hear it works! For presigned uploads you can pass an endpoint using the options argument of uploadToS3:

```tsx
function MyComponent() {
  let { uploadToS3 } = usePresignedUpload();

  async function handleFileChange(file) {
    await uploadToS3(file, {
      endpoint: {
        request: {
          url: "/api/my-custom-endpoint"
        }
      }
    });
  }

  // ...
}
```
@ryanto thanks, this is working as expected. Can you share when this will get released?
Hello,
can I connect to S3 using just an IAM role?
It seems like AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are always required; maybe they could be made optional in the config somehow?
Thanks,
M