
Can I connect to S3 using IAM? #131

Open
manuelsechi opened this issue Feb 5, 2023 · 15 comments
Labels: enhancement (New feature or request) · question (Further information is requested)

Comments

@manuelsechi

Hello,
can I connect to S3 using just an IAM role?
It seems like AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are always required; maybe they can be disabled in the config somehow?

thanks
M

@ryanto added the question label Feb 6, 2023
@ryanto
Owner

ryanto commented Feb 6, 2023

Hey thanks for the issue.

What does it look like in the S3 JS client to connect using an IAM role (vs using the access keys?). Any sample code for using roles with S3 would be helpful!

@manuelsechi
Author

manuelsechi commented Feb 6, 2023

hey,
I was hoping that I could leave the ID and SECRET empty, so that if the EC2 instance has a role set up for connecting to a specific S3 bucket, AWS will let you connect:

let missingEnvs = (config: Record<string, any>): string[] => {
  // Only bucket and region are required; the access keys become optional.
  let required = ['bucket', 'region'];

  return required.filter(key => !config[key] || config[key] === '');
};

Of course this is not going to work from your local dev environment.
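
As a sketch, the relaxed check above can be exercised on its own (plain TypeScript; the config shape here is a hypothetical stand-in for the library's actual config):

```typescript
// Hypothetical shape of the library's runtime config; keys may be absent.
type UploadConfig = Record<string, string | undefined>;

// Only bucket and region are required; access keys are intentionally
// optional so the AWS SDK can fall back to an instance role.
let missingEnvs = (config: UploadConfig): string[] => {
  let required = ["bucket", "region"];
  return required.filter(key => !config[key] || config[key] === "");
};

// With the keys omitted entirely, nothing is reported missing.
console.log(missingEnvs({ bucket: "my-bucket", region: "us-east-1" })); // []

// A missing bucket is still flagged.
console.log(missingEnvs({ region: "us-east-1" })); // [ 'bucket' ]
```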

@ryanto
Owner

ryanto commented Feb 6, 2023

Ah okay, so just leaving them blank will let you connect? Yup I think we can do that.

@ryanto added the enhancement label Feb 6, 2023
@manuelsechi
Author

manuelsechi commented Feb 6, 2023

TBH, I need to try it. Can I create a branch, check it out, and upload it to AWS to test? Or can we do it some other way?

@ryanto
Owner

ryanto commented Feb 6, 2023

Yup it'd be good to test. Go ahead and fork the repo!

@alexanderluiscampino

alexanderluiscampino commented Mar 19, 2023

Hi all,

I saw the associated MR was closed since it didn't produce the desired result. I, too, am looking into how to use this in a K8s environment where the pod itself assumes an IAM role, so I have no credentials to pass into the next-s3-upload config.

I was thinking this library could resolve credentials the same way the AWS SDK does. Perhaps it already does, but I am missing how.

Also, my accounts use MFA, so my AWS credentials come with a session token. The configure object only accepts a key and secret. How can I set up the token? I believe the credential resolver should default to the AWS SDK's and not force users into just one of the SDK's many supported methods.

Thank you,

@DriesCruyskens

To my understanding the solution is very simple: the AWS SDKs pick up EC2 instance roles automatically, as outlined in the docs:

All SDKs have a series of places (or sources) that they check in order to find valid credentials to use to make a request to an AWS service. [...]. This systematic search is called the default credential provider chain. Although the distinct chain used by each SDK varies, they most often include sources such as the following
[...]

  • Amazon Elastic Compute Cloud (Amazon EC2) instance profile credentials (IMDS credential provider)

and here:

V3 provides a default credential provider in Node.js. So you are not required to supply a credential provider explicitly. The default credential provider attempts to resolve the credentials from a variety of different sources in a given precedence, until a credential is returned from the one of the sources. If the resolved credential is from a dynamic source, which means the credential can expire, the SDK will only use the specific source to refresh the credential.

If the S3Client class is instantiated without credentials it uses this default provider chain:

# use-s3-upload.tsx
let client = new S3Client({
-    credentials: {
-      accessKeyId: config.accessKeyId,
-      secretAccessKey: config.secretAccessKey,
-    },
    region: config.region,
    ...(config.forcePathStyle ? { forcePathStyle: config.forcePathStyle } : {}),
    ...(config.endpoint ? { endpoint: config.endpoint } : {}),
});

This chain checks these sources in order:

  1. Environment variables
  2. The shared credentials file
  3. Credentials loaded from the Amazon ECS credentials provider (if applicable)
  4. Credentials loaded from AWS Identity and Access Management using the credentials provider of the Amazon EC2 instance (if configured in the instance metadata)
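
The "try each source in order until one returns" behavior can be sketched in plain TypeScript (a toy model of the chain, not the SDK's actual implementation; the source names are illustrative):

```typescript
type Credentials = { accessKeyId: string; secretAccessKey: string };
type Provider = () => Credentials | undefined;

// Try each source in order and return the first that yields credentials.
let resolveCredentials = (chain: Provider[]): Credentials | undefined => {
  for (let provider of chain) {
    let creds = provider();
    if (creds) return creds;
  }
  return undefined;
};

// Toy environment: no keys set, mirroring a pod or EC2 instance
// that relies on an attached role.
let env: Record<string, string | undefined> = {};

let fromEnv: Provider = () => {
  const id = env.AWS_ACCESS_KEY_ID;
  const secret = env.AWS_SECRET_ACCESS_KEY;
  return id && secret ? { accessKeyId: id, secretAccessKey: secret } : undefined;
};

// Stand-in for the instance-profile source queried over IMDS.
let fromInstanceProfile: Provider = () =>
  ({ accessKeyId: "ROLE_KEY", secretAccessKey: "ROLE_SECRET" });

// Env vars are empty, so the chain falls through to the instance profile.
let creds = resolveCredentials([fromEnv, fromInstanceProfile]);
console.log(creds?.accessKeyId); // ROLE_KEY
```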

I suggest simply not passing a credentials object to the S3Client class when users of this library don't provide one.
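
That suggestion could be sketched with a conditional spread, so the `credentials` key is only present when both keys were supplied (the config shape here is hypothetical, not the library's exact one):

```typescript
// Hypothetical library config; both keys may be undefined.
type Config = { accessKeyId?: string; secretAccessKey?: string; region: string };

let buildClientConfig = (config: Config) => ({
  region: config.region,
  // Only pass credentials when the user supplied both keys; otherwise
  // omit the property so the SDK falls back to its default provider chain.
  ...(config.accessKeyId && config.secretAccessKey
    ? {
        credentials: {
          accessKeyId: config.accessKeyId,
          secretAccessKey: config.secretAccessKey,
        },
      }
    : {}),
});

// With no keys, the resulting object has no `credentials` property at all.
console.log("credentials" in buildClientConfig({ region: "us-east-1" })); // false
```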

@ryanto
Owner

ryanto commented Apr 25, 2023

Hi guys, thanks for taking the time to provide the explanations and suggestions! I appreciate them.

For the longest time we had to use custom ENVs because Vercel didn't allow users to set their own AWS_* envs (these were considered off limits). However, it now looks like Vercel allows AWS_* envs, so we can absolutely support the default client initialization.

This would be a breaking change since we'd be ignoring the current set of S3_UPLOAD_ envs. I'm happy to make it, but I just want to do a little testing before we go down this road.

Again thanks for providing all the info/explanation!

@ryanto
Owner

ryanto commented Apr 26, 2023

Ok @DriesCruyskens and @manuelsechi I just published a beta release that should use IAM credentials. Here's how to use it:

  1. Install the beta release using: npm install next-s3-upload@beta.

  2. The beta release will not check for the ENVs, and it will not pass credentials to S3Client if you have not defined them. You can also use the default ENVs if you'd like (i.e. AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).

  3. You must have AWS_REGION and S3_UPLOAD_BUCKET envs defined.

That's it!

The only other thing to note is that you should probably use the usePresignedUpload() hook instead of the useS3Upload(). That's because useS3Upload() relies on STS, which I don't believe will work with instance credentials. Feel free to test both though.

Let me know if it works and don't hesitate to post any questions or issues you run into.

@ryanto
Owner

ryanto commented Apr 26, 2023

Hey @alexanderluiscampino hopefully the above solves your issue as well, but let me know if you need something else passed to the client.

@DriesCruyskens

DriesCruyskens commented Apr 26, 2023

@ryanto Works like a charm. In development I provide AWS credentials as environment variables to access an S3 bucket. In production I provide no variables but attach an instance profile with a policy granting S3 access (using usePresignedUpload()).

Thank you for your work on this awesome package!

@ryanto
Owner

ryanto commented Apr 27, 2023

Awesome! Thanks so much for testing it.

You can continue to use that version, but hopefully I'll get this released as a non beta in the next week or two.

Thanks again for the test!

@manuelsechi
Author

manuelsechi commented Apr 28, 2023

@ryanto sorry for being late to the party, I tested it as well and it works as expected.

My only suggestion is to change the usePresignedUpload method to accept an endpoint option, like you already did in useS3Upload, to dynamically decide where to upload the object.

export const usePresignedUpload = (options?: { endpoint: string }) => {
  let hook = useUploader('presigned', upload, options);
  return hook;
};

Thanks for your support with this!

@ryanto
Owner

ryanto commented May 1, 2023

Hi @manuelsechi Glad to hear it works!

For presigned uploads you can pass an endpoint using the uploadToS3(file, options) function. This is a newer feature to the library: https://next-s3-upload.codingvalue.com/use-s3-upload

function MyComponent() {
  let { uploadToS3 } = usePresignedUpload();

  async function handleFileChange(file) {
    await uploadToS3(file, {
      endpoint: {
        request: {
          url: "/api/my-custom-endpoint"
        }
      }
    });
  }

 // ...
}

@shubh-1999

@ryanto thanks, this is working as expected. Can you share when this will be released?
