ADR proposal "Requirements for container registry" #212
Conversation
Force-pushed from 7cb3cce to d1f0f65
| Feature | Harbor | Quay | Dragonfly |
|---|---|---|---|
| Automation | ✓ Webhooks | ✓ Webhooks, building images | ✗ |
| Vulnerability scanning | ✓ Trivy, Clair | ✓ Clair | ✗ |
| Content Trust and Validation | ✓ Notary, Cosign | ✓ Cosign | ✗ |
| Multi-tenancy | ✓ | ✓ | ✓ |
I think the multi-tenancy bit could use a little more consideration.
For example, AFAIU, Harbor is indeed multi-tenancy-capable regarding RBAC controls and UI, making it a candidate for hosting a singleton "registry as a service" instance operated by a CSP.
However, AFAIU, attributing used resources to each tenant, e.g. for billing purposes, may be rather difficult (or at least different from established billing procedures for storage etc.), as Harbor operates on a single storage backend.
This specific aspect is addressed, for example, by https://github.com/sapcc/keppel#overview (a comparison with Harbor is included directly in their README).
I'm not trying to say that this necessarily changes the overall picture, but there are multiple considerations regarding multi-tenancy, and metering/billing is one of them. For the "registry as a service" use case, it may be one of the most important aspects from a CSP perspective and should be mentioned in this decision record (even if only as a design trade-off).
Hello @joshmue
I got your point that it may be difficult to charge e.g. storage in the case of a multi-tenant Harbor as-a-service container registry solution (as Harbor operates on a single storage backend).
From my point of view, there are two options for how we can look at that:
- CSPs may offer the whole Harbor instance as-a-service per user
  - In this case, the CSP may charge the whole storage to the user, because there is no shared one
  - Based on our research, I would say that this is a common option across CSPs that offer container registry as-a-service based on the Harbor project, e.g. container-registry.com and OVHcloud. OVH even developed a harbor-operator for that purpose, so they can control multiple Harbor instances in a simple way
- CSPs may offer a multi-tenant Harbor instance as-a-service, hence one Harbor instance is shared across multiple users
  - Each user gets a separate tenant/project, e.g. the Free plan of container-registry.com
  - The deciding aspect for metering/billing here might be quotas. Quotas may be set per tenant/project, and e.g. storage could be charged based on them. See also Harbor project quotas (and the sketch at the end of this comment)
Overall, I do not see much benefit in considering the various use cases in the context of multi-tenancy here (there are many, and IMO that is out of the ADR's scope). But I agree with you that it would be more precise to mention that multi-tenancy in projects like Harbor is not implemented on the storage level, but only on the virtual "project" level.
I mentioned the storage multi-tenancy aspect here: aa70de1.
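To make the quota-based metering idea a bit more concrete, here is a minimal sketch of how a billing job could read and cap per-project storage quotas via Harbor's v2 REST API. The host, credentials and exact endpoint/field names below are placeholders and would need to be checked against the deployed Harbor version:

```python
# Minimal sketch (not an official Harbor client): read and cap per-project
# storage quotas via the Harbor v2 REST API so storage can be metered/billed
# per tenant. Host, credentials and IDs below are placeholders.
import requests

HARBOR_API = "https://harbor.example.com/api/v2.0"  # hypothetical instance
AUTH = ("admin", "change-me")                        # hypothetical credentials


def project_storage_usage(project_id: int) -> tuple:
    """Return (used_bytes, hard_limit_bytes) of the quota attached to a project."""
    resp = requests.get(
        f"{HARBOR_API}/quotas",
        params={"reference": "project", "reference_id": str(project_id)},
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    quota = resp.json()[0]  # one quota object per project reference
    return quota["used"]["storage"], quota["hard"]["storage"]


def set_storage_limit(quota_id: int, limit_bytes: int) -> None:
    """Set the hard storage limit of a quota, e.g. to match a paid plan."""
    resp = requests.put(
        f"{HARBOR_API}/quotas/{quota_id}",
        json={"hard": {"storage": limit_bytes}},
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    used, hard = project_storage_usage(1)
    print(f"project 1 uses {used} of {hard} bytes")
```

This is only meant to illustrate that the data needed for usage-based billing is exposed per project, even though the underlying storage backend is shared.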
Thanks for mentioning it!
To explain my reasoning behind stressing this aspect:
I guess this decision record should not only find the container registry software which is generally the best one out there, but also evaluate it from the SCS perspective and its prospective use cases.
As you wrote in the "Motivation" part, we have two/three main use cases:
The specific use cases should be discussed, but overall the CSPs could offer a private container registry as a service or the CSPs could offer a recipe for customers to deploy the private registry themselves utilizing the CSP infrastructure.
...
In both cases, the private container registry should be SCS-compliant
Use case 1: CSP's offering a managed "container registry as a service", using recipes maintained by SCS
Use case 2: Users managing a container registry themselves, using recipes maintained by SCS 1
Use case 3: Defining an "SCS compliant container registry" to make switching registries easy
The third use case IMHO needs a bit more clarification, but should have value for end users, as long as there are managed or self-hostable solutions ready to use which are more or less interchangeable.
Considering only the second use case, SCS IMHO can offer little benefit over upstream recipes. There is no strict separation of personas/roles involved here, making the feature list the most important aspect. Hence, for this use case, it really only matters which software is generally the best one.
Considering the first use case, SCS can offer a lot of value to both CSPs and end users by providing a ready-to-go "aaS" solution, while having some deployment automation (for use case 2) as a by-product.
I'd consider that one the primary use case (together with use case 3).
From a CSP perspective, the multi-tenancy/quota/... capabilities may outweigh any other feature, as different approaches to multi-tenancy will most likely have massive implications for pricing, product strategy and technical architecture.
As you mentioned, there are two general approaches to multi-tenancy:
- CSPs may offer the whole Harbor instance as-a-service per user
  - In this case, the CSP may charge the whole storage to the user, because there is no shared one
  - Based on our research, I would say that this is a common option across CSPs that offer container registry as-a-service based on the Harbor project, e.g. container-registry.com and OVHcloud. OVH even developed a harbor-operator for that purpose, so they can control multiple Harbor instances in a simple way
- CSPs may offer a multi-tenant Harbor instance as-a-service, hence one Harbor instance is shared across multiple users
  - Each user gets a separate tenant/project, e.g. the Free plan of container-registry.com
  - The deciding aspect for metering/billing here might be quotas. Quotas may be set per tenant/project, and e.g. storage could be charged based on them. See also Harbor project quotas
Both approaches differ a lot from each other.
In my head there is the following analogy: OpenStack.
Like e.g. Harbor, OpenStack can be deployed by an operator for each of their users separately. AFAIK, this is usually not done, as OpenStack is multi-tenancy-capable and sharing a single OpenStack installation allows resources to be shared, hence saving resources and thus money for both the operator and the users.
As such (I imagine), when a CSP starts a cloud from scratch and evaluates different cloud solutions, they would most likely see a lack of multi-tenancy not as a minor implementation detail, but as a potential deal breaker.
The same could be the case for container registries.
I'm not trying to say that e.g. Harbor's multi-tenancy features are insufficient (in fact, quotas alone may already be enough even for billing), but I'm trying to say that multi-tenancy in general may be a very important factor for CSPs (AFAIK SCS's primary "consumer group") or even for users who are not ready to pay for a relatively large amount of compute just to store a few container images.
I hope some CSPs are able to contribute their views/priorities on this.
Footnotes
1. Sidenote: Who should really maintain the recipes? In the "Motivation", it says that the CSP should do it, to enable the user to deploy the registry on the CSP itself. This would (maybe I'm over-interpreting here) not mandate such recipes to be provider independent, hence making migration difficult. Avoiding such lock-in is an SCS priority. Maybe this could be clarified.
Thank you for the detailed explanation of your thoughts. Overall, they make sense to us.
.. Who should really maintain the recipes? In the "Motivation", it says that the CSP should do it ...
Agree, the recipes should be CSP-agnostic and maintained by SCS. I mentioned that in the fixup commit 67b80cd
I'm not trying to say that e.g. Harbor's multi-tenancy features are insufficient (in fact, quotas alone may already be enough even for billing), but I'm trying to say that multi-tenancy in general may be a very important factor for CSPs ...
Agreed, the question here could be whether Harbor is able to operate as a single (potentially large) multi-tenant instance for an undefined number of users.
- @chess-knight picked out an interesting Harbor issue that summarizes real-world Harbor deployments by various companies
- There is a comment that mentions a CSP with 5k clients (not sure whether all clients share one Harbor instance). It would be great if @Vad1mo could shed some light on this
CSPs may offer the whole Harbor instance as-a-service per user
- In this case, the CSP may charge the whole storage to the user, because there is no shared one
- Based on our research, I would say that this is a common option across CSPs that offer container registry as-a-service based on the Harbor project, e.g. container-registry.com and OVHcloud. OVH even developed a harbor-operator for that purpose, so they can control multiple Harbor instances in a simple way
CSPs may offer a multi-tenant Harbor instance as-a-service, hence one Harbor instance is shared across multiple users
- Each user gets a separate tenant/project, e.g. the Free plan of container-registry.com
- The deciding aspect for metering/billing here might be quotas. Quotas may be set per tenant/project, and e.g. storage could be charged based on them. See also Harbor project quotas
We are currently running options 1 and 2. The main reason for having both is that option 1 has a mandatory resource baseline of CPU/RAM/DB connections per tenant, so we need to price that in. Option 2, on the other hand, has nice linear pricing.
Option 2 needs glue code and third-party systems to make it work in a soft multi-tenant mode.
In the next ~6 months, we will offer only option 1, as we now have fully multi-tenant-capable instances without allocating CPU/RAM per tenant.
In the next ~6 months, we will offer only option 1, as we now have fully multi-tenant-capable instances without allocating CPU/RAM per tenant.
Could you please explain this in detail? (Option 1 = whole Harbor instance (including storage layer) per tenant, i.e. single-tenant instance)
Could you please explain this in detail? (Option 1 = whole Harbor instance (including storage layer) per tenant, i.e. single-tenant instance)
As you guessed: whole Harbor instance (including storage layer) per tenant, but on a multi-tenant instance. Hence, no additional CPU/RAM/DB connections per tenant.
Is the multi-tenant instance you mentioned Kubernetes?
No, a multi-tenant instance (deployment) of Harbor.
Force-pushed from 3a4f9a2 to bab0990
Great overview of the open-source container registry ecosystem mentioning important aspects and use-cases.
I've provided a couple of mostly stylistic suggestions.
Force-pushed from 941058a to 6a174bc
This looks good to me!
I think the multi-tenancy/registry-as-a-service topic does not have a clear result yet. As IIRC @berendt mentioned, "as a service" capabilities are important to them as a CSP, bringing up keppel as a potential solution. @Vad1mo already shared some insights on how they handle this.
@Vad1mo, can you share whether the same thing that you are planning would also be possible for other providers/hosters? If not, it IMHO should be explicitly mentioned in the decision record as a trade-off before merging it 1. Also, some CSPs might opt to deviate from this decision then, because their product/pricing strategy may not envision dedicated instances.
This commit adds the document that should select an appropriate container registry implementation that meets all defined requirements and makes an architectural decision on which implementation is fully SCS-compliant and recommended by the SCS. This commit adds the document structure and is focused on the OSS health check part. Signed-off-by: Matej Feder <[email protected]>
This commit adds a list of required and desirable features of a container registry as well as a table comparison for selected container registries (Harbor, Quay, Dragonfly). Signed-off-by: Roman Hros <[email protected]>
Remove Docker as an orchestration platform. Split the required feature "Authentication" into authentication of system identities and authentication of users. Co-authored-by: Joshua Mühlfort <[email protected]> Signed-off-by: Roman Hros <[email protected]>
Split it between system identities and users. Related to 9c75ea2. Signed-off-by: Roman Hros <[email protected]>
This commit adds the conclusion and decision parts of ADR. Signed-off-by: Matej Feder <[email protected]>
Signed-off-by: Matej Feder <[email protected]>
Signed-off-by: Roman Hros <[email protected]>
Signed-off-by: Matej Feder <[email protected]>
This commit drops occurrences of the term "SCS compliance" from the ADR. This term should be discussed and standardized (defined) first by the SCS community. Signed-off-by: Matej Feder <[email protected]>
This commit removes support of Notary and Chartmuseum from Harbor features as Harbor announced their deprecation. Signed-off-by: Matej Feder <[email protected]>
Signed-off-by: Matej Feder <[email protected]>
Signed-off-by: Roman Hros <[email protected]>
Signed-off-by: Matej Feder <[email protected]>
Signed-off-by: Matej Feder <[email protected]>
Force-pushed from 3b735e8 to 8a35b89
Signed-off-by: Matej Feder <[email protected]>
I agree. It is better to inform CSPs correctly, as it is hard to predict a CSP's product/pricing strategy. Harbor's shared storage backend architecture is explicitly mentioned in the fixup a67000f.
As per container team discussion:
In the end, we only choose the reference implementation here; the SCS standard will NOT prescribe a solution but really only require the presence of a registry with certain interfaces and a few features.
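To illustrate what "the presence of a registry with certain interfaces" could look like in practice, here is a minimal sketch that only probes the OCI Distribution / Docker Registry HTTP API v2 base endpoint. The registry URL is a placeholder, and a real compliance check would rather rely on the upstream OCI distribution-spec conformance suite:

```python
# Minimal sketch: check whether an endpoint exposes the OCI Distribution /
# Docker Registry HTTP API v2 base route, regardless of which registry
# implementation (Harbor, Quay, ...) is behind it. The URL is a placeholder.
import requests


def speaks_registry_api(base_url: str, token: str = "") -> bool:
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    resp = requests.get(f"{base_url.rstrip('/')}/v2/", headers=headers, timeout=10)
    # 200 means the v2 API is available; 401 means it is available but requires
    # authentication (the response then carries a WWW-Authenticate header).
    return resp.status_code in (200, 401)


if __name__ == "__main__":
    print(speaks_registry_api("https://registry.example.com"))
```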
Co-authored-by: Joshua Mühlfort <[email protected]> Signed-off-by: Matej Feder <[email protected]>
Signed-off-by: Matej Feder <[email protected]>
Signed-off-by: Roman Hros <[email protected]>
Signed-off-by: Kurt Garloff <[email protected]>
As discussed in the Container Team meeting on 2023-03-20, we want to merge this.
The one significant downside of Harbor, the limitation of using shared storage (shared between tenants), was not raised as a real problem, despite asking CSPs (and reminding them).
This PR adds the document that selects an appropriate container registry implementation that meets all defined requirements and makes an architectural decision on what implementation is fully SCS-compliant and recommended by the SCS.
Issue SovereignCloudStack/issues#263