Update test-infra for kubernetes-security/kubernetes #7588


Open
ritazh opened this issue Dec 6, 2024 · 7 comments
Assignees
Labels
  • committee/security-response: Denotes an issue or PR intended to be handled by the product security committee.
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • priority/backlog: Higher priority than priority/awaiting-more-evidence.
  • sig/k8s-infra: Categorizes an issue or PR as relevant to SIG K8s Infra.
  • sig/testing: Categorizes an issue or PR as relevant to SIG Testing.

Comments

ritazh (Member) commented Dec 6, 2024

Goals:

  • to reduce bugs in CVE patches
  • to reduce delays in CVE public disclosure
  • to reduce time to patch vulnerabilities

Outcome:

  • determine minimum set of tests to run
  • determine the set of members who will have access to, and ownership of, this test infrastructure so it stays up to date

cc @BenTheElder

@ritazh ritazh added the sig/k8s-infra Categorizes an issue or PR as relevant to SIG K8s Infra. label Dec 6, 2024
ameukam (Member) commented Dec 7, 2024

/assign @BenTheElder

/kind feature
/committee security-response
/priority backlog

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. committee/security-response Denotes an issue or PR intended to be handled by the product security committee. priority/backlog Higher priority than priority/awaiting-more-evidence. labels Dec 7, 2024
@ameukam ameukam moved this to Backlog in SIG K8S Infra Dec 7, 2024
BenTheElder (Member) commented

... also related: #4981

k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 18, 2025
xmudrii (Member) commented Mar 18, 2025

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 18, 2025
BenTheElder (Member) commented Apr 10, 2025

Had a few more discussions about this. There's an additional concern: our test environments were not designed with information leaks in mind, and many of them are barely maintained (e.g. kube-up.sh). We could mitigate that cheaply by simply not using some of them here, though.

But if/when we do this, we should also be careful about e.g. the e2e test environments.
The security on them is lax, with the expectation that PR testing is, essentially by design, RCE on public code.

I'm still concerned about sigs.k8s.io/prow in particular, which basically has one active approver and would face much higher expectations if it needs to secure a private repo. We need to fix that regardless, but "recruiting" for this work has been tough.

I had a new thought for this: we could select a subset of tests and wire them up with GitHub Actions as a stopgap?

It will be difficult to guarantee that we definitely get the same results for everything when the patches go public, and that we keep the configs in sync, but it's probably better than doing nothing, and a lot of our testing doesn't require cloud e2e anymore (since we're able to use sigs.k8s.io/kind in particular).

The biggest question is how to coordinate ongoing upkeep, maybe we could form a team that agrees to the embargo restrictions for this purpose? There are a relatively small set of maintainers particularly active on kubernetes/kubernetes PR test configuration (SIG Testing TLs, perhaps a few other folks).
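For concreteness, the stopgap described above might look roughly like the following workflow. This is a hypothetical sketch only: the workflow name, the kind invocation, and the ginkgo focus/skip regexes are illustrative assumptions, not an agreed-upon configuration, and a real job would need to mirror the actual Prow presubmit settings.

```yaml
# Hypothetical sketch: wiring a kind-based e2e subset into GitHub Actions
# for a private security fork. All names and regexes here are placeholders.
name: security-fork-presubmit
on:
  pull_request: {}
jobs:
  kind-e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build node image and create cluster
        run: |
          # kind can build a node image straight from the checked-out tree,
          # avoiding the cloud e2e environments discussed above.
          kind build node-image .
          kind create cluster --image kindest/node:latest
      - name: Run e2e subset
        run: |
          # The focus/skip regexes are placeholders; keeping them in sync
          # with the public Prow presubmit config is exactly the ongoing
          # maintenance question raised in this comment.
          ./hack/ginkgo-e2e.sh --ginkgo.focus='\[Conformance\]' --ginkgo.skip='\[Serial\]'
```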

BenTheElder (Member) commented

cc @kubernetes/sig-testing-leads @kubernetes/sig-k8s-infra-leads @kubernetes/security-response-committee
/sig testing

@k8s-ci-robot k8s-ci-robot added the sig/testing Categorizes an issue or PR as relevant to SIG Testing. label Apr 10, 2025
ritazh (Member, Author) commented Apr 10, 2025

FYI @Vyom-Yadav, who will be looking at this going forward 🙇

Projects
Status: Backlog

6 participants