
Commit 42ca9ab

Authored by butler54 and claude
feat: add bare metal support for Intel TDX and AMD SEV-SNP (#73)
* feat: add bare metal support for Intel TDX and AMD SEV-SNP

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: update baremetal values to use released charts

  Replace git branch references (repoURL/targetRevision/path) with released Helm chart references (chart/chartVersion) for trustee, sandboxed-containers, and sandboxed-policies in values-baremetal.yaml.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add TDX kernel flag and enable intel-dcap for baremetal

  Add a tdx.enabled flag (default true) to the baremetal chart to conditionally set the kvm_intel.tdx=1 kernel argument. Without this, the kvm_intel module does not activate TDX and NFD cannot detect it. Enable the intel-dcap application in values-baremetal.yaml for PCCS/QGS attestation services.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: remove unused runtime class, kernel params, and commented-out templates

  Address PR review feedback:
  - Remove detect-runtime-class.yaml (OSC operator manages RuntimeClass)
  - Remove bm-kernel-params.yaml and kernel-params-mco.yaml (config should be provided via initdata or pod annotations to avoid inconsistencies)
  - Remove commented-out runtimeclass templates for AMD SNP and Intel TDX

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: update to OSC 1.12 / Trustee 1.1.0

  Signed-off-by: Chris Butler <chris.butler@redhat.com>

* feat: integrate Kyverno and update trustee config for baremetal

  - Add Kyverno chart and coco-kyverno-policies to baremetal values
  - Update trustee chart to 0.3.* with kbs.admin.format v1.1
  - Remove bypassAttestation (proper attestation via init_data)
  - Remove explicit runtimeClassName overrides (auto-detected by platform)
  - Add syncPolicy prune to hello-openshift and kbs-access
  - Reset default clusterGroupName to simple

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: set clusterGroupName to baremetal for deployment testing

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: add UPDATE operation to initdata injection policy

  The policy only fired on Pod/Deployment CREATE, so pods created before the initdata ConfigMap existed never got the cc_init_data annotation. Adding UPDATE allows Kyverno to inject the annotation when a Deployment is updated (e.g. by ArgoCD sync), triggering a rolling restart with the correct initdata.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add intel-device-plugins-operator subscription for SGX/TDX quote generation

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: enable TDX config in trustee to point QCNL at local PCCS service

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: store raw SHA-256 hash alongside PCR8 hash in initdata ConfigMaps

  Adds a RAW_HASH field to both initdata and debug-initdata ConfigMaps.

  PCR8_HASH = SHA256(zeros || SHA256(toml)) — used by Azure vTPM attestation
  RAW_HASH = SHA256(toml) — used by baremetal TDX/SNP attestation

  Both are needed because Azure and baremetal present initdata differently in their attestation evidence. A single Trustee attestation server must accept both formats to support multi-platform deployments. Future: integrate veritas for comprehensive reference value generation.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: point trustee at feature branch for baremetal attestation testing

  Temporarily uses the butler54/trustee-chart feature/baremetal-attestation branch instead of the released chart. This branch includes:
  - Baremetal TDX and SNP attestation rules
  - Conditional pcr-stash (no error on baremetal without vTPM)
  - Raw init_data hash (zero-padded) for baremetal attestation
  - TDX QCNL config with use_secure_cert: false for local PCCS

  Revert to chartVersion after merging and releasing the trustee chart.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: increase kata VM memory for kbs-access to 8192MB

  The kbs-access-app container image is ~1GB, which causes container creation timeouts with the default 2GB kata VM memory.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: target Pods only for cc_init_data injection, disable autogen

  The autogen Deployment rule causes admission failures when the initdata ConfigMap hasn't been propagated to the workload namespace yet. By targeting Pods only (autogen-controllers: none), Deployments are admitted without ConfigMap resolution. Pods get cc_init_data injected at creation time, when the ConfigMap is available. A rollout restart picks up new initdata values. Also removes the UPDATE operation — only CREATE is needed since a rollout restart creates new Pods.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: use ${initial_pcr} braces in PCR8 hash computation

  Without braces, bash treats $initial_pcr followed by the hex hash as a single undefined variable name, producing the SHA-256 of the empty string instead of the correct PCR extend value.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* docs: address PR #73 review comments and merge PR #75 documentation

  This commit addresses all review comments from bpradipt and pawelpros on PR #73, merges documentation from PR #75, and updates container images.

  Documentation changes:
  - README: Replace "peer-pod infrastructure" wording to clarify Azure vs bare metal
  - README: Update OCP version requirements from 4.17+ to 4.19.28+ (OSC 1.12 requirement)
  - README: Clarify PCR collection differs for Azure (get-pcr.sh) vs bare metal (manual)
  - README: Distinguish Azure (kata-remote) from bare metal (kata-cc) runtime classes
  - values-secret.yaml.template: Add missing kbsPrivateKey secret
  - values-secret.yaml.template: Reorganize with clear section headers and improved docs
  - gen-secrets.sh: Add prominent alert when values-secret file is created
  - Merge docs/nfd-matchall-bug.md from PR #75 (NFD matchAll bug report)
  - Merge docs/pcr-reference-values-bare-metal.md from PR #75 (PCR collection guide)

  Code cleanup:
  - Delete obsolete qgs-config-cm.yaml (QGS args now inline)
  - Delete obsolete qgs-sgx-cm.yaml (QCNL config via downwardAPI)
  - Remove commented-out detect-runtime-class reference in values-baremetal.yaml

  Image updates:
  - intel-dpo-sgx.yaml: Update intel-sgx-plugin to sha256:4ac8769c (v0.35.0)
  - pccs-deployment.yaml: Update osc-pccs to sha256:edf57087 (v1.12)
  - qgs-ds.yaml: Update osc-tdx-qgs to sha256:308d66da (v1.12)

  Resolves review comments from:
  - bpradipt: peer-pod wording, OCP versions, PCR clarification
  - pawelpros: obsolete ConfigMaps, image digests, PCR requirements

  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix: revert clusterGroupName to simple for main branch merge

  The clusterGroupName was changed to 'baremetal' in commit a601af0 for deployment testing. Reverting to 'simple' as the default so existing users are not affected when this PR merges to main. The baremetal clusterGroup remains available by setting clusterGroupName: baremetal in user overrides or CI.

  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix: update trustee chart to use upstream 0.3.3 release

  Replace the butler54/trustee-chart.git fork reference with the upstream chart reference now that validatedpatterns/trustee-chart#21 has merged and been released as v0.3.3. The 0.3.3 release includes baremetal TDX/SNP attestation support and NVIDIA GPU attestation via an NRAS remote verifier.

  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

---------

Signed-off-by: Chris Butler <chris.butler@redhat.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
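The two initdata hash formats described in the commit message can be sketched in a few lines of Python. This is a minimal illustration (the function name `initdata_hashes` is ours); only the two formulas — RAW_HASH = SHA256(toml) and PCR8_HASH = SHA256(zeros || SHA256(toml)) — come from the commit message:

```python
import hashlib

def initdata_hashes(initdata_toml: bytes) -> tuple[str, str]:
    """Compute both initdata reference hashes for one TOML document."""
    # RAW_HASH = SHA256(toml): what baremetal TDX/SNP attestation evidence carries.
    raw_hash = hashlib.sha256(initdata_toml).hexdigest()
    # PCR8_HASH = SHA256(zeros || SHA256(toml)): a vTPM PCR-extend of the raw
    # digest starting from an all-zero PCR, as Azure vTPM evidence presents it.
    initial_pcr = bytes(32)  # 32 zero bytes
    pcr8_hash = hashlib.sha256(initial_pcr + bytes.fromhex(raw_hash)).hexdigest()
    return raw_hash, pcr8_hash

raw, pcr8 = initdata_hashes(b'[example]\nkey = "value"\n')
print("RAW_HASH: ", raw)
print("PCR8_HASH:", pcr8)
```

A single Trustee instance that stores both values as reference measurements can then verify evidence from either platform.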
1 parent 717c854 commit 42ca9ab

40 files changed

Lines changed: 1559 additions & 80 deletions

README.md

Lines changed: 46 additions & 13 deletions
@@ -2,27 +2,29 @@
 Validated pattern for deploying confidential containers on OpenShift using the [Validated Patterns](https://validatedpatterns.io/) framework.
 
-Confidential containers use hardware-backed Trusted Execution Environments (TEEs) to isolate workloads from cluster and hypervisor administrators. This pattern deploys and configures the Red Hat CoCo stack — including the sandboxed containers operator, Trustee (Key Broker Service), and peer-pod infrastructure — on Azure.
+Confidential containers use hardware-backed Trusted Execution Environments (TEEs) to isolate workloads from cluster and hypervisor administrators. This pattern deploys and configures the Red Hat CoCo stack — including the sandboxed containers operator, Trustee (Key Broker Service) operator, and Kata infrastructure — on Azure cloud instances and bare metal.
 
 ## Topologies
 
-The pattern provides two deployment topologies:
+The pattern provides three deployment topologies:
 
-1. **Single cluster** (`simple` clusterGroup) — deploys all components (Trustee, Vault, ACM, sandboxed containers, workloads) in one cluster. This breaks the RACI separation expected in a remote attestation architecture but simplifies testing and demonstrations.
+1. **Single cluster** (`simple` clusterGroup) — deploys all components (Trustee, Vault, ACM, sandboxed containers, workloads) in one cluster on Azure. This breaks the RACI separation expected in a remote attestation architecture but simplifies testing and demonstrations.
 
 2. **Multi-cluster** (`trusted-hub` + `spoke` clusterGroups) — separates the trusted zone from the untrusted workload zone:
    - **Hub** (`trusted-hub`): Runs Trustee (KBS + attestation service), HashiCorp Vault, ACM, and cert-manager. This cluster is the trust anchor.
    - **Spoke** (`spoke`): Runs the sandboxed containers operator and confidential workloads. The spoke is imported into ACM and managed from the hub.
 
+3. **Bare metal** (`baremetal` clusterGroup) — deploys all components on bare metal hardware with Intel TDX or AMD SEV-SNP support. NFD (Node Feature Discovery) auto-detects the CPU architecture and configures the appropriate runtime. Supports SNO (Single Node OpenShift) and multi-node clusters.
+
 The topology is controlled by the `main.clusterGroupName` field in `values-global.yaml`.
 
-Currently supports Azure via peer-pods. Peer-pods provision confidential VMs (`Standard_DCas_v5` family) directly on the Azure hypervisor rather than nesting VMs inside worker nodes.
+Azure deployments use peer-pods, which provision confidential VMs (`Standard_DCas_v5` family) directly on the Azure hypervisor. Bare metal deployments use layered images and hardware TEE features directly.
 
 ## Current version (4.*)
 
 Breaking change from v3. This is the first version using GA (Generally Available) releases of the CoCo stack:
 
-- **OpenShift Sandboxed Containers 1.12+** (requires OCP 4.17+)
+- **OpenShift Sandboxed Containers 1.12+** (requires OCP 4.19.28+)
 - **Red Hat Build of Trustee 1.1** (GA release; all versions prior to 1.0 were Technology Preview)
 - External chart repositories for [Trustee](https://github.com/validatedpatterns/trustee-chart), [sandboxed-containers](https://github.com/validatedpatterns/sandboxed-containers-chart), and [sandboxed-policies](https://github.com/validatedpatterns/sandboxed-policies-chart)
 - Self-signed certificates via cert-manager (Let's Encrypt no longer required)
@@ -42,9 +44,21 @@ All previous versions used pre-GA (Technology Preview) releases of Trustee:
 
 ### Prerequisites
 
-- OpenShift 4.17+ cluster on Azure (self-managed via `openshift-install` or ARO)
+**Azure deployments:**
+
+- OpenShift 4.19.28+ cluster on Azure (self-managed via `openshift-install` or ARO)
 - Azure `Standard_DCas_v5` VM quota in your target region (these are confidential computing VMs and are not available in all regions). See the note below for more details.
 - Azure DNS hosting the cluster's DNS zone
+
+**Bare metal deployments:**
+
+- OpenShift 4.19.28+ cluster on bare metal with Intel TDX or AMD SEV-SNP hardware
+- BIOS/firmware configured to enable TDX or SEV-SNP
+- Available block devices for LVMS storage (auto-discovered)
+- For Intel TDX: an Intel PCS API key from [api.portal.trustedservices.intel.com](https://api.portal.trustedservices.intel.com/)
+
+**Common:**
+
 - Tools on your workstation: `podman`, `yq`, `jq`, `skopeo`
 - OpenShift pull secret saved at `~/pull-secret.json` (download from [console.redhat.com](https://console.redhat.com/openshift/downloads))
 - Fork the repository — ArgoCD reconciles cluster state against your fork, so changes must be pushed to your remote
@@ -53,29 +67,48 @@ All previous versions used pre-GA (Technology Preview) releases of Trustee:
 
 These scripts generate the cryptographic material and attestation measurements needed by Trustee and the peer-pod VMs. Run them once before your first deployment.
 
-1. `bash scripts/gen-secrets.sh` — generates KBS key pairs, attestation policy seeds, and copies `values-secret.yaml.template` to `~/values-secret-coco-pattern.yaml`
-2. `bash scripts/get-pcr.sh` — retrieves PCR measurements from the peer-pod VM image and stores them at `~/.coco-pattern/measurements.json` (requires `podman`, `skopeo`, and `~/pull-secret.json`)
-3. Review and customise `~/values-secret-coco-pattern.yaml` — this file is loaded into Vault and provides secrets to the pattern
+1. `bash scripts/gen-secrets.sh` — generates KBS key pairs, PCCS certificates/tokens (for bare metal), and copies `values-secret.yaml.template` to `~/values-secret-coco-pattern.yaml`
+2. `bash scripts/get-pcr.sh` — retrieves PCR measurements from the peer-pod VM image and stores them at `~/.coco-pattern/measurements.json` (requires `podman`, `skopeo`, and `~/pull-secret.json`). **Azure only.** Bare metal uses manual PCR collection — see [docs/pcr-reference-values-bare-metal.md](docs/pcr-reference-values-bare-metal.md) for the procedure. Store the measurements at `~/.coco-pattern/measurements.json`.
+3. Review and customise `~/values-secret-coco-pattern.yaml` — this file is loaded into Vault and provides secrets to the pattern. For bare metal, uncomment the PCCS secrets section and provide your Intel PCS API key.
 
 > **Note:** `gen-secrets.sh` will not overwrite existing secrets. Delete `~/.coco-pattern/` if you need to regenerate.
 
-### Single cluster deployment
+### Single cluster deployment (Azure)
 
 1. Set `main.clusterGroupName: simple` in `values-global.yaml`
 2. Ensure your Azure configuration is populated in `values-global.yaml` (see `global.azure.*` fields)
 3. `./pattern.sh make install`
 4. Wait for the cluster to reboot all nodes (the sandboxed containers operator triggers a MachineConfig update). Monitor progress in the ArgoCD UI.
 
-### Multi-cluster deployment
+### Multi-cluster deployment (Azure)
 
 1. Set `main.clusterGroupName: trusted-hub` in `values-global.yaml`
 2. Deploy the hub cluster: `./pattern.sh make install`
 3. Wait for ACM (`MultiClusterHub`) to reach `Running` state on the hub
-4. Provision a second OpenShift 4.17+ cluster on Azure for the spoke
+4. Provision a second OpenShift 4.19.28+ cluster on Azure for the spoke
 5. Import the spoke into ACM with label `clusterGroup=spoke`
    (see [importing a cluster](https://validatedpatterns.io/learn/importing-a-cluster/))
 6. ACM will automatically deploy the `spoke` clusterGroup applications (sandboxed containers, workloads) to the imported cluster
 
+### Bare metal deployment
+
+1. Set `main.clusterGroupName: baremetal` in `values-global.yaml`
+2. Run `bash scripts/gen-secrets.sh` to generate KBS keys and PCCS secrets
+3. For Intel TDX: uncomment the PCCS secrets in `~/values-secret-coco-pattern.yaml` and provide your Intel PCS API key
+4. `./pattern.sh make install`
+5. Wait for the cluster to reboot nodes (MachineConfig updates for TDX kernel parameters and vsock)
+
+The system auto-detects your hardware:
+
+- **NFD** discovers Intel TDX or AMD SEV-SNP capabilities and labels nodes
+- **LVMS** auto-discovers available block devices for storage
+- **RuntimeClass** `kata-cc` is created automatically pointing to the correct handler (`kata-tdx` or `kata-snp`)
+- Both `kata-tdx` and `kata-snp` RuntimeClasses are deployed; only the one matching your hardware has schedulable nodes
+- MachineConfigs are deployed for both `master` and `worker` roles (safe on SNO where only master exists)
+- PCCS and QGS services deploy unconditionally; DaemonSets only schedule on Intel nodes via NFD labels
+
+Optional: pin PCCS to a specific node with `bash scripts/get-pccs-node.sh` and set `baremetal.pccs.nodeSelector` in the baremetal chart values.
+
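Selecting a topology is a one-line change. A minimal `values-global.yaml` sketch (only the `main.clusterGroupName` field is shown; the rest of the file is omitted):

```yaml
main:
  # One of: simple (single cluster, Azure), trusted-hub (multi-cluster hub),
  # baremetal (Intel TDX / AMD SEV-SNP hardware)
  clusterGroupName: baremetal
```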
 ## Sample applications
 
 Two sample applications are deployed on the cluster running confidential workloads (the single cluster in `simple` mode, or the spoke in multi-cluster mode):
@@ -85,7 +118,7 @@ Two sample applications are deployed on the cluster running confidential workloa
 - `secure` — a confidential container with a strict policy; `oc exec` is denied even for `kubeadmin`
 - `insecure-policy` — a confidential container with a relaxed policy allowing `oc exec` (useful for testing the Confidential Data Hub)
 
-Each confidential pod runs on its own `Standard_DC2as_v5` Azure VM (visible in the Azure portal). Pods use `runtimeClassName: kata-remote`.
+On Azure, each confidential pod runs on its own `Standard_DC2as_v5` Azure VM (visible in the Azure portal) using `runtimeClassName: kata-remote`. On bare metal, pods use `runtimeClassName: kata-cc` and run directly on the underlying TDX or SEV-SNP hardware.
 
 - **kbs-access**: A web service that retrieves and presents secrets obtained from the Trustee Key Broker Service (KBS) via the Confidential Data Hub (CDH). Useful for verifying end-to-end attestation and secret delivery in locked-down environments.
ansible/init-data-gzipper.yaml

Lines changed: 18 additions & 4 deletions
@@ -114,21 +114,33 @@
   register: debug_initdata_encoded
   changed_when: false
 
+- name: Compute raw SHA-256 hash of default initdata
+  ansible.builtin.shell: |
+    set -o pipefail
+    sha256sum "{{ rendered_path }}" | cut -d' ' -f1
+  register: raw_hash
+  changed_when: false
+
+- name: Compute raw SHA-256 hash of debug initdata
+  ansible.builtin.shell: |
+    set -o pipefail
+    sha256sum "{{ debug_rendered_path }}" | cut -d' ' -f1
+  register: debug_raw_hash
+  changed_when: false
+
 - name: Register init data pcr into a var
   ansible.builtin.shell: |
     set -o pipefail
-    hash=$(sha256sum "{{ rendered_path }}" | cut -d' ' -f1)
     initial_pcr=0000000000000000000000000000000000000000000000000000000000000000
-    PCR8_HASH=$(echo -n "$initial_pcr$hash" | xxd -r -p | sha256sum | cut -d' ' -f1) && echo $PCR8_HASH
+    PCR8_HASH=$(echo -n "${initial_pcr}{{ raw_hash.stdout }}" | xxd -r -p | sha256sum | cut -d' ' -f1) && echo $PCR8_HASH
   register: pcr8_hash
   changed_when: false
 
 - name: Register debug init data pcr into a var
   ansible.builtin.shell: |
     set -o pipefail
-    hash=$(sha256sum "{{ debug_rendered_path }}" | cut -d' ' -f1)
     initial_pcr=0000000000000000000000000000000000000000000000000000000000000000
-    PCR8_HASH=$(echo -n "$initial_pcr$hash" | xxd -r -p | sha256sum | cut -d' ' -f1) && echo $PCR8_HASH
+    PCR8_HASH=$(echo -n "${initial_pcr}{{ debug_raw_hash.stdout }}" | xxd -r -p | sha256sum | cut -d' ' -f1) && echo $PCR8_HASH
   register: debug_pcr8_hash
   changed_when: false
 
@@ -147,6 +159,7 @@
   data:
     INITDATA: "{{ initdata_encoded.stdout }}"
     PCR8_HASH: "{{ pcr8_hash.stdout }}"
+    RAW_HASH: "{{ raw_hash.stdout }}"
     version: "0.1.0"
     algorithm: "sha256"
     aa.toml: "{{ raw_aa_toml.stdout }}"
@@ -168,6 +181,7 @@
   data:
     INITDATA: "{{ debug_initdata_encoded.stdout }}"
     PCR8_HASH: "{{ debug_pcr8_hash.stdout }}"
+    RAW_HASH: "{{ debug_raw_hash.stdout }}"
     version: "0.1.0"
     algorithm: "sha256"
     aa.toml: "{{ raw_aa_toml.stdout }}"
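The `${initial_pcr}` braces fix in the diff above can be reproduced in plain bash. This is an illustrative repro (the `eval` stands in for Ansible templating the digest as a literal right after the variable name); it shows why the unbraced form hashed the empty string:

```shell
#!/usr/bin/env bash
set -o pipefail

initial_pcr=0000000000000000000000000000000000000000000000000000000000000000
digest=$(printf 'sample-initdata' | sha256sum | cut -d' ' -f1)

# Broken form: once the hex digest is inlined as a literal directly after
# $initial_pcr, bash parses "$initial_pcr<hex>" as ONE variable name. That
# variable is unset, so the PCR-extend input silently becomes empty.
broken=$(eval "printf '%s' \"\$initial_pcr$digest\"")

# Fixed form: the braces terminate the variable name before the digest.
correct="${initial_pcr}${digest}"

echo "broken length:  ${#broken}"    # 0 — the hash input vanished
echo "correct length: ${#correct}"   # 128 — 64 zero chars + 64 digest chars
```

Piping the empty string through `xxd -r -p | sha256sum`, as the original task did, yields the SHA-256 of no input at all, which is exactly the symptom the commit describes.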

charts/all/baremetal/Chart.yaml

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
+apiVersion: v2
+description: Bare metal platform configuration (NFD rules, MachineConfigs, RuntimeClasses, Intel device plugin).
+keywords:
+  - pattern
+  - upstream
+  - sandbox
+  - baremetal
+name: baremetal
+version: 0.0.1
Lines changed: 80 additions & 0 deletions
@@ -0,0 +1,80 @@
+apiVersion: nfd.openshift.io/v1alpha1
+kind: NodeFeatureRule
+metadata:
+  name: consolidated-hardware-features
+  namespace: openshift-nfd
+spec:
+  rules:
+    - name: "runtime.kata"
+      labels:
+        feature.node.kubernetes.io/runtime.kata: "true"
+      matchAny:
+        - matchFeatures:
+            - feature: cpu.cpuid
+              matchExpressions:
+                SSE42: { op: Exists }
+                VMX: { op: Exists }
+            - feature: kernel.loadedmodule
+              matchExpressions:
+                kvm: { op: Exists }
+                kvm_intel: { op: Exists }
+        - matchFeatures:
+            - feature: cpu.cpuid
+              matchExpressions:
+                SSE42: { op: Exists }
+                SVM: { op: Exists }
+            - feature: kernel.loadedmodule
+              matchExpressions:
+                kvm: { op: Exists }
+                kvm_amd: { op: Exists }
+
+    - name: "amd.sev-snp"
+      labels:
+        amd.feature.node.kubernetes.io/snp: "true"
+      extendedResources:
+        sev-snp.amd.com/esids: "@cpu.security.sev.encrypted_state_ids"
+      matchFeatures:
+        - feature: cpu.cpuid
+          matchExpressions:
+            SVM: { op: Exists }
+        - feature: cpu.security
+          matchExpressions:
+            sev.snp.enabled: { op: Exists }
+
+    - name: "intel.sgx"
+      labels:
+        intel.feature.node.kubernetes.io/sgx: "true"
+      extendedResources:
+        sgx.intel.com/epc: "@cpu.security.sgx.epc"
+      matchFeatures:
+        - feature: cpu.cpuid
+          matchExpressions:
+            SGX: { op: Exists }
+            SGXLC: { op: Exists }
+        - feature: cpu.security
+          matchExpressions:
+            sgx.enabled: { op: IsTrue }
+        - feature: kernel.config
+          matchExpressions:
+            X86_SGX: { op: Exists }
+
+    - name: "intel.tdx"
+      labels:
+        intel.feature.node.kubernetes.io/tdx: "true"
+      extendedResources:
+        tdx.intel.com/keys: "@cpu.security.tdx.total_keys"
+      matchFeatures:
+        - feature: cpu.cpuid
+          matchExpressions:
+            VMX: { op: Exists }
+        - feature: cpu.security
+          matchExpressions:
+            tdx.enabled: { op: Exists }
+
+    - name: "ibm.se.enabled"
+      labels:
+        ibm.feature.node.kubernetes.io/se: "true"
+      matchFeatures:
+        - feature: cpu.security
+          matchExpressions:
+            se.enabled: { op: IsTrue }
Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
+apiVersion: nfd.openshift.io/v1
+kind: NodeFeatureDiscovery
+metadata:
+  name: nfd-instance
+  namespace: openshift-nfd
+spec:
+  operand:
+    image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.20
+    imagePullPolicy: Always
+    servicePort: 12000
+  workerConfig:
+    configData: |
Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
+{{- range list "master" "worker" }}
+---
+apiVersion: machineconfiguration.openshift.io/v1
+kind: MachineConfig
+metadata:
+  labels:
+    machineconfiguration.openshift.io/role: {{ . }}
+  name: 99-enable-coco-{{ . }}
+spec:
+  kernelArguments:
+    - nohibernate
+{{- if $.Values.tdx.enabled }}
+    - kvm_intel.tdx=1
+{{- end }}
+  config:
+    ignition:
+      version: 3.2.0
+    storage:
+      files:
+        - path: /etc/modules-load.d/vsock.conf
+          mode: 0644
+          contents:
+            source: data:text/plain;charset=utf-8;base64,dnNvY2stbG9vcGJhY2sK
+{{- end }}
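The base64 payload in the ignition `source:` data URL above is small enough to decode directly; it shows the single modules-load entry the MachineConfig writes to `/etc/modules-load.d/vsock.conf`:

```python
import base64

# Content embedded in the MachineConfig's data URL for
# /etc/modules-load.d/vsock.conf.
payload = "dnNvY2stbG9vcGJhY2sK"
print(base64.b64decode(payload).decode(), end="")  # → vsock-loopback
```

So each node is asked to load the `vsock-loopback` kernel module at boot, alongside the `nohibernate` (and, when `tdx.enabled` is set, `kvm_intel.tdx=1`) kernel arguments.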

charts/all/baremetal/values.yaml

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
+tdx:
+  enabled: true

charts/all/coco-kyverno-policies/templates/inject-coco-initdata.yaml

Lines changed: 4 additions & 4 deletions
@@ -6,14 +6,14 @@ metadata:
     policies.kyverno.io/title: Inject CoCo InitData
     policies.kyverno.io/category: Confidential Computing
     policies.kyverno.io/severity: medium
-    policies.kyverno.io/subject: Pod,Deployment
+    policies.kyverno.io/subject: Pod
     policies.kyverno.io/description: >-
       Injects cc_init_data annotation into pods with a kata runtime class
       by reading from a ConfigMap specified via the coco.io/initdata-configmap
-      annotation. Kyverno autogen extends this to Deployments, StatefulSets,
-      DaemonSets, and Jobs automatically.
+      annotation. Targets Pods only (not Deployments) so that Deployments
+      remain stable and a rollout restart resolves the latest initdata.
     argocd.argoproj.io/sync-wave: "1"
-    pod-policies.kyverno.io/autogen-controllers: Deployment,StatefulSet,DaemonSet,Job
+    pod-policies.kyverno.io/autogen-controllers: none
 spec:
   rules:
     - name: inject-initdata
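For context, a workload that opts into this policy looks roughly like the sketch below. The pod name, image, and the ConfigMap name `initdata` are illustrative; the `coco.io/initdata-configmap` annotation key and the Pod-only injection behaviour come from the policy above, and `kata-cc` is the bare metal runtime class from this commit:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: coco-demo                      # illustrative name
  annotations:
    # Points Kyverno at the ConfigMap holding the initdata; the policy
    # injects the cc_init_data annotation at Pod CREATE time.
    coco.io/initdata-configmap: initdata
spec:
  runtimeClassName: kata-cc            # bare metal; Azure peer-pods use kata-remote
  containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal
```

Because autogen is disabled, the owning Deployment is admitted even if the ConfigMap has not propagated yet; a rollout restart creates fresh Pods that pick up the latest initdata.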

charts/all/intel-dcap/Chart.yaml

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+apiVersion: v2
+description: Intel DCAP services (PCCS and QGS) for TDX remote attestation.
+keywords:
+  - pattern
+  - intel
+  - tdx
+  - pccs
+  - qgs
+name: intel-dcap
+version: 0.0.1
Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
+apiVersion: deviceplugin.intel.com/v1
+kind: SgxDevicePlugin
+metadata:
+  name: sgxdeviceplugin-sample
+spec:
+  image: registry.connect.redhat.com/intel/intel-sgx-plugin@sha256:4ac8769c4f0a82b3ea04cf1532f15e9935c71fe390ff5a9dc3ee57f970a65f0b
+  enclaveLimit: 110
+  provisionLimit: 110
+  logLevel: 4
+  nodeSelector:
+    intel.feature.node.kubernetes.io/sgx: "true"
