Added self-hosted bcm install #1456
base: v2.19
Conversation
Preview environment URL: https://d161wck8lc3ih2.cloudfront.net/PR-1456/
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ingress-pool
  namespace: metallb-system
spec:
  addresses:
    - <RESERVED IP>/32
  autoAssign: false
  serviceAllocation:
    priority: 50
    namespaces:
      - ingress-nginx
```
Suggested change:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ingress-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.0.250-192.168.0.251  # Example: a range of two IP addresses
  autoAssign: false
  serviceAllocation:
    priority: 61
    namespaces:
      - ingress-nginx
      - knative-serving
```
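A minimal sketch of how this pool gets announced, assuming MetalLB is running in L2 mode (the `L2Advertisement` name below is arbitrary and not part of this PR):

```bash
# Announce the addresses of ingress-pool on the local network (L2 mode assumed)
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: ingress-pool-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - ingress-pool
EOF
```

Without an L2Advertisement (or BGPAdvertisement) referencing the pool, MetalLB can allocate the addresses to Services but never answers for them on the network.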
You can select the exact IP you want for each service:

```bash
kubectl -n kourier-system patch svc kourier \
  --type='merge' \
  -p '{"spec": {"type": "LoadBalancer", "loadBalancerIP": "192.168.0.250"}}'

kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  --type='merge' \
  -p '{"spec": {"type": "LoadBalancer", "loadBalancerIP": "192.168.0.251"}}'
```
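As a quick sanity check (using only the namespaces and Service names from the commands above), you can confirm that MetalLB handed each Service the requested address:

```bash
# The EXTERNAL-IP column should show 192.168.0.250 and 192.168.0.251 respectively
kubectl -n kourier-system get svc kourier -o wide
kubectl -n ingress-nginx get svc ingress-nginx-controller -o wide
```

Note that `spec.loadBalancerIP` is deprecated in recent Kubernetes releases; MetalLB also supports pinning addresses through the `metallb.universe.tf/loadBalancerIPs` annotation if you prefer that route.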
| Component  | Required Capacity |
| ---------- | ----------------- |
| CPU        | 2 cores           |
| Memory     | 16GB              |
| Disk space | 100GB             |

### NVIDIA Run:ai - System Nodes

This configuration is the minimum requirement you need to install and use NVIDIA Run:ai.

| Component  | Required Capacity |
| ---------- | ----------------- |
| CPU        | 20 cores          |
| Memory     | 42GB              |
| Disk space | 160GB             |

To designate nodes to NVIDIA Run:ai system services, follow the instructions as described in [Label the NVIDIA Run:ai System Nodes](#label-the-nvidia-runai-system-nodes).
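For illustration only, a typical way to label a system node; the authoritative steps are in the linked section, and the `node-role.kubernetes.io/runai-system=true` label key is an assumption here:

```bash
# Label a node so NVIDIA Run:ai system services are scheduled onto it (label key assumed)
kubectl label node <node-name> node-role.kubernetes.io/runai-system=true
```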
### NVIDIA Run:ai - Worker Nodes

NVIDIA Run:ai supports NVIDIA SuperPods built on the A100, H100, H200, and B200 GPU architectures. These systems are optimized for high-performance AI workloads at scale.

The following configuration represents the minimum hardware requirements for installing and operating NVIDIA Run:ai on worker nodes. Each node must meet these specifications:

| Component | Required Capacity |
| --------- | ----------------- |
| CPU       | 2 cores           |
| Memory    | 4GB               |
IMO this section is confusing: there are three tables with different numbers. As a customer, I just want a single table that tells me what I need, not three.