
Added self-hosted BCM install #1456


Open
wants to merge 18 commits into base: v2.19

Conversation

SherinDaher-Runai
Collaborator

No description provided.

Contributor

github-actions bot commented Apr 7, 2025

Preview environment URL: https://d161wck8lc3ih2.cloudfront.net/PR-1456/

Comment on lines 16 to 28
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ingress-pool
  namespace: metallb-system
spec:
  addresses:
  - <RESERVED IP>/32
  autoAssign: false
  serviceAllocation:
    priority: 50
    namespaces:
    - ingress-nginx
```


Suggested change

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ingress-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.0.250-192.168.0.251 # Example of two IP addresses
  autoAssign: false
  serviceAllocation:
    priority: 61
    namespaces:
    - ingress-nginx
    - knative-serving
```
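Side note, beyond the suggestion itself: if MetalLB runs in L2 mode (an assumption here, not stated in this PR), the pool is only announced once an L2Advertisement references it. A minimal sketch with a hypothetical resource name:

```yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: ingress-pool-l2       # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
  - ingress-pool              # must match the IPAddressPool name above
```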


You can select the exact IP you want for each service:

```bash
kubectl -n kourier-system patch svc kourier \
  --type='merge' \
  -p '{"spec": {"type": "LoadBalancer", "loadBalancerIP": "192.168.0.250"}}'

kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  --type='merge' \
  -p '{"spec": {"type": "LoadBalancer", "loadBalancerIP": "192.168.0.251"}}'
```
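As a quick check after patching (a sketch, assuming the pool and services are named as above), the assigned addresses should appear in the EXTERNAL-IP column:

```bash
# Confirm the pool exists and each Service received its requested address
kubectl -n metallb-system get ipaddresspool ingress-pool
kubectl -n kourier-system get svc kourier
kubectl -n ingress-nginx get svc ingress-nginx-controller
```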

Comment on lines +39 to +70
| Component | Required Capacity |
| ---------- | ----------------- |
| CPU | 2 cores |
| Memory | 16GB |
| Disk space | 100GB |

### NVIDIA Run:ai - System Nodes

This configuration is the minimum required to install and use NVIDIA Run:ai.

| Component | Required Capacity |
| ---------- | ----------------- |
| CPU | 20 cores |
| Memory | 42GB |
| Disk space | 160GB |


To designate nodes for NVIDIA Run:ai system services, follow the instructions described in [Label the NVIDIA Run:ai System Nodes](#label-the-nvidia-runai-system-nodes).


### NVIDIA Run:ai - Worker Nodes

NVIDIA Run:ai supports NVIDIA SuperPods built on the A100, H100, H200, and B200 GPU architectures. These systems are optimized for high-performance AI workloads at scale.

The following configuration represents the minimum hardware requirements for installing and operating NVIDIA Run:ai on worker nodes. Each node must meet these specifications:

| Component | Required Capacity |
| --------- | ----------------- |
| CPU | 2 cores |
| Memory | 4GB |

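The excerpt above defers to [Label the NVIDIA Run:ai System Nodes](#label-the-nvidia-runai-system-nodes) for designating system nodes. As a hedged illustration only (the label key and value below are an assumption, not taken from this PR), labeling a node typically looks like:

```bash
# Assumed label key/value; confirm against the linked section before using it
kubectl label node <node-name> node-role.kubernetes.io/runai-system=true
```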

Contributor


IMO this section is confusing: there are three tables that give different numbers. As a simple customer, tell me what I need to do with a single table, not three.
