diff --git a/README.md b/README.md
index 902244b..6a81179 100644
--- a/README.md
+++ b/README.md
@@ -45,6 +45,7 @@ Deploy Memgraph using methods that suit your environment, whether it's container
 - [Docker deployment](./ha/docker_deployment/)
 - [Docker compose deployment](./ha/docker_compose_deployment/)
 - [Restoration of snapshosts on an HA cluster](./ha/k8s_restore_snapshot/)
+- [Helm upgrade of Memgraph version](./ha/k8s_upgrade_memgraph_version/)
 
 ### Import
 - [Importing data from Arrow Flight](./import/migrate/arrow-flight/)
diff --git a/ha/k8s_upgrade_memgraph_version/README.md b/ha/k8s_upgrade_memgraph_version/README.md
new file mode 100644
index 0000000..6ce8bb1
--- /dev/null
+++ b/ha/k8s_upgrade_memgraph_version/README.md
@@ -0,0 +1,100 @@
+# Memgraph 3.2.1 to 3.3 Upgrade Demo with Kubernetes
+
+This example demonstrates how to deploy Memgraph 3.2.1 in high availability (HA) mode on Kubernetes (using Minikube), connect the cluster, and then perform a seamless upgrade to Memgraph 3.3 using Helm.
+
+## 🚀 Overview
+
+The upgrade process follows these steps:
+
+1. **Initial Deployment**: Deploy Memgraph 3.2.1 in HA mode with 3 coordinators and 2 data instances
+2. **Cluster Connection**: Connect all coordinators and register the data instances
+3. **Data Injection**: Add sample data to verify that replication works
+4. **Helm Upgrade**: Perform the upgrade to Memgraph 3.3
+5. **Verification**: Confirm data integrity and cluster connectivity after the upgrade
+
+## 🚀 Prerequisites
+
+- Minikube installed and running
+- kubectl configured
+- Helm v3 installed
+- Memgraph Enterprise license
+- mgconsole installed
+
+## 🚀 How to Run the Upgrade Demo
+
+1. **Make the script executable**:
+   ```bash
+   chmod +x upgrade_memgraph.sh
+   ```
+
+2. **Update the values files** with your Memgraph Enterprise details:
+   - Edit `values_3.2.1.yaml` and `values_3.3.yaml`
+   - Set `MEMGRAPH_ENTERPRISE_LICENSE` to your Memgraph Enterprise license key
+   - Set `MEMGRAPH_ORGANIZATION_NAME` to your organization name
+
+3. **Run the upgrade demo**:
+   ```bash
+   ./upgrade_memgraph.sh
+   ```
+
+## 📋 What the Script Does
+
+### Step 1: Deploy Memgraph 3.2.1
+- Starts Minikube if it is not already running
+- Loads both the Memgraph 3.2.1 and 3.3 images into Minikube
+- Deploys Memgraph 3.2.1 using Helm with the HA configuration
+- Waits for all pods to be ready
+
+### Step 2: Connect the Cluster
+- Adds all 3 coordinators to the cluster
+- Registers both data instances (main and replica)
+- Sets instance_0 as the main instance
+- Establishes replication between the data instances
+
+### Step 3: Inject Sample Data
+- Creates sample Person nodes (Alice, Bob, Charlie)
+- Creates relationships between them
+- Verifies data replication between the main and replica instances
+
+### Step 4: Upgrade to Memgraph 3.3
+- Performs a Helm upgrade using the 3.3 values file
+- Waits for all pods to restart and become ready
+- Maintains data persistence through the upgrade
+
+### Step 5: Verify Upgrade
+- Checks the Memgraph version after the upgrade
+- Verifies data integrity across all instances
+- Tests cluster connectivity
+- Confirms that replication is working
+
+## 🔖 Version Compatibility
+
+This example was built and tested with:
+
+- **Memgraph MAGE v3.2.1** → **v3.3.0**
+- **Kubernetes v1.28+**
+- **Helm v3.12+**
+
+## 🏢 Enterprise or Community?
+
+> 🛑 This example **requires Memgraph Enterprise**.
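+
+If you prefer not to hard-code the license in the values files, the same `env` values can be
+overridden on the Helm command line. A minimal sketch (the release and chart names are the ones
+`upgrade_memgraph.sh` uses; the license and organization values are placeholders you must replace):
+
+```bash
+# Override the Enterprise env values at install time instead of editing the YAML files.
+helm upgrade --install memgraph-ha memgraph/memgraph-high-availability \
+    --values values_3.2.1.yaml \
+    --set env.MEMGRAPH_ENTERPRISE_LICENSE="<your-license-key>" \
+    --set env.MEMGRAPH_ORGANIZATION_NAME="<your-organization>" \
+    --wait
+```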
+
+### Useful Commands
+
+```bash
+# Check pod status
+kubectl get pods
+
+# View pod logs
+kubectl logs <pod-name>
+
+# Check services
+kubectl get svc
+
+# Access the Minikube dashboard
+minikube dashboard
+```
+
+## 📞 Support
+
+If you run into any issues or have questions, feel free to reach out on the [Memgraph Discord server](https://discord.gg/memgraph). We're happy to help!
\ No newline at end of file
diff --git a/ha/k8s_upgrade_memgraph_version/upgrade_memgraph.sh b/ha/k8s_upgrade_memgraph_version/upgrade_memgraph.sh
new file mode 100755
index 0000000..20c47be
--- /dev/null
+++ b/ha/k8s_upgrade_memgraph_version/upgrade_memgraph.sh
@@ -0,0 +1,167 @@
+#!/bin/bash
+
+# Exit on error
+set -e
+
+# Configuration
+RELEASE_NAME="memgraph-ha"
+
+echo "=== Memgraph 3.2.1 to 3.3 Upgrade Demo ==="
+echo "This script will:"
+echo "1. Deploy Memgraph 3.2.1 in HA mode"
+echo "2. Connect the cluster"
+echo "3. Inject sample data"
+echo "4. Upgrade to Memgraph 3.3"
+echo ""
+
+# Check if Minikube is running
+if ! minikube status | grep -q "Running"; then
+    echo "Starting Minikube..."
+    minikube start --cpus 4 --memory 8192
+fi
+
+# Load both images into Minikube
+echo "Loading Memgraph images into Minikube..."
+minikube image load memgraph/memgraph-mage:3.2.1
+minikube image load memgraph/memgraph-mage:3.3
+
+# Add the Memgraph Helm repository if it is not already added
+if ! helm repo list | grep -q "memgraph"; then
+    echo "Adding Memgraph Helm repository..."
+    helm repo add memgraph https://memgraph.github.io/helm-charts
+    helm repo update
+fi
+
+# Step 1: Deploy Memgraph 3.2.1
+echo ""
+echo "=== Step 1: Deploying Memgraph 3.2.1 ==="
+helm upgrade --install $RELEASE_NAME memgraph/memgraph-high-availability \
+    --values values_3.2.1.yaml \
+    --wait
+
+# Wait for pods to be ready
+echo "Waiting for pods to be ready..."
+kubectl wait --for=condition=ready pod -l role=data --timeout=300s
+kubectl wait --for=condition=ready pod -l role=coordinator --timeout=300s
+
+# Get the service URLs
+MINIKUBE_IP=$(minikube ip)
+echo "Minikube IP: $MINIKUBE_IP"
+
+# Get NodePorts for coordinators
+COORD1_NODE_PORT=$(kubectl get svc memgraph-coordinator-1-external -o jsonpath='{.spec.ports[?(@.name=="bolt")].nodePort}')
+COORD2_NODE_PORT=$(kubectl get svc memgraph-coordinator-2-external -o jsonpath='{.spec.ports[?(@.name=="bolt")].nodePort}')
+COORD3_NODE_PORT=$(kubectl get svc memgraph-coordinator-3-external -o jsonpath='{.spec.ports[?(@.name=="bolt")].nodePort}')
+
+# Get NodePorts for data nodes
+DATA0_NODE_PORT=$(kubectl get svc memgraph-data-0-external -o jsonpath='{.spec.ports[?(@.name=="bolt")].nodePort}')
+DATA1_NODE_PORT=$(kubectl get svc memgraph-data-1-external -o jsonpath='{.spec.ports[?(@.name=="bolt")].nodePort}')
+
+echo "Coordinator 1 node port: $COORD1_NODE_PORT"
+echo "Coordinator 2 node port: $COORD2_NODE_PORT"
+echo "Coordinator 3 node port: $COORD3_NODE_PORT"
+echo "Data 0 node port: $DATA0_NODE_PORT"
+echo "Data 1 node port: $DATA1_NODE_PORT"
+
+sleep 5
+
+# Step 2: Connect the cluster
+echo ""
+echo "=== Step 2: Connecting the cluster ==="
+
+# Add coordinators
+echo "Adding coordinators to the cluster..."
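+# The WITH CONFIG maps below use the following address keys (summarized here; see the
+# Memgraph HA documentation for the authoritative definitions):
+#   bolt_server        - Bolt address clients use to reach the instance (here, the Minikube IP and NodePort)
+#   coordinator_server - address coordinators use to talk to each other (Raft)
+#   management_server  - address a coordinator uses to manage the registered instance
+#   replication_server - address data instances use for replication traffic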
+echo "ADD COORDINATOR 1 WITH CONFIG {'bolt_server': '$MINIKUBE_IP:$COORD1_NODE_PORT', 'coordinator_server': 'memgraph-coordinator-1.default.svc.cluster.local:12000', 'management_server': 'memgraph-coordinator-1.default.svc.cluster.local:10000'};" | mgconsole --host $MINIKUBE_IP --port $COORD1_NODE_PORT +echo "ADD COORDINATOR 2 WITH CONFIG {'bolt_server': '$MINIKUBE_IP:$COORD2_NODE_PORT', 'coordinator_server': 'memgraph-coordinator-2.default.svc.cluster.local:12000', 'management_server': 'memgraph-coordinator-2.default.svc.cluster.local:10000'};" | mgconsole --host $MINIKUBE_IP --port $COORD1_NODE_PORT +echo "ADD COORDINATOR 3 WITH CONFIG {'bolt_server': '$MINIKUBE_IP:$COORD3_NODE_PORT', 'coordinator_server': 'memgraph-coordinator-3.default.svc.cluster.local:12000', 'management_server': 'memgraph-coordinator-3.default.svc.cluster.local:10000'};" | mgconsole --host $MINIKUBE_IP --port $COORD1_NODE_PORT + +# Register data instances +echo "Registering data instances..." +echo "REGISTER INSTANCE instance_0 WITH CONFIG {'bolt_server': '$MINIKUBE_IP:$DATA0_NODE_PORT', 'management_server': 'memgraph-data-0.default.svc.cluster.local:10000', 'replication_server': 'memgraph-data-0.default.svc.cluster.local:20000'};" | mgconsole --host $MINIKUBE_IP --port $COORD1_NODE_PORT +echo "REGISTER INSTANCE instance_1 WITH CONFIG {'bolt_server': '$MINIKUBE_IP:$DATA1_NODE_PORT', 'management_server': 'memgraph-data-1.default.svc.cluster.local:10000', 'replication_server': 'memgraph-data-1.default.svc.cluster.local:20000'};" | mgconsole --host $MINIKUBE_IP --port $COORD1_NODE_PORT + +# Set main instance +echo "Setting main instance..." +echo "SET INSTANCE instance_0 TO MAIN;" | mgconsole --host $MINIKUBE_IP --port $COORD1_NODE_PORT + +sleep 5 + +# Step 3: Inject sample data +echo "" +echo "=== Step 3: Injecting sample data ===" +echo "Creating sample nodes and relationships..." + +# Create some sample data +echo "CREATE (alice:Person {name: 'Alice', age: 30}), (bob:Person {name: 'Bob', age: 25}), (charlie:Person {name: 'Charlie', age: 35});" | mgconsole --host $MINIKUBE_IP --port $DATA0_NODE_PORT +echo "MATCH (a:Person {name: 'Alice'}), (b:Person {name: 'Bob'}) CREATE (a)-[:KNOWS {since: 2020}]->(b);" | mgconsole --host $MINIKUBE_IP --port $DATA0_NODE_PORT +echo "MATCH (b:Person {name: 'Bob'}), (c:Person {name: 'Charlie'}) CREATE (b)-[:WORKS_WITH {project: 'Memgraph'}]->(c);" | mgconsole --host $MINIKUBE_IP --port $DATA0_NODE_PORT + +# Verify data is replicated +echo "Verifying data replication..." +echo "Data in main instance (instance_0):" +echo "MATCH (n) RETURN n.name as name, n.age as age;" | mgconsole --host $MINIKUBE_IP --port $DATA0_NODE_PORT + +sleep 3 + +echo "Data in replica instance (instance_1):" +echo "MATCH (n) RETURN n.name as name, n.age as age;" | mgconsole --host $MINIKUBE_IP --port $DATA1_NODE_PORT + +sleep 5 + +# Step 4: Upgrade to Memgraph 3.3 +echo "" +echo "=== Step 4: Upgrading to Memgraph 3.3 ===" +echo "Performing Helm upgrade..." + +helm upgrade $RELEASE_NAME memgraph/memgraph-high-availability \ + --values values_3.3.yaml \ + --wait + +# Wait for pods to be ready after upgrade +echo "Waiting for pods to be ready after upgrade..." 
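+# Note: the only difference between values_3.2.1.yaml and values_3.3.yaml is the image tag, and each
+# instance keeps its PersistentVolumeClaims (storage.* in the values files) across the restart, so the
+# data written above is expected to survive the upgrade.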
+kubectl wait --for=condition=ready pod -l role=data --timeout=300s +kubectl wait --for=condition=ready pod -l role=coordinator --timeout=300s + +sleep 10 + +# Step 5: Verify upgrade and data integrity +echo "" +echo "=== Step 5: Verifying upgrade and data integrity ===" + +# Check version +echo "Checking Memgraph version after upgrade:" +echo "SHOW VERSION;" | mgconsole --host $MINIKUBE_IP --port $DATA0_NODE_PORT + +# Verify data is still there +echo "" +echo "Verifying data integrity after upgrade..." +echo "Data in main instance (instance_0):" +echo "MATCH (n) RETURN n.name as name, n.age as age;" | mgconsole --host $MINIKUBE_IP --port $DATA0_NODE_PORT + +sleep 3 + +echo "Data in replica instance (instance_1):" +echo "MATCH (n) RETURN n.name as name, n.age as age;" | mgconsole --host $MINIKUBE_IP --port $DATA1_NODE_PORT + +# Test cluster connectivity +echo "" +echo "Testing cluster connectivity..." +echo "SHOW INSTANCES;" | mgconsole --host $MINIKUBE_IP --port $COORD1_NODE_PORT + +echo "" +echo "=== Upgrade Complete! ===" +echo "Memgraph has been successfully upgraded from 3.2.1 to 3.3" +echo "" +echo "Connection details:" +echo "Coordinator: $MINIKUBE_IP:$COORD1_NODE_PORT" +echo "Main Data Instance: $MINIKUBE_IP:$DATA0_NODE_PORT" +echo "Replica Data Instance: $MINIKUBE_IP:$DATA1_NODE_PORT" +echo "" +echo "To check the status of your pods, run:" +echo "kubectl get pods" +echo "" +echo "To view logs of a specific pod, run:" +echo "kubectl logs " +echo "" +echo "To connect to Memgraph using mgconsole:" +echo "mgconsole --host $MINIKUBE_IP --port $COORD1_NODE_PORT" \ No newline at end of file diff --git a/ha/k8s_upgrade_memgraph_version/values_3.2.1.yaml b/ha/k8s_upgrade_memgraph_version/values_3.2.1.yaml new file mode 100644 index 0000000..696883b --- /dev/null +++ b/ha/k8s_upgrade_memgraph_version/values_3.2.1.yaml @@ -0,0 +1,216 @@ +image: + repository: memgraph/memgraph + # It is a bad practice to set the image tag name to latest as it can trigger automatic upgrade of the charts + # With some of the pullPolicy values. Please consider fixing the tag to a specific Memgraph version + tag: 3.2.1 + pullPolicy: IfNotPresent + +env: + MEMGRAPH_ENTERPRISE_LICENSE: "mglk-IQAAAAgAAAAAAAAATWVtZ3JhcGgAAAAAAAAAAAAAAAAAAAAAAAAAAAA=" + MEMGRAPH_ORGANIZATION_NAME: "Memgraph" + +storage: + data: + libPVCSize: "1Gi" + libStorageAccessMode: "ReadWriteOnce" + # By default the name of the storage class isn't set which means that the default storage class will be used. + # If you set any name, such storage class must exist. + libStorageClassName: + logPVCSize: "1Gi" + logStorageAccessMode: "ReadWriteOnce" + logStorageClassName: + ## Create a Persistant Volume Claim for core dumps. + createCoreDumpsClaim: false + coreDumpsStorageClassName: + coreDumpsStorageSize: 10Gi + coreDumpsMountPath: /var/core/memgraph + coordinators: + libPVCSize: "1Gi" + libStorageAccessMode: "ReadWriteOnce" + # By default the name of the storage class isn't set which means that the default storage class will be used. + # If you set any name, such storage class must exist. + libStorageClassName: + logPVCSize: "1Gi" + logStorageAccessMode: "ReadWriteOnce" + logStorageClassName: + ## Create a Persistant Volume Claim for core dumps. 
+ createCoreDumpsClaim: false + coreDumpsStorageClassName: + coreDumpsStorageSize: 10Gi + coreDumpsMountPath: /var/core/memgraph + +ports: + boltPort: 7687 # If you change this value, change it also in probes definition + managementPort: 10000 + replicationPort: 20000 + coordinatorPort: 12000 # If you change this value, change it also in probes definition + +externalAccessConfig: + dataInstance: + # Empty = no external access service will be created + serviceType: "NodePort" + annotations: {} + coordinator: + # Empty = no external access service will be created + serviceType: "NodePort" + annotations: {} + +headlessService: + enabled: false # If set to true, each data and coordinator instance will use headless service + +# Affinity controls the scheduling of the memgraph-high-availability pods. +# By default data pods will avoid being scheduled on the same node as other data pods, +# and coordinator pods will avoid being scheduled on the same node as other coordinator pods. +# Deployment won't fail if there is no sufficient nodes. +affinity: + # The unique affinity, will schedule the pods on different nodes in the cluster. + # This means coordinators and data nodes will not be scheduled on the same node. If there are more pods than nodes, deployment will fail. + unique: false + # The parity affinity, will enable scheduling of the pods on the same node, but with the rule that one node can host pair made of coordinator and data node. + # This means each node can have max two pods, one coordinator and one data node. If not sufficient nodes, deployment will fail. + parity: false + # The nodeSelection affinity, will enable scheduling of the pods on the nodes with specific labels. So the coordinators will be scheduled on the nodes with label coordinator-node and data nodes will be scheduled on the nodes with label data-node. If not sufficient nodes, deployment will fail. + nodeSelection: false + roleLabelKey: "role" + dataNodeLabelValue: "data-node" + coordinatorNodeLabelValue: "coordinator-node" + +# If you are experiencing issues with the sysctlInitContainer, you can disable it here. +# This is made to increase the max_map_count, necessary for high memory loads in Memgraph +# If you are experiencing crashing pod with the: Max virtual memory areas vm.max_map_count is too low +# you can increase the maxMapCount value. +# You can see what's the proper value for this parameter by reading +# https://memgraph.com/docs/database-management/system-configuration#recommended-values-for-the-vmmax_map_count-parameter +sysctlInitContainer: + enabled: true + maxMapCount: 262144 + image: + repository: library/busybox + tag: latest + pullPolicy: IfNotPresent + +# The explicit user and group setup is required because at the init container +# time, there is not yet a user created. This seems fine because under both +# Memgraph and Mage images we actually hard-code the user and group id. The +# config is used to chown user storage and core dumps claims' month paths. +memgraphUserGroupId: "101:103" + +secrets: + enabled: false + name: memgraph-secrets + userKey: USER + passwordKey: PASSWORD + +container: + data: + readinessProbe: + tcpSocket: + port: 7687 # If you change bolt port, change this also + failureThreshold: 20 + timeoutSeconds: 10 + periodSeconds: 5 + livenessProbe: + tcpSocket: + port: 7687 # If you change bolt port, change this also + failureThreshold: 20 + timeoutSeconds: 10 + periodSeconds: 5 + # When restoring Memgraph from a backup, it is important to give enough time app to start. 
Here, we set it to 2h by default. + startupProbe: + tcpSocket: + port: 7687 # If you change bolt port, change this also + failureThreshold: 1440 + timeoutSeconds: 10 + periodSeconds: 5 + coordinators: + readinessProbe: + tcpSocket: + port: 12000 # If you change coordinator port, change this also + failureThreshold: 20 + timeoutSeconds: 10 + periodSeconds: 5 + livenessProbe: + tcpSocket: + port: 12000 # If you change coordinator port, change this also + failureThreshold: 20 + timeoutSeconds: 10 + periodSeconds: 5 + startupProbe: + tcpSocket: + port: 12000 + failureThreshold: 20 + timeoutSeconds: 10 + periodSeconds: 5 + +resources: + data: {} + coordinators: {} + +prometheus: + enabled: false + namespace: monitoring # Namespace where K8s resources from mg-exporter.yaml will be installed and where your kube-prometheus-stack chart is installed + memgraphExporter: + port: 9115 + pullFrequencySeconds: 5 + repository: memgraph/prometheus-exporter + tag: 0.2.1 + serviceMonitor: + kubePrometheusStackReleaseName: kube-prometheus-stack + interval: 15s + +# If setting the --memory-limit flag under data instances, check that the amount of resources that a pod has been given is more than the actual memory limit you give to Memgraph +# Setting the Memgraph's memory limit to more than the available resources can trigger pod eviction and restarts before Memgraph can make a query exception and continue running +# the pod. +data: +- id: "0" + args: + - "--management-port=10000" + - "--bolt-port=7687" + - "--also-log-to-stderr" + - "--log-level=TRACE" + - "--log-file=/var/log/memgraph/memgraph.log" + +- id: "1" + args: + - "--management-port=10000" + - "--bolt-port=7687" + - "--also-log-to-stderr" + - "--log-level=TRACE" + - "--log-file=/var/log/memgraph/memgraph.log" + +coordinators: +- id: "1" + args: + - "--coordinator-id=1" + - "--coordinator-port=12000" + - "--management-port=10000" + - "--bolt-port=7687" + - "--also-log-to-stderr" + - "--log-level=TRACE" + - "--coordinator-hostname=memgraph-coordinator-1.default.svc.cluster.local" + - "--log-file=/var/log/memgraph/memgraph.log" + - "--nuraft-log-file=/var/log/memgraph/memgraph.log" + +- id: "2" + args: + - "--coordinator-id=2" + - "--coordinator-port=12000" + - "--management-port=10000" + - "--bolt-port=7687" + - "--also-log-to-stderr" + - "--log-level=TRACE" + - "--coordinator-hostname=memgraph-coordinator-2.default.svc.cluster.local" + - "--log-file=/var/log/memgraph/memgraph.log" + - "--nuraft-log-file=/var/log/memgraph/memgraph.log" + +- id: "3" + args: + - "--coordinator-id=3" + - "--coordinator-port=12000" + - "--management-port=10000" + - "--bolt-port=7687" + - "--also-log-to-stderr" + - "--log-level=TRACE" + - "--coordinator-hostname=memgraph-coordinator-3.default.svc.cluster.local" + - "--log-file=/var/log/memgraph/memgraph.log" + - "--nuraft-log-file=/var/log/memgraph/memgraph.log" \ No newline at end of file diff --git a/ha/k8s_upgrade_memgraph_version/values_3.3.yaml b/ha/k8s_upgrade_memgraph_version/values_3.3.yaml new file mode 100644 index 0000000..497cabd --- /dev/null +++ b/ha/k8s_upgrade_memgraph_version/values_3.3.yaml @@ -0,0 +1,216 @@ +image: + repository: memgraph/memgraph + # It is a bad practice to set the image tag name to latest as it can trigger automatic upgrade of the charts + # With some of the pullPolicy values. 
Please consider fixing the tag to a specific Memgraph version + tag: 3.3.0 + pullPolicy: IfNotPresent + +env: + MEMGRAPH_ENTERPRISE_LICENSE: "mglk-IQAAAAgAAAAAAAAATWVtZ3JhcGgAAAAAAAAAAAAAAAAAAAAAAAAAAAA=" + MEMGRAPH_ORGANIZATION_NAME: "Memgraph" + +storage: + data: + libPVCSize: "1Gi" + libStorageAccessMode: "ReadWriteOnce" + # By default the name of the storage class isn't set which means that the default storage class will be used. + # If you set any name, such storage class must exist. + libStorageClassName: + logPVCSize: "1Gi" + logStorageAccessMode: "ReadWriteOnce" + logStorageClassName: + ## Create a Persistant Volume Claim for core dumps. + createCoreDumpsClaim: false + coreDumpsStorageClassName: + coreDumpsStorageSize: 10Gi + coreDumpsMountPath: /var/core/memgraph + coordinators: + libPVCSize: "1Gi" + libStorageAccessMode: "ReadWriteOnce" + # By default the name of the storage class isn't set which means that the default storage class will be used. + # If you set any name, such storage class must exist. + libStorageClassName: + logPVCSize: "1Gi" + logStorageAccessMode: "ReadWriteOnce" + logStorageClassName: + ## Create a Persistant Volume Claim for core dumps. + createCoreDumpsClaim: false + coreDumpsStorageClassName: + coreDumpsStorageSize: 10Gi + coreDumpsMountPath: /var/core/memgraph + +ports: + boltPort: 7687 # If you change this value, change it also in probes definition + managementPort: 10000 + replicationPort: 20000 + coordinatorPort: 12000 # If you change this value, change it also in probes definition + +externalAccessConfig: + dataInstance: + # Empty = no external access service will be created + serviceType: "NodePort" + annotations: {} + coordinator: + # Empty = no external access service will be created + serviceType: "NodePort" + annotations: {} + +headlessService: + enabled: false # If set to true, each data and coordinator instance will use headless service + +# Affinity controls the scheduling of the memgraph-high-availability pods. +# By default data pods will avoid being scheduled on the same node as other data pods, +# and coordinator pods will avoid being scheduled on the same node as other coordinator pods. +# Deployment won't fail if there is no sufficient nodes. +affinity: + # The unique affinity, will schedule the pods on different nodes in the cluster. + # This means coordinators and data nodes will not be scheduled on the same node. If there are more pods than nodes, deployment will fail. + unique: false + # The parity affinity, will enable scheduling of the pods on the same node, but with the rule that one node can host pair made of coordinator and data node. + # This means each node can have max two pods, one coordinator and one data node. If not sufficient nodes, deployment will fail. + parity: false + # The nodeSelection affinity, will enable scheduling of the pods on the nodes with specific labels. So the coordinators will be scheduled on the nodes with label coordinator-node and data nodes will be scheduled on the nodes with label data-node. If not sufficient nodes, deployment will fail. + nodeSelection: false + roleLabelKey: "role" + dataNodeLabelValue: "data-node" + coordinatorNodeLabelValue: "coordinator-node" + +# If you are experiencing issues with the sysctlInitContainer, you can disable it here. +# This is made to increase the max_map_count, necessary for high memory loads in Memgraph +# If you are experiencing crashing pod with the: Max virtual memory areas vm.max_map_count is too low +# you can increase the maxMapCount value. 
+# You can see what's the proper value for this parameter by reading +# https://memgraph.com/docs/database-management/system-configuration#recommended-values-for-the-vmmax_map_count-parameter +sysctlInitContainer: + enabled: true + maxMapCount: 262144 + image: + repository: library/busybox + tag: latest + pullPolicy: IfNotPresent + +# The explicit user and group setup is required because at the init container +# time, there is not yet a user created. This seems fine because under both +# Memgraph and Mage images we actually hard-code the user and group id. The +# config is used to chown user storage and core dumps claims' month paths. +memgraphUserGroupId: "101:103" + +secrets: + enabled: false + name: memgraph-secrets + userKey: USER + passwordKey: PASSWORD + +container: + data: + readinessProbe: + tcpSocket: + port: 7687 # If you change bolt port, change this also + failureThreshold: 20 + timeoutSeconds: 10 + periodSeconds: 5 + livenessProbe: + tcpSocket: + port: 7687 # If you change bolt port, change this also + failureThreshold: 20 + timeoutSeconds: 10 + periodSeconds: 5 + # When restoring Memgraph from a backup, it is important to give enough time app to start. Here, we set it to 2h by default. + startupProbe: + tcpSocket: + port: 7687 # If you change bolt port, change this also + failureThreshold: 1440 + timeoutSeconds: 10 + periodSeconds: 5 + coordinators: + readinessProbe: + tcpSocket: + port: 12000 # If you change coordinator port, change this also + failureThreshold: 20 + timeoutSeconds: 10 + periodSeconds: 5 + livenessProbe: + tcpSocket: + port: 12000 # If you change coordinator port, change this also + failureThreshold: 20 + timeoutSeconds: 10 + periodSeconds: 5 + startupProbe: + tcpSocket: + port: 12000 + failureThreshold: 20 + timeoutSeconds: 10 + periodSeconds: 5 + +resources: + data: {} + coordinators: {} + +prometheus: + enabled: false + namespace: monitoring # Namespace where K8s resources from mg-exporter.yaml will be installed and where your kube-prometheus-stack chart is installed + memgraphExporter: + port: 9115 + pullFrequencySeconds: 5 + repository: memgraph/prometheus-exporter + tag: 0.2.1 + serviceMonitor: + kubePrometheusStackReleaseName: kube-prometheus-stack + interval: 15s + +# If setting the --memory-limit flag under data instances, check that the amount of resources that a pod has been given is more than the actual memory limit you give to Memgraph +# Setting the Memgraph's memory limit to more than the available resources can trigger pod eviction and restarts before Memgraph can make a query exception and continue running +# the pod. 
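+# Hypothetical example: if you add "--memory-limit=1024" (MiB) to a data instance's args below, also give
+# that pod a memory request/limit above 1Gi under resources.data so Kubernetes does not evict it first.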
+data: +- id: "0" + args: + - "--management-port=10000" + - "--bolt-port=7687" + - "--also-log-to-stderr" + - "--log-level=TRACE" + - "--log-file=/var/log/memgraph/memgraph.log" + +- id: "1" + args: + - "--management-port=10000" + - "--bolt-port=7687" + - "--also-log-to-stderr" + - "--log-level=TRACE" + - "--log-file=/var/log/memgraph/memgraph.log" + +coordinators: +- id: "1" + args: + - "--coordinator-id=1" + - "--coordinator-port=12000" + - "--management-port=10000" + - "--bolt-port=7687" + - "--also-log-to-stderr" + - "--log-level=TRACE" + - "--coordinator-hostname=memgraph-coordinator-1.default.svc.cluster.local" + - "--log-file=/var/log/memgraph/memgraph.log" + - "--nuraft-log-file=/var/log/memgraph/memgraph.log" + +- id: "2" + args: + - "--coordinator-id=2" + - "--coordinator-port=12000" + - "--management-port=10000" + - "--bolt-port=7687" + - "--also-log-to-stderr" + - "--log-level=TRACE" + - "--coordinator-hostname=memgraph-coordinator-2.default.svc.cluster.local" + - "--log-file=/var/log/memgraph/memgraph.log" + - "--nuraft-log-file=/var/log/memgraph/memgraph.log" + +- id: "3" + args: + - "--coordinator-id=3" + - "--coordinator-port=12000" + - "--management-port=10000" + - "--bolt-port=7687" + - "--also-log-to-stderr" + - "--log-level=TRACE" + - "--coordinator-hostname=memgraph-coordinator-3.default.svc.cluster.local" + - "--log-file=/var/log/memgraph/memgraph.log" + - "--nuraft-log-file=/var/log/memgraph/memgraph.log" \ No newline at end of file
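A follow-up to the Useful Commands section of the README: once you are finished with the demo, the deployment can be torn down. A minimal cleanup sketch, assuming the `memgraph-ha` release name set in `upgrade_memgraph.sh` (PVC names depend on the chart, so list them before deleting anything):

```bash
# Remove the Helm release (pods and services).
helm uninstall memgraph-ha

# PVCs created through the StatefulSets are typically not removed by helm uninstall;
# list them and delete the ones you no longer need, or drop the whole local cluster.
kubectl get pvc
minikube delete
```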