This repository has been archived by the owner on Feb 12, 2024. It is now read-only.
Describe the bug
Ingress doesn't work no matter what I do. Port forwarding does.
I've got cert-manager on my k3s cluster with Traefik, and whether I add TLS with Let's Encrypt through ingress annotations and the tls block,
or use the chart's cert-manager options below, I can't connect through the ingress. It always returns "Internal Server Error".
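For context, the annotation-and-tls-block variant described above would look roughly like this in the chart's ingress section (sketch only; the ClusterIssuer and secret names are placeholders, not values taken from this issue):

ingress:
  enabled: true
  annotations:
    # hypothetical issuer name; substitute whichever ClusterIssuer exists in the cluster
    cert-manager.io/cluster-issuer: letsencrypt-prod
  tls:
    - hosts:
        - "nifi.mlx.institute"
      secretName: nifi-tls # placeholder name for the certificate secret
  hosts:
    - "nifi.mlx.institute"
  path: /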
Version of Helm, Kubernetes and the Nifi chart:
helm version
k3s -version
k3s version v1.27.6+k3s1 (bd04941a)
go version go1.20.8
helm list
NAME     NAMESPACE  REVISION  UPDATED                               STATUS    CHART       APP VERSION
my-nifi  nifi       2         2023-11-22 23:47:54.521727 +0000 UTC  deployed  nifi-1.1.4  1.16.3
What happened:
Ingress keeps causing Internal Server Error
What you expected to happen:
To just get access to the front end
How to reproduce it (as minimally and precisely as possible):
Use these values
---
# Number of nifi nodes
replicaCount: 1

## Set default image, imageTag, and imagePullPolicy.
## ref: https://hub.docker.com/r/apache/nifi/
##
image:
  repository: apache/nifi
  tag: "1.16.3"
  pullPolicy: "IfNotPresent"

## Optionally specify an imagePullSecret.
## Secret must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecret: myRegistrKeySecretName

securityContext:
  runAsUser: 1000
  fsGroup: 1000

## @param useHostNetwork - boolean - optional
## Bind ports on the hostNetwork. Useful for CNI networking where hostPort might
## not be supported. The ports need to be available on all hosts. It can be
## used for custom metrics instead of a service endpoint.
##
## WARNING: Make sure that hosts using this are properly firewalled otherwise
## metrics and traces are accepted from any host able to connect to this host.
#
sts:
  # Parallel podManagementPolicy for faster bootstrap and teardown. Default is OrderedReady.
  podManagementPolicy: Parallel
  AntiAffinity: soft
  useHostNetwork: null
  hostPort: null
  pod:
    annotations:
      security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
      #prometheus.io/scrape: "true"
  serviceAccount:
    create: false
    #name: nifi
    annotations: {}
  hostAliases: []
  #  - ip: "1.2.3.4"
  #    hostnames:
  #      - example.com
  #      - example
  startupProbe:
    enabled: false
    failureThreshold: 60
    periodSeconds: 10

## Useful if using any custom secrets
## Pass in some secrets to use (if required)
# secrets:
#   - name: myNifiSecret
#     keys:
#       - key1
#       - key2
#     mountPath: /opt/nifi/secret

## Useful if using any custom configmaps
## Pass in some configmaps to use (if required)
# configmaps:
#   - name: myNifiConf
#     keys:
#       - myconf.conf
#     mountPath: /opt/nifi/custom-config

properties:
  # https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#nifi_sensitive_props_key
  sensitiveKey: changeMechangeMe # Must have at least 12 characters
  # NiFi assumes conf/nifi.properties is persistent but this helm chart
  # recreates it every time. Setting the Sensitive Properties Key
  # (nifi.sensitive.props.key) is supposed to happen at the same time
  # /opt/nifi/data/flow.xml.gz sensitive properties are encrypted. If that
  # doesn't happen then NiFi won't start because decryption fails.
  # So if sensitiveKeySetFile is configured but doesn't exist, assume
  # /opt/nifi/flow.xml.gz hasn't been encrypted and follow the procedure
  # https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#updating-the-sensitive-properties-key
  # to simultaneously encrypt it and set nifi.sensitive.props.key.
  # sensitiveKeySetFile: /opt/nifi/data/sensitive-props-key-applied
  # If sensitiveKey was already set, then pass in sensitiveKeyPrior with the old key.
  # sensitiveKeyPrior: OldPasswordToChangeFrom
  algorithm: NIFI_PBKDF2_AES_GCM_256
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  externalSecure: true
  isNode: false
  httpsPort: 8443
  webProxyHost: # <clusterIP>:<NodePort> (If Nifi service is NodePort or LoadBalancer)
  clusterPort: 6007
  provenanceStorage: "8 GB"
  provenanceMaxStorageTime: "10 days"
  siteToSite:
    port: 10000
  # use properties.safetyValve to pass explicit 'key: value' pairs that overwrite other configuration
  safetyValve:
    #nifi.variable.registry.properties: "${NIFI_HOME}/example1.properties, ${NIFI_HOME}/example2.properties"
    nifi.web.http.network.interface.default: eth0
    # listen to loopback interface so "kubectl port-forward ..." works
    nifi.web.http.network.interface.lo: lo

  ## Include aditional processors
  # customLibPath: "/opt/configuration_resources/custom_lib"

## Include additional libraries in the Nifi containers by using the postStart handler
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
# postStart: /opt/nifi/psql; wget -P /opt/nifi/psql https://jdbc.postgresql.org/download/postgresql-42.2.6.jar

# Nifi User Authentication
auth:
  # If set while LDAP is enabled, this value will be used for the initial admin and not the ldap bind dn / admin
  admin: CN=admin, OU=NIFI
  SSL:
    keystorePasswd: changeMe
    truststorePasswd: changeMe

  # Automaticaly disabled if OIDC or LDAP enabled
  singleUser:
    username: username
    password: changemechangeme # Must to have at least 12 characters

  clientAuth:
    enabled: false

  ldap:
    enabled: false
    host: #ldap://<hostname>:<port>
    searchBase: #CN=Users,DC=ldap,DC=example,DC=be
    admin: #cn=admin,dc=ldap,dc=example,dc=be
    pass: #ChangeMe
    searchFilter: (objectClass=*)
    userIdentityAttribute: cn
    authStrategy: SIMPLE # How the connection to the LDAP server is authenticated. Possible values are ANONYMOUS, SIMPLE, LDAPS, or START_TLS.
    identityStrategy: USE_DN
    authExpiration: 12 hours
    userSearchScope: ONE_LEVEL # Search scope for searching users (ONE_LEVEL, OBJECT, or SUBTREE). Required if searching users.
    groupSearchScope: ONE_LEVEL # Search scope for searching groups (ONE_LEVEL, OBJECT, or SUBTREE). Required if searching groups.

  oidc:
    enabled: false
    discoveryUrl: #http://<oidc_provider_address>:<oidc_provider_port>/auth/realms/<client_realm>/.well-known/openid-configuration
    clientId: #<client_name_in_oidc_provider>
    clientSecret: #<client_secret_in_oidc_provider>
    claimIdentifyingUser: email
    admin: [email protected]
    preferredJwsAlgorithm:
    ## Request additional scopes, for example profile
    additionalScopes:

openldap:
  enabled: false
  persistence:
    enabled: true
  env:
    LDAP_ORGANISATION: # name of your organization e.g. "Example"
    LDAP_DOMAIN: # your domain e.g. "ldap.example.be"
    LDAP_BACKEND: "hdb"
    LDAP_TLS: "true"
    LDAP_TLS_ENFORCE: "false"
    LDAP_REMOVE_CONFIG_AFTER_SETUP: "false"
  adminPassword: #ChengeMe
  configPassword: #ChangeMe
  customLdifFiles:
    1-default-users.ldif: |-
      # You can find an example ldif file at https://github.com/cetic/fadi/blob/master/examples/basic/example.ldif

## Expose the nifi service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##

# headless service
headless:
  type: ClusterIP
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"

# ui service
service:
  type: ClusterIP
  httpsPort: 8443
  # nodePort: 30236
  annotations: {}
  # loadBalancerIP:
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  #   - 10.10.10.0/24
  ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
  # sessionAffinity: ClientIP
  # sessionAffinityConfig:
  #   clientIP:
  #     timeoutSeconds: 10800

  # Enables additional port/ports to nifi service for internal processors
  processors:
    enabled: false
    ports:
      - name: processor01
        port: 7001
        targetPort: 7001
        #nodePort: 30701
      - name: processor02
        port: 7002
        targetPort: 7002
        #nodePort: 30702

## Configure containerPorts section with following attributes: name, containerport and protocol.
containerPorts: []
# - name: example
#   containerPort: 1111
#   protocol: TCP

## Configure Ingress based on the documentation here: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
  enabled: true
  #className: traefik
  annotations: {}
  tls: []
  hosts:
    - "nifi.mlx.institute"
  path: /
  # If you want to change the default path, see this issue https://github.com/cetic/helm-nifi/issues/22

# Amount of memory to give the NiFi java heap
jvmMemory: 2g

# Separate image for tailing each log separately and checking zookeeper connectivity
sidecar:
  image: busybox
  tag: "1.32.0"
  imagePullPolicy: "IfNotPresent"

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: true

  # When creating persistent storage, the NiFi helm chart can either reference an already-defined
  # storage class by name, such as "standard" or can define a custom storage class by specifying
  # customStorageClass: true and providing the "storageClass", "storageProvisioner" and "storageType".
  # For example, to use SSD storage on Google Compute Engine see values-gcp.yaml
  #
  # To use a storage class that already exists on the Kubernetes cluster, we can simply reference it by name.
  # For example:
  # storageClass: standard
  #
  # The default storage class is used if this variable is not set.

  accessModes: [ReadWriteOnce]

  ## Use subPath and have 1 persistent volume instead of 7 volumes - use when your k8s nodes have limited volume slots, to limit waste of space,
  ## or your available volume sizes are quite large
  # The one disk will have a directory folder for each volumeMount, but this is hidden. Run 'mount' to view each mount.
  subPath:
    enabled: false
    name: data
    size: 30Gi

  ## Storage Capacities for persistent volumes (these are ignored if using one volume with subPath)
  configStorage:
    size: 100Mi
  authconfStorage:
    size: 100Mi
  # Storage capacity for the 'data' directory, which is used to hold things such as the flow.xml.gz, configuration, state, etc.
  dataStorage:
    size: 10Gi
  # Storage capacity for the FlowFile repository
  flowfileRepoStorage:
    size: 50Gi
  # Storage capacity for the Content repository
  contentRepoStorage:
    size: 50Gi
  # Storage capacity for the Provenance repository. When changing this, one should also change the properties.provenanceStorage value above, also.
  provenanceRepoStorage:
    size: 10Gi
  # Storage capacity for nifi logs
  logStorage:
    size: 5Gi

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
#   cpu: 100m
#   memory: 128Mi
# requests:
#   cpu: 100m
#   memory: 128Mi

logresources:
  requests:
    cpu: 10m
    memory: 10Mi
  limits:
    cpu: 50m
    memory: 50Mi

## Enables setting your own affinity. Mutually exclusive with sts.AntiAffinity
## You need to set the value of sts.AntiAffinity other than "soft" and "hard"
affinity: {}

nodeSelector: {}

tolerations: []

initContainers: {}
# foo-init:  # <- will be used as container name
#   image: "busybox:1.30.1"
#   imagePullPolicy: "IfNotPresent"
#   command: ['sh', '-c', 'echo this is an initContainer']
#   volumeMounts:
#     - mountPath: /tmp/foo
#       name: foo

extraVolumeMounts: []

extraVolumes: []

## Extra containers
extraContainers: []

terminationGracePeriodSeconds: 30

## Extra environment variables that will be pass onto deployment pods
env: []

## Extra environment variables from secrets and config maps
envFrom: []

## Extra options to add to the bootstrap.conf file
extraOptions: []

# envFrom:
#   - configMapRef:
#       name: config-name
#   - secretRef:
#       name: mysecret

## Openshift support
## Use the following varables in order to enable Route and Security Context Constraint creation
openshift:
  scc:
    enabled: false
  route:
    enabled: false
    #host: www.test.com
    #path: /nifi

# ca server details
# Setting this true would create a nifi-toolkit based ca server
# The ca server will be used to generate self-signed certificates required setting up secured cluster
ca:
  ## If true, enable the nifi-toolkit certificate authority
  enabled: false
  persistence:
    enabled: true
  server: ""
  service:
    port: 9090
  token: sixteenCharacters
  admin:
    cn: admin
  serviceAccount:
    create: false
    #name: nifi-ca
  openshift:
    scc:
      enabled: false

# cert-manager support
# Setting this true will have cert-manager create a private CA for the cluster
# as well as the certificates for each cluster node.
certManager:
  enabled: true
  clusterDomain: cluster.local
  keystorePasswd: keystorePassword
  truststorePasswd: truststorePassword
  replaceDefaultTrustStore: false
  additionalDnsNames:
    - localhost
  refreshSeconds: 300
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 100m
      memory: 128Mi
  # cert-manager takes care of rotating the node certificates, so default
  # their lifetime to 90 days. But when the CA expires you may need to
  # 'helm delete' the cluster, delete all the node certificates and secrets,
  # and then 'helm install' the NiFi cluster again. If a site-to-site trusted
  # CA or a NiFi Registry CA certificate expires, you'll need to restart all
  # pods to pick up the new version of the CA certificate. So default the CA
  # lifetime to 10 years to avoid that happening very often.
  # c.f. https://github.com/cert-manager/cert-manager/issues/2478#issuecomment-1095545529
  certDuration: 2160h
  caDuration: 87660h

# ------------------------------------------------------------------------------
# Zookeeper:
# ------------------------------------------------------------------------------
zookeeper:
  ## If true, install the Zookeeper chart
  ## ref: https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
  enabled: true
  ## If the Zookeeper Chart is disabled a URL and port are required to connect
  url: ""
  port: 2181
  replicaCount: 3

# ------------------------------------------------------------------------------
# Nifi registry:
# ------------------------------------------------------------------------------
registry:
  ## If true, install the Nifi registry
  enabled: false
  url: ""
  port: 80
  ## Add values for the nifi-registry here
  ## ref: https://github.com/dysnix/charts/blob/main/dysnix/nifi-registry/values.yaml

# Configure metrics
metrics:
  prometheus:
    # Enable Prometheus metrics
    enabled: false
    # Port used to expose Prometheus metrics
    port: 9092
    serviceMonitor:
      # Enable deployment of Prometheus Operator ServiceMonitor resource
      enabled: false
      # namespace: monitoring
      # Additional labels for the ServiceMonitor
      labels: {}
Here is some information to help with troubleshooting:
If relevant, provide your values.yaml or the changes made to the default one (after removing sensitive information).
Inspect the pod, check the "Events" section at the end for anything suspicious.
kubectl describe pod myrelease-nifi-0
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 31m default-scheduler Successfully assigned nifi/my-nifi-0 to k3s-agent-2
Normal Pulling 31m kubelet Pulling image "apache/nifi:1.16.3"
Normal Pulled 31m kubelet Successfully pulled image "apache/nifi:1.16.3" in 28.525852283s (28.525864623s including waiting)
Normal Pulling 31m kubelet Pulling image "busybox:1.32.0"
Normal Pulled 31m kubelet Successfully pulled image "busybox:1.32.0" in 4.328879809s (4.328890309s including waiting)
Normal Created 31m kubelet Created container app-log
Normal Started 31m kubelet Started container app-log
Normal Pulled 31m kubelet Container image "busybox:1.32.0" already present on machine
Normal Created 31m kubelet Created container bootstrap-log
Normal Started 31m kubelet Started container bootstrap-log
Normal Pulled 31m kubelet Container image "busybox:1.32.0" already present on machine
Normal Created 31m kubelet Created container user-log
Normal Started 31m kubelet Started container user-log
Normal Pulled 31m kubelet Container image "apache/nifi:1.16.3" already present on machine
Normal Created 31m kubelet Created container cert-manager
Normal Started 31m kubelet Started container cert-manager
Normal Pulled 31m kubelet Container image "apache/nifi:1.16.3" already present on machine
Normal Created 31m (x2 over 31m) kubelet Created container server
Normal Started 31m (x2 over 31m) kubelet Started container server
kubectl logs my-nifi-0 server
2023-11-22 23:44:36,440 INFO [main] org.apache.nifi.bootstrap.Command Starting Apache NiFi...
2023-11-22 23:44:36,440 INFO [main] org.apache.nifi.bootstrap.Command Working Directory: /opt/nifi/nifi-current
2023-11-22 23:44:36,440 INFO [main] org.apache.nifi.bootstrap.Command Command: /usr/local/openjdk-8/bin/java -classpath /opt/nifi/nifi-current/./conf:/opt/nifi/nifi-current/./lib/nifi-properties-1.16.3.jar:/opt/nifi/nifi-current/./lib/javax.servlet-api-3.1.0.jar:/opt/nifi/nifi-current/./lib/nifi-nar-utils-1.16.3.jar:/opt/nifi/nifi-current/./lib/nifi-server-api-1.16.3.jar:/opt/nifi/nifi-current/./lib/nifi-runtime-1.16.3.jar:/opt/nifi/nifi-current/./lib/logback-core-1.2.11.jar:/opt/nifi/nifi-current/./lib/log4j-over-slf4j-1.7.36.jar:/opt/nifi/nifi-current/./lib/nifi-stateless-api-1.16.3.jar:/opt/nifi/nifi-current/./lib/nifi-framework-api-1.16.3.jar:/opt/nifi/nifi-current/./lib/jetty-schemas-5.2.jar:/opt/nifi/nifi-current/./lib/slf4j-api-1.7.36.jar:/opt/nifi/nifi-current/./lib/jcl-over-slf4j-1.7.36.jar:/opt/nifi/nifi-current/./lib/jul-to-slf4j-1.7.36.jar:/opt/nifi/nifi-current/./lib/nifi-stateless-bootstrap-1.16.3.jar:/opt/nifi/nifi-current/./lib/nifi-property-utils-1.16.3.jar:/opt/nifi/nifi-current/./lib/logback-classic-1.2.11.jar:/opt/nifi/nifi-current/./lib/nifi-api-1.16.3.jar -Dorg.apache.jasper.compiler.disablejsr199=true -Xmx2g -Xms2g -Djava.security.egd=file:/dev/urandom -Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Djava.protocol.handler.pkgs=sun.net.www.protocol -Dnifi.properties.file.path=/opt/nifi/nifi-current/./conf/nifi.properties -Dnifi.bootstrap.listen.port=35539 -Dapp=NiFi -Dorg.apache.nifi.bootstrap.config.log.dir=/opt/nifi/nifi-current/logs org.apache.nifi.NiFi
2023-11-22 23:44:36,512 INFO [main] org.apache.nifi.bootstrap.Command Launched Apache NiFi with Process ID 131
Do you mean an HTTP 200 that renders "Internal Server Error"? The good news is that your NiFi deployment is working perfectly; it's just your ingress that isn't resolving to the correct service ('/nifi'), which is why you get the error (or at least that was the case for me 😄)
You're serving NiFi on a hostname rather than a literal IP ("nifi.mlx.institute"). You have to specify the desired host using both the ingress.hosts field and the properties.webProxyHost field, like this:
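A minimal sketch of that override, reusing the host from this issue (whether webProxyHost also needs a :port suffix depends on how your proxy forwards the Host header):

ingress:
  enabled: true
  hosts:
    - "nifi.mlx.institute"
  path: /

properties:
  externalSecure: true
  # When running securely, NiFi only accepts requests whose Host header is on this
  # allow-list (nifi.web.proxy.host), so leaving it empty behind an ingress tends to
  # produce an error page instead of the UI.
  webProxyHost: nifi.mlx.institute

After changing properties.webProxyHost, run helm upgrade and let the pod restart so the regenerated nifi.properties picks up the new nifi.web.proxy.host value.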