Description
Describe the bug
Hello, I am trying to use path-based routing within my Kubernetes cluster to point to my nginx-s3-gateway pods. Whenever I route through an Ingress rather than a Service, requests to the pod exceed 5 s and time out. If I hit the pod through a direct Service instead, the page loads in under 0.5 s. I am wondering whether some header or proxy-to-proxy setting is causing the issue.
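To narrow down where the extra 5+ s is spent (DNS, TCP connect, TLS, or waiting on the upstream's first byte), curl's timing variables can be compared through the Ingress versus directly against the Service. This is only a diagnostic sketch: `example.com`, the `/docs/` path, and port 8082 are the placeholder values from the manifests below.

```shell
# Timing breakdown sketch; hostname, path, and port are placeholders
# taken from the manifests in this report.
FMT='dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n'

# 1. Through the Ingress (path-based routing):
curl -sS -o /dev/null -w "$FMT" https://example.com/docs/ || echo "ingress request failed"

# 2. Directly against the Service, bypassing the Ingress
#    (in another terminal: kubectl -n example-namespace port-forward svc/example-nginx-s3 8082:8082):
curl -sS -o /dev/null -w "$FMT" http://127.0.0.1:8082/docs/ || echo "direct request failed"
```

A large `ttfb` with small `connect`/`tls` values on the Ingress path would point at the controller-to-pod hop rather than at the client connection.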
To reproduce
Steps to reproduce the behavior:
# Deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-nginx-s3
  namespace: example-namespace
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: "100%"
      maxUnavailable: 0
  selector:
    matchLabels:
      app: example-nginx-s3
  replicas: 2
  template:
    metadata:
      labels:
        app: "example-nginx-s3"
    spec:
      serviceAccountName: example-service-account
      containers:
        - name: example-nginx-s3
          image: "!REGISTRY_URL!/nginxinc/nginx-s3-gateway"
          lifecycle:
            preStop:
              exec:
                command: ["sleep", "10"]
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: example-config
            - secretRef:
                name: example-secrets
          resources:
            limits:
              cpu: 1000m
              memory: 2048Mi
            requests:
              cpu: 500m
              memory: 256Mi
      imagePullSecrets:
        - name: example-dtr
---
# Service definition for server
kind: Service
apiVersion: v1
metadata:
  name: example-server
  namespace: example-namespace
spec:
  selector:
    app: example-server
  ports:
    - name: tcp-example
      protocol: TCP
      port: 8080
      targetPort: 8080
---
# Service definition for nginx-s3
kind: Service
apiVersion: v1
metadata:
  labels:
    app: example-nginx-s3
  name: example-nginx-s3
  namespace: example-namespace
spec:
  ports:
    - name: tcp-node
      port: 8082
      protocol: TCP
      targetPort: 80
  selector:
    app: example-nginx-s3
---
# ConfigMap definition
kind: ConfigMap
apiVersion: v1
metadata:
  name: example-config
  namespace: example-namespace
data:
  AWS_DEFAULT_REGION: "us-east-1"
  FLASK_APP: /opt/example/python/example/__init__.py
  FLASK_SETTINGS_PATH: /opt/example/python/example/config/integration_config.py
  RO_FLASK_SETTINGS_PATH: /opt/example/python/example/config/ro_integration_config.py
  GUNICORN_WORKERS: "4"
  # Additional config for nginx-s3
  # https://github.com/nginxinc/nginx-s3-gateway/blob/master/settings.example#L1-L20
  S3_BUCKET_NAME: "example-prod"
  S3_SERVER: s3-us-east-1.amazonaws.com
  S3_REGION: us-east-1
  S3_SERVER_PORT: "443"
  S3_SERVER_PROTO: "https"
  S3_STYLE: "virtual"
  DEBUG: "false"
  AWS_SIGS_VERSION: "4"
  ALLOW_DIRECTORY_LIST: "false"
  PROVIDE_INDEX_PAGE: "true"
  APPEND_SLASH_FOR_POSSIBLE_DIRECTORY: "true"
  PROXY_CACHE_MAX_SIZE: "10g"
  PROXY_CACHE_INACTIVE: "5s"
  PROXY_CACHE_VALID_OK: "5s"
  PROXY_CACHE_VALID_NOTFOUND: "5s"
  PROXY_CACHE_VALID_FORBIDDEN: "30s"
  AWS_REGION: us-east-1
  JS_TRUSTED_CERT_PATH: /etc/ssl/certs/Amazon_Root_CA_1.pem
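For context on the `PROXY_CACHE_*` values above: if I am reading the gateway's `settings.example` correctly, they are substituted into `proxy_cache_valid` directives roughly like the following. This is my assumption about the mapping, not verified output; the rendered config inside the pod (`/etc/nginx/conf.d/`) is the authority.

```nginx
# Assumed nginx equivalents of the ConfigMap values above (unverified):
proxy_cache_valid 200 302 5s;   # PROXY_CACHE_VALID_OK
proxy_cache_valid 404     5s;   # PROXY_CACHE_VALID_NOTFOUND
proxy_cache_valid 403     30s;  # PROXY_CACHE_VALID_FORBIDDEN
```

With a 5 s validity, nearly every request goes back to S3, so the cache settings should not hide a proxy-to-proxy problem between the ingress controller and the gateway.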
---
# Ingress definition
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: example-namespace
  annotations:
    cert-manager.io/cluster-issuer: example-sectigo
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    nginx.ingress.kubernetes.io/proxy-body-size: "4096m"
    external-dns.alpha.kubernetes.io/hostname: example.com
spec:
  ingressClassName: nginx-internal
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: example-server
                port:
                  number: 8080
          - path: /docs
            pathType: Prefix
            backend:
              service:
                name: example-nginx-s3
                port:
                  number: 8082
  tls:
    - hosts:
        - example.com
      secretName: prod-example-service-tls
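If the delay does sit between the ingress controller and the gateway pod, the ingress-nginx annotations below control the proxy-to-proxy hop. This is a diagnostic sketch, not a confirmed fix: the values are examples (timeouts in seconds), and I have not verified that any of them change the behavior here.

```yaml
# Example annotations affecting the controller -> gateway hop; values are
# illustrative, not a known fix for this issue.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "5"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-http-version: "1.1"
```

Lowering `proxy-connect-timeout` in particular would make a connect-phase stall fail fast instead of burning the default timeout, which helps distinguish a connection problem from a slow upstream response.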
Expected behavior
Previously we were using Istio, and the VirtualService below behaved the way the direct Service does now (instant S3 loads), but for some reason the nginx-to-nginx path does not load properly.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-service
  namespace: example-namespace
spec:
  hosts:
    - "example.com"
  gateways:
    - example-gateway.istio-system
  http:
    - match:
        - uri:
            prefix: /api
      name: route-to-api
      route:
        - destination:
            host: example-server.example-namespace.svc.cluster.local
            port:
              number: 8080
    - match:
        - uri:
            prefix: /docs
      name: route-to-docs
      route:
        - destination:
            host: example-nginx-s3.example-namespace.svc.cluster.local
            port:
              number: 8082
Your environment
Gateway image: nginx-s3-gateway@sha256:7c0712b828b3089c2ca0ecc0cac9a9d48ba052f5612b36ebdf0d8f8f9131f431
AWS role-based access (IAM role on the service account)
NGINX ingress controller: registry.k8s.io/ingress-nginx/controller:v1.11.3@sha256:d56f135b6462cfc476447cfe564b83a45e8bb7da2774963b00d12161112270b7