All the pods are running, but the registry server becomes unresponsive at some point after installation (no response from curl https://localhost:8443).
I have to restart the pods, or even reboot the host, to get it working again.
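For reference, the restart that temporarily recovers it is just a pod-level restart, something like the following (a minimal example; the pod ID is whatever podman pod ps reports on your host):
[root@bastion ~]# podman pod ps
[root@bastion ~]# podman pod restart <pod_id>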
All the pods are running:
[root@bastion ~]# podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
db266da38b9c registry.access.redhat.com/ubi8/pause:8.7-6 infinity 13 hours ago Up 13 hours 0.0.0.0:8443->8443/tcp 5e70ee01733b-infra
767d8f665354 registry.redhat.io/rhel8/redis-6:1-92.1669834635 run-redis 13 hours ago Up 13 hours 0.0.0.0:8443->8443/tcp quay-redis
73b03983db2f registry.redhat.io/rhel8/postgresql-10:1-203.1669834630 run-postgresql 13 hours ago Up 13 hours 0.0.0.0:8443->8443/tcp quay-postgres
41c21e84bb3e registry.redhat.io/quay/quay-rhel8:v3.8.14 registry 13 hours ago Up 13 hours 0.0.0.0:8443->8443/tcp quay-app
New logs keep coming in, so the containers are running fine... I guess?
[root@bastion ~]# podman logs --tail=10 -f quay-app
exportactionlogsworker stdout | 2024-03-26 00:28:00,067 [52] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2024-03-26 00:29:00 UTC)" (scheduled at 2024-03-26 00:28:00.067443+00:00)
exportactionlogsworker stdout | 2024-03-26 00:28:00,071 [52] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:01:00], next run at: 2024-03-26 00:29:00 UTC)" executed successfully
notificationworker stdout | 2024-03-26 00:28:04,724 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2024-03-26 00:28:14 UTC)" (scheduled at 2024-03-26 00:28:04.724010+00:00)
notificationworker stdout | 2024-03-26 00:28:04,727 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2024-03-26 00:28:14 UTC)" executed successfully
repositorygcworker stdout | 2024-03-26 00:28:11,768 [75] [INFO] [apscheduler.executors.default] Running job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2024-03-26 00:29:11 UTC)" (scheduled at 2024-03-26 00:28:11.767795+00:00)
repositorygcworker stdout | 2024-03-26 00:28:11,769 [75] [INFO] [apscheduler.executors.default] Job "QueueWorker.run_watchdog (trigger: interval[0:01:00], next run at: 2024-03-26 00:29:11 UTC)" executed successfully
gcworker stdout | 2024-03-26 00:28:12,861 [53] [INFO] [apscheduler.executors.default] Running job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2024-03-26 00:28:42 UTC)" (scheduled at 2024-03-26 00:28:12.860612+00:00)
gcworker stdout | 2024-03-26 00:28:12,868 [53] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2024-03-26 00:28:42 UTC)" executed successfully
notificationworker stdout | 2024-03-26 00:28:14,724 [63] [INFO] [apscheduler.executors.default] Running job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2024-03-26 00:28:24 UTC)" (scheduled at 2024-03-26 00:28:14.724010+00:00)
notificationworker stdout | 2024-03-26 00:28:14,731 [63] [INFO] [apscheduler.executors.default] Job "QueueWorker.poll_queue (trigger: interval[0:00:10], next run at: 2024-03-26 00:28:24 UTC)" executed successfully
Nothing strange in the quay-app container details.
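(By "details" I mean the usual inspect output, e.g. the container state, which shows nothing abnormal either:)
[root@bastion ~]# podman inspect quay-app --format '{{json .State}}'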
Hey team, we just ran into this exact same issue, with the same symptoms. I thought perhaps it was a one-off, but then found this report, so I'm adding a comment. I'll get some troubleshooting logs posted here. I can connect via netcat to port 8443 and have ruled out selinux, fapolicyd, etc. as potential contributors.
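For the record, these are roughly the checks used to rule those out (<quay-server> is a placeholder):
# TCP connect succeeds even while curl hangs
nc -zv <quay-server> 8443
# no change with SELinux permissive
setenforce 0
# no change with fapolicyd stopped
systemctl stop fapolicyd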
I should have captured the full output but failed to; what I did notice is that a curl results in something similar to the following:
curl -vvv https://<quay-server>:8443 | head
* Rebuilt URL to: https://<quay-server>:8443/
* TCP_NODELAY set
* Connected to <quay-server> port 8443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
} [5 bytes data]
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
} [512 bytes data]
<hangs right here, where we should get a Server hello>
We never get the Server hello back, nor anything beyond that. As noted above, the port is open and responds via nc, and the logs keep rolling by in journalctl -fu quay-app.service or podman logs -f <pod_id>.
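An easy way to confirm the stall is in the TLS handshake itself, independent of curl (plain openssl client, nothing Quay-specific):
# on a healthy instance this prints the certificate chain immediately;
# against the wedged registry it hangs after the ClientHello, just like curl
openssl s_client -connect <quay-server>:8443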