[BUG] auto-uptime-kuma - infinite boot loop #1014

Open
1 task done
phyzical opened this issue Mar 2, 2025 · 3 comments
phyzical commented Mar 2, 2025

Is there an existing issue for this?

  • I have searched the existing issues

Name of mod

auto-uptime-kuma

Name of base container

swag

Current Behavior

When I boot, the mod runs into a timeout while trying to create/update too many monitors.

Sometimes I'll get lucky and it'll boot on, say, the 10th attempt, but by then I have a whole lot of duplicated monitors.

Is there any way we can provide an env var to increase this timeout?

[screenshot attached]

Expected Behavior

The mod should boot without timing out.

Steps To Reproduce

Try creating lots of monitors, I guess? I'm not exactly sure what the specific cause is.

Environment

- OS: unraid
- How docker service was installed: dockerman via unraid

CPU architecture

x86-64

Docker creation

..

Container logs

[mod-auto-uptime-kuma] Packages already installed, skipping...
[mod-auto-uptime-kuma] Executing SWAG auto-uptime-kuma mod
[mod-auto-uptime-kuma] The following notifications are enabled by default: ['1:My Discord Alert']
[mod-auto-uptime-kuma] Adding Monitor 'Artifactmmo' for container 'artifactmmo'
Traceback (most recent call last):
  File "/app/auto-uptime-kuma.py", line 127, in <module>
    add_or_update_monitors(dockerService, configService, uptimeKumaService)
  File "/app/auto-uptime-kuma.py", line 23, in add_or_update_monitors
    uptime_kuma_service.create_monitor(container_name, container_config)
  File "/app/auto_uptime_kuma/uptime_kuma_service.py", line 135, in create_monitor
    monitor = self.api.add_monitor(**monitor_data)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/lsiopy/lib/python3.12/site-packages/uptime_kuma_api/api.py", line 1472, in add_monitor
    return self._call('add', data)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/lsiopy/lib/python3.12/site-packages/uptime_kuma_api/api.py", line 547, in _call
    r = self.sio.call(event, data, timeout=self.timeout)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/lsiopy/lib/python3.12/site-packages/socketio/client.py", line 297, in call
    raise exceptions.TimeoutError()
socketio.exceptions.TimeoutError
s6-rc: warning: unable to start service init-mod-swag-auto-uptime-kuma-install: command exited 1
/run/s6/basedir/scripts/rc.init: warning: s6-rc failed to properly bring all the services up! Check your logs (in /run/uncaught-logs/current if you have in-container logging) for more information.
phyzical commented Mar 2, 2025

cc @labmonkey

phyzical commented Mar 2, 2025

I think all we need to do is wire up https://github.com/linuxserver/docker-mods/blob/swag-auto-uptime-kuma/root/app/auto_uptime_kuma/uptime_kuma_service.py#L36C8-L36C38

so that it becomes self.api = UptimeKumaApi(url, ENV[TIMEOUT])
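A minimal sketch of what that wiring could look like. The env var name UPTIME_KUMA_TIMEOUT is hypothetical (the mod does not define one yet; that's what this issue is asking for), and the 10-second default matches what's shown here only as an assumption about uptime-kuma-api's default:

```python
import os

DEFAULT_TIMEOUT = 10.0  # assumed library default, in seconds

def resolve_timeout(env: dict) -> float:
    """Return the socket.io call timeout, optionally overridden via env.

    UPTIME_KUMA_TIMEOUT is a hypothetical variable name for illustration.
    Falls back to the default when unset or not a valid number.
    """
    raw = env.get("UPTIME_KUMA_TIMEOUT", "")
    try:
        return float(raw) if raw else DEFAULT_TIMEOUT
    except ValueError:
        return DEFAULT_TIMEOUT

# The service constructor would then pass it through, e.g.:
#   self.api = UptimeKumaApi(url, timeout=resolve_timeout(os.environ))
```

Passing it as the keyword argument timeout= (rather than positionally) would be the safer choice, in case UptimeKumaApi takes other parameters before it.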

@LinuxServer-CI

This issue has been automatically marked as stale because it has not had recent activity. This might be due to missing feedback from OP. It will be closed if no further activity occurs. Thank you for your contributions.
