[BUG] "chmod: changing permissions of '/XXX': Bad address" error message #514
Comments
Same issue with a streamlined swag compose:

container_name: swag
image: ghcr.io/linuxserver/swag
ports:
  - 81:81
volumes:
  - ${APPDATA_DIR}/Swag:/config
environment:
  - PUID=${PUID}
  - PGID=${PGID}
  - TZ=${TIMEZONE}
  - URL=${DOMAIN}
  - SUBDOMAINS=wildcard
networks:
  container_network:
restart: unless-stopped

Additional details in the Discord thread: |
I can't reproduce. Are you on qnap as well? |
Yes, with latest firmware and container station versions. |
I see the same chmod errors with a lot of my containers including swag. It seems to have messed up permission on nextcloud container. Also qnap Docker version 27.1.2-qnap2, build d46fd47. |
Can you provide the output of … Also, what filesystem is your QNAP storage using? ext4? btrfs? etc. |
Docker was recently upgraded, but the system hasn't been updated in a while. I have not seen these errors before, though I don't look at the logs unless I expect problems, with a docker container upgrade for instance. Filesystem is zfs. |
Sorry for the delay, I was away for a few days. Here are the outputs of uname -a and docker info: |
Hello, same thing for me. From `[~] # docker info` (Server section): Storage Driver: overlay2 |
Hello, |
My guess is it'll affect any image that performs a chmod on init. Can you post the relevant logs from the |
Here you go:
[migrations] started
(linuxserver.io init banner)
User UID: 1000
using keys found in /config/keys |
This is definitely an issue that affects chmod in docker on qnap specifically. Something to do with the docker install there or the kernel, or an incompatibility between the two. I don't believe there is anything we can do. It likely needs to be updated/fixed by qnap. Iirc we had similar issues reported with chmod some time ago (maybe more than a couple of years ago) and the issue was later resolved by updates. In this case the container still goes through the rest of the init and the services seem to start. |
It's almost impossible to usefully troubleshoot because 99% of search results for "Bad Address" are DNS related; I've even gone through the coreutils source and it doesn't seem to be a native error message, which suggests it's being returned by the OS/kernel. I did find this: alpinelinux/docker-alpine#342 but it seems to be specific to 32bit arm. The gitlab tracking issue is https://gitlab.alpinelinux.org/alpine/aports/-/issues/15167 |
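For context, "Bad address" is the standard message for EFAULT returned by a syscall, which fits the theory that the kernel, not chmod itself, is producing it. A minimal sketch of how one might confirm that on an affected host, assuming strace can be installed in a throwaway Alpine container (the paths below are illustrative):

```sh
# Trace the file-related syscalls made by a recursive chmod inside a disposable
# container; on an affected host the failing call should show "-1 EFAULT (Bad address)".
docker run --rm --cap-add=SYS_PTRACE alpine:3.20 sh -c '
  apk add --no-cache strace >/dev/null &&
  mkdir -p /probe/child &&
  strace -f -e trace=%file chmod -R 755 /probe
'
```

If only the calls for the child paths fail, that would also line up with the recursive-only behaviour discussed further down. |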
Thank you. I will open a case at qnap |
Could you do the following as well.
Then run (then |
May be worth testing if chmod works in Ubuntu as well |
Here is the result: `docker run -it --rm alpine:3.20 sh`, then `/ # ulimit -a` |
So if you do
Do you get the same error? If so can you exit then do
Do you get the same error? |
No message appears; same result after a container restart. |
Same here, no message: `/# docker run -it --rm alpine:3.20 sh`. And still the same situation after a container restart (i.e. speedtest). |
Interesting, so it's not just any chmod operation that causes it. Can you try running …
Then do a …
Edit: The image is just our Alpine base image that then installs logrotate and does the same chmod as swag etc., and then also performs some other chmods to see which (if any) trigger the errors. |
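Going only by that description, the test image's init presumably amounts to something like the sketch below; the actual contents of thespad/playground:chmod aren't shown in this thread, so the package, paths, and modes are illustrative assumptions:

```sh
# Rough sketch only - not the real thespad/playground:chmod init script.
apk add --no-cache logrotate      # the same package the swag init installs
chmod 755 /config                 # single-directory chmod for comparison
chmod -R 755 /config              # the kind of recursive chmod swag's init performs
chmod -R 640 /config/keys         # an extra recursive chmod on a subtree (illustrative)
```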
Sure. Here is the result:
/# docker run -d --rm --name=chmod thespad/playground:chmod
(linuxserver.io init banner)
User UID: 911 |
OK, so across the board. And can you try with |
Here is the outcome:
/# docker kill chmod
(linuxserver.io init banner)
User UID: 911 |
OK, so no difference between busybox chmod and GNU chmod. Final test for now, I promise. |
No issue at all, my pleasure to help where I can. Here is the result: `/ # docker kill chmod` |
OK, turns out I lied, because the tests we did with the basic alpine image weren't identical, so can you do the same test again but with … My guess is it's only affecting the recursive part of the chmod, as there's no error for the parent folder in any of the tests, nor in the one you did earlier with … |
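One way to isolate the recursive case could look like the following; it's a sketch only, with an illustrative image and paths, and may not be the exact command being asked for here:

```sh
# Create a directory tree, then compare a plain chmod of the parent with a
# recursive chmod of the whole tree; only the latter should reproduce the error
# if the recursive path handling is what's affected.
docker run --rm alpine:3.20 sh -c '
  mkdir -p /test/sub &&
  echo "non-recursive:" && chmod 777 /test &&
  echo "recursive:" && chmod -R 777 /test
'
```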
Well, I'm not fully sure about the exact command you expect me to run. Trying the following one didn't go through, but it might not be correct: `/# docker kill chmod` |
This issue has been automatically marked as stale because it has not had recent activity. This might be due to missing feedback from OP. It will be closed if no further activity occurs. Thank you for your contributions. |
I too began having this issue all of a sudden. I hadn't updated my docker container since November, but I did update my QNAP NAS a few days ago. I am currently on QTS 5.2.2.2950. Downgrading to … |
Any updates here? I have the same problems starting today. Yesterday it was working. I did not update my QNAP system. |
Please pin to … One of the affected users has reported the issue to QNAP and they've acknowledged it, so we're now waiting to see if they actually provide a resolution or not. |
@thespad Is there a kind of public bug tracker at QNAP, so we can all see the status of this bug? After PR #523 the image was working again. I ignored the … |
No, QNAP don't seem to have anything like that. From experience, ticket information is private. |
Similar problem here: running on a Raspberry Pi, not working anymore since 3.0.1-ls348. Keeps failing like this: |
Same issue here. QNAP OS: QTS 5.2.2.2958
The output I get:
Very unfortunate because the container just quits; I can't get it to run at all. I'm happy to troubleshoot or test any commands if helpful. |
Please see #514 (comment). We are aware of the cause (an upstream QNAP bug) and there isn't a straightforward workaround available to us that wouldn't impact non-QNAP users, so for the moment please pin to the older tag. |
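For anyone unsure what pinning means in practice, a sketch follows; `<older-tag>` is a placeholder for whatever the last working tag was for your setup (the thread above mentions candidates), not a specific recommendation:

```sh
# Pin the image to an explicit older tag instead of following :latest,
# either on the compose file's image: line or when pulling directly.
docker pull ghcr.io/linuxserver/swag:<older-tag>
```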
This is a different issue, unrelated to the QNAP bug; please open a separate issue or visit our Discord server for support. |
Could everyone affected by this issue please try the latest Swag release and see if it's working for you. You'll still get the errors in the logs, nothing we can do about that until QNAP sort their end out. |
Hi, I just tried and the container is fully starting again! Of course the chmod errors do still appear, but at least SWAG is working again with the latest version. Thanks a lot for this fix! Concerning QNAP, the last feedback I received a few days ago is that R&D replicated the issue and is now investigating it. Unfortunately, there's no information on a possible timeline. |
For me it’s also working! Thanks for your help! |
Confirmed: it's working again, thank you! |
This will also ultimately apply to any other images using the nginx 3.21 base, but it will take a few days to roll out, so if you've had problems with those, give them a try again next time there's an updated image. Going to leave this issue open in the hope that QNAP sort themselves out and fix the underlying issue. |
I have since switched to caddy, which works, but I have come back to test the new version. However, I am now facing the same problem I had when I tried …
The `/config/log/letsencrypt/letsencrypt.log` file:
I realize this is a different issue, but I'm both willing to help fix this project up and curious why this container is being so problematic: what special things do you need to do? Why even chmod anything on startup? Why can every container I ever started get network access on this system but yours cannot? Perhaps it's time to go back to basics and use less custom stuff? I've done some additional troubleshooting on this and it seems that the problem is that this container takes 30 seconds to get its networking up and functional. I've tested this like this:
For comparison, here is the output from running the same test against my caddy container:
It, like every other container, has network connectivity instantly. Bugs like this just make this project hard to use when its promise is to make everything easier. |
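The exact test used above isn't shown, but a rough sketch of the kind of timing check that would demonstrate such a delay might look like this; the container name is illustrative, and in practice you'd pass the same environment and volumes as your real swag service:

```sh
# Start a throwaway container, then measure how long it takes before outbound
# name resolution works from inside it.
docker run -d --name swag-net-test ghcr.io/linuxserver/swag
time docker exec swag-net-test sh -c \
  'until nslookup linuxserver.io >/dev/null 2>&1; do sleep 1; done'
docker rm -f swag-net-test
```

Running the same loop against another container (caddy, for example) should return almost immediately if the delay is specific to this image.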
Is there an existing issue for this?
Current Behavior
During swag container start, several chmod error messages appear in the log. I've checked all related file and folder permissions and found them all correct for the user.
However, the SWAG proxy seems to be working fine and proxied containers are reachable as expected.
Not really sure when the issue started, as I only noticed it a few days ago, but I can confirm the issue was not there a few months ago with the same config.
Expected Behavior
Start without error messages.
Steps To Reproduce
Happens at every start.
Environment
CPU architecture
x86-64
Docker creation
Container logs