[Enhancement] Add Health-Check to DockerFile #37
base: main
Conversation
Added health-check script for ZeroTier Docker
Added Healthcheck to DockerFile
Changed ENV Variable Names for better understanding
Firstly, thank you so much for choosing "MIN" rather than perpetuating my typo of "MINUMUM". So embarrassing! I considered editing all the posts in #33 but I thought that might confuse the issue. Next, while it's a very minor thing, perhaps:
I'm thinking about .env files, how it's best to group variables, and how prefix consistency aids that goal. Now, on to the PR proper. I have tested both variables. However, omitting both variables to get the "default behaviour" does not seem to work as documented. I think the problem lies with this code snippet:
Scenario. Assume two network IDs xxx and yyy. You will define:
Assume the container starts and joins both networks. In that situation, the extra echo statement will write:
Test 1. Detach from one of the networks:
The result is:
The subsequent for-loop succeeds even though the container has detached from one of its networks.

Test 2. Assume nothing has changed from Test 1. Detach from the other network:
The result is:
The for-loop doesn't execute at all so you get a normal exit, even though the container hasn't joined either network. The container remains healthy when it is in fact unhealthy.

Proposal. It seems to me that this code might better express your design intention:
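The zero-iteration behaviour is easy to demonstrate in isolation: a POSIX `for` loop over an empty variable simply never runs its body, so a script whose last statement is such a loop falls through to a success exit (a minimal sketch, independent of zerotier-cli):

```shell
#!/bin/sh
# simulate "no joined networks": the list variable is empty
joined_networks=""

count=0
for network in $joined_networks; do
    count=$((count+1))      # never reached when the list is empty
done

# the loop body ran zero times, so the script falls through to a
# normal (healthy) exit even though nothing was actually checked
echo "iterations: $count"
```

This is exactly why the health check reports success in Test 2: the loop is skipped entirely.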
I have tested that, successfully. Where this breaks down is if you don't define the list of expected networks:
In words:
That works but has the potential to be misleading. Say you have two networks and you do a manual leave of one of them. The container will go unhealthy. Now you recreate the container. The join/leave status of each network is part of the persistent store, so the container will come up and only re-join the network it was joined to when it went down. That means seeding the expected-network list from the persistent store would mask the problem. I can just imagine some user seeing unhealthy, deciding to recreate the container, which promptly goes healthy, and the user concluding "all's well" when, in fact, whatever caused the container to leave the network(s) is unresolved. Perhaps a better solution is to simply change the semantics of the variable:
It would be a good idea to also detect the situation in
None of this is perfect, because a determined user can always bypass any check. Last point: you've provided the script and the mods to the Dockerfile, but you haven't updated the README. Hope some of this helps because, as before, I think container health-checks are a very good idea.
Just a brief reply (I will post a detailed reply later as I am travelling):
The above seems to be correct: the container should be healthy, since you have left network xxx and only yyy remains, which is connected. The container should be unhealthy only when it is disconnected from all of its joined networks; you cannot call it unhealthy if you have deliberately left a network.
You are correct. We need to make it unhealthy if there are no joined networks. |
Added additional check for healthcheck failure in case of no joined networks.
Updated README.md to reflect new Environment Variables for Health-Checks
Thank you so much for your suggestions! I have incorporated all of them in the script. Please use CHK_ZT_SPECIFIC_NETWORKS and specify multiple networks there; this should solve all use cases. I am not using ZEROTIER_ONE_CHECK_SPECIFIC_NETWORKS because I think the ZEROTIER_ prefix is reserved for the original ZeroTier functions/declarations and, since this is a custom script, it is better that we don't use variables which the ZT container is using. Please check and revert.
Well, I think you are wrong about that. I hope you won't be too upset with me if I say it is always better to take the time to check your facts rather than assume that a pattern which works in one place (eg the prefix convention used by Zigbee2MQTT's official environment variables) applies everywhere. So let's ask my zerotier-router container which environment variables it knows about that start with Z:
If you take a look at README-router.md you'll see that all of those are defined there. They are specific to this repository. How about the zerotier binary running inside the container?
Those are probably keys for doing lookups in the process environment and, indeed, they look like obvious root keys. Now, what about ZeroTier's documentation? Try Googling:
I get zero hits. If I remove the quotes, I get some hits but they all say "Missing ZEROTIER_ONE". In fact, I have only been able to find one environment variable defined anywhere in the ZeroTier documentation. So, far from potentially conflicting with anything ZeroTier might be doing, sticking with the existing prefix keeps us consistent with established practice. I hope that makes sense.
On a different topic, I see you are proposing mods for the entry point scripts. Unfortunately, what I am about to say will sound circular, but that's because there are several moving parts. Let me begin by going back to the point I made in my previous post. These variables:
are specific to the router container. To put this another way, the ability to auto-join ZeroTier Cloud networks on first install is a feature of the router container, not the client container. If you run the current client, the only way to join a network is with:
So, yes, The only (current) way to provide that information to the router container is via Although you could simply pass The reason we didn't add support for You could amend the PR to copy the auto-join functionality from The existing wording in
Wording for
Now, please think about that last sentence and, in particular, about people who use the client. Thus far, their experience is that they launch the container and then join networks manually. You can ameliorate that to some extent with the code I suggested yesterday:
Now let me focus on your revised proposed code:
First, please study this:
In words:
Rule of thumb: If and only if you can be certain that a variable will only ever contain the string representation of a valid number should you omit quotes. For example:
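A minimal sketch of the numeric case, using a hypothetical stand-in probe (the real command would be a zerotier-cli call):

```shell
#!/bin/sh
# hypothetical stand-in for a probe command; only its exit status matters
probe() { return 0; }

probe
rc=$?

# $rc is guaranteed to be numeric (0-255), so an unquoted comparison
# with the numeric operator -eq is safe here
if [ $rc -eq 0 ]; then
    echo "probe succeeded"
fi
```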
The result code will always be numeric, so it can be compared with a numeric, and it is safe to omit the quotes from both sides, which then affords the opportunity to use numeric comparison operators like -eq.

Now, I will get back to the intent of the code (rather than its syntax). I am still not convinced that you actually understand how this code will behave at run-time. Three scenarios.

Scenario 1. The container is not joined to any networks. It doesn't matter whether this is because the container has never joined any networks or because the networks it has joined have gone away.
Scenario 2. The container is joined to exactly one network but that network goes away.
Scenario 3. The container is joined to two or more networks and all save one of those networks go away.
However, if your intention is that the container should go "unhealthy" when any of its expected networks goes away, then you need another source of information so that the container knows what its "expected networks" are. The container itself cannot supply that information. That's why I suggested using a user-defined list of expected networks:
If the user doesn't define any networks, the default behaviour applies. If the user defines one or more networks then the for-loop will be executed for each. If an expected network is missing then the check fails. If and only if every expected network is present does the check pass. Isn't that what you actually want?
I have fixed the quotes. (Thanks for pointing it out).
I am skeptical about implementing it through ZEROTIER_ONE_NETWORK_IDS, as that variable does other things too (eg joining the network), which is not a standard protocol by ZeroTier, as you yourself mentioned. Lastly, I would incorporate ZEROTIER_ONE_NETWORK_IDS only if @zyclonite agrees to that change.
Right now, the way this is implemented is that it just checks whether the directory exists and, if not, creates a blank config file to join the network. The same should be executed in the client as well, for consistency. As of now, I am creating a separate variable.
first, thank you both for the contribution and detailed discussion! i would recommend keeping two different variables, not everyone wants to join the network via variables but might want to have it health-checked still. i am ok with adding the same join logic from the router to the other entrypoint file to get consistent behavior.
By now, you're probably sick of the sound of my keystrokes. Sorry about that. You'll probably be even more tired once you get to the end of this. Sorry about that too. I am still 100% dedicated to the idea of a healthcheck. I just want it to work. I'd like to walk back (slightly) a comment I made earlier on the topic of environment variable names. One source I neglected to check was this repo itself. It turns out that
These, however, are not documented anywhere I can see. Nevertheless, we can generalise the following convention for variable names:
It is also true to say that the usage of these existing environment variables is confined to the respective entry point scripts, which means
I think you might not fully appreciate the relationship between the client and the router. You've proposed mods to the client's Dockerfile. So, the build process starts with an Alpine Linux image and then builds ZeroTier in situ. I assume you understand that part. Now go and look at the router's Dockerfile, which builds on top of the client image. In other words, whatever you do to enhance the client is also inherited by the router. The health check scaffolding you might have thought you were only adding to the client also gets added to the router. You can use either the client or the router to test health checking. It makes absolutely no difference.

What is the difference, then, between the client and the router? In one sense, none. The compiled binary is the same in both. The difference between the containers is found by comparing the two entry point scripts. The one for the client is pretty much what you'll find in most Docker containers: a bit of setup followed by an exec of the daemon.

The entry point script for the router is a good deal more complicated, but a lot of that is the result of how Docker starts containers. At its core, the router's entry point script is just defining a bunch of iptables filters to control how traffic is routed at Layer Three. The way the router launches the daemon also lets it remove those filters again on the way down. If the container didn't manage its iptables filters from birth to death, then multiple successive launches of the router container would see duplicate filters being added to iptables.

ZeroTier (the container) runs in "host mode", which means the container's networking and the host's networking are the same, irrespective of whether you are running the client or the router. What that also means is that the iptables filters being created by the router container are applied to the host, not the container.
It might also help to remember what Layer 3 routing is. It is the process of a computer (or silicon) making decisions about how to forward packets between network interfaces.

Say you have a remote device like an iPad running the ZeroTier client, and a Raspberry Pi at home also running the ZeroTier client. Once you start the ZeroTier tunnel on the iPad, it can reach the Raspberry Pi. Suppose you also want to reach another computer (eg a Mac) that isn't running the ZeroTier client but happens to be connected to the same subnet as the Pi. You can open an SSH connection to the Raspberry Pi and, from there, open another SSH connection to the Mac. And, if you're into SSH port-forwarding, you can get SSH to do that for you. All the zerotier-router does is help automate things so you can go straight from the iPad to the Mac for all protocols. The "routing" action depends on a combination of the iptables filters and static routes.

I want to continue on the theme of how the difference between the client and router is largely cosmetic. If they are more-or-less identical, how can you tell whether you are running the client or the router container? The most obvious way is to execute a
Router's operational environment

With the router running, what does the container know?
The container knows about two joined networks. What does the host know?
First, the host knows about the same network interfaces that appear inside the container. Second, the host knows that those interfaces represent routes to the two ZeroTier Cloud networks. This is how, when you do something like SSH from iPad to Pi, then from Pi to Mac, the reply traffic knows how to find its way back into the tunnel and across the ZeroTier Cloud network.
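The route-counting idea can be illustrated without a live container. Given some sample routing-table lines (hypothetical addresses and interface names), the same grep pipeline the proposed health check uses counts only direct routes over zt* interfaces:

```shell
#!/bin/sh
# hypothetical `ip r` output: one direct ZeroTier route, one LAN route
# and a default route via the gateway
routing_table='10.147.17.0/24 dev ztabcdef12 proto kernel scope link src 10.147.17.5
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.50
default via 192.168.1.1 dev eth0'

# count direct (non-gateway) routes over ZeroTier interfaces
routes=$(printf '%s\n' "$routing_table" | grep "dev zt" | grep -cv "via")
echo "zerotier direct routes: $routes"
```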
Client's operational environment

Now let's ask the same questions of the client.
Identical answer. What about the host?
Same again. The fact that a ZeroTier network (the thing you join) is expressed as an entry in the host's routing table is independent of whether you are running the client or the router, and is why "counting routes" is going to be more reliable than any other scheme. With the client still running, I'm now going to make a mess:
I've clobbered the network interface associated with one of the joined networks. There's no way any traffic can be forwarded anywhere. What does the host know?
It knows the interface is dead and, because the interface has gone away, the associated route has been withdrawn. This is the host doing this work, not the container. What does the container know?
The container still thinks the Test network (interface
I've wreaked utter havoc here but the container is none-the-wiser. Bottom line:
Lukas makes a good point - a separate variable for the networks to be checked. You've already done that:
These lines in your script handle the case where the user nominates specific networks:

```sh
if [[ -n "${CHK_ZT_SPECIFIC_NETWORKS}" ]] ; then
   for network in $CHK_ZT_SPECIFIC_NETWORKS; do
      #If Network is OK, continue, else exit
      [[ "$(zerotier-cli get ${network} status)" = "OK" ]] || exit 1
      #echo "${CHK_ZT_SPECIFIC_NETWORKS} Connected."
   done
   exit 0
```

Which brings me to these lines (which I'll call the problematic lines):

```sh
else
   #echo "Checking All Networks"
   joined_networks=$(zerotier-cli listnetworks | awk 'NR>1 {print$3}')
   #If there are no Networks, exit Failure
   [[ -n "${joined_networks}" ]] || exit 1
   for network in $joined_networks; do
      [[ "$(zerotier-cli get ${network} status)" = "OK" ]] || exit 1
      #echo "$network Connected."
   done
fi
```
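For reference, the awk extraction in that branch skips the header line and prints the third field of each remaining line. Simulated against a plausible listnetworks layout (the column layout here is an assumption), it yields the joined network IDs:

```shell
#!/bin/sh
# simulated `zerotier-cli listnetworks` output (column layout assumed)
listnetworks='200 listnetworks <nwid> <name> <mac> <status> <type> <dev> <ZT assigned ips>
200 listnetworks 1234567890abcdef Test 22:33:44:55:66:77 OK PRIVATE ztabcdef12 10.147.17.5/24'

# NR>1 skips the header; $3 is the network-ID column
joined_networks=$(printf '%s\n' "$listnetworks" | awk 'NR>1 {print $3}')
echo "$joined_networks"
```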
Can you show me an example, or tell me how to create the necessary conditions? Until you can, I don't believe the problematic lines serve any purpose. Instead of relying on them, I suggest counting routes.
Proposed health check script

With that in mind, I would like to propose a different approach:

```sh
#!/bin/sh
#This health-check script is sponsored by PMGA TECH LLP
#Exit Codes
# 0= Success
# 1= Failure
#Environment Variables
# CHK_ZT_SPECIFIC_NETWORKS= <Enter networks to check with a space between each entry; all networks entered here will be matched; CHK_ZT_MIN_ROUTES_FOR_HEALTH is ignored if this is used.>
# CHK_ZT_MIN_ROUTES_FOR_HEALTH= <Should be a number greater than 0>

# minimum routes for health defaults to 1 route
CHK_ZT_MIN_ROUTES_FOR_HEALTH=${CHK_ZT_MIN_ROUTES_FOR_HEALTH:-1}

# Check if specified networks are all connected
if [[ -n "${CHK_ZT_SPECIFIC_NETWORKS}" ]] ; then
   for network in $CHK_ZT_SPECIFIC_NETWORKS; do
      #If Network is OK, continue, else exit
      [[ "$(zerotier-cli get ${network} status)" = "OK" ]] || exit 1
   done
else # Check for minimum networks
   # count zerotier-associated direct routes
   routes=$(ip r | grep "dev zt" | grep -cv "via")
   # sense less than minimum
   [[ ${routes} -lt ${CHK_ZT_MIN_ROUTES_FOR_HEALTH} ]] && exit 1
fi
exit 0
```
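One idiom from the script worth calling out is the `${VAR:-default}` expansion, which supplies the fallback of 1 whenever the variable is unset or empty:

```shell
#!/bin/sh
# simulate "no environment variables" in the service definition
unset CHK_ZT_MIN_ROUTES_FOR_HEALTH

# use the variable's value if set and non-empty, otherwise default to 1
CHK_ZT_MIN_ROUTES_FOR_HEALTH=${CHK_ZT_MIN_ROUTES_FOR_HEALTH:-1}
echo "$CHK_ZT_MIN_ROUTES_FOR_HEALTH"
```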
Testing

Service definition (client):

```yaml
zerotier:
  container_name: zerotier
  image: "zyclonite/zerotier:local"
  restart: unless-stopped
  network_mode: host
  volumes:
    - ./volumes/zerotier-one:/var/lib/zerotier-one
  devices:
    - "/dev/net/tun:/dev/net/tun"
  cap_add:
    - NET_ADMIN
    - SYS_ADMIN
```

No environment variables. That means the minimum route count defaults to 1. The only other thing to be aware of is that the persistent store already exists and has already joined two networks. Those will be resumed when the container comes up.
Container started. Knows about two networks. Reports healthy. Expected routes. Remove one network:
So, the network has gone, the routes associated with it have gone, but the container remains healthy. Because the "no environment variables" condition means we are looking for a minimum route count of 1, that's the expected result. Now let's get rid of the second network:
All relevant routes have gone and the container eventually goes unhealthy. Perfect! Now I'll re-join one of the networks and wait for the container to go healthy:
And now I'll do the serious damage I did before by nuking the interface:
Does the container go unhealthy?
Youbetcha! Why? Because of the routes:
What does the container itself actually think?
It thinks the network is still there and OK. If we asked the container, we would get a false positive. Counting routes really is better, and a default of "at least one route" is probably sensible in the vast majority of use cases.

One more thing. This is only a minor point but, seeing as you're changing the Dockerfile, I noticed these warnings during builds:
It is caused by the lower-case "as" in:

```dockerfile
FROM ${ALPINE_IMAGE}:${ALPINE_VERSION} as builder
```

Changing to upper case causes the problem to go away:

```dockerfile
FROM ${ALPINE_IMAGE}:${ALPINE_VERSION} AS builder
```
@Paraphraser
@zyclonite I have specifically not used ZEROTIER_ONE_NETWORK_IDS so as not to make this a breaking change! (I believe we can deprecate the previous values in the next version to give users a chance to change their settings.)
Regarding Environment Variables
I think you have already mentioned the 'fact' that ZeroTier uses :
Therefore I didn't want to use anything starting with that prefix.

Regarding sh

You may want to believe that.

Regarding Code

Well, you may or may not be convinced, but the code is doing what I intend it to do (apart from the slight syntax mistake that you pointed out, which can happen since we are all human). The code you call 'problematic' is actually how the original ZeroTier Docker image does its health-checks, which @zyclonite already pointed out in the previous thread. I did not want to eliminate their code completely; rather, I wanted to build on it. Hence I have kept their code as the default behaviour.

General Suggestion for @Paraphraser

While I do appreciate your enthusiasm, imho it would be motivating for all contributors if your messages were a little 'politer' in tone rather than accusatory ('I am still not convinced that you actually understand how this code will behave at run-time'). Using the current code update, you can use the combination of both variables.
With all possible respect, I'd simply say that if you find it troublesome, upsetting or impolite to be told that the writer isn't convinced that you understand something, the best solution is to demonstrate understanding. I've tried to be gentle, using persuasion, offering guidance, explanations and examples. I think it's time to be a bit more direct. Before I do, I'd like to make something clear. When it comes to this repository, I have no special role. I'm Joe Ordinary and my voice has the same weight as everyone else's. Lukas owns the repo so he makes the final decisions on everything.
I think you might be mistaking "enthusiasm" for "enlightened self-interest". I depend on ZeroTier and I want it to work. That's why I have a watch on this repo. If I see a proposal which is likely to have an impact on me, I feel obliged to test it. If my testing reveals what I think are deficiencies, I feel it's incumbent upon me to explain what those are.

On the subject of environment variable names, I think you are being disingenuous and selective in your example. I have already explained the existing naming convention. In my view, one of the worst things any container designer can do is to introduce changes which are, arguably, simply for the sake of change. I see your proposal to deprecate the existing variables in exactly that light. I really do not understand why you seem to be so fixated on a new prefix. Perhaps I should put it like this: the existing prefix is established usage in this repo. Maintaining consistency with existing usage is important. I am not going to endorse this PR while you persist with this. However, I refer you back to what I said about how I have no special role. It's Lukas' decision. My recommendations:
For the record:
On the other hand:
Thus, standardising on the established prefix is the consistent choice. Saying that the choice of shell doesn't matter misses the point. Yes, in this case, the container only has a single shell. That's entirely beside the point. I'm talking about best practice. I'm talking about the habits that will protect both you and the people who use your code when a container does have multiple shells and where invoking the correct shell matters. It's simple:
The reason why the comment block ahead of the shebang is a problem is that an interpreter directive is only honoured when it forms the very first bytes of the file. The correct solution is:
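A minimal sketch of the recommended layout, with the interpreter directive as the very first bytes of the file and all commentary below it:

```shell
#!/usr/bin/env sh
# The shebang above must be the first bytes of the file for the kernel
# to honour it; comment blocks belong below it, never above.
echo "interpreter declared before any comments"
```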
When you make that change, the script is classified the same as the entry point scripts. Still on
The
If you do the If you really did need a I don't see the need for the new changes you have just proposed for Elsewhere you wrote:
If you want to be guided by a principle of "least elimination" then lines 81..89 were the source of the auto-join code which is currently in the router's entry point script. The existing support for auto-join is confined to the router. It is only safe to automate joining in the first-run situation. In particular, leaving a network resets its internal configuration (allowManaged, allowGlobal, allowDefault, allowDNS). Once the container is running, I really think you need to leave it up to the user to execute explicit join and leave commands.

I have been reflecting on the overall utility of checking specific networks. At best it will detect the situation where a user has done an explicit leave. At worst, it will report the container healthy when it is not (a false positive). That's because of the problem I've documented several times already where the container thinks a network is OK when it is non-viable. Which is also the point Lukas was making in #33 (more on that below). I think this could be improved with a two-stage check:
How about this:

```sh
#!/usr/bin/env sh
#This health-check script is sponsored by PMGA TECH LLP
#Exit Codes
# 0= Success
# 1= Failure
#Environment Variables
# ZEROTIER_ONE_CHK_SPECIFIC_NETWORKS= <Enter networks to check with a space between each entry; all networks entered here will be matched; ZEROTIER_ONE_CHK_MIN_ROUTES_FOR_HEALTH is ignored if this is used.>
# ZEROTIER_ONE_CHK_MIN_ROUTES_FOR_HEALTH= <Should be a number greater than 0>

# minimum routes for health defaults to 1 route
ZEROTIER_ONE_CHK_MIN_ROUTES_FOR_HEALTH=${ZEROTIER_ONE_CHK_MIN_ROUTES_FOR_HEALTH:-1}

# Check if specified networks are all connected
if [[ -n "${ZEROTIER_ONE_CHK_SPECIFIC_NETWORKS}" ]] ; then
   for network in $ZEROTIER_ONE_CHK_SPECIFIC_NETWORKS; do
      [[ "$(zerotier-cli get ${network} status)" = "OK" ]] || exit 1
      interface=$(zerotier-cli get ${network} portDeviceName)
      routes=$(ip r | grep "dev ${interface}" | grep -cv "via")
      [[ ${routes} -lt 1 ]] && exit 1
   done
else # Check for minimum networks
   # count zerotier-associated direct routes
   routes=$(ip r | grep "dev zt" | grep -cv "via")
   # sense less than minimum
   [[ ${routes} -lt ${ZEROTIER_ONE_CHK_MIN_ROUTES_FOR_HEALTH} ]] && exit 1
fi
exit 0
```

Baseline:
Interpretation:
Test:
Interpretation:
Your latest push still has the lines including and after:
I am not going to endorse that structure. The additional variable is not needed.
I take "previous thread" to be #33 and, more specifically: I take "their code" to mean the referenced lines. First point. It is not you who would be "eliminating" code. The Second point. In the same comment, Lukas wrote:
This is exactly the problem I think we have been exploring. That's why I'm trying to lead you towards checking routes. From all the exchanges we have had thus far, I have formed the impression, rightly or wrongly, that you have a kind of mental block when it comes to the word "route". A route is simply a convenient "pinnacle artifact". It is the last thing added, by the host, to the host's routing table, when the host regards an interface as defined, UP and serviceable. Even in the dumbest host with exactly one network interface (eg Ethernet or WiFi), you will typically have at least two routes in the routing table:
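For instance, a host with a single Ethernet interface typically carries one connected (subnet) route and one default route via the gateway. Illustrative `ip r` output (hypothetical addresses):

```shell
#!/bin/sh
# print an illustrative routing table for a single-interface host
cat <<'EOF'
default via 192.168.1.1 dev eth0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.50
EOF
```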
The withdrawal of a route, by the host, doesn't tell you what is wrong, only that something is wrong. Route withdrawal is sensitive and reasonably fast. It's ideal for this kind of check. |
The point here was not whether the writer was convinced, nor did I take offence. My point was simply that being more polite and a better human makes for a better world. It was just general advice to avoid accusatory phrases like 'you actually understand how this code will behave', 'you are being disingenuous' and 'you have a kind of mental block'.
Well, if someone has contributed to this code in some way (direct or indirect), should they not be recognized? Would you mind Lukas writing 'This repo has been coded and maintained by Lukas'?
I think there is a mix-up in understanding here. Lukas is talking about keeping two separate variables. Now, regarding routes: although I am still not convinced that these are required, I have added the route checks.
Nonetheless, I have added the route check as an additional measure. For other things:
This PR follows on from the extensive discussion associated with zyclonite#37. Never before have I even *contemplated* submitting a PR covering the same ground as an existing open PR. However, on this occasion I thought it might be useful to have a concrete proposal to compare and contrast with zyclonite#37. I sincerely hope that laying this on the (virtual) table and then minimising further interaction *might* help us converge on a solution.

<hr>

Changes:

* `docker-compose.yml` and `docker-compose-router.yml`:
   - replaces deprecated `version` statement with `---`.
   - adds example environment variables.
* `Dockerfile`:
   - corrects case of "as" to "AS" (silences build warning).
   - adds and configures `healthcheck.sh` (as per zyclonite#37).
   - includes `tzdata` package (moved from `Dockerfile.router`) so messages have local timestamps.
* `Dockerfile.router`:
   - removes `tzdata` (moved to `Dockerfile`).
* `entrypoint-router.sh`:
   - code for first-launch auto-join of listed networks expanded to include additional help material.
* `entrypoint.sh`:
   - "first launch" auto-join of listed networks (code copied from `entrypoint-router.sh`, as modified per above).
   - "self repair" of permissions in persistent store (code copied from `entrypoint-router.sh`).
   - adds launch-time message to make it clear that the client is launching (complements messages in `entrypoint-router.sh`).
   - abstracts some common strings to environment variables (opportunistic change).
* `README.md`:
   - updates examples.
   - describes new environment variables (including move of `ZEROTIER_ONE_NETWORK_IDS` from `README-router.md`).
   - documents health-checking.
* `README-router.md`:
   - updates examples.
   - explains relationship of router and client.

Added:

* `healthcheck.sh`, based on the original proposal in zyclonite#37 and subsequent suggestions for modification by me.

I gave serious consideration to the code for synchronising networks in the entry point scripts. The idea is quite attractive.
It is safe to automate joins in a "clean slate" situation. However, a *leave* followed by a *join* is not guaranteed to be idempotent. That's because the *leave* destroys the network-specific configuration options (`allowManaged`, `allowGlobal`, `allowDefault`, `allowDNS`). On balance I think it's better left to users to send explicit *leave* commands via the CLI and take responsibility for restoring lost configuration options on any subsequent *join*. I will post the results of testing this PR separately. Signed-off-by: Phill Kelley <[email protected]>
If an owner of any repo entered into a relationship with a sponsor and wanted to make that relationship clear by adding comments or messages to the code, that's their decision and they go into it in full knowledge of any Intellectual Property implications. It's slightly different when a second party proposes adding code which acknowledges a sponsorship relationship between the second party and a third party. That's the kind of thing that can undermine licence conditions. If Lukas applies such a PR, he implicitly accepts that his repo becomes a kind of MIT-with-possible-strings licence. That said, rather than risk inflaming the situation further, I'm going to avoid posting any more comments against this PR. I apologise for my impoliteness. I have created PR #38 as an alternative approach. At this stage, and because of its evolutionary path through this PR, |
I would like to humbly disagree with you. The settled law is: if there were any 'strings attached', they would need to be mentioned clearly in a licence agreement referenced in the file. If nothing is mentioned, the original licence policy is followed, which in this case is MIT. Further, I can confirm there are no strings attached with this code. It is simply a recognition of their contribution.
That would be up to "PMGA TECH LLP" to declare though, in a way that'd hold against professional scrutiny. I'm an external observer and right now it's not clear to me how the script was 'sponsored' (is it the company's IP? What are the terms/license? Are they compatible with what this repo is using and will it remain so indefinitely? Is "PMGA TECH LLP" aware this is happening and is it in line with whatever policy they have on OSS contributions? Or was this maybe more like an OSS grant and the author is independent? At the moment I'm not even sure which parts of the PR come from an individual, which are 'sponsored' by an LLP etc. Also, projects like this one are tempting targets for supply chain attacks (recall xz utils & ssh backdoor). Is "PMGA TECH LLP" reputable enough to associate with? Is it an established business, who's behind it? What other activities does it engage in? Recognition is of course deserved, and it would go into git history, perhaps into release blurbs, maybe a dedicated CONTRIBUTORS.md etc. A number of tried-and-tested options to choose from. But it should not go into anything 'executable' if it can be avoided. Not to obscure the credit, but to make the security-sensitive parts easier to review/reason about. It's a pain when you see a shebang deferred by a comment block, and now need to triple-check there's no unicode shenanigan lurking in there, or that this isn't a new kind of exploit against a particular busybox version... |
No. I think you missed the point where I clearly declared there is no separate licence, which means it is the same as the repo, i.e. MIT.
LOL. A comment can 'exploit' code? Let me tell you this: even putting the shebang characters in the first 1-2 bytes can still leave you open to homoglyph attacks. Plus, in case you are not aware, you should always run shellcheck. Once committed to git, it is impossible to change a comment and add unicode characters. It's really surprising and disheartening to see people who hardly have 1-2 contributions a year (and 0 contributions in this repository) write a whole baseless paragraph citing 'legal' issues without an iota of knowledge of the law. To further clear the air and be straight and direct: this repo was pitched to PMGA Tech LLP and, since it needed a health-check, I made the script; they allowed me to use half my consultation time with them to build it, i.e. half came from my personal time and half from their consultation time. In return they just asked me to put in the sponsor message, which is a fair ask. I don't know why people jump to conclusions when someone else gets recognition for their honest work! PMGA Tech LLP could have had me not post anything for you to use if they wanted to! This has been done only because we were using this repository and wanted to contribute something in return. And you have a problem with that? It's one thing to leech and not contribute back, but it's a whole other level when one only leeches from this repository and then questions the recognition of someone's legitimate contribution!
I would agree to this. But strictly legally speaking it would have the same effect as writing it in the code, which is nothing, since the repo is MIT and the people contributing to it are doing so with that knowledge and voluntarily. @hoppke Sir, in my humble opinion, you should contribute first so that you understand the hard work that goes into coding and testing. Your time would have been better spent contributing rather than raising unnecessary and baseless questions. Anyway, I think this is as far as I can go. @zyclonite can decide what he wants to do.
An exploit could be made to look like an innocent comment, sure. So, e.g., in your script it looks like a shebang. @Paraphraser beat me to it and already pointed out it isn't right, so I'll expand a bit. The script will likely run under the default shell - busybox on a pristine Alpine image - but there are probably no promises that it'll run in the same compatibility mode as /bin/sh (myself, I'd bet on it selecting /bin/ash). That may or may not be running at the same POSIX compliance level, and thus may or may not be an issue, but it may of course change in the future even if no one touches the script again; it wouldn't be a first for busybox/alpine. If only we had a way for a script to precisely declare the shell it wants to execute under... Maybe one day.
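For what it's worth, the remedy being alluded to is a single line. A minimal sketch (the `set -eu` hardening and the placeholder logic are my additions, not anything from the PR):

```shell
#!/bin/sh
# Declare the interpreter explicitly instead of trusting whichever shell
# Docker's HEALTHCHECK happens to hand the script to (busybox ash on Alpine).
set -eu
status="healthy"    # placeholder for the real health-check logic
echo "$status"
```

With the shebang present, the script's behaviour no longer depends on what `/bin/sh` happens to be inside the image on any given Alpine release.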
On the subject of 'surprising': "PMGA Tech LLP" is ~5-6 years old, registered in India, declaring two native founders, only one of them on LinkedIn, with no trace of employees, contractors or anything resembling a business history there, nor on Google. The only real content from this company I got out of Google is a "company web page" in the .tech domain; it's "in progress" and mostly a contact form. Curiously it mentions no names, phone numbers, office addresses, not even the country it's operating in. No list of clients, no portfolio. No identifiable info. I've seen IT companies 'giving back to OSS', from both ends, but I never saw it done by an IT house running like this, in full stealth mode. And it decides to sponsor changes in a VPN gateway, of all the possible benefactors!
That's an aspect I hadn't considered. It's a very good point. I was only looking at the best-practice aspect of the signature bytes, plus the potential implied IP claim as a secondary issue. Your remark about the parallels was well taken.

Like you, all of this caused me to Google the organisation. The results did not alleviate my disquiet.

Another thing I noticed is summarised here: "A" is the initial batch of commits, which were signed, while "B" is the more recent commits, which were unsigned. That speaks to commitment to process, and does nothing to alleviate my disquiet. "C" is the result of clicking on one of the "Verified" buttons. Now, compare/contrast that with "D", which is the result of doing the same thing with one of my commits. In the "D" case, GitHub is able to tie the commit and the digital signature back to my account. In the "C" case, GitHub was unable to do that. Now, why should that be so?

From a clone with PR37 applied:
Compare/contrast with a clone where PR38 has been applied:
In my case, my public key has the necessary additional UIDs tying it to the email addresses listed in my GitHub profile. GitHub can go from the digital signature, to the email addresses in the public key, to the email addresses in the profile, and then display my account name ("Paraphraser") as a hyperlink to my profile. I'm not sure what the goal is of embedding

In my case, "Paraphraser" was an accident of history which subsequently became akin to a brand name. I use it here, on Discord, on Mastodon and so on. But I don't go to extreme lengths to hide my real name, because doing so would serve no practical purpose that I can think of.
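To make that linkage concrete, here is a minimal, self-contained illustration (the throwaway repo, name and email below are placeholders I've invented, not anything from this project). GitHub's "Verified" badge works by tying the signing key's UIDs back to an email address on the account, so the first thing to audit on any commit is the email it claims:

```shell
# Create a throwaway repo with one commit, then read back the email address
# that a digital signature would have to vouch for. All values are placeholders.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name="Example" -c user.email="example@example.com" \
    commit -q --allow-empty -m "demo commit"
git log -1 --format='%ae'   # the address GitHub matches against the profile
```

If the key that signed a commit carries no UID matching an email on the author's GitHub profile, the chain breaks and GitHub cannot render the account link, which is exactly the "C" vs "D" difference above.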
On behalf of humanity I'd like to apologise for that remark. Each of us can only do what we can, and only when we can. Everyone has a first comment, or a first issue, or a first commit, or a first repo.
I think it might depend on the jurisdiction. I don't know about India, but where I am (Australia) a sponsorship may be considered a "work for hire", in which case the IP rights belong to the sponsor unless there's a written agreement to the contrary. The insertion of a "sponsored by" may be seen as establishing a claim, in the same way as a "written by". I also wonder if you've forgotten that your initial commit included these lines:
I know you removed the second line a day later, but it still speaks to your original intention, and it's all still recorded on GitHub. Even though you were the one who removed the second line, that removal might be considered to have triggered the revocation.

My personal philosophy is to acknowledge my sources, not simply because it's sound ethical practice but because it lends credibility to my own work. That's actually why I left the "sponsored by" line in PR #38, even though the script I'm proposing bears very little relationship to the original. I've been thinking that it might be more appropriate to replace that line with something like:
I disagree. It seems to me that GitHub already does a very good job of tracking contributions, and it looks to me like you're well aware of that. I think if I were in the situation where someone was sponsoring my activities, I'd probably just mention on my GitHub profile page that some of my work was sponsored by organisation X and leave it at that.
So the criteria for giving something back to the OSS community are to first become a company with loads of employees, become known to the whole world like Google/Apple, share client names, portfolios etc., and only then will the hard work and contribution be recognized?
Can't the company use a VPN to give remote contractors secure access to its servers?
Is an entity being registered in India, or not using LinkedIn (a private offering from Microsoft), justification for discrediting that entity? Shall all business deals be posted on its site, or maybe here? Is that also mandatory for submitting code to the OSS community?
Sure, I have now changed the text to be much more descriptive.
In that case you will be personally liable for using the code (or parts of it) in your PR #38, should you decide to use it without the licensing terms, if you believe the revocation has been triggered. My intent has not changed; the 'original' and 'final' intent is to
Everyone has a right to their own beliefs. Ethics mean a lot to me, and that is why I am fighting to be ethically and morally right in recognizing someone's contribution. Since this thread is now becoming nothing but an attempt to discredit the contributors, I will stop replying here. I hereby specifically state that my failure to reply shall not be construed as agreement or disagreement with any of the posts not explicitly replied to. My terms are pretty simple: if you use the code, please recognize the contribution. That is ALL that is being asked.
Spell-Checked Licensing Text
I'd say that the criteria for "giving something back to the OSS community" are:
I have my doubts about the first point, but I definitely don't think you've succeeded on the second. You might want to reflect on the fact that two people independently but simultaneously reached the conclusion that there might be more to your proposal than met the eye, and that some deeper research was in order.

I can't speak for @hoppke, but the immediate trigger appears to have been concerns about supply-chain attacks. I note that you have only just addressed the problem of the signature bytes, which was something I first mentioned two weeks ago. Aside from my growing general disquiet about the whole thing, the most immediate trigger for my research was noticing that some of your commits were signed and some not, wondering about that pattern, then noticing that you were using a public key that didn't link back to your profile. I thought that was odd.

I don't care two hoots about where someone is located. Everyone has to live somewhere. I would not have used a phrase like 'registered in INDIA' (your capital letters, not Hoppke's) myself. However, because it had already been raised, it seemed appropriate to mention that there might be legal differences between where you are and where I am. I also don't care whether a contributor is an individual beavering away at home (like me) or employed by the largest company on the planet.

While "lack of presence" (for want of a better expression) seems to have been a concern for Hoppke, it isn't a concern for me. My concerns are grounded in the perpetual argumentativeness, the unwillingness to take advice or adopt best practice, and general conduct which seems antithetical to garnering community acceptance for the proposal. The very anti-pattern, in fact, that Hoppke has mentioned in the

Nobody here has gone nuts about "gb-123-git" as an account name. Why go nuts about "Hoppke" and carry on about Prairie Dogs? What purpose does it serve? It doesn't address Hoppke's concerns about the possibility of supply-chain attacks.
Neither is "LOL" an appropriate response to a very clearly articulated and legitimate worry.
and in so doing, I think you have made the situation far worse than it was. The essence of OSS is contributing to the common good. It's not about big-noting yourself or your organisation, or sneaking what might be intellectual-property time-bombs into code. It's about giving freely. As they say, "free" as in "beer". The way you show "no strings attached" is by not attaching any strings. Zip. Zero. Zilch. Nada. If you can't do that, then the appropriate course of action is to fork the repo and go your own way. Nothing stops you from doing that.
Which is a concern.
And GitHub already does that for you. When you take an action on GitHub (issue, PR, whatever) it is recorded. Everyone can see who did what and when. All contributions are recognised, automatically. You are asking for more, and when other people foresee problems with your approach and ask why, you get annoyed.
There you go again. Your terms. However, we are in agreement. At this point I withdraw my support for any form of health-checking and recommend to @zyclonite that both #37 and #38 be rejected and closed. Apart from anything else, it will save us having to figure out why buildah is out of date and whether that's amenable to being fixed.
As a consumer of the project I would also advise against merging it.

The IP in the PR allegedly stems from a collaboration between an LLP and the PR's author, created (to an unspecified extent) during a professional engagement between the two. The details are unclear: it's not clear what the PR author's legal rights and obligations were in that engagement (employee? direct subcontractor? B2B via a third party?), and it's not clear which lines in the PR are attributable to which entity, which jurisdictions they fall under, etc.

The LLP entity was not introduced in a way that could be verified. We have no authentication for it, no handshake with whoever is allowed to release IP from the LLP's end. It could come up and deny involvement/knowledge/consent at any point. The LLP has a very low online profile, so it's not clear what its business model is.

The PR's author, when challenged, becomes defensive/dismissive. The author's relation to the LLP is unclear (could be one of the owners, could be an employee, could be subcontracting), so when the author makes statements about T&Cs or IP ownership we need to treat them as second-hand personal opinions, not legally binding commitments.

As @zyclonite spotted, there's weirdness, or even intentional obfuscation, in the PGP signature of the author. Similarly, the LLP's homepage does everything to offer no real identifiable info. The domain is 2-3 years old and signed by Let's Encrypt, so the certificate only confirms the domain is owned by the certificate holder, not who the certificate holder is.

This could be just about plugging the name of a fake-it-till-you-make-it IT consultancy into some OSS projects on the internet, for rep-building purposes. But zerotier-docker's position is sensitive: it's a tempting target for supply-chain attacks (technical OR legal, and this PR is IMO far from 'safe' in both respects), and including a PR like this may create opportunities for someone to hurt consumers of the project in some way. So it's a gain-vs-risk assessment.
Is a healthcheck script worth the risk for zerotier-docker?
Added Health-Check to ZeroTier Docker
This is based on the discussions in Pull Request #33.
Based on that discussion, the following flow is implemented:
The following variables can be defined:
CHK_ZT_SPECIFIC_NETWORK=<a single network ID to check; CHK_ZT_MIN_ROUTES_FOR_HEALTH is ignored if this is set>
CHK_ZT_MIN_ROUTES_FOR_HEALTH=<a number greater than 0>
If neither variable is defined, the health check passes only when ALL defined networks are connected.
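As a sketch only, this is my reading of the three-way flow above, not the PR's actual script. `CONFIGURED` and `JOINED` are invented stand-ins for what would really come from the container's network list and from `zerotier-cli listnetworks`:

```shell
#!/bin/sh
# Illustrative decision flow only -- NOT the PR's script.
# CONFIGURED: networks the container is supposed to join (stand-in data).
# JOINED: networks currently reported as joined (stand-in data).
CONFIGURED="aaaa bbbb"
JOINED="aaaa bbbb"

healthy() {
    if [ -n "${CHK_ZT_SPECIFIC_NETWORK:-}" ]; then
        # One specific network must be joined; MIN_ROUTES is ignored.
        case " $JOINED " in *" $CHK_ZT_SPECIFIC_NETWORK "*) return 0 ;; esac
        return 1
    elif [ -n "${CHK_ZT_MIN_ROUTES_FOR_HEALTH:-}" ]; then
        # At least N networks must be joined, regardless of which ones.
        [ "$(echo $JOINED | wc -w)" -ge "$CHK_ZT_MIN_ROUTES_FOR_HEALTH" ]
    else
        # Default: every configured network must be joined.
        for net in $CONFIGURED; do
            case " $JOINED " in *" $net "*) ;; *) return 1 ;; esac
        done
        return 0
    fi
}

healthy && echo healthy || echo unhealthy
```

With both stand-in networks joined and neither variable set, this prints `healthy`; remove one network from `JOINED` and the default branch fails, which is the behaviour the earlier review comments were asking the PR to guarantee.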
To Test :
@zyclonite @Paraphraser @hoppke
@zyclonite & @Paraphraser
Please check whether the Docker build is coming out correctly.
Also, please test with multiple networks.
Guys, let me know your thoughts.
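For anyone trying this locally, the Dockerfile wiring would look roughly like the following. The script path and interval values are illustrative assumptions on my part, not necessarily what this PR uses:

```
# Hypothetical HEALTHCHECK wiring; adjust the path and intervals to match the PR.
HEALTHCHECK --interval=30s --timeout=5s --start-period=15s --retries=3 \
    CMD /healthcheck.sh
```

After building and starting a container, `docker inspect --format '{{.State.Health.Status}}' <container>` reports `starting`, `healthy` or `unhealthy`, which is the quickest way to verify the behaviour described above.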