Add opt-in IPv6 for containers #5879
base: main
Conversation
📝 Walkthrough
The changes introduce support for IPv6 in the Supervisor's Docker network management. Constants and configuration schemas are updated to distinguish between IPv4 and IPv6 network masks, and new options for enabling IPv6 are added. Network creation, migration, and API endpoints are modified to handle IPv6 settings, with corresponding tests and documentation updates.
Sequence Diagram(s)
sequenceDiagram
participant User
participant API
participant DockerConfig
participant DockerNetwork
User->>API: POST /docker/options ({"enable_ipv6": true})
API->>DockerConfig: Set enable_ipv6 = true
DockerConfig->>API: Save config
API-->>User: 200 OK
User->>API: GET /docker/info
API->>DockerConfig: Get enable_ipv6
API-->>User: {"enable_ipv6": true, ...}
API->>DockerNetwork: post_init(enable_ipv6)
DockerNetwork->>DockerNetwork: _get_network(enable_ipv6)
DockerNetwork->>DockerNetwork: Create/migrate network with IPv6 as needed
sequenceDiagram
participant DockerNetwork
participant Docker
participant SupervisorContainer
participant ObserverContainer
DockerNetwork->>Docker: networks.get(name)
alt Network IPv6 setting needs change and only Supervisor/Observer attached
DockerNetwork->>Docker: networks.disconnect(Supervisor)
DockerNetwork->>Docker: networks.disconnect(Observer)
DockerNetwork->>Docker: networks.remove()
DockerNetwork->>Docker: networks.create(enable_ipv6)
DockerNetwork->>Docker: networks.connect(Supervisor)
DockerNetwork->>Docker: networks.connect(Observer)
else Network has other containers
DockerNetwork->>DockerNetwork: Log warning, skip migration
end
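To make the first diagram concrete, here is a minimal Python sketch of how a client could toggle the option through the Supervisor API. It is an illustration only: the base URL http://supervisor, the SUPERVISOR_TOKEN environment variable, and the timeout values are assumptions about the usual Supervisor API access from an add-on, not something defined by this PR.

import os
import requests

# Assumed: running where the Supervisor API is reachable as http://supervisor
# and a valid token is available in SUPERVISOR_TOKEN.
headers = {"Authorization": f"Bearer {os.environ['SUPERVISOR_TOKEN']}"}

# Opt in to IPv6 on the hassio network.
resp = requests.post(
    "http://supervisor/docker/options",
    json={"enable_ipv6": True},
    headers=headers,
    timeout=10,
)
resp.raise_for_status()

# Read the setting back; per the second diagram, the network migration itself
# is skipped (until a reboot) if containers other than Supervisor/Observer are attached.
info = requests.get("http://supervisor/docker/info", headers=headers, timeout=10)
print(info.json())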
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.
🔧 Pylint (3.3.7): supervisor/api/network.py, supervisor/api/__init__.py, supervisor/api/docker.py
Actionable comments posted: 3
🧹 Nitpick comments (1)
supervisor/docker/network.py (1)
102-130: Consider adding IPv6 address support to the attach_container method. The current implementation only allows specifying an IPv4 address when attaching a container. For complete IPv6 support, consider enhancing this method to also accept an optional IPv6 address parameter.
 def attach_container(
     self,
     container: docker.models.containers.Container,
     alias: list[str] | None = None,
     ipv4: IPv4Address | None = None,
+    ipv6: IPv6Address | None = None,
 ) -> None:
     """Attach container to Supervisor network.

     Need run inside executor.
     """
     ipv4_address = str(ipv4) if ipv4 else None
+    ipv6_address = str(ipv6) if ipv6 else None

     # Reload Network information
     with suppress(docker.errors.DockerException, requests.RequestException):
         self.network.reload()

     # Check stale Network
     if container.name and container.name in (
         val.get("Name")
         for val in self.network.attrs.get("Containers", {}).values()
     ):
         self.stale_cleanup(container.name)

     # Attach Network
     try:
-        self.network.connect(container, aliases=alias, ipv4_address=ipv4_address)
+        self.network.connect(container, aliases=alias, ipv4_address=ipv4_address, ipv6_address=ipv6_address)
     except docker.errors.APIError as err:
         raise DockerError(
             f"Can't link container to hassio-net: {err}", _LOGGER.error
         ) from err
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
supervisor/api/network.py (2 hunks)
supervisor/const.py (2 hunks)
supervisor/docker/network.py (3 hunks)
supervisor/docker/observer.py (2 hunks)
supervisor/misc/filter.py (2 hunks)
tests/api/test_network.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (6), all matching */**(html|markdown|md) and applied to supervisor/const.py:
- For instructional content in documentation, use a direct and authoritative tone. Avoid expressions of politeness such as 'may' or 'please', and ensure the goal of the instruction is fronted.
- Apply the Microsoft Style Guide to ensure documentation maintains clarity and conciseness.
- In step-by-step instructions, front the location phrase in the instructional sentence.
- In step-by-step instructions, front the 'goal' in the instructional sentence.
- In step-by-step instructions, if in doubt what to front, front the 'goal' before the location phrase in the instructional sentence.
- Do not hyphenate terms like 'top-right' or 'bottom-left' with 'corner'.
- Use bold to mark UI strings. If "" are used to mark UI strings, replace them by bold.
- Be brief in your replies and don't add fluff like "thank you for..." and "Please let me know if".
- Use sentence-style capitalization also in headings.
- Do not comment on HTML used for icons.
- Avoid flagging inline HTML for embedding videos in future reviews for this repository.
🔇 Additional comments (11)
supervisor/misc/filter.py (1)
12-12: Correctly updated import statement for IPv4 specificity. The import statement has been properly updated to use the newly renamed IPv4-specific constant, which is consistent with the IPv6 enablement changes.
tests/api/test_network.py (2)
9-9: Correctly updated import statement. The import statement has been properly updated to use the newly renamed IPv4-specific constant.
62-62: Test correctly updated to match implementation changes. The assertion has been updated to check that the API returns the IPv4 network mask, matching the implementation changes.
supervisor/docker/observer.py (1)
5-5: Correctly updated import statement. The import statement has been properly updated to use the newly renamed IPv4-specific constant.
supervisor/api/network.py (1)
41-41: Correctly updated import statement. The import statement has been properly updated to use the newly renamed IPv4-specific constant.
supervisor/const.py (2)
5-5: Addition of IPv6Network import is correct for implementing IPv6 support. The new import of IPv6Network alongside the existing IPv4Network is necessary to enable IPv6 networking in the Supervisor.
44-46: LGTM: Clear naming convention for network constants. The changes establish a clear naming pattern by:
- Adding a new IPv6 network mask constant with the unique IPv6 ULA prefix (fd0c::/64)
- Renaming existing IPv4 constants to explicitly include "IPV4" in their names
This approach correctly maintains backward compatibility while adding the new IPv6 functionality.
supervisor/docker/network.py (4)
10-15: LGTM: Updated imports align with the renamed constants. The imports have been correctly updated to reference the new, more explicit constant names from const.py.
50-50: LGTM: Updated all IPv4 references to use the new constant name. All network property references now consistently use the explicitly named DOCKER_IPV4_NETWORK_MASK constant.
Also applies to: 55-55, 60-60, 65-65, 70-70, 75-75
84-92: LGTM: Properly configured both IPv6 and IPv4 IPAM pools. The implementation correctly:
- Creates a separate IPv6 IPAM pool using the defined IPv6 network mask
- Maintains the existing IPv4 IPAM pool configuration with subnet, gateway, and IP range
- Combines both pools into a single IPAM configuration
This approach follows Docker networking best practices for dual-stack IPv4/IPv6 configuration.
98-98: LGTM: Enabled IPv6 in network creation. Setting enable_ipv6=True is essential for the Docker network to accept and route IPv6 traffic.
I access my HA instances through nginx and purely using IPv6.
By nginx you mean the official NGINX proxy add-on?
But since it is exposed like this to the outside, I have to use the Ban IP feature, which currently doesn't work because nginx communicates with the HA instance using IPv4 and the Forward header doesn't work because it refuses to mix these families (which then bans the Docker IP of the container). :)
Thanks for explaining your specific use case here! 🙏
I was wondering whether X-Forwarded-For indeed doesn't allow mixing IP addresses, but that doesn't seem to be the problem per se; even the X-Forwarded-For examples on MDN show headers with mixed address families. I also don't think this is an NGINX problem. This is purely a Docker problem: we expose the outside IP address using Docker, which seemingly makes connections look like they originate from the Docker network's hassio host IP (172.30.32.1). Presumably this is just because Docker uses docker-proxy by default, like so:
/usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 1443 -container-ip 172.30.33.6 -container-port 443 -use-listen-fd
Side quest: Interestingly enough, there is also an IPv4 docker-proxy running, so why does it work for IPv4 then? Shouldn't that also lead to a masqueraded IPv4 source IP? It turns out that besides the proxy, there is also an iptables rule:
iptables -t nat -L DOCKER -n -v
...
8 480 DNAT tcp -- !hassio * 0.0.0.0/0 0.0.0.0/0 tcp dpt:1443 to:172.30.33.6:443
So it seems the IPv4 proxy is only a fallback 🤔 🤷
Anyhow, I can see why using IPv6 can help here, even in the NAT case: this probably makes Docker use the same DNAT rules for IPv6 as well, so the source IP is no longer masqueraded.
That said, I am a bit worried about what it does for users without IPv6 connectivity: client software running in add-ons that tries to connect to hosts with IPv6 addresses (where the nameserver resolved AAAA records) might try connecting to those hosts over IPv6. Maybe this is not a (big) problem because the add-ons use a ULA address range and clients will ultimately fall back to IPv4. But there is some risk of things not working as expected, I think. Ideally we make that feature opt-in first, maybe? 🤔
Yes.
The thing is, when I initially investigated the problem, I even logged the packet headers and they just weren't there, so something is going wrong. I just thought: let's try giving the containers IPv6 and see... and it worked, so my conclusion was that nginx is not handing the v6 address over to a server on v4 (but who knows if nginx even knows the source IP, probably not); in other words, out-of-spec behavior no matter who's to blame here. So when the whole traffic is on v6 after the changes, it very likely hides some problem somewhere (docker-proxy is a strong candidate), but I wasn't able to find out more.
Yes, it's easy to see when observing the traffic there.
Hmmm, it would be ideal but I guess it would need
Hm, yeah, I guess no gateway will make any connection attempt fail immediately, and likely cause it to try the next address (family). But the exact behavior is always a bit tricky; it depends on how well the client code is written.
Yeah, the current code essentially doesn't migrate existing installations. Which maybe is actually not a bad thing, just to see how things work first... 🤔 But ultimately we should migrate existing installations.
Force-pushed from 38a2325 to 51bf91a (compare)
Yeah, everything around the IP stack is a bit tricky sometimes... heh.
I think so too, and as you said, this change does not affect current users in any way, so far.
Maybe it's doable during
Force-pushed from fa6a6c9 to a29c5ec (compare)
Force-pushed from a29c5ec to ede24eb (compare)
Actionable comments posted: 1
🧹 Nitpick comments (4)
tests/docker/test_network.py (4)
10-16: Add docstrings to the mock class and methods. The MockINetwork class is missing docstrings as flagged by the static analysis. While this is a test-only class, adding documentation would improve maintainability.

-class MockINetwork(dict):
+class MockINetwork(dict):
+    """Mock implementation of Docker network that returns False for all attributes."""

     def __init__(self):
+        """Initialize mock network."""
         pass

     def __getattr__(self, attr: str):
+        """Return False for any attribute access."""
         return False

🧰 Tools
🪛 Ruff (0.8.2)
10-10: Missing docstring in public class
(D101)
11-11: Missing docstring in
__init__
(D107)
14-14: Missing docstring in magic method
(D105)
10-12: Consider whether dict inheritance is necessary. The class inherits from dict but doesn't use any dictionary functionality. Consider removing the inheritance if not needed.

-class MockINetwork(dict):
+class MockINetwork:

🧰 Tools
🪛 Ruff (0.8.2)
10-10: Missing docstring in public class
(D101)
11-11: Missing docstring in
__init__
(D107)
27-27: Add descriptive assertion message. Adding an error message to the assertion would make test failures more informative.

-    mock_create.assert_called_with(enable_ipv6=True)
+    mock_create.assert_called_with(enable_ipv6=True,
+        msg="Network should be recreated with IPv6 enabled")
18-27
:❓ Verification inconclusive
Consider extending test coverage.
The test only verifies that the network is created with IPv6 enabled. Consider adding assertions for other important parameters (like subnet configurations, gateway settings, etc.) that would be part of an IPv6-enabled network.
🏁 Script executed:

#!/bin/bash
# Search for the create method usage in the network module to understand what other parameters might be important
grep -r "create.*enable_ipv6" --include="*.py" supervisor/docker/

Length of output: 67

🏁 Script executed:

#!/bin/bash
# Locate the NetworkCollection class definition
grep -R -n "class NetworkCollection" --include="*.py" supervisor/docker
echo "----"
# Locate the create method implementations in the docker module
grep -R -n "def create" --include="*.py" supervisor/docker

Length of output: 151

🏁 Script executed:

#!/bin/bash
# Search for NetworkCollection class and its create method across the repo
grep -R -n "class NetworkCollection" --include="*.py" --exclude-dir={tests,node_modules,build,dist} .
echo "----"
grep -R -n "def create" --include="*.py" --exclude-dir={tests,node_modules,build,dist} .

Length of output: 1448
Extend IPv6 network creation test coverage
The current test only verifies that create is called with enable_ipv6=True. To ensure the network is created correctly for IPv6, please update the test to assert any additional parameters your create implementation supports, for example:
• IPAM configuration (subnet, gateway, IP range)
• Driver options or internal flags
• Network name, labels, or other relevant settings
Locations to update:
- tests/docker/test_network.py, lines 18–27
- Replace or augment the existing mock_create.assert_called_with(enable_ipv6=True) with something like:

mock_create.assert_called_with(
    enable_ipv6=True,
    ipam_config={"Subnet": "2001:db8::/64", "Gateway": "2001:db8::1"},
    internal=False,
    driver_opts={…},
    name="my_ipv6_network",
    # etc.
)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
supervisor/docker/network.py (2 hunks)
tests/docker/test_network.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- supervisor/docker/network.py
🧰 Additional context used
🪛 Ruff (0.8.2)
tests/docker/test_network.py
10-10: Missing docstring in public class
(D101)
11-11: Missing docstring in __init__
(D107)
14-14: Missing docstring in magic method
(D105)
🔇 Additional comments (1)
tests/docker/test_network.py (1)
1-9: LGTM! Clear imports for testing Docker network functionality. The imports are appropriate for a test file focusing on the Docker network manager, including the necessary mock utilities and Docker classes.
Force-pushed from 29b4b36 to bcf91f8 (compare)
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.
Actionable comments posted: 1
🧹 Nitpick comments (2)
tests/docker/test_network.py (2)
10-16: Add docstrings to the MockINetwork class. The MockINetwork class is missing docstrings that would explain its purpose and behavior. Since this is a mock class used for testing, it's important to document what aspects of a real network it's simulating and how it behaves.

 class MockINetwork(dict):
+    """Mock implementation of Docker internal network."""

     def __init__(self):
+        """Initialize the mock network."""
         pass

     def __getattr__(self, attr: str):
+        """Return False for any attribute to simulate a network without IPv6 enabled."""
         return False

🧰 Tools
🪛 Ruff (0.8.2)
10-10: Missing docstring in public class
(D101)
11-11: Missing docstring in
__init__
(D107)
14-14: Missing docstring in magic method
(D105)
18-27: Test validates IPv6 enablement but could be more comprehensive. The test correctly verifies that create is called with enable_ipv6=True when recreating a network, which aligns with the PR objective of enabling IPv6 support. However, it doesn't verify other aspects of network creation such as IPAM configuration for both IPv4 and IPv6 subnets. You might want to extend the test to verify more aspects of the network creation by checking additional parameters passed to the create method:

def test_network_recreate_as_ipv6_with_full_config(docker: DockerAPI):
    """Test that network recreation includes proper IPv4 and IPv6 configuration."""
    with (
        patch.object(NetworkCollection, "get", return_value=MockINetwork()),
        patch.object(NetworkCollection, "create") as mock_create,
    ):
        docker.network._get_network()

        # Verify IPv6 is enabled
        mock_create.assert_called_once()
        args, kwargs = mock_create.call_args
        assert kwargs["enable_ipv6"] is True

        # Verify IPAM configuration for both IPv4 and IPv6
        assert "ipam" in kwargs
        ipam_config = kwargs["ipam"]["Config"]
        assert len(ipam_config) == 2  # Should have both IPv4 and IPv6 pools

        # Verify IPv4 subnet
        assert any(config.get("Subnet", "").startswith("172.30.32.") for config in ipam_config)
        # Verify IPv6 subnet
        assert any(config.get("Subnet", "").startswith("fd00:dead:beef::") for config in ipam_config)

Note: You should adjust the subnet prefixes to match the actual expected values from your implementation.
🛑 Comments failed to post (1)
tests/docker/test_network.py (1)
18-27: 💡 Verification agent
🧩 Analysis chain
Consider testing the migration path for existing installations.
Based on the PR objectives, there was discussion about how to handle migration for existing installations. This test verifies that when a network exists without IPv6, it will be recreated with IPv6 enabled, which is good. However, it might be beneficial to add tests for other scenarios:
- What happens when the network already has IPv6 enabled?
- How are existing containers handled during network recreation?
- Are there any error conditions during migration that should be gracefully handled?
🏁 Script executed:

#!/bin/bash
# Look for error handling code in the network recreation logic
rg -A 5 -B 5 "_get_network" --glob "*.py" | grep -i "except\|error\|fail"

Length of output: 73

🏁 Script executed:

#!/bin/bash
# Locate and inspect the implementation of the _get_network method to check for any error handling
rg -n "def _get_network" -A 20 -B 5 --glob "*.py"

Length of output: 1885
Add tests for all migration scenarios in tests/docker/test_network.py
Currently only the “existing network without IPv6” path is covered. Please add tests for:
• Existing network already IPv6-enabled
  – patch NetworkCollection.get to return a network where EnableIPv6=True
  – call docker.network._get_network()
  – assert neither network.remove() nor create() is invoked
• Network not found (migration from pre-IPv6 install)
  – patch NetworkCollection.get to raise docker.errors.NotFound
  – call docker.network._get_network()
  – assert create(enable_ipv6=True) is invoked
• Error during migration
  – patch NetworkCollection.get to return a non-IPv6 network and network.remove() to raise docker.errors.APIError
  – call docker.network._get_network()
  – assert the exception is propagated or logged as appropriate
These will ensure coverage for all branches in _get_network. A sketch of the first scenario follows this list.
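A minimal pytest sketch of the first scenario, assuming the docker fixture and the patch.object style already used in tests/docker/test_network.py; the mock attributes ("EnableIPv6", empty containers) are assumptions about how _get_network inspects the existing network:

from unittest.mock import MagicMock, patch

from docker.models.networks import NetworkCollection


def test_network_with_ipv6_already_enabled_is_kept(docker):
    """An existing dual-stack network should be left untouched."""
    existing = MagicMock()
    existing.attrs = {"EnableIPv6": True}  # assumed key checked by _get_network
    existing.containers = []

    with (
        patch.object(NetworkCollection, "get", return_value=existing),
        patch.object(NetworkCollection, "create") as mock_create,
    ):
        docker.network._get_network(enable_ipv6=True)

    # No migration should happen when the setting already matches.
    existing.remove.assert_not_called()
    mock_create.assert_not_called()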
Actionable comments posted: 1
🧹 Nitpick comments (1)
tests/docker/test_network.py (1)
10-16: Add docstrings to the mock class for better documentation. The MockINetwork class is missing docstrings, which would improve code readability and maintainability. Consider adding class and method docstrings explaining its purpose as a mock returning False for all attributes.

 class MockINetwork(dict):
+    """Mock implementation of a Docker network object for testing.
+
+    Returns False for all attribute access to simulate a network
+    without IPv6 or other required features.
+    """

     def __init__(self):
+        """Initialize mock network."""
         pass

     def __getattr__(self, _: str):
+        """Return False for any attribute access."""
         return False

🧰 Tools
🪛 Ruff (0.8.2)
10-10: Missing docstring in public class
(D101)
11-11: Missing docstring in
__init__
(D107)
14-14: Missing docstring in magic method
(D105)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
supervisor/docker/network.py (2 hunks)
tests/docker/test_network.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- supervisor/docker/network.py
🧰 Additional context used
🪛 Ruff (0.8.2)
tests/docker/test_network.py
10-10: Missing docstring in public class
(D101)
11-11: Missing docstring in __init__
(D107)
14-14: Missing docstring in magic method
(D105)
🔇 Additional comments (2)
tests/docker/test_network.py (2)
1-8: Good job on setting up the test module! The module docstring and imports are well-organized, following the Python standard of ordering imports (standard library, third-party, local).
22-23: Good job fixing the patch approach! The code correctly uses patch.object() for patching the NetworkCollection methods, which is the appropriate approach when patching methods of imported classes.
Force-pushed from 7763616 to cc13e47 (compare)
This generally LGTM.
I did encounter an issue in testing when the network is being recreated: The DNS plug-in is potentially being restarted earlier than expected (before attaching to the DNS container).
It is not directly related to this change, but it causes the plug-in container to not be recreated, which in turn is problematic because the network interface is gone:
2025-05-28 10:31:05.895 WARNING (SyncWorker_2) [supervisor.docker.manager] Can't restart hassio_dns: 500 Server Error for http+docker://localhost/v1.49/containers/6bd0a4c0e9b45c6ecc577a95cf5fd48bbcd2d4ac433be75429ee0bee7aa57252/restart?t=10: Internal Server Error ("Cannot restart container 6bd0a4c0e9b45c6ecc577a95cf5fd48bbcd2d4ac433be75429ee0bee7aa57252: failed to set up container networking: network c89d12c2e717bfa186d9de0def82b5220078007d49d30d193d1072e7ecc12e91 not found")
2025-05-28 10:31:05.896 ERROR (MainThread) [supervisor.plugins.dns] Can't start CoreDNS plugin
This only happens if a user actually makes use of that feature, so it is not that critical. I'll fix this in a follow-up PR.
A connectivity check can potentially be triggered before the DNS plug-in is loaded. Avoid calling restart on the DNS plug-in before it got initially loaded. This prevents starting before attaching. The attaching makes sure that the DNS plug-in container is recreated before the DNS plug-in is initially started, which is e.g. needed by a potential hassio network configuration change (e.g. the migration required to enable/disable IPv6 on the hassio network, see #5879).
Force-pushed from a88477c to d8713eb (compare)
Interesting, could this be solved by reattaching all the "plugins"? Or is it completely unrelated to the migration?
It seems that reattaching when the container is stopped (which is the default for all but Supervisor/Observer on startup) really means re-creating the container. Not sure why exactly, but that seems to be the behavior. It's probably also not a bad idea; this makes sure the container gets created with whatever the latest Supervisor code is doing (making sure potentially changed run parameters are actually applied). But that the DNS plug-in got started before attaching is unintentional and addressed with #5922.
It kinda makes sense because it needs to reconfigure the container's IP. So something like this?
Force-pushed from 03260ad to ced405e (compare)
Actionable comments posted: 0
♻️ Duplicate comments (1)
supervisor/docker/network.py (1)
163-171: Silent error suppression may hide important failures. Using suppress(DockerError) for container reattachment could hide important failures during network setup. Consider logging warnings when reattachment fails to aid in debugging.
This issue was mentioned in previous reviews. The current implementation silently suppresses errors, which could make troubleshooting difficult when containers fail to attach to the new network.
🧹 Nitpick comments (1)
supervisor/docker/network.py (1)
114-181: Complex migration logic could benefit from refactoring. The _get_network method handles network migration correctly but is quite complex, with nested conditions and multiple responsibilities. Consider extracting the migration logic into a separate method for better maintainability and testability. Extract the migration logic into a helper method:
+ def _migrate_network( + self, network: docker.models.networks.Network, enable_ipv6: bool + ) -> tuple[bool, docker.models.networks.Network]: + """Migrate existing network to new IPv6 configuration. + + Returns (success, network) where success indicates if migration completed. + """ + containers = network.containers + containers_all = all( + container.name in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME) + for container in containers + ) if containers else True + + if containers and containers_all: + for container in containers: + with suppress( + docker.errors.APIError, + docker.errors.DockerException, + requests.RequestException, + ): + network.disconnect(container, force=True) + + if not containers or containers_all: + try: + network.remove() + return True, network + except docker.errors.APIError: + _LOGGER.warning("Failed to remove existing Supervisor network") + return False, network + else: + _LOGGER.warning( + "System appears to be running, " + "not applying Supervisor network change. " + "Reboot your system to apply the change." + ) + return False, network def _get_network(self, enable_ipv6: bool = False) -> docker.models.networks.Network: """Get supervisor network.""" try: - if network := self.docker.networks.get(DOCKER_NETWORK): - if network.attrs.get(DOCKER_ENABLEIPV6) == enable_ipv6: - return network - _LOGGER.info( - "Migrating Supervisor network to %s", - "IPv4/IPv6 Dual-Stack" if enable_ipv6 else "IPv4-Only", - ) - if (containers := network.containers) and ( - containers_all := all( - container.name in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME) - for container in containers - ) - ): - for container in containers: - with suppress( - docker.errors.APIError, - docker.errors.DockerException, - requests.RequestException, - ): - network.disconnect(container, force=True) - if not containers or containers_all: - try: - network.remove() - except docker.errors.APIError: - _LOGGER.warning("Failed to remove existing Supervisor network") - return network - else: - _LOGGER.warning( - "System appears to be running, " - "not applying Supervisor network change. " - "Reboot your system to apply the change." - ) - return network + if network := self.docker.networks.get(DOCKER_NETWORK): + if network.attrs.get(DOCKER_ENABLEIPV6) == enable_ipv6: + return network + _LOGGER.info( + "Migrating Supervisor network to %s", + "IPv4/IPv6 Dual-Stack" if enable_ipv6 else "IPv4-Only", + ) + success, network = self._migrate_network(network, enable_ipv6) + if not success: + return network
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (17)
supervisor/api/__init__.py (3 hunks)
supervisor/api/docker.py (2 hunks)
supervisor/api/network.py (2 hunks)
supervisor/const.py (4 hunks)
supervisor/dbus/network/connection.py (2 hunks)
supervisor/dbus/network/setting/__init__.py (1 hunks)
supervisor/docker/manager.py (3 hunks)
supervisor/docker/network.py (5 hunks)
supervisor/docker/observer.py (2 hunks)
supervisor/misc/filter.py (2 hunks)
supervisor/validate.py (3 hunks)
tests/api/test_docker.py (2 hunks)
tests/api/test_network.py (2 hunks)
tests/dbus/network/test_connection.py (2 hunks)
tests/dbus/network/test_ip_configuration.py (2 hunks)
tests/docker/test_network.py (1 hunks)
tests/test_validate.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (15)
- tests/dbus/network/test_ip_configuration.py
- supervisor/dbus/network/setting/__init__.py
- tests/dbus/network/test_connection.py
- supervisor/misc/filter.py
- supervisor/api/network.py
- tests/api/test_network.py
- tests/test_validate.py
- supervisor/validate.py
- supervisor/dbus/network/connection.py
- supervisor/docker/observer.py
- supervisor/api/docker.py
- supervisor/docker/manager.py
- tests/api/test_docker.py
- supervisor/const.py
- supervisor/api/__init__.py
🔇 Additional comments (6)
tests/docker/test_network.py (1)
1-113
: Well-structured test implementation with comprehensive coverage.The test file is well-designed with appropriate mock classes and comprehensive parameterized test scenarios. The test logic correctly verifies network recreation behavior under different conditions.
supervisor/docker/network.py (5)
3-3
: Good async import additions.The new imports for
asyncio
andSelf
typing are appropriate for the async initialization pattern implemented in this file.Also applies to: 7-7
32-48
: Well-designed network configuration constants.The
DOCKER_NETWORK_PARAMS
dictionary provides a clean, centralized configuration for Docker network creation with proper IPAM setup for both IPv4 and IPv6 subnets. The structure supports dual-stack networking correctly.
62-67
: Excellent async initialization pattern.Moving network initialization to
post_init()
and running blocking Docker API calls in an executor is the correct approach for maintaining event loop compatibility.
219-241
: Well-designed container attachment by name.The new
attach_container_by_name
method is a good addition that simplifies container management during network migration. The error handling is comprehensive and appropriate.
264-279
: Improved error handling in stale cleanup.The enhanced error handling in
stale_cleanup
with specific exception types and proper logging is a good improvement over the previous implementation.
* Avoid early DNS plug-in start
A connectivity check can potentially be triggered before the DNS plug-in is loaded. Avoid calling restart on the DNS plug-in before it got initially loaded. This prevents starting before attaching. The attaching makes sure that the DNS plug-in container is recreated before the DNS plug-in is initially started, which is e.g. needed by a potential hassio network configuration change (e.g. the migration required to enable/disable IPv6 on the hassio network, see #5879).
* Mock DNS plug-in running
Force-pushed from 7ae77a4 to d90d2ff (compare)
Actionable comments posted: 0
🧹 Nitpick comments (2)
supervisor/docker/network.py (2)
32-48: IPv6 enabled by default in global parameters may cause confusion. The DOCKER_NETWORK_PARAMS dictionary has ATTR_ENABLE_IPV6: True by default, but the _get_network method defaults to enable_ipv6=False. This inconsistency could lead to unexpected behavior where the global parameters suggest IPv6 is enabled but the method call disables it.
Consider aligning the defaults or adding a comment explaining this design choice:

 DOCKER_NETWORK_PARAMS = {
     "name": DOCKER_NETWORK,
     "driver": DOCKER_NETWORK_DRIVER,
     "ipam": docker.types.IPAMConfig(
         pool_configs=[
             docker.types.IPAMPool(subnet=str(DOCKER_IPV6_NETWORK_MASK)),
             docker.types.IPAMPool(
                 subnet=str(DOCKER_IPV4_NETWORK_MASK),
                 gateway=str(DOCKER_IPV4_NETWORK_MASK[1]),
                 iprange=str(DOCKER_IPV4_NETWORK_RANGE),
             ),
         ]
     ),
-    ATTR_ENABLE_IPV6: True,
+    ATTR_ENABLE_IPV6: False,  # Will be overridden in _get_network based on config
     "options": {"com.docker.network.bridge.name": DOCKER_NETWORK},
 }
124-149
: Complex migration logic correctly handles container attachment edge cases.The migration logic properly addresses the container attachment issue identified in past reviews by:
- Checking if only Supervisor/Observer containers are attached
- Disconnecting containers before network removal
- Gracefully handling cases where other containers are attached
However, the nested conditional logic is quite complex and could benefit from extraction.
Consider extracting the migration logic into a separate method for better readability:
+ def _can_migrate_network(self, containers: list) -> bool: + """Check if network migration is safe to perform.""" + return not containers or all( + container.name in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME) + for container in containers + ) + + def _disconnect_containers(self, network, containers: list) -> None: + """Safely disconnect containers from network.""" + for container in containers: + with suppress( + docker.errors.APIError, + docker.errors.DockerException, + requests.RequestException, + ): + network.disconnect(container, force=True)Then simplify the main logic:
- if (containers := network.containers) and ( - containers_all := all( - container.name in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME) - for container in containers - ) - ): - for container in containers: - with suppress( - docker.errors.APIError, - docker.errors.DockerException, - requests.RequestException, - ): - network.disconnect(container, force=True) - if not containers or containers_all: + containers = network.containers + if self._can_migrate_network(containers): + self._disconnect_containers(network, containers)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (17)
supervisor/api/__init__.py
(3 hunks)supervisor/api/docker.py
(2 hunks)supervisor/api/network.py
(2 hunks)supervisor/const.py
(4 hunks)supervisor/dbus/network/connection.py
(2 hunks)supervisor/dbus/network/setting/__init__.py
(1 hunks)supervisor/docker/manager.py
(3 hunks)supervisor/docker/network.py
(5 hunks)supervisor/docker/observer.py
(2 hunks)supervisor/misc/filter.py
(2 hunks)supervisor/validate.py
(3 hunks)tests/api/test_docker.py
(2 hunks)tests/api/test_network.py
(2 hunks)tests/dbus/network/test_connection.py
(2 hunks)tests/dbus/network/test_ip_configuration.py
(2 hunks)tests/docker/test_network.py
(1 hunks)tests/test_validate.py
(1 hunks)
✅ Files skipped from review due to trivial changes (1)
- tests/api/test_network.py
🚧 Files skipped from review as they are similar to previous changes (15)
- tests/dbus/network/test_ip_configuration.py
- supervisor/misc/filter.py
- tests/dbus/network/test_connection.py
- supervisor/api/__init__.py
- tests/test_validate.py
- supervisor/dbus/network/setting/__init__.py
- supervisor/api/network.py
- supervisor/validate.py
- supervisor/dbus/network/connection.py
- supervisor/api/docker.py
- supervisor/docker/observer.py
- tests/api/test_docker.py
- supervisor/docker/manager.py
- tests/docker/test_network.py
- supervisor/const.py
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Build armv7 supervisor
- GitHub Check: Run tests Python 3.13.3
🔇 Additional comments (4)
supervisor/docker/network.py (4)
62-67: Excellent async initialization pattern implementation. The move to async post_init properly addresses the blocking Docker API calls by running them in an executor. This prevents blocking the event loop while maintaining the synchronous Docker client interface. (A minimal sketch of this pattern follows this comment block.)
163-171
: Container reattachment uses appropriate error suppression.The use of
suppress(DockerError)
for container reattachment is appropriate here since these are optional attachments during network initialization. If these fail, the system can still function, and the containers will be attached later when they start.
219-241
: New attach_container_by_name method provides good abstraction.This method properly abstracts the container lookup and attachment logic, making the migration code cleaner. The error handling is comprehensive and consistent with other methods.
208-217
: Enhanced exception handling improves error diagnosis.The expanded exception handling catches specific Docker and request exceptions, providing better error messages compared to the broad exception handling from previous versions.
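For illustration, the initialization pattern praised in the first comment above looks roughly like the following simplified sketch; the class name and method bodies are assumptions for illustration, not the actual Supervisor implementation:

import asyncio
from typing import Self

import docker


class NetworkManagerSketch:
    """Simplified sketch of deferring blocking Docker calls to post_init."""

    def __init__(self, client: docker.DockerClient) -> None:
        # No Docker API calls in the constructor, so the event loop stays responsive.
        self.docker = client
        self.network = None

    async def post_init(self, enable_ipv6: bool = False) -> Self:
        """Resolve the network in an executor so the event loop is not blocked."""
        self.network = await asyncio.get_running_loop().run_in_executor(
            None, self._get_network, enable_ipv6
        )
        return self

    def _get_network(self, enable_ipv6: bool):
        # Blocking docker-py lookup, executed in the default thread pool above.
        # The real implementation would also compare the IPv6 setting and migrate.
        return self.docker.networks.get("hassio")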
Force-pushed from d90d2ff to 9893ca8 (compare)
Force-pushed from 9893ca8 to bb8260b (compare)
Actionable comments posted: 0
♻️ Duplicate comments (1)
supervisor/docker/network.py (1)
163-171: Container reattachment silently suppresses errors. The use of suppress(DockerError) hides potential reattachment failures. Consider logging warnings when reattachment fails to improve debugging visibility:

-        with suppress(DockerError):
-            self.attach_container_by_name(
-                SUPERVISOR_DOCKER_NAME, [ATTR_SUPERVISOR], self.supervisor
-            )
-
-        with suppress(DockerError):
-            self.attach_container_by_name(
-                OBSERVER_DOCKER_NAME, [ATTR_OBSERVER], self.observer
-            )
+        try:
+            self.attach_container_by_name(
+                SUPERVISOR_DOCKER_NAME, [ATTR_SUPERVISOR], self.supervisor
+            )
+        except DockerError as err:
+            _LOGGER.warning("Failed to reattach supervisor container: %s", err)
+
+        try:
+            self.attach_container_by_name(
+                OBSERVER_DOCKER_NAME, [ATTR_OBSERVER], self.observer
+            )
+        except DockerError as err:
+            _LOGGER.warning("Failed to reattach observer container: %s", err)
🧹 Nitpick comments (2)
supervisor/api/docker.py (1)
57-66: Add error handling for configuration save failures. The options method correctly validates input and updates the IPv6 configuration, but lacks explicit error handling for potential save failures.

 async def options(self, request: web.Request) -> None:
     """Set docker options."""
     body = await api_validate(SCHEMA_OPTIONS, request)

     if ATTR_ENABLE_IPV6 in body:
         self.sys_docker.config.enable_ipv6 = body[ATTR_ENABLE_IPV6]

-    await self.sys_docker.config.save_data()
+    try:
+        await self.sys_docker.config.save_data()
+    except Exception as err:
+        raise APIError("Failed to save Docker configuration") from err

supervisor/docker/network.py (1)
117-149
: Complex migration logic handles container attachment edge cases.The network migration logic correctly addresses the issue from past reviews where containers attached to the network would prevent removal. The implementation checks for only supervisor/observer containers and safely disconnects them before migration.
However, consider extracting this complex logic into a separate method for better maintainability:
+ def _can_migrate_network(self, containers: list) -> bool: + """Check if network can be safely migrated.""" + return not containers or all( + container.name in (OBSERVER_DOCKER_NAME, SUPERVISOR_DOCKER_NAME) + for container in containers + ) + + def _disconnect_containers_for_migration(self, network, containers: list) -> None: + """Safely disconnect containers for network migration.""" + for container in containers: + with suppress( + docker.errors.APIError, + docker.errors.DockerException, + requests.RequestException, + ): + network.disconnect(container, force=True)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (17)
supervisor/api/__init__.py
(3 hunks)supervisor/api/docker.py
(2 hunks)supervisor/api/network.py
(2 hunks)supervisor/const.py
(4 hunks)supervisor/dbus/network/connection.py
(2 hunks)supervisor/dbus/network/setting/__init__.py
(1 hunks)supervisor/docker/manager.py
(3 hunks)supervisor/docker/network.py
(5 hunks)supervisor/docker/observer.py
(2 hunks)supervisor/misc/filter.py
(2 hunks)supervisor/validate.py
(3 hunks)tests/api/test_docker.py
(2 hunks)tests/api/test_network.py
(2 hunks)tests/dbus/network/test_connection.py
(2 hunks)tests/dbus/network/test_ip_configuration.py
(2 hunks)tests/docker/test_network.py
(1 hunks)tests/test_validate.py
(1 hunks)
✅ Files skipped from review due to trivial changes (1)
- tests/api/test_network.py
🚧 Files skipped from review as they are similar to previous changes (14)
- tests/dbus/network/test_ip_configuration.py
- supervisor/misc/filter.py
- tests/dbus/network/test_connection.py
- supervisor/dbus/network/connection.py
- tests/test_validate.py
- supervisor/dbus/network/setting/__init__.py
- supervisor/api/network.py
- supervisor/validate.py
- supervisor/api/__init__.py
- supervisor/docker/observer.py
- tests/api/test_docker.py
- tests/docker/test_network.py
- supervisor/docker/manager.py
- supervisor/const.py
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: Build amd64 supervisor
- GitHub Check: Build aarch64 supervisor
- GitHub Check: Build armv7 supervisor
- GitHub Check: Build armhf supervisor
- GitHub Check: Run tests Python 3.13.3
🔇 Additional comments (6)
supervisor/api/docker.py (2)
34-36: Well-structured validation schema for IPv6 options. The schema correctly validates the optional IPv6 enablement flag as a boolean value, following the established pattern for API validation. (A brief sketch of such a schema follows this comment block.)
41-55
: IPv6 status correctly added to Docker info response.The info method now includes the IPv6 enablement status from the Docker configuration, providing visibility into the current IPv6 state.
supervisor/docker/network.py (4)
62-67
: Async initialization pattern correctly implemented.The post_init method properly defers network initialization to run in an executor, maintaining event loop compatibility as intended.
33-48
: IPv6 network parameters correctly configured.The DOCKER_NETWORK_PARAMS structure properly includes both IPv4 and IPv6 IPAM pool configurations with appropriate subnet masks and gateway settings.
219-241
: New container attachment method enhances functionality.The
attach_container_by_name
method provides a cleaner interface for attaching containers by name with comprehensive error handling and proper use of the new container name constants.
205-217
: Enhanced exception handling for container attachment.The expanded exception handling now covers multiple Docker and request exception types, providing better error diagnosis. The inline IPv4 address conversion is also cleaner.
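For reference, an options schema like the one described in the first comment above typically looks like this with voluptuous; the constant value and schema name are assumptions mirroring the API option discussed in this PR, not a copy of the actual code:

import voluptuous as vol

ATTR_ENABLE_IPV6 = "enable_ipv6"

# Optional boolean flag, matching the body accepted by POST /docker/options.
SCHEMA_OPTIONS = vol.Schema({vol.Optional(ATTR_ENABLE_IPV6): vol.Boolean()})

# Example: validating a request body.
validated = SCHEMA_OPTIONS({"enable_ipv6": True})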
Force-pushed from bb8260b to f7d51c0 (compare)
Configurable and w/ migrations between IPv4-Only and Dual-Stack Signed-off-by: David Rapan <[email protected]>
Force-pushed from f7d51c0 to 6baa9c6 (compare)
Actionable comments posted: 1
♻️ Duplicate comments (1)
supervisor/docker/network.py (1)
163-180: Consider logging warnings for container reattachment failures. The suppress(DockerError) blocks hide potential failures during container reattachment. Based on previous review feedback, consider logging warnings when reattachment fails to aid in debugging.

-        with suppress(DockerError):
-            self.attach_container_by_name(
-                SUPERVISOR_DOCKER_NAME, [ATTR_SUPERVISOR], self.supervisor
-            )
+        try:
+            self.attach_container_by_name(
+                SUPERVISOR_DOCKER_NAME, [ATTR_SUPERVISOR], self.supervisor
+            )
+        except DockerError as err:
+            _LOGGER.warning("Failed to reattach supervisor container: %s", err)

Apply similar changes to observer and other container attachments.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting
📒 Files selected for processing (17)
supervisor/api/__init__.py
(3 hunks)supervisor/api/docker.py
(2 hunks)supervisor/api/network.py
(2 hunks)supervisor/const.py
(4 hunks)supervisor/dbus/network/connection.py
(2 hunks)supervisor/dbus/network/setting/__init__.py
(1 hunks)supervisor/docker/manager.py
(3 hunks)supervisor/docker/network.py
(5 hunks)supervisor/docker/observer.py
(2 hunks)supervisor/misc/filter.py
(2 hunks)supervisor/validate.py
(3 hunks)tests/api/test_docker.py
(2 hunks)tests/api/test_network.py
(2 hunks)tests/dbus/network/test_connection.py
(2 hunks)tests/dbus/network/test_ip_configuration.py
(2 hunks)tests/docker/test_network.py
(1 hunks)tests/test_validate.py
(1 hunks)
✅ Files skipped from review due to trivial changes (1)
- tests/api/test_network.py
🚧 Files skipped from review as they are similar to previous changes (15)
- tests/dbus/network/test_ip_configuration.py
- supervisor/dbus/network/setting/__init__.py
- tests/dbus/network/test_connection.py
- supervisor/dbus/network/connection.py
- supervisor/misc/filter.py
- tests/test_validate.py
- supervisor/api/network.py
- supervisor/validate.py
- supervisor/docker/observer.py
- tests/api/test_docker.py
- supervisor/docker/manager.py
- supervisor/api/docker.py
- supervisor/api/__init__.py
- tests/docker/test_network.py
- supervisor/const.py
🧰 Additional context used
🧬 Code Graph Analysis (1)
supervisor/docker/network.py (5)
supervisor/docker/manager.py (7): docker (146-150), post_init (127-143), enable_ipv6 (98-100), enable_ipv6 (103-105), network (153-157), info (175-179), containers (165-167)
supervisor/exceptions.py (1): DockerError (539-540)
supervisor/api/network.py (1): info (197-212)
supervisor/docker/observer.py (1): name (27-29)
supervisor/docker/supervisor.py (1): name (25-27)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Run tests Python 3.13.3
🔇 Additional comments (7)
supervisor/docker/network.py (7)
3-48
: LGTM! Clean import organization and constants definition.The addition of asyncio import and typing.Self for async initialization is appropriate. The new constants from supervisor.const are well-organized and the DOCKER_NETWORK_PARAMS structure correctly defines dual-stack IPv4/IPv6 configuration with proper IPAM pool configs.
57-67
: Excellent async initialization pattern.Moving network initialization from the constructor to
post_init
addresses the previous review concern and follows best practices. Usingrun_in_executor
for the blocking Docker API calls ensures the event loop isn't blocked during network setup.
69-112
: Properties correctly updated to use new constants.All IP address properties have been properly updated to use the
DOCKER_IPV4_NETWORK_MASK
constant with appropriate indexing for different components. This maintains the fixed IP addressing scheme for core Supervisor components.
183-217
: Improved container attachment with better exception handling.The method correctly handles IPv4 address conversion and includes comprehensive exception handling covering Docker API errors and network issues. The error messages are informative and include container names for better debugging.
219-240
: Well-designed utility method for container attachment by name.This new method provides a clean abstraction for attaching containers by name with proper error handling and duplicate attachment prevention. The logic flow is correct and the exception handling is comprehensive.
242-262
: Good improvements to default bridge detachment.Using the
DOCKER_NETWORK_DRIVER
constant instead of hardcoded "bridge" and expanding exception handling improves maintainability and error coverage. The error messages are clear and include container context.
264-279
: Simplified and improved stale cleanup method.The method correctly handles stale container cleanup with better error handling and appropriate warning-level logging. The name-based approach simplifies the interface and aligns with other methods in the class.
Proposed change
Hi,
I access my HA instances through nginx, purely over IPv6. But since it is exposed like this to the outside, I have to use the Ban IP feature, which currently doesn't work because nginx communicates with the HA instance using IPv4 and the forward header doesn't work for some reason (therefore it's banning the Docker IP of the container). :)
I have this well tested for many months already and there are no side effects to this configuration.
Steps I used to reconfigure an existing hassio instance (it requires running the image directly):
After restart, all containers will also receive an IPv6 address that they can use for outgoing connections, but the most important thing is that Ban IP now sees the real IPv6 of the connected client.
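The exact commands used are not included above; purely as a hypothetical illustration (not the author's actual procedure), an equivalent manual migration with the Docker SDK for Python could look roughly like this. The subnet values shown are illustrative.

import docker
from docker.types import IPAMConfig, IPAMPool

client = docker.from_env()

# Detach everything from the existing hassio network, then recreate it dual-stack.
old = client.networks.get("hassio")
for container in list(old.containers):
    old.disconnect(container, force=True)
old.remove()

client.networks.create(
    "hassio",
    driver="bridge",
    enable_ipv6=True,
    ipam=IPAMConfig(
        pool_configs=[
            IPAMPool(subnet="fd0c::/64"),  # illustrative ULA prefix
            IPAMPool(
                subnet="172.30.32.0/23",
                gateway="172.30.32.1",
                iprange="172.30.33.0/24",
            ),
        ]
    ),
    options={"com.docker.network.bridge.name": "hassio"},
)
# Containers then need to be restarted/reattached so they pick up addresses
# on the recreated network.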
Link to my FR: https://community.home-assistant.io/t/docker-hassio-network-with-ipv6-enabled-by-default/760739 which I made a while ago but didn't have the chance to look into Supervisor until now.
Type of change
Additional information
--enable-ipv6 w/ ha docker options cli#567
Checklist
ruff format supervisor tests
If API endpoints or add-on configuration are added/changed:
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Tests