Problem:
I recently checked the Foreman 3.4 docs regarding the support of IPv4 and IPv6, and they state that dual-stack configurations are not supported. We have been running dual stack here for years without any issues. Granted, our environment is limited to F/K functionality, including TFTP to set up new hosts, Ansible for maintaining the state, and the VMware plugin to handle virtual machines. Maybe the issue is with other parts or plugins.
Am I missing something?
Expected outcome:
Have full support for dual stack configurations in the docs.
You can install Foreman and Smart Proxies in IPv6-only systems, dual-stack installation is not supported.
As you point out, this actually works fine (and that's also the developer expectation), but it was never formally verified. I think this limitation should only be shown in the Satellite documentation, not in upstream Foreman/Katello, since the definition of "supported" differs there.
I guess I have never read that part. All our servers and hosts have IPv4 and IPv6 addresses, and Foreman/Katello works just fine with it. Unless you start using IP addresses instead of hostnames, Foreman uses DNS to resolve hostnames and connects with whatever the system decides to use.
Interface settings/facts etc. can use both.
The only limitation I know of is the kickstart_kernel_options provisioning template snippet, which fails if you have set both an IPv4 and an IPv6 subnet for a host.
Other than that I don't have any issues, and I am really surprised that dual stack isn't officially supported…
However, looking at my current F 3.4/K 4.6 install, I still see inside of the template:
# networking credentials
raise("Dual-stack provisioning not supported") if subnet4 && subnet6
So I am not sure this is really applied. @lzap @ezr-ondrej Could you please verify whether the merged pull request has made it into 3.4? It says it was merged on May 7, 2021. Thank you
Which never got merged. I guess it would need another person to take it over the finish line, and probably asking @ekohl what open issues prevented the merge.
Since I was involved in the discussions, allow me to elaborate a bit.
The context of recent(ish) IPv6 support is that Red Hat had a large customer with a requirement for IPv6-only. So a lot of focus was on a pure IPv6 setup. There was no attention to a dual stack setup, even though for the most part it just works.
What you should also know is that docs.theforeman.org is based on the open sourced Red Hat Satellite documentation. I still consider it incomplete because it doesn't always reflect the upstream project's stance on things. This is an example.
Red Hat has its own definition of supported. If RH calls it supported, customers can ask customer support and it should be resolved. One guideline is that if it's documented, it has to be supported. Then there are also things which are undocumented and unsupported. This doesn't mean they can't work; it's just that if they break, you get to keep the pieces. There have been discussions about creating some "known to work" category, but I'm not sure that ever went anywhere.
Bringing this back to the Foreman community, there is no customer support, so everything is unsupported if we follow Red Hat's definition of supported.
Please correct me if I am wrong, but during the kickstart (Dracut) phase we could fall back to a single IP address, as you do in the patch for 3.5, and then during host configuration the (static or DHCP) network settings are applied, leading to a provisioned dual-stack host.
So basically, only the corresponding Foreman/Katello server and Smart Proxies need to be reachable via the chosen dual_stack_fallback address; after the host is deployed, it will have access to the entire network.
If that holds true in your perspective, I am happy to raise a documentation issue to change the paragraph in the documentation to include this statement.
Fully understood, I am only talking about the documentation for the community release.
For kickstart, the host in Foreman can only have either an IPv4 or an IPv6 subnet (it can have addresses set for both, but the subnet field for one of them must be empty). Otherwise it fails. After the host has been built, you can set both IPv4 and IPv6 subnets on the host.
With the patch above, that case is handled differently.
Your Foreman servers and proxies are simply dual stack and have both IPv4 and IPv6 addresses.
So beyond the limitation during kickstart, it's all dual-stack. At least I haven't seen any other limitation yet. All connections are handled via hostnames…
Currently I am using my own cloned version of the kickstart template, which doesn't require this step. It will automatically fall back to IPv4 if both are selected. That was the original idea of my PR above, which has evolved into the latest PR mentioned by Ewoud.
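The fallback idea above can be sketched in plain Ruby. This is not the actual template code, just an illustration of the logic: instead of raising when both subnets are set (as the quoted snippet does), pick one address family for the kickstart phase. The names subnet4/subnet6 mirror the template; the IPv4 default and the method name are my assumptions.

```ruby
# Hypothetical sketch of the dual-stack fallback for the kickstart phase.
# subnet4/subnet6 follow the naming in the quoted template snippet.
def kickstart_subnet(subnet4, subnet6, fallback: :ipv4)
  if subnet4 && subnet6
    # Dual stack assigned: fall back to a single family instead of failing
    fallback == :ipv6 ? subnet6 : subnet4
  else
    subnet4 || subnet6 || raise('host has no subnet assigned')
  end
end
```

With both subnets set, `kickstart_subnet('10.0.0.0/24', '2001:db8::/64')` returns the IPv4 subnet, while a host with only one subnet behaves exactly as before.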
I actually only very recently saw the patch, but I'm wondering if we can have a default. For that we need to know the impacts.
There are a few steps:
1. Kernel parameter is passed to Dracut to set up Anaconda
2. Anaconda receives a kickstart with network definitions
3. Host is booted in the final network config
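For reference, the static network setup in the first step is expressed as a Dracut ip= kernel parameter, one per interface, in the form `ip=<client-IP>:[<peer>]:<gateway-IP>:<netmask>:<hostname>:<interface>:<autoconf>`. All addresses and names below are example values, not anything Foreman generates:

```
ip=192.0.2.10::192.0.2.1:255.255.255.0:host1.example.com:eth0:none
```

A single such parameter carries one static address configuration, which is the limitation discussed here.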
The first step can't set up dual stack statically, but I wonder whether Anaconda reconfigures the network using the network definitions. If it does, then you only need the Dracut environment to retrieve the kickstart, which means the only factor is whether Foreman has IPv6, IPv4 or both. We can actually determine that using DNS lookups to the Foreman hostname: if there's an AAAA record, we prefer IPv6. Then the user isn't burdened by an additional step.
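A minimal sketch of that DNS-based detection, using Ruby's stdlib Resolv. The hostname and helper names are illustrative, not Foreman API; the lookup is separated from the preference decision so the policy is testable without network access.

```ruby
require 'resolv'

# Decide which IP family to use for provisioning, preferring IPv6 when
# an AAAA record exists for the Foreman host (helper names hypothetical).
def preferred_ip_family(has_aaaa:, has_a:)
  return :ipv6 if has_aaaa # AAAA published: prefer IPv6
  return :ipv4 if has_a
  raise 'no A or AAAA record found for the Foreman host'
end

# Query DNS for which record types exist for the given hostname.
def record_presence(hostname)
  Resolv::DNS.open do |dns|
    {
      has_aaaa: dns.getresources(hostname, Resolv::DNS::Resource::IN::AAAA).any?,
      has_a:    dns.getresources(hostname, Resolv::DNS::Resource::IN::A).any?
    }
  end
end

# Usage (example hostname):
#   preferred_ip_family(**record_presence('foreman.example.com'))
```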
Now I don't know the actual interactions here. If Anaconda doesn't reconfigure the network and only uses the Dracut environment, then the implications are bigger, since it also needs to retrieve content.
Last week some RH Satellite engineers had a meeting with the Anaconda team and networking was an area that we wanted to dive into. @lstejska can we put this on the agenda for the follow up meeting for networking?
It would be good to have more documentation around this topic. Documenting this parameter in the provisioning hosts guide is a good step, even if we at some point automatically determine it.
To be honest, Ewoud, I am not really sure we need answers to all those questions. If I am wrong, I am happy to stand corrected.
From my perspective, right now you have to make a choice between v4 and v6 in both the network definitions and your environment. So either a) all your servers, the content, and whatever is needed during kickstart are available via IPv4, or b) via IPv6.
If you had a mix of servers and/or services reachable only by one or the other, you would be in trouble today anyway.
The fallback approach proposed about two years ago and the better version (the patch in 3.5 above) do not change that requirement at all; they only give you the option to have a dual-stack config applied to the host while still running on a single IP during kickstart.
I fail to see the difference between those scenarios:

Current approach:
- host is configured with one subnet only (IPv4 or IPv6)
- kickstart runs with that IP
- all servers/services are reachable via that selected IP address

Approach with the patch in F 3.5:
- host is configured with two subnets, but only the selected IP address is used
- kickstart runs with that IP
- all servers/services are reachable via that selected IP address
But somehow I have the feeling I am missing something, maybe because I don't know Fedora at all.
We can actually determine that using DNS lookups to the Foreman hostname: if there's AAAA, we prefer IPv6. Then the user isn't burdened by an additional step.
True; however, we should still have an option to force v4 or v6 in the config. Maybe the admin is in the middle of a migration and the F/K server is already on v6 while most other network services are not.
It's true that 3.5 will have a much better experience, but I was thinking about how to make it even more automatic. We have a lot of information, and if we can autodetect the setup so the user doesn't have to think about it, that would be an even better experience. On the other hand, perhaps you're right that it's good enough and it's not a crucial thing to solve.
Perhaps it'd help if people shared their IPv6 migration strategies, especially if they work in larger organizations.