Foreman and network/subnet management

Hi all,

In a PR that adds MTU support to subnets in Foreman, I discussed with some of you Network support in Foreman and how, IMHO, Foreman could simplify host creation (select an IPv4 subnet and/or an IPv6 subnet vs. select a network) and open the door to more integration with external network providers (e.g. create a network in a network provider or compute resource when a network is created in Foreman, or import network subnets…)

I have never seen an implementation other than D (one network holds 0…1 IPv4 subnet and 0…1 IPv6 subnet, with at least one of them present, and those subnets are not used by any other network), except on some network routers (case C). Case D is the simplest implementation, but I don’t know whether any of the others are widely used by Foreman users.
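To make case D concrete, here is a minimal sketch of the constraints it implies, in plain Ruby with purely illustrative names (these are not Foreman’s actual models): a network holds at most one IPv4 and one IPv6 subnet, needs at least one of them, and no subnet may belong to two networks.

```ruby
# Hypothetical "case D" data model sketch — names are illustrative only.
Subnet = Struct.new(:name, :family, :network) do
  def taken?; !network.nil?; end
end

class Network
  attr_reader :name, :ipv4, :ipv6

  def initialize(name, ipv4: nil, ipv6: nil)
    # at least one address family must be present
    raise ArgumentError, 'need at least one subnet' if ipv4.nil? && ipv6.nil?
    # a subnet can belong to at most one network
    [ipv4, ipv6].compact.each do |s|
      raise ArgumentError, "subnet #{s.name} already in a network" if s.taken?
    end
    @name, @ipv4, @ipv6 = name, ipv4, ipv6
    ipv4&.network = self
    ipv6&.network = self
  end
end
```

A v4-only or v6-only network is still allowed; only the "no subnet at all" and "shared subnet" cases are rejected.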

Because the VLAN ID currently lives on the subnet model, if you remove the Network and ComputeResourceNetwork parts from the picture, I think Foreman is acting as:

  • Case D for virtual interfaces, because it will use the first VLAN ID it finds and ignore the others.
  • Case A for ‘normal’ interfaces, if users don’t care about the VLAN tag.

IMHO, case D is the right way to go if we decide to take that path. As a second step, it will help us automatically select the compute resource network, and it will simplify the host interface GUI.

Maybe this can be a good candidate for foreman 2.0 :wink:

Any comments/thoughts are welcome :slightly_smiling_face:



Since all the cases are quite similar, I would prefer D, because refactoring it into A–C later should not be a big issue. Compared to what we have today, it’s still a much more flexible integration.

If we decide to do this, it would be a good idea to write a network provider module for the Smart Proxy as a simple ISC DHCP configurator. This could solve one of the big misunderstandings: Foreman’s inability to manage subnets, and the conflicts with the Puppet installer. Integration with network providers should be part of the initial design.

What is the idea behind Network vs ComputeNetwork?

This sounds like a very good abstraction and an improvement. We had a similar discussion this week; without knowing your story, I was also thinking we need something like this. At first thought I’m leaning towards D.

Regarding the ComputeNetwork, you can think back to @TimoGoebel’s demo. He mentioned they often had issues where a user selected a subnet but then forgot to select the corresponding network in the compute resource. With this Network abstraction you would just select the correct Network, and you could only select the resources that are actually on that network. When there’s only one option, it can be selected automatically. Note that you can still have a situation where you want a v4-only or v6-only host on a dual-stack network.

Yes, if we can achieve that it will be really nice. I will try to think about how we can manage this. To work around the conflict with the Puppet installer, maybe we can add an option to include an external dhcpd config file where users can store subnets and other things that are not managed by Foreman. I haven’t found the time to make a PR for this yet.
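For reference, ISC dhcpd supports an `include` statement, so a Foreman-managed dhcpd.conf could pull in a user-maintained file for unmanaged subnets. The file paths below are only an example:

```
# /etc/dhcp/dhcpd.conf — managed by Foreman (example paths)
authoritative;

# user-maintained subnets and options, never touched by Foreman
include "/etc/dhcp/dhcpd.unmanaged.conf";
```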

The purpose of this is to map Foreman networks to compute resource networks. With such a mapping, we can automatically select the compute resource network on host creation. In my company, a lot of VM provisioning errors are due to users forgetting to select the corresponding compute resource network. That’s what I tried to solve last year for VLAN-based compute resources, but this approach will work for any kind of compute resource, and as much for WebUI users as for hammer/API users.

For this part we can choose between two approaches (maybe we can add a Foreman setting to switch from one to the other):

  1. In the host’s interface modal we show only the networks mapped to the current compute resource. In this case we can automatically choose the compute resource network and hide the compute resource network part of the form.

  2. In the host modal we show all networks; if the selected network has a mapping to the current compute resource, we select it automatically; if not, we show the compute resource networks so the user can select one manually, as is currently the case.

I think the first is a good default, and falling back to the second should not be too complicated.

Yes, this is a common issue for us too. Some JavaScript was added to auto-select the compute resource network based on VLAN for the VMware compute resource, and I extended it to oVirt some weeks ago (it can be extended to any VLAN-based compute resource), but this only works with a naming convention in the compute resource network name and doesn’t work for hammer/API users.

Yes, for this I wanted to add a network select in the host interface modal and replace the current IPv4 and IPv6 subnet selects with two checkboxes, “enable IPv4” and “enable IPv6”, so the user can choose to configure one or both of them (or “disable IPvX” checkboxes, depending on what we want as the default). These checkboxes will be disabled if the selected network doesn’t have a corresponding IPv4 or IPv6 subnet configured.
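A rough sketch of that checkbox rule (hypothetical data shape, not Foreman code): the form enables an address family only when the selected network carries a subnet of that family.

```ruby
# Illustrative helper: which family checkboxes should be enabled for a
# selected network. The :ipv4_subnet/:ipv6_subnet keys are hypothetical.
def family_checkboxes(network)
  {
    ipv4: !network[:ipv4_subnet].nil?, # greyed out when no IPv4 subnet exists
    ipv6: !network[:ipv6_subnet].nil?  # greyed out when no IPv6 subnet exists
  }
end
```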


I really like the ideas. Let’s do this. I’m currently very short on time, but will do everything I can to get this stuff in. Let’s do this one thing at a time, every small step towards the goal is a good one and highly appreciated.

Yeah, this is not a perfect solution. But better than nothing. :slight_smile:


We should avoid big-bang patches as much as we can. We need small steps and a good transition path. Any ideas on how to approach this?


Yes, my main concern is the migration path from existing installations to this new network management, but ATM I have failed to find a “small hops” path for it.

I think we have to provide a migration script that users can run in noop mode before migrating, to see whether some of their data in Foreman does not fit into model D. This migration script (rake task? Rails database migration?) should print which data can’t be automatically updated via the normal update process.

Off the top of my head, this script has to:

  • Check prerequisites on the subnet side: for all interfaces of each host, find all dual-stack hosts (IPv4 and IPv6) and collect every pair of IPv4 and IPv6 subnets used together on some host. Check whether a subnet appears in multiple IPv4/IPv6 combinations, or whether the VLAN or MTU values differ between paired subnets (or any difference in taxonomy, domains, or any other relation that will be transferred to the network model). If so, we can’t migrate automatically, and we print advice on the manual migration path.
  • Check prerequisites on the compute resource side: for all hosts linked to a compute resource (other than bare metal), check that the subnets used by the host’s interfaces are always linked to the same compute resource network on each compute resource. If that is not the case, we can’t migrate automatically, and we print advice on the manual migration path.

If there are no problems, we automatically migrate all current Foreman data; if one of the previous checks fails, we exit.
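The subnet-pairing part of the first check could look roughly like this in plain Ruby (the data shapes are illustrative, not Foreman’s schema): it flags any subnet that appears in more than one dual-stack combination, since such a subnet cannot be folded into a single network automatically.

```ruby
# Hypothetical noop-mode prerequisite check for the "case D" migration.
# Each interface is a hash with optional :ipv4 and :ipv6 subnet names.
def conflicting_subnets(interfaces)
  pairs = interfaces
          .select { |i| i[:ipv4] && i[:ipv6] } # dual-stack interfaces only
          .map    { |i| [i[:ipv4], i[:ipv6]] }
          .uniq
  v4_counts = pairs.map(&:first).tally
  v6_counts = pairs.map(&:last).tally
  # subnets seen in more than one pairing block automatic migration
  (v4_counts.select { |_, c| c > 1 }.keys +
   v6_counts.select { |_, c| c > 1 }.keys)
end
```

The VLAN/MTU/taxonomy comparisons from the bullet above would be additional checks of the same shape, one per attribute moved to the network model.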

For the network/subnet part:

  • Create a new network encapsulating each pair of subnets used together on dual-stack hosts.
  • For each remaining subnet, automatically create a network and copy the MTU and VLAN values into it.

For the hosts part:

  • For all interfaces of each host, link the host’s interface to the network that encapsulates its current subnet(s)
  • Remove the direct references to subnets

For the compute resource part:

  • For all interfaces of each host, link the compute resource network to the Foreman network of the host’s interface

Once all data is migrated correctly, enforce model D on the data model.
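Assuming the prerequisite checks passed, the network-creation step could be sketched like this (plain Ruby with illustrative shapes): one network per dual-stack subnet pair, then one per remaining standalone subnet, copying VLAN and MTU.

```ruby
# Hypothetical migration sketch: subnets are hashes, dual_stack_pairs is
# the list of [ipv4_subnet, ipv6_subnet] pairs found on dual-stack hosts.
def build_networks(subnets, dual_stack_pairs)
  networks = dual_stack_pairs.map do |v4, v6|
    # prerequisite check already guaranteed v4/v6 agree on VLAN and MTU
    { name: "#{v4[:name]}+#{v6[:name]}", ipv4: v4, ipv6: v6,
      vlan: v4[:vlan], mtu: v4[:mtu] }
  end
  used = dual_stack_pairs.flatten
  subnets.reject { |s| used.include?(s) }.each do |s|
    key = s[:family] == 4 ? :ipv4 : :ipv6
    networks << { name: s[:name], key => s, vlan: s[:vlan], mtu: s[:mtu] }
  end
  networks
end
```

The host and compute resource steps would then walk each interface, replace its subnet references with the network built here, and record the network-to-compute-resource-network mapping.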

Some things I don’t know how to handle here are:

  • Smart proxies: can we move the smart proxy associations (TFTP, DHCP, discovery, remote execution…) from the subnet to the network? IMHO we should move them to the network, but maybe I don’t have all the use cases in mind.
  • Parameters: should we keep these on subnets or transfer them to the network?

Is there any place to discuss the design of this, or do we continue here?

What’s the definition of an interface? Is it a physical one which can be in multiple VLANs/networks at the same time or is it a logical one that’s only in one network?

How does the design sound to @sean797 with regard to the multihoming/HA plans?

Sorry, I don’t see where you want to go with this question; if you have an example to illustrate your point, please share it :slight_smile:. For me, an interface (physical or virtual) is in exactly one network, or not configured (and so in no network), but you can set up a virtual interface (VLAN-tagged iface) on top of it that will be in a different network.

If we review all the interface types available in Foreman:

  • Interface: can be attached to an untagged (or “native” on a trunk) network, or to no network (on the switch side: a port in access mode, a trunk with a “native” VLAN, or a bare trunk)
  • BMC: untagged or tagged network, or no network (if not configured)
  • Bond: same as a bare interface
  • Bridge: same as an interface
  • Virtual interface: a tagged virtual interface on top of another interface; can be attached to one network

So if I didn’t miss a use case, this approach should be OK.
