Detection of changes in host network configuration by foreman

Hi all,

I’m trying to figure out how foreman detects and updates the available network interfaces and network configuration on a host.

I’m using a setup with Foreman discovery and Salt, and I noticed that whenever the network configuration is changed by Salt, sometimes Foreman shows the changes in the web interface and sometimes it doesn’t. I can’t seem to figure out what the trigger for this is.

This applies to the IP address of a host, but also to additional network interfaces like bonded or virtual interfaces. Sometimes Foreman seems to detect the changes, but not every time.

Does anyone know what the trigger is for Foreman to update this information, so that I can investigate a little deeper? Can I trigger it manually? Is it a bug that the information isn’t updated in all cases?

It seems to apply to at least foreman 1.16 and older.

I think it depends on whether you have a compute resource, what that compute resource is set to, and whether your host is associated with that compute resource. For me, that information follows whatever I have set in my hypervisor, so I assume it is being populated by fog.

There is a setting called “ignore_puppet_facts_for_provisioning” and if that’s not set, Foreman will re-create the host’s NICs on every fact upload. The key method is host/base#set_interfaces and it only recognizes regular interfaces and IPMI. We are limited to what puppet/salt will send, so don’t expect much.
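
Roughly, the flow looks like this (just an illustrative sketch, not the actual Foreman code; the parser helper here is made up):

    # Sketch only: every fact upload re-parses the interface facts and rewrites
    # the host's NICs, unless ignore_puppet_facts_for_provisioning is enabled.
    def import_host_facts(host, facts)
      parser = build_fact_parser(facts)  # hypothetical helper wrapping the fact parser
      host.set_interfaces(parser) unless Setting[:ignore_puppet_facts_for_provisioning]
    end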

There is a PR in the discovery plugin to recognize bonds using LLDP neighbors, which is actually a custom fact. We are currently discussing whether this should live in Foreman core rather than in the discovery plugin.

Thanks for the answers.

At this point I’m not using a compute resource. Also, ignore_puppet_facts_for_provisioning is set to No (the default, I think?).

Actually, investigating a little more makes me think my issue is closer to the foreman_salt plugin and/or Salt itself.

To explain my scenario a little more: during provisioning only DHCP is used, which works as expected. After provisioning by Foreman, Salt takes over and configures the host with a static IP address configuration.
It seems the updated IP address configuration is not always reflected in the facts known by Foreman (Foreman doesn’t always notice the new IP addresses). I can’t really put my finger on it yet, as it doesn’t seem that easy to reproduce.

There is another reason why I’m trying to figure out how this works: when the Salt configuration creates a bond and a VLAN-based interface on top of the bond, it seems that one of the physical network interfaces is lost in the administration of Foreman. Foreman simply isn’t aware anymore that the interface exists. I’m trying to understand why that happens and whether it is an issue of Salt, Foreman or the foreman_salt plugin.

Could be Salt, because this is known to work with Puppet. @mmoll or @stbenjam might know more.

Thanks, I’ll investigate that a little further and maybe post it in the Salt community.

To rewind a little to the point where the network interfaces do not always get updated in Foreman, I just noticed the following in the foreman_salt plugin:

At the initial deployment of the host, both network interfaces default to DHCP. Debugging showed that the foreman_salt plugin is parsing the following facts to be translated:

    ip_interfaces::eth0::0 => 192.168.128.214
    ip_interfaces::eth0::1 => fe80::5054:ff:fe26:8058
    ip_interfaces::eth1::0 => 192.168.128.215
    ip_interfaces::eth1::1 => fe80::70a:b7c2:63a8:40e0

After reconfiguring the network with a bond with a static IP address (with eth0 and eth1 as slaves), foreman_salt only seems to process the following facts:

    ip_interfaces::bond0::0 => 192.168.128.3
    ip_interfaces::bond0::1 => fe80::5054:ff:fe26:8058

Both the eth0 and eth1 interfaces shown in the first code block are now slaves of the bond0 interface and have no IP address anymore. Therefore the facts seem to be correct.

However, foreman_salt does not update the physical interfaces eth0 and eth1 and keeps thinking they still have an IP address. I think that may be an issue in the foreman_salt plugin?

To make things even worse, when I configure bond0 as a DHCP interface, it automatically obtains the IP address that eth0 had before. However, since Foreman isn’t aware that eth0 doesn’t have that IP address anymore, it refuses to update the NIC administration:

    2018-06-20 11:36:24 fce9f34a [app] [W] Saving bond0 NIC for host  failed, skipping because:
    2018-06-20 11:36:24 fce9f34a [app] [W]  Ip has already been taken

I think foreman_salt should take into account that there can be network interfaces that do not have an IP address anymore, and clean up the Foreman administration accordingly.

I think that’s a more general problem with the fact importer in Foreman itself - any cfgmgmt plugin will likely not be cleaning up old interfaces. @Marek_Hulan can probably confirm, but I think he’s away at the moment.

I’m just setting up a new Foreman & Salt stack at home, so I’ll see if I can test this and report back - might be a few days though :slight_smile:

That sounds like a logical explanation; I hope you can give some insight into the issue.

One thing I’d like to add: I don’t know how the other plugins are handling this, but the foreman_salt plugin seems to ignore interfaces that do not have an IP address. That could mean that Foreman is unable to distinguish interfaces that no longer have an IP address from interfaces that actually no longer exist. That could be of interest when cleaning up.

This sounds like a problem with the Salt client sending wrong “facts” (these are called “pillars”, I think?). Our parser code simply takes all facts, removes those which are missing, and adds or updates the others. Then it performs parsing. Can you test how Salt reports these NICs locally? In Puppet this would be the facter --json command.
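
In pseudo-Ruby, the diffing step behaves roughly like this (a simplified sketch of the behavior described above, not the actual importer code):

    # Simplified sketch: facts that are no longer reported get removed,
    # the rest are added or updated, and parsing runs on the result.
    def diff_facts(stored, incoming)
      deleted = stored.keys - incoming.keys
      added_or_changed = incoming.reject { |name, value| stored[name] == value }
      [deleted, added_or_changed]
    end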

Grains, actually. Pillar is the storage system for inventory data (and can supply additional grains, much like external facts). Here ya go (edited to just the IP stuff):

[greg@opal]$ sudo salt-call grains.items --local
local:
    ----------
    dns:
        ----------
        domain:
        ip4_nameservers:
            - 127.0.0.1
        ip6_nameservers:
        nameservers:
            - 127.0.0.1
        options:
        search:
        sortlist:
    fqdn_ip4:
        - 127.0.0.1
    fqdn_ip6:
    host:
        opal
    hwaddr_interfaces:
        ----------
        enp0s25:
            50:7b:9d:27:a7:39
        lo:
            00:00:00:00:00:00
        wlp3s0:
            4c:34:88:54:d4:95
    ip4_gw:
        10.128.128.128
    ip4_interfaces:
        ----------
        enp0s25:
        lo:
            - 127.0.0.1
        wlp3s0:
            - 10.244.221.224
    ip6_gw:
        False
    ip6_interfaces:
        ----------
        enp0s25:
        lo:
            - ::1
        wlp3s0:
            - fe80::5f95:32f5:e7db:46a3
    ip_gw:
        True
    ip_interfaces:
        ----------
        enp0s25:
        lo:
            - 127.0.0.1
            - ::1
        wlp3s0:
            - 10.244.221.224
            - fe80::5f95:32f5:e7db:46a3
    ipv4:
        - 10.244.221.224
        - 127.0.0.1
    ipv6:
        - ::1
        - fe80::5f95:32f5:e7db:46a3
    saltversion:
        2018.3.0
    virtual:
        physical

Yeah, we need to see output with bonded interfaces tho. This looks like a laptop with no bonds… :slight_smile:

This is the output of a bonded setup:

    fqdn:
        testserver.test.localdomain
    fqdn_ip4:
        - 192.168.128.3
    fqdn_ip6:
    host:
        testserver
    hwaddr_interfaces:
        ----------
        bond0:
            52:54:00:26:80:58
        eth0:
            52:54:00:26:80:58
        eth1:
            52:54:00:26:80:58
        lo:
            00:00:00:00:00:00
    id:
        testserver.test.localdomain
    ip4_interfaces:
        ----------
        bond0:
            - 192.168.128.214
        eth0:
        eth1:
        lo:
            - 127.0.0.1
    ip6_interfaces:
        ----------
        bond0:
            - fe80::5054:ff:fe26:8058
        eth0:
        eth1:
        lo:
            - ::1
    ip_interfaces:
        ----------
        bond0:
            - 192.168.128.214
            - fe80::5054:ff:fe26:8058
        eth0:
        eth1:
        lo:
            - 127.0.0.1
            - ::1
    ipv4:
        - 127.0.0.1
        - 192.168.128.214
    ipv6:
        - ::1
        - fe80::5054:ff:fe26:8058

I noticed in foreman_salt (app/services/foreman_salt/fact_parser.rb) that it looks at the key ‘ip_interfaces’ to update the interfaces. However, those empty eth0 and eth1 interface definitions don’t seem to be included.
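
To illustrate why, here is a rough sketch (not the actual foreman_salt parser code) of what happens if interface names are derived only from fact keys starting with ip_interfaces:: — interfaces without addresses never produce such keys, so they never show up:

    # Rough sketch, not the real parser: interface names are recovered from
    # flattened fact keys like "ip_interfaces::bond0::0". An interface with an
    # empty address list produces no keys, so it is invisible here.
    def interfaces_from_facts(facts)
      facts.keys
           .select { |key| key.start_with?("ip_interfaces::") }
           .map    { |key| key.split("::")[1] }
           .uniq
    end

    interfaces_from_facts(
      "ip_interfaces::bond0::0" => "192.168.128.3",
      "ip_interfaces::bond0::1" => "fe80::5054:ff:fe26:8058",
      "ip_interfaces::lo::0"    => "127.0.0.1"
    )
    # => ["bond0", "lo"]   (eth0 and eth1 are gone)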

Hmm, can you try to turn on debug mode and paste the log snippet which does the parsing? We have a couple of debug messages there that might be useful.

Can you get me this output in JSON format? There is a test, test/unit/grains_importer_test.rb, in the Salt plugin codebase; what I can do is add this JSON and see what the test output looks like.

salt.json.tar.gz (1.8 KB)
foreman.log.tar.gz (1.5 KB)

See the attached Foreman log and text file with the JSON output of Salt. I’m not sure if that matches what you need, but please let me know if you need more.

So nothing really useful there; the unit tests won’t run, so I can’t test the JSON. This needs someone who is aware of how this works. @stbenjam or @mmoll - I believe the problem is that while extracting the grain data, the NIC parser gets incorrect info.

I discovered that it is actually the ‘foreman-node’ command from ‘smart_proxy_salt’ which retrieves the salt grains and transforms these into foreman facts.

The data is passed to the plainify method here: https://github.com/theforeman/smart_proxy_salt/blob/master/bin/foreman-node#L102

The following data goes into that method at a certain point:

{"eth1"=>[], "lo"=>["127.0.0.1", "::1"], "bond0"=>["192.168.128.3", "fe80::5054:ff:fe26:8058"], "eth0"=>[]}

And the following comes out; note the empty arrays at the beginning and end:

[[], [{"ip_interfaces::lo::0"=>"127.0.0.1"}, {"ip_interfaces::lo::1"=>"::1"}], [{"ip_interfaces::bond0::0"=>"192.168.128.3"}, {"ip_interfaces::bond0::1"=>"fe80::5054:ff:fe26:8058"}], []]

That information is then transformed into JSON and posted to the Foreman API at /api/hosts/facts.

It seems the knowledge of the interfaces eth0 and eth1 is lost here.
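
To spell out the flattening, here is a rough reimplementation sketch (not the actual plainify code from foreman-node) that reproduces the output above:

    # Sketch of the flattening observed above: each address becomes a single-key
    # hash "ip_interfaces::<iface>::<index>", so an interface with no addresses
    # contributes only an empty array and therefore no fact at all.
    def plainify_sketch(prefix, interfaces)
      interfaces.map do |iface, addresses|
        addresses.each_with_index.map do |addr, idx|
          { "#{prefix}::#{iface}::#{idx}" => addr }
        end
      end
    end

    plainify_sketch("ip_interfaces",
                    "eth1" => [], "lo" => ["127.0.0.1", "::1"],
                    "bond0" => ["192.168.128.3", "fe80::5054:ff:fe26:8058"], "eth0" => [])
    # => [[], [{"ip_interfaces::lo::0"=>"127.0.0.1"}, {"ip_interfaces::lo::1"=>"::1"}],
    #     [{"ip_interfaces::bond0::0"=>"192.168.128.3"}, ...], []]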

The question is, what would be the correct behavior for foreman to acknowledge that the interface doesn’t have an IP address anymore?

I’m not sure if my previous comment is even that relevant. It seems that there are related scenarios that are also troublesome.

E.g. if I completely remove a previously existing bond0 interface, the Foreman interface will still show it with all of its last known properties. That is probably not fixable without some kind of cleanup mechanism on the foreman_salt plugin side.

I suspect that data is sourced from the python script at https://github.com/theforeman/smart_proxy_salt/blob/master/bin/foreman-node#L64

If you just run that script manually, do you see eth0 in the output? If that’s the case, you would probably still see those interfaces in the output from Salt - e.g. “salt-call grains.items” - and you would want to use a Salt event to trigger a refresh of your grains. foreman-node may just need to do some scrubbing of entries to better populate that data…

Yes, both salt-call grains.items and the python code embedded in foreman-node contain the interfaces eth0 and eth1.

The following is the part that the python code produces:

    "ip4_interfaces": {
      "eth1": [], 
      "lo": [
        "127.0.0.1"
      ], 
      "bond0": [
        "192.168.128.214"
      ], 
      "eth0": []
    }, 

So the interfaces are still present here, but contain an empty array.

That piece of output is then processed by foreman-node, and foreman-node skips those interfaces (specifically in its ‘plainify’ function).

I tried to change foreman-node such that it would pass the empty interface definitions to Foreman; however, that didn’t work as expected, as Foreman then had the same interface twice in its administration.
Part of the cause could be that the MAC address of the interfaces changes when creating a bond, such that Foreman isn’t able to identify the interfaces anymore.

Some cleanup mechanism might also help to rectify that. But at this point, I’m not sufficiently aware of what Foreman needs to get it right.

I added some debug statements to investigate further. There seems to be a cascading effect when considering the full picture.

To paint the full picture, I started my scenario again with the bonding+vlan setup.

Problem 1.
First of all, the host is deployed with two network interfaces, say eth0 and eth1. Afterwards Salt changes the configuration to a bond+VLAN setup, say bond0 and bond0.128.

In that case both the bond and the VLAN obtain the MAC address of one of the slave interfaces in the bond. What is important is that the VLAN interface also obtains the same IP address that was previously assigned to eth0.

The new interface information for bond0 and bond0.128 is sent to Foreman. The information is handled in the Foreman core set_interfaces method. That method replaces the interface eth0 with the information of bond0.128. Ergo, eth0 disappears.

Debugging showed that the variables in that method hold the following:

    iface.identifier = bond0.128
    iface.identifier_was = eth0

Now that could be just fine iff the interface information of eth0 is properly updated by foreman_salt. But as we saw in the previous posts, that is not the case.

Question 1: Is it correct behavior that, in Foreman core, the physical interface is removed in favor of a virtual interface?
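
To make Problem 1 concrete, here is a hedged sketch (not Foreman’s actual set_interfaces code) of the kind of matching that would produce exactly this replacement:

    # Hypothetical matching sketch: if an incoming interface is matched to an
    # existing NIC record primarily by MAC address, then bond0.128 (which
    # inherits eth0's MAC) claims the record that used to be eth0, and the
    # identifier is rewritten from "eth0" to "bond0.128".
    def find_or_build_nic(host, incoming)
      nic = host.interfaces.find { |i| i.mac == incoming[:mac] } ||
            host.interfaces.find { |i| i.identifier == incoming[:identifier] } ||
            host.interfaces.build
      nic.attributes = incoming
      nic
    end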

Problem 2.
If problem 1 is correct behavior, fixing the missing interface definitions in the facts in foreman_salt for eth0 and eth1 (as described in the preceding posts) could actually fix (or work around) the issue. However, when the information of eth0 and eth1 is properly sent to Foreman, they are added as new interfaces, resulting in duplicate entries.

Question 2: Is it correct behavior that, when an interface is updated in Foreman with the same identifier but a changed MAC address and IP address, it is added as a new interface instead of being merged with the existing one with the same identifier?


I’d love to hear your thoughts on this.

I’m also interested to know how this process works with Puppet; is it really working without issues? Or could it be that the bond+VLAN issue has never been discovered because it seems to require a very specific scenario (the VLAN’s IP address and MAC address matching those of an existing interface)? Furthermore, it is difficult to notice unless subsequent Salt/Puppet runs depend on that specific information from Foreman.