RFE: Team network interface support

Foreman has native support for bridged, bond, and VLAN interfaces. That includes provisioning on tagged VLAN and bond interfaces; everything is supposed to be passed into Anaconda correctly. It looks like Anaconda supports team options as well.

The purpose of this RFC is to gather opinions about introducing a new NIC type: Nic::Team. I’d expect these work items:

  • Get familiar with teaming (good resource: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/ch-configure_network_teaming)
  • Understand teaming modes and find a reasonable set of configuration options to support (e.g. runners, link watchers, or just a JSON string, similar to nmtui)
  • Introduce new class Nic::Team very similar to Nic::Bond
  • Prepare networking templates for team devices
  • Test provisioning over teamed interface (including Anaconda using teamed provision interface)
  • Figure out how facter reports team interfaces
    • facter 2.x
    • facter 3.x
    • ansible
  • Make sure NICs get parsed correctly from above fact clients
  • Implement API and hammer CLI
  • Testing, end-to-end demo, handover to QA
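For context on the second item, teamd configuration is plain JSON. A minimal active-backup team (one possible shape those configuration options could map to; interface names and priorities here are hypothetical) looks roughly like:

```json
{
    "device": "team0",
    "runner": {"name": "activebackup"},
    "link_watch": {"name": "ethtool"},
    "ports": {
        "eth1": {"prio": 100},
        "eth2": {"prio": 50}
    }
}
```

Exposing just the runner name, link watcher, and port list would cover the common cases, with a raw JSON field as an escape hatch for the rest.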

Are there any hidden dragons? Anybody interested in such a feature? Comments? Speak up.

At a bare minimum we should be able to allow the user to enable LACP, and specify ports without having to write out the JSON for it.
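For reference, the JSON we’d be sparing the user from is small but easy to get wrong. An LACP team in teamd terms looks roughly like this (runner options per the teamd.conf man page; port names hypothetical):

```json
{
    "runner": {
        "name": "lacp",
        "active": true,
        "tx_hash": ["eth", "ipv4", "ipv6"]
    },
    "link_watch": {"name": "ethtool"},
    "ports": {"eth1": {}, "eth2": {}}
}
```

A simple "LACP on/off" toggle plus a port picker could generate something like this behind the scenes.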

I think this would be a nice addition. You can use Feature #17033: Add Network Teaming configuration for hosts - Foreman for tracking; given the low issue number, this was asked quite a while ago :slight_smile:

I’d add hammer support to your list and probably remove facter 2. IIRC it’s only used in discovery these days, which will hopefully be updated to a newer version before this gets done.

Are there any foreman-discovery things we need to add if we want discovered host to be able to use team instead of bond?

That’s basically making sure that puppet facter 2.x parsing works correctly.

I would like to see support for teaming in Satellite/Foreman.

We have a setup that would greatly benefit from being able to provision/configure the OS via a team, since the end result requires that the servers be teamed.
Currently we are forced to provision each physical server via a single interface using PXE / auto deploy, and there are a number of additional setup steps required after the deploy finishes due to the lack of teaming support within Foreman/Satellite.

As we scale out to more and more physical nodes, this is becoming more of an issue. It also affects our ability to recover rapidly if we had a DR situation and needed to redeploy quickly.

Overall, if teaming is the replacement for bonding, why would it not make sense to support it?

Bumping this,

I have the same problems as NMI: I rely on teaming for my clusters managed by Foreman, and not having this support is making things a major time sink.

This has been the recommended way to bond from multiple vendors for a few years now. It’s a shame to see Puppet fall behind.

In Foreman 1.20 we got a nice PR to add initial Facter 3 support.

However, a quick test shows facter doesn’t report the members of a team:

# facter networking
{
  dhcp => "192.168.121.1",
  domain => "example.com",
  fqdn => "centos7-client-puppet5.example.com",
  hostname => "centos7-client-puppet5",
  interfaces => {
    eth0 => {
      bindings => [
        {
          address => "192.168.121.103",
          netmask => "255.255.255.0",
          network => "192.168.121.0"
        }
      ],
      bindings6 => [
        {
          address => "fe80::5054:ff:fe25:b4f0",
          netmask => "ffff:ffff:ffff:ffff::",
          network => "fe80::"
        }
      ],
      dhcp => "192.168.121.1",
      ip => "192.168.121.103",
      ip6 => "fe80::5054:ff:fe25:b4f0",
      mac => "52:54:00:25:b4:f0",
      mtu => 1500,
      netmask => "255.255.255.0",
      netmask6 => "ffff:ffff:ffff:ffff::",
      network => "192.168.121.0",
      network6 => "fe80::"
    },
    eth1 => {
      dhcp => "192.168.122.1",
      mac => "52:54:00:2d:65:6e",
      mtu => 1500
    },
    eth2 => {
      dhcp => "192.168.122.1",
      mac => "52:54:00:2d:65:6e",
      mtu => 1500
    },
    lo => {
      bindings => [
        {
          address => "127.0.0.1",
          netmask => "255.0.0.0",
          network => "127.0.0.0"
        }
      ],
      bindings6 => [
        {
          address => "::1",
          netmask => "ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff",
          network => "::1"
        }
      ],
      ip => "127.0.0.1",
      ip6 => "::1",
      mtu => 65536,
      netmask => "255.0.0.0",
      netmask6 => "ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff",
      network => "127.0.0.0",
      network6 => "::1"
    },
    team0 => {
      bindings => [
        {
          address => "192.168.122.235",
          netmask => "255.255.255.0",
          network => "192.168.122.0"
        }
      ],
      bindings6 => [
        {
          address => "fe80::b0ea:7526:9b83:2a82",
          netmask => "ffff:ffff:ffff:ffff::",
          network => "fe80::"
        }
      ],
      dhcp => "192.168.122.1",
      ip => "192.168.122.235",
      ip6 => "fe80::b0ea:7526:9b83:2a82",
      mac => "52:54:00:2d:65:6e",
      mtu => 1500,
      netmask => "255.255.255.0",
      netmask6 => "ffff:ffff:ffff:ffff::",
      network => "192.168.122.0",
      network6 => "fe80::"
    }
  },
  ip => "192.168.121.103",
  ip6 => "fe80::5054:ff:fe25:b4f0",
  mac => "52:54:00:25:b4:f0",
  mtu => 1500,
  netmask => "255.255.255.0",
  netmask6 => "ffff:ffff:ffff:ffff::",
  network => "192.168.121.0",
  network6 => "fe80::",
  primary => "eth0"
}

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:25:b4:f0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.103/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
       valid_lft 3138sec preferred_lft 3138sec
    inet6 fe80::5054:ff:fe25:b4f0/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master team0 state UP group default qlen 1000
    link/ether 52:54:00:2d:65:6e brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master team0 state UP group default qlen 1000
    link/ether 52:54:00:2d:65:6e brd ff:ff:ff:ff:ff:ff
7: team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:2d:65:6e brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.235/24 brd 192.168.122.255 scope global noprefixroute dynamic team0
       valid_lft 3504sec preferred_lft 3504sec
    inet6 fe80::b0ea:7526:9b83:2a82/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

# facter --version
3.11.4 (commit dc7babfd0ad4523a72d3366cd4f4322add9c3b4b)

Now I’m not familiar with teaming, so maybe I didn’t set it up right. However, I think facter has the same limitation with bonding (member interfaces aren’t reported), so perhaps we can treat them the same way.
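Until facter grows support for this, team membership is at least visible in the kernel’s view of the links: slaved ports carry `master team0` in `ip -o link` output (the same relationship shown in the `ip a` dump above). A rough sketch of extracting members from that output, using a captured sample in place of a live `ip -o link show` call:

```shell
# Sample `ip -o link` output from the host above; on a real host you'd use:
#   ip_link_output=$(ip -o link show)
ip_link_output='3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master team0 state UP
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master team0 state UP
7: team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP'

# Interface names are field 2 when splitting on ": "; keep only lines
# whose master is team0 (the team device itself has no "master" entry).
members=$(printf '%s\n' "$ip_link_output" | awk -F': ' '/master team0/ {print $2}')
echo "$members"
```

Something along these lines could back a custom fact until the networking fact itself reports members.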

At Config Management Camp I talked to a user who asked about this. Has there been any effort to proceed with this?

If teaming means bonding two interfaces for HA reasons, this already works. We use it all the time.

Foreman currently by default deploys bonds but AFAIK there’s no team support.

Ah, gotcha.

Back to your question - NO.