PXE-Less network defaulting to /24

Hi all,

I am seeing issues with certain /16 networks when provisioning with the PXE-less discovery process. For example, the first part of the subnet works fine, but once an address lands in a higher third octet, the Foreman server is no longer reachable.

I booted all the way into the discovery image and saw that instead of a /16 (which is how the network is actually split up), the image defaults to a /24. I went to tty3, logged into the shell, changed the network (ip addr add <address>/16 dev ens192) and removed the old /24 IP config… I was then able to talk to the Foreman server again. But once kexec kicked off it went back to /24, I lost communication, and the build failed.
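The on-link behavior behind this can be illustrated with a small sketch (addresses here are hypothetical, not from my network): with a /24 prefix the client treats only hosts sharing the first three octets as directly reachable, so a Foreman server elsewhere in the /16 becomes unreachable.

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Check whether two addresses fall in the same network for a given prefix.
in_same_net() {  # usage: in_same_net IP1 IP2 PREFIX
  local mask=$(( 0xFFFFFFFF << (32 - $3) & 0xFFFFFFFF ))
  local a b
  a=$(ip_to_int "$1")
  b=$(ip_to_int "$2")
  [ $(( a & mask )) -eq $(( b & mask )) ]
}

# Hypothetical Foreman server 10.20.1.5, client 10.20.200.9:
in_same_net 10.20.1.5 10.20.200.9 24 && echo same || echo different  # different
in_same_net 10.20.1.5 10.20.200.9 16 && echo same || echo different  # same
```

With the wrong /24 the client would try to send traffic for the server via a gateway instead of directly, which matches the loss of communication described above.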

I feel that we should be able to define the subnet mask, or at least have the option to change it from the default /24 when needed.


Hey there,

I am assuming DHCP-less, PXE-less mode. When you enter the IP address in CIDR format, discovery calls a helper script which prepares the network configuration and then starts the connection:

The script which deploys the NM connection configuration named “primary” is here:

It looks like the script stores the whole CIDR string in the “address1” ini key, but according to the NM documentation this should be written as pairs of “address” and “prefix” values.


The NM INI (keyfile) format is largely undocumented, since people are expected to use its D-Bus API, but my guess is the keys should be named “address1” and “prefix1”. Someone needs to create a connection with a static IP address and prefix and read the resulting file to verify.
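For reference, a static connection keyfile might look roughly like the sketch below. This is an assumption to be verified, not a confirmed format: the interface name and address are placeholders, and whether the prefix belongs inside “address1” in CIDR form or in a separate key is exactly the open question.

```ini
# Hypothetical keyfile sketch for a static "primary" connection.
[connection]
id=primary
type=ethernet
interface-name=ens192

[ipv4]
method=manual
# Open question: CIDR form here, or a separate prefix key?
address1=10.20.30.40/16
```

The quickest way to settle it is to let NetworkManager create such a connection itself and read back the file it writes.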

Anyway, to keep this short: you have found a bug. So far I have only been testing PXE-less in C-class (/24) networks, and it is hard to believe this went unnoticed all these years. Please report an issue. This is an easy fix (break the CIDR into IP and mask), and it is just a shell script, so feel free to send a PR. I can test it quickly. Nice one.
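The split itself is a couple of lines of shell parameter expansion. A sketch of what the fix might do (variable names are mine, and the example value is hypothetical):

```shell
# Example CIDR string as entered at the discovery prompt.
cidr="10.20.30.40/16"

# Split into address and prefix length.
ip="${cidr%/*}"       # -> 10.20.30.40
prefix="${cidr#*/}"   # -> 16

# If a dotted-quad netmask is needed, derive it from the prefix.
mask=$(( 0xFFFFFFFF << (32 - prefix) & 0xFFFFFFFF ))
netmask=$(printf '%d.%d.%d.%d' \
  $(( mask >> 24 & 255 )) $(( mask >> 16 & 255 )) \
  $(( mask >> 8 & 255 ))  $(( mask & 255 )))

echo "$ip $prefix $netmask"   # prints: 10.20.30.40 16 255.255.0.0
```

Whichever form the script ends up writing (prefix length or dotted netmask), the point is the same: stop handing the unsplit CIDR string to a key that expects a plain address.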

I wanted to test once more and just got the opportunity to do so. This time it seemed to work, so for now I am holding off on opening a bug report. Consider this resolved for now.