iPXE VM client attempting to use an IPv6 route when IPv6 is not supported in our network?

Problem:
Foreman 3.8.0 with Katello 4.10, integrated with IPA, oVirt (RHV-M) and Infoblox. A CentOS 7 VM is provisioned in IPA and created in RHV-M, but the VM times out while attempting to boot, and Ctrl+B shows the node trying to use an IPv6 route.

Expected outcome:
iPXE should use an IPv4 route.

Foreman and Proxy versions:
Foreman 3.8.0 with foreman-proxy-3.8.0-1.el8.noarch
Foreman and Proxy plugin versions:
Foreman 3.8.0 with Katello-4.10

Distribution and version:
Red Hat Enterprise Linux release 8.9 (Ootpa)

Other relevant data:
Attempts at bare-metal provisioning also fail with an "NBP 0 byte download" error, even though curl or a browser can pull the same file from the URL, i.e.

The console echoes:
http://pulp3.domain.net:8000/httpboot/grub2/grubx64.efi

Start PXE over IPV4 on Mac XXXXXXXXXX
Station address is 100.110.67.250

Server ip address is 100.110.27.147
NBP Filename is http://pulp3.domain.net:8000/httpboot/grub2/grubx64.efi
NBP filesize is 0 bytes.
PXE-E23: Client Received TFTP error from server.
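To rule out the server side, it can help to confirm what size the HTTP endpoint actually serves for that NBP. A minimal sketch; `nbp_size` is a hypothetical helper name, and the example URL is the one from the console output above:

```shell
# nbp_size URL -> prints the byte size of the file the server actually serves.
# Hypothetical helper for illustration; works with any curl-supported URL.
nbp_size() {
  curl -fso /tmp/nbp_check "$1" && stat -c '%s' /tmp/nbp_check
}

# Example, using the NBP URL from the console output:
# nbp_size http://pulp3.domain.net:8000/httpboot/grub2/grubx64.efi
```

If this reports a non-zero size from the client's subnet while the firmware still sees 0 bytes, the problem is more likely on the PXE/firmware side than the file itself.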

If the files shipped with my Foreman release are bad, how do I fetch nightly-build replacements?

I believe I saw this somewhere before but have not been able to find it again. If there is a setting that tells iPXE not to use IPv6, would that go in the iPXE template?
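Something like the following in an iPXE template might be what I am looking for. This is a sketch only; the configurator names ("dhcp" for DHCPv4 vs. "ipv6" for SLAAC/DHCPv6) should be verified against the iPXE documentation, and it assumes a build that includes the `ifconf` command. Alternatively, IPv6 can be compiled out of iPXE entirely via `NET_PROTO_IPV6` in `config/general.h`:

```
#!ipxe
# Sketch: run only the IPv4 DHCP configurator on net0, skipping
# SLAAC/DHCPv6. Verify the configurator name against your iPXE build.
ifconf --configurator dhcp net0
```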

I assume this config is also trying to do discovery, which is for unknown hosts, but I don't think that is what we want: I want to install known hosts defined by us. Help would be appreciated, as our old system is effectively dead.

Aug 19 18:04:56 pulp3 systemd[1]: Received notify message exceeded maximum size. Ignoring.
Aug 19 18:04:57 pulp3 in.tftpd[422152]: RRQ from ::ffff:100.110.25.26 filename boot/fdi-image/vmlinuz0
Aug 19 18:04:57 pulp3 in.tftpd[422152]: Client ::ffff:100.110.25.26 File not found boot/fdi-image/vmlinuz0
Aug 19 18:04:57 pulp3 systemd[1]: Received notify message exceeded maximum size. Ignoring.
Aug 19 18:05:01 pulp3 systemd[1]: message repeated 4 times: [Received notify message exceeded maximum size. Ignoring.]
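The tftpd lines above show the client requesting the discovery image (`boot/fdi-image/vmlinuz0`), which is not present. A quick hedged check; the TFTP root path is assumed to be the usual Red Hat default `/var/lib/tftpboot`, and `check_fdi` is a hypothetical helper name:

```shell
# check_fdi TFTP_ROOT -> "present" if the discovery kernel exists, else "missing".
check_fdi() {
  if [ -e "$1/boot/fdi-image/vmlinuz0" ]; then
    echo "present"
  else
    echo "missing"
  fi
}

# Typical usage on the Smart Proxy (path is an assumption):
# check_fdi /var/lib/tftpboot
```

If it reports "missing", either installing the foreman-discovery-image package or steering unknown hosts away from discovery would avoid the "File not found" errors.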

This might just be iPXE defaults reacting to new gear in our network; hardware provisioning on some older hardware works.