UEFI provisioning with HTTP for bare metal provisioning

Hello,

Until now, our company has been provisioning bare-metal hosts (with a discovery image) using PXE boot in “legacy” BIOS mode (with download via HTTP), which worked fine.
Unfortunately, more and more laptops are dropping legacy boot in favor of UEFI.

I was able to move from PXE → UEFI (grubx64.efi image and Grub2 templates), but now everything is slow, as both the FDI and, later, the kernel are downloaded via TFTP.

How can I use the HTTP protocol for the download in those two steps (it was possible with PXE, so I assume it is possible here too)? Which templates need to change, and how?

Foreman version is 2.1.

If there is some more data needed, please let me know.

Thanks in advance!

Which provisioning template do you use? Did you look into the template?

Looking at “Kickstart default PXEGrub2”, you would have to enable httpboot on the smart proxy with foreman-installer and set an httpboot proxy on the subnet; it should then add menu entries for HTTP and HTTPS boot.


https://docs.theforeman.org/nightly/Provisioning_Guide/index-foreman-el.html#creating-hosts-with-uefi-http-boot-provisioning_provisioning

I also suggest extracting grubx64.efi from Fedora Rawhide and using the most up-to-date version; the EL7 version is a bit old and has some bugs.

I did everything as mentioned above, but I am getting the same problem as the user in this thread: Httpboot provisoning files missing - #5 by lzap. Although I have no typo in settings.yml, I am getting “error: File not found” and “error: you need to load the kernel first”.

Thanks!

We need more info: file not found when? Where? Anything in the logs? TFTP logs or HTTP logs? If you use the HTTP Boot feature, check the smart proxy logs for more info.

Sorry for the late reply and the lack of valuable info in my previous posts.

Below you can find a bit more data:

# foreman-installer --scenario foreman \
    --foreman-proxy-httpboot true \
    --foreman-proxy-http true \
    --foreman-proxy-tftp true

  • HTTPBoot Capsule added to the “provisioning network”
  • all ports between the “provisioning network” and the server are open
  • the /var/lib/tftpboot/grub2/grub.cfg file looks as follows:
menuentry 'Foreman Discovery Image' --id discovery {
  linuxefi boot/fdi-image/vmlinuz0 rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 nomodeset proxy.url=https://ip_address proxy.type=foreman BOOTIF=01-$mac
  initrdefi boot/fdi-image/initrd0.img
}

menuentry 'Foreman Discovery Image custom' --id custom_discovery {
  linuxefi (http,ip_address)/pulp/isos/company_name/Library/custom/Foreman_Discovery_Image/Foreman_Discovery_Image_repo/foreman-discovery-image-3.7.5.iso-vmlinuz ip=dhcp rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 nokaslr nomodeset proxy.url=https://ip_address proxy.type=foreman BOOTIF=01-$mac
  initrdefi (http,ip_address)/pulp/isos/company_name/Library/custom/Foreman_Discovery_Image/Foreman_Discovery_Image_repo/foreman-discovery-image-3.7.5.iso-img fdi.countdown=10
}
  • the first option (with id → discovery) works fine via TFTP and downloads the image (but due to the TFTP protocol this takes a lot of time)
  • wget works fine and downloads the images without any problem
  • once the host is booted, the proper boot menu (from grub.cfg) is shown; when choosing “custom_discovery”, the download of the files starts, but after a few seconds I get “error: ../../grub-core/net/tftp.c:255:File not found.” (see screenshot attached) and then the laptop gets a kernel panic (also screenshot attached).
  • in /var/log/httpd/error_log the following can be found:
[ 2021-10-07 16:30:41.5074 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr] #<Thread:0x00007f7f259b6b70@/opt/theforeman/tfm/root/usr/share/gems/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:471 run> terminated with exception (report_on_exception is true):
[ 2021-10-07 16:30:43.1631 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr] /opt/theforeman/tfm/root/usr/share/gems/gems/stomp-1.4.9/lib/connection/heartbeats.rb:100:in `sleep'
[ 2021-10-07 16:30:44.5600 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr] : time interval must be positive (ArgumentError)
[ 2021-10-07 16:30:44.5601 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr]       from /opt/theforeman/tfm/root/usr/share/gems/gems/stomp-1.4.9/lib/connection/heartbeats.rb:100:in `block in _start_send_ticker'
[ 2021-10-07 16:30:44.5601 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr]       from /opt/theforeman/tfm/root/usr/share/gems/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'

next try

[ 2021-10-07 16:35:37.5453 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr] warning: broker sent EOF, and connection not reliable
[ 2021-10-07 16:35:37.5454 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr] #<Thread:0x00007f7f259b61c0@/opt/theforeman/tfm/root/usr/share/gems/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:471 run> terminated with exception (report_on_exception is true):
[ 2021-10-07 16:35:37.0176 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr] /opt/theforeman/tfm/root/usr/share/gems/gems/stomp-1.4.9/lib/client/utils.rb:198:in `block (2 levels) in start_listeners': Received message is nil, and connection not reliable (Stomp::Error::NilMessageError)
[ 2021-10-07 16:35:37.0176 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr]       from /opt/theforeman/tfm/root/usr/share/gems/gems/stomp-1.4.9/lib/client/utils.rb:194:in `loop'
[ 2021-10-07 16:35:37.0177 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr]       from /opt/theforeman/tfm/root/usr/share/gems/gems/stomp-1.4.9/lib/client/utils.rb:194:in `block in start_listeners'
[ 2021-10-07 16:35:37.0177 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr]       from /opt/theforeman/tfm/root/usr/share/gems/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'

another one:

[ 2021-10-07 16:44:07.9582 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr] warning: broker sent EOF, and connection not reliable
[ 2021-10-07 16:44:07.9585 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr] #<Thread:0x000000001d49f530@/opt/theforeman/tfm/root/usr/share/gems/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:471 run> terminated with exception (report_on_exception is true):
[ 2021-10-07 16:44:07.9585 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr] /opt/theforeman/tfm/root/usr/share/gems/gems/stomp-1.4.9/lib/client/utils.rb:198:in `block (2 levels) in start_listeners': Received message is nil, and connection not reliable (Stomp::Error::NilMessageError)
[ 2021-10-07 16:44:07.9585 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr]       from /opt/theforeman/tfm/root/usr/share/gems/gems/stomp-1.4.9/lib/client/utils.rb:194:in `loop'
[ 2021-10-07 16:44:07.9585 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr]       from /opt/theforeman/tfm/root/usr/share/gems/gems/stomp-1.4.9/lib/client/utils.rb:194:in `block in start_listeners'
[ 2021-10-07 16:44:07.9585 13001/7f139af46700 Pool2/Implementation.cpp:1274 ]: [App 13334 stderr]       from /opt/theforeman/tfm/root/usr/share/gems/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'

and last one:

[Thu Oct 07 16:45:17.540823 2021] [mpm_prefork:notice] [pid 12820] AH00170: caught SIGWINCH, shutting down gracefully


[ 2021-10-07 16:45:23.6915 9615/7fb58edae780 agents/Watchdog/Main.cpp:450 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nobody', 'default_python' => 'python', 'default_ruby' => 'ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_instances_per_app' => '6', 'max_pool_size' => '12', 'passenger_root' => '/usr/share/gems/gems/passenger-4.0.18/lib/phusion_passenger/locations.ini', 'pool_idle_time' => '300', 'prestart_urls' => 'aHR0cDovL3NhdGVsbGl0ZS5kczEuaW50ZXJuYWw6ODAAaHR0cHM6Ly9zYXRlbGxpdGUuZHMxLmludGVybmFsOjQ0MwA=', 'temp_dir' => '/var/run/rubygem-passenger', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_pid' => '9614', 'web_server_type' => 'apache', 'web_server_worker_gid' => '48', 'web_server_worker_uid' => '48' }
[ 2021-10-07 16:45:23.6961 9618/7f0aae4c8780 agents/HelperAgent/Main.cpp:602 ]: PassengerHelperAgent online, listening at unix:/var/run/rubygem-passenger/passenger.1.0.9614/generation-0/request
[ 2021-10-07 16:45:23.7085 9625/7f08ef5a6880 agents/LoggingAgent/Main.cpp:318 ]: PassengerLoggingAgent online, listening at unix:/var/run/rubygem-passenger/passenger.1.0.9614/generation-0/logging
[ 2021-10-07 16:45:23.7088 9615/7fb58edae780 agents/Watchdog/Main.cpp:631 ]: All Phusion Passenger agents started!
[ 2021-10-07 16:45:23.7485 9642/7f23fd276780 agents/Watchdog/Main.cpp:450 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nobody', 'default_python' => 'python', 'default_ruby' => 'ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_instances_per_app' => '6', 'max_pool_size' => '12', 'passenger_root' => '/usr/share/gems/gems/passenger-4.0.18/lib/phusion_passenger/locations.ini', 'pool_idle_time' => '300', 'prestart_urls' => 'aHR0cDovL3NhdGVsbGl0ZS5kczEuaW50ZXJuYWw6ODAAaHR0cHM6Ly9zYXRlbGxpdGUuZHMxLmludGVybmFsOjQ0MwA=', 'temp_dir' => '/var/run/rubygem-passenger', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_pid' => '9614', 'web_server_type' => 'apache', 'web_server_worker_gid' => '48', 'web_server_worker_uid' => '48' }
[ 2021-10-07 16:45:23.7522 9645/7f2671c52780 agents/HelperAgent/Main.cpp:602 ]: PassengerHelperAgent online, listening at unix:/var/run/rubygem-passenger/passenger.1.0.9614/generation-1/request
[ 2021-10-07 16:45:23.7590 9653/7f674f3c9880 agents/LoggingAgent/Main.cpp:318 ]: PassengerLoggingAgent online, listening at unix:/var/run/rubygem-passenger/passenger.1.0.9614/generation-1/logging
[ 2021-10-07 16:45:23.7592 9642/7f23fd276780 agents/Watchdog/Main.cpp:631 ]: All Phusion Passenger agents started!
[ 2021-10-07 16:45:25.7871 9645/7f2671b5b700 Pool2/Spawner.h:738 ]: [App 9929 stdout]
[ 2021-10-07 16:45:34.9645 9645/7f2671b9c700 Pool2/Spawner.h:159 ]: [App 9929 stderr] /usr/share/foreman/lib/foreman.rb:8: warning: already initialized constant Foreman::UUID_REGEXP
[ 2021-10-07 16:45:34.9647 9645/7f2671b9c700 Pool2/Spawner.h:159 ]: [App 9929 stderr] /usr/share/foreman/lib/foreman.rb:8: warning: previous definition of UUID_REGEXP was here
[ 2021-10-07 16:45:36.9890 9645/7f2671b5b700 Pool2/Spawner.h:738 ]: [App 9929 stdout] API controllers newer than Apipie cache! Run apipie:cache rake task to regenerate cache.
[ 2021-10-07 16:45:50.6920 9645/7f2671b9c700 Pool2/Spawner.h:159 ]: [App 9929 stderr] /opt/theforeman/tfm/root/usr/share/gems/gems/foreman_theme_satellite-6.0.1.7/app/models/concerns/distributor_version.rb:5: warning: already initialized constant Katello::Glue::Provider::DISTRIBUTOR_VERSION
[ 2021-10-07 16:45:50.6921 9645/7f2671b9c700 Pool2/Spawner.h:159 ]: [App 9929 stderr] /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.16.0.26/app/models/katello/glue/provider.rb:3: warning: previous definition of DISTRIBUTOR_VERSION was here
[ 2021-10-07 16:45:53.7256 9634/7fb58edae780 agents/Watchdog/Main.cpp:337 ]: Some Phusion Passenger agent processes did not exit in time, forcefully shutting down all.
[ 2021-10-07 16:45:58.3169 9645/7f2671b9c700 Pool2/Spawner.h:159 ]: [App 9929 stderr] /opt/theforeman/tfm/root/usr/share/gems/gems/foreman_azure_rm-2.1.2/app/models/foreman_azure_rm/azure_rm_compute.rb:13: warning: circular argument reference - sdk
[ 2021-10-07 16:46:15.3835 9645/7f2671b5b700 Pool2/SmartSpawner.h:301 ]: Preloader for /usr/share/foreman started on PID 9929, listening on unix:/var/run/rubygem-passenger/passenger.1.0.9614/generation-1/backends/preloader.9929

Any idea what might be causing this?

Thanks for help!

Are you aware that UEFI HTTP Boot is a special feature of the UEFI firmware and needs to be enabled? Your error (TFTP file not found) shows that grub2 is trying to fetch its configuration via TFTP; that should not be the active protocol in UEFI HTTP Boot mode at all.
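For context, firmware that actually implements UEFI HTTP Boot announces itself to DHCP with an “HTTPClient” vendor class and expects an HTTP URL in return. In ISC dhcpd syntax the match looks roughly like this (a sketch only; the hostname, port, and path are placeholders for your own smart proxy setup):

```
class "httpclients" {
  # UEFI HTTP Boot firmware sends a vendor class beginning with "HTTPClient"
  match if substring (option vendor-class-identifier, 0, 10) = "HTTPClient";
  # The reply must echo the vendor class and point at an HTTP URL
  option vendor-class-identifier "HTTPClient";
  filename "http://smartproxy.example.com:8000/httpboot/grub2/grubx64.efi";
}
```

If the firmware never sends that vendor class, it is doing plain EFI PXE and will fall back to TFTP, which matches the error shown above.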

What hardware are you trying to boot?

We use HP EliteBook 840 G5 and G6 with “Legacy mode Disabled and Secure Boot Disabled”, which is pure UEFI.
In the BIOS settings there is no explicit option for enabling “HTTP Boot”.

That is not what UEFI HTTP Boot is.

You are booting EFI PXE, that environment cannot do HTTP.

I suggest you use iPXE instead:

https://docs.theforeman.org/nightly/Provisioning_Guide/index-foreman-el.html#Configuring_Networking-Configuring_gPXE_to_Reduce_Provisioning_Times

Ok, then it is starting to make sense.

Is the post you mentioned also for UEFI hosts, or should I use the one mentioned in the article (Discovery iPXE EFI workflow in Foreman 1.20+)?

Can I use this method with Windows DHCP or should I set up one managed by Foreman?

You can use Windows DHCP, of course, as long as you set all the DHCP flags correctly.

Basically what you can use:

  • set the DHCP filename to return “ipxe.efi” for PXE/EFI clients
  • set the DHCP filename to return “http://foreman/unattended/ipxe?bootstrap=1” for iPXE clients
  • create a host in Foreman with an iPXE template associated

That is all you need to do.
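For reference, the same two-step hand-off can be expressed in ISC dhcpd syntax like this (a sketch; foreman.example.com is a placeholder, and on Windows DHCP you would express the equivalent with a server policy matching the iPXE user class, as in the linked howto):

```
# Plain UEFI PXE clients get the iPXE binary; once iPXE is running it
# re-DHCPs with user class "iPXE" and receives the Foreman bootstrap URL.
if exists user-class and option user-class = "iPXE" {
    filename "http://foreman.example.com/unattended/iPXE?bootstrap=1";
} else {
    filename "ipxe.efi";
}
```

The user-class check is what breaks the chainloading loop: without it, iPXE would be handed ipxe.efi again forever.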

Thanks to your hints (also helpful link for covering Windows part - iPXE - open source boot firmware [howto:msdhcp]) I was able to finally make it work :slight_smile:

Thank you, Lukáš

So this setup works fine on Foreman itself, but I run into a problem when configuring it for a different site, where we use “foreman-proxy”.

I configured the DHCP policy there to provide:

http://foreman_proxy_ip:8000/unattended/iPXE?bootstrap=1

when the iPXE user class is matched, and “ipxe.efi” in other cases.

The problem is that any time I boot a new machine, no matter whether it is already in the DB or not, I get the “iPXE default local boot” template instead of the “iPXE intermediate script”, and then subsequently “No more network devices” (similar to here → iPXE "No more network devices") and the loader stops.

On the “foreman-proxy” I have enabled the “HTTPBoot” and “Templates” services.
There is also no network issue between Foreman ↔ Foreman-Proxy, as all the ports are open.

What did I miss here?

Thanks again!

Looks like the fix is exactly here → Foreman 2.4 / Katello 4 - iPXE not working - #14 by lzap

After I change that, the script is properly downloaded and the FDI download starts!

In which version of Foreman is it fixed?

Unfortunately, it turned out I was celebrating too early :roll_eyes:

Once the “discovery process” is finished, the host is properly assigned to a Host Group (via a Discovery Rule) and then rebooted.

But once the host is rebooted, it starts the same “discovery” procedure again, as if it were not known to the foreman-proxy. I can see in the tftpboot/pxelinux.cfg/ folder that a “MAC_ADDRESS.ipxe” file has been properly created with the proper content inside:

2021-10-19T16:40:16 263dd4bd [I] Started POST /tftp/iPXE/00:50:56:b8:13:28
2021-10-19T16:40:16 263dd4bd [D] verifying remote client 10.225.130.10 against trusted_hosts ["foreman_fqdn", "foreman-proxy_fqdn"]
2021-10-19T16:40:16 263dd4bd [D] TFTP: /var/lib/tftpboot/pxelinux.cfg/mac_address_of_provisioned_machine.ipxe created successfully
2021-10-19T16:40:16 263dd4bd [I] Finished POST /tftp/iPXE/mac_address_of_provisioned_machine with 200 (3.09 ms)

What could be the reason for this constant “discovery loop”? Why is the proxy not aware of the “.ipxe” entry?

Proxy log:

2021-10-19T16:40:57 67735dcd [I] Started GET /unattended/iPXE bootstrap=1
2021-10-19T16:40:57 67735dcd [D] Template: request for unattended/iPXE using {"bootstrap"=>"1", "url"=>"http://foreman-proxy:8000"} at foreman_fqdn
2021-10-19T16:40:57 67735dcd [D] Retrieving a template from https://foreman_fqdn//unattended/iPXE?bootstrap=1&url=http%3A%2F%2Fforeman-proxy_fqdn%3A8000
2021-10-19T16:40:57 67735dcd [D] HTTP headers: {"CONNECTION"=>"keep-alive", "USER_AGENT"=>"iPXE/1.0.0+ (133f4c)", "X-Forwarded-For"=>"ip_of_host, foreman-proxy_fqdn"}
2021-10-19T16:40:57 67735dcd [I] Finished GET /unattended/iPXE with 200 (213.18 ms)
2021-10-19T16:41:13 be947c88 [I] Started GET /unattended/iPXE mac=${net0/mac}
2021-10-19T16:41:13 be947c88 [D] Template: request for unattended/iPXE using {"mac"=>"${net0/mac}", "url"=>"http://foreman-proxy_fqdn:8000"} at foreman_fqdn
2021-10-19T16:41:13 be947c88 [D] Retrieving a template from https://foreman_fqdn//unattended/iPXE?mac=%24%7Bnet0%2Fmac%7D&url=http%3A%2F%2Fforeman-proxy_fqdn%3A8000
2021-10-19T16:41:13 be947c88 [D] HTTP headers: {"CONNECTION"=>"keep-alive", "USER_AGENT"=>"iPXE/1.0.0+ (133f4c)", "X-Forwarded-For"=>"ip_of_host, foreman-proxy_fqdn"}
2021-10-19T16:41:13 be947c88 [I] Finished GET /unattended/iPXE with 200 (131.74 ms)

Why was the setup working properly on “foreman” itself, but not on “foreman-proxy”?

Side question: is there a way (other than creating a “custom discovery entry”) to create separate “iPXE Global default” templates for foreman and foreman-proxy?

Thanks a lot!!

For iPXE you do not need TFTP at all. Foreman supports deployment of iPXE templates to TFTP, but this is generally not needed, as iPXE can load directly from Foreman via HTTP.

This does not appear to be correct. The bootstrap template should have replaced the variable with the MAC address of your host, yet your proxy log shows a literal mac=${net0/mac}. Check that. That is also the reason why it loops - Foreman cannot find your host via its MAC.
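To illustrate the failure mode, here is a tiny sketch (check_mac is a hypothetical helper of mine, not Foreman code): Foreman matches the host by the mac query parameter, so if iPXE fails to expand ${net0/mac}, the lookup can never succeed and the default local-boot template is rendered instead.

```shell
# Hypothetical helper mimicking the decision Foreman effectively makes:
# an unexpanded iPXE variable (or empty value) means no host can be matched.
check_mac() {
  case "$1" in
    '${'*|'') echo "no-host-match" ;;   # literal ${net0/mac} or empty value
    *)        echo "host-lookup:$1" ;;  # a real MAC Foreman can search for
  esac
}

check_mac '${net0/mac}'        # prints: no-host-match
check_mac '00:50:56:b8:13:28'  # prints: host-lookup:00:50:56:b8:13:28
```

The second request in the proxy log above is exactly the first case, which is why every boot falls through to discovery again.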

Check Administer > Settings > Provisioning > “iPXE intermediate script” and that template. If you try this from a host that is not managed by Foreman (unknown IP address):

$ curl -s http://xxx.redhat.com/unattended/iPXE?bootstrap=1 | head
#!ipxe
# Intermediate iPXE script to report MAC address to Foreman

:net0
isset ${net0/mac} || goto no_nic
dhcp net0 || goto net1
chain http://xxx.redhat.com/unattended/iPXE?mac=${net0/mac} || goto net1
...

There you see how iPXE gets bootstrapped. Hope it makes sense.