How to reboot a host automatically after using discovery with discovery rules

Hi all,

after my hosts are discovered successfully, they are automatically assigned
to the correct host group and environment and added to the hosts registry.
But each host must still be rebooted manually: after discovery it hangs at
the screen shown in the picture. Once I press reboot, everything works as
expected.

How can I configure an automatic reboot?

Kind regards,
Christian

Hi Christian,

Have you enabled the reboot (discovery_reboot) option under Settings > Discovered?
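
For reference, a hedged sketch of checking or setting that flag from the CLI, assuming the hammer CLI is installed (the setting name discovery_reboot is as above):

  # Show the current value of the setting
  hammer settings list --search 'name = discovery_reboot'

  # Turn automatic reboot of discovered hosts on
  hammer settings set --name discovery_reboot --value true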

I found the following exception in the production log.

2016-11-11 09:52:08 [app] [I] Rendered home/_user_dropdown.html.erb (2.2ms)
2016-11-11 09:52:08 [app] [I] Read fragment views/tabs_and_title_records-3 (0.2ms)
2016-11-11 09:52:08 [app] [I] Rendered home/_topbar.html.erb (9.2ms)
2016-11-11 09:52:08 [app] [I] Rendered layouts/base.html.erb (10.3ms)
2016-11-11 09:52:08 [app] [I] Filter chain halted as :welcome rendered or redirected
2016-11-11 09:52:08 [app] [I] Completed 200 OK in 30ms (Views: 13.9ms | ActiveRecord: 3.0ms)
2016-11-11 09:52:08 [app] [W] Failed to communicate with graphite: Connection refused - connect(2) for "0.0.0.0" port 2003
2016-11-11 09:52:08 [app] [W] Failed to communicate with graphite: Connection refused - connect(2) for "0.0.0.0" port 2003
2016-11-11 09:52:08 [app] [W] Failed to communicate with graphite: Connection refused - connect(2) for "0.0.0.0" port 2003
2016-11-11 09:52:08 [app] [W] Failed to communicate with graphite: Connection refused - connect(2) for "0.0.0.0" port 2003
2016-11-11 09:52:22 [app] [I] Started POST "/api/v2/discovered_hosts/facts" for 10.0.2.15 at 2016-11-11 09:52:22 +0100
2016-11-11 09:52:22 [app] [I] Processing by Api::V2::DiscoveredHostsController#facts as JSON
2016-11-11 09:52:22 [app] [I] Parameters: {"facts"=>"[FILTERED]", "apiv"=>"v2", "discovered_host"=>{"facts"=>"[FILTERED]"}}
2016-11-11 09:52:22 [app] [I] Import facts for 'mac080027b4d94a' completed. Added: 65, Updated: 0, Deleted 0 facts
2016-11-11 09:52:22 [app] [I] Detected subnet: Dev (172.16.4.0/24) with taxonomy ["SSW-Trading GmbH"]/["Oststeinbek"]
2016-11-11 09:52:22 [app] [I] Assigned location: Oststeinbek
2016-11-11 09:52:22 [app] [I] Assigned organization: SSW-Trading GmbH
2016-11-11 09:52:22 [app] [I] Locking discovered host 08:00:27:b4:d9:4a in subnet Dev (172.16.4.0/24)
2016-11-11 09:52:22 [app] [I] Match found for host mac080027b4d94a (34) rule Disco_Dev (4)
2016-11-11 09:52:24 [app] [I] Create DHCP reservation for mac080027b4d94a.example.com-08:00:27:b4:d9:4a/172.16.4.9
2016-11-11 09:52:24 [app] [I] Add DNS A record for mac080027b4d94a.example.com/172.16.4.9
2016-11-11 09:52:24 [app] [I] Add DNS PTR record for 172.16.4.9/mac080027b4d94a.example.com
2016-11-11 09:52:26 [app] [W] Unable to reboot mac080027b4d94a
> RestClient::InternalServerError: 500 Internal Server Error
> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.8.0/lib/restclient/abstract_response.rb:74:in `return!'
> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.8.0/lib/restclient/request.rb:495:in `process_result'
> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.8.0/lib/restclient/request.rb:421:in `block in transmit'
> /opt/rh/rh-ruby22/root/usr/share/ruby/net/http.rb:853:in `start'
> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.8.0/lib/restclient/request.rb:413:in `transmit'
> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.8.0/lib/restclient/request.rb:176:in `execute'
> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.8.0/lib/restclient/request.rb:41:in `execute'
> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.8.0/lib/restclient/resource.rb:76:in `put'
> /opt/theforeman/tfm/root/usr/share/gems/gems/foreman_discovery-6.0.0/app/services/foreman_discovery/node_api/node_resource.rb:102:in `put'
> /opt/theforeman/tfm/root/usr/share/gems/gems/foreman_discovery-6.0.0/app/services/foreman_discovery/node_api/power_service.rb:8:in `reboot'
> /opt/theforeman/tfm/root/usr/share/gems/gems/foreman_discovery-6.0.0/app/models/host/discovered.rb:181:in `reboot'
> /opt/theforeman/tfm/root/usr/share/gems/gems/foreman_discovery-6.0.0/app/models/host/managed_extensions.rb:31:in `setReboot'
> /usr/share/foreman/app/models/concerns/orchestration.rb:162:in `execute'
> /usr/share/foreman/app/models/concerns/orchestration.rb:107:in `block in process'
> /usr/share/foreman/app/models/concerns/orchestration.rb:99:in `each'
> /usr/share/foreman/app/models/concerns/orchestration.rb:99:in `process'
> /usr/share/foreman/app/models/concerns/orchestration.rb:39:in `post_commit'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activesupport-4.2.5.1/lib/active_support/callbacks.rb:432:in `block in make_lambda'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activesupport-4.2.5.1/lib/active_support/callbacks.rb:263:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activesupport-4.2.5.1/lib/active_support/callbacks.rb:263:in `block in simple'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activesupport-4.2.5.1/lib/active_support/callbacks.rb:506:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activesupport-4.2.5.1/lib/active_support/callbacks.rb:506:in `block in call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activesupport-4.2.5.1/lib/active_support/callbacks.rb:506:in `each'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activesupport-4.2.5.1/lib/active_support/callbacks.rb:506:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activesupport-4.2.5.1/lib/active_support/callbacks.rb:92:in `__run_callbacks__'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activesupport-4.2.5.1/lib/active_support/callbacks.rb:778:in `_run_commit_callbacks'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activerecord-4.2.5.1/lib/active_record/transactions.rb:314:in `committed!'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activerecord-4.2.5.1/lib/active_record/connection_adapters/abstract/transaction.rb:89:in `commit_records'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activerecord-4.2.5.1/lib/active_record/connection_adapters/abstract/transaction.rb:153:in `commit'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activerecord-4.2.5.1/lib/active_record/connection_adapters/abstract/transaction.rb:175:in `commit_transaction'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activerecord-4.2.5.1/lib/active_record/connection_adapters/abstract/transaction.rb:194:in `within_new_transaction'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activerecord-4.2.5.1/lib/active_record/connection_adapters/abstract/database_statements.rb:213:in `transaction'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activerecord-4.2.5.1/lib/active_record/transactions.rb:220:in `transaction'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activerecord-4.2.5.1/lib/active_record/transactions.rb:348:in `with_transaction_returning_status'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activerecord-4.2.5.1/lib/active_record/transactions.rb:286:in `block in save'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activerecord-4.2.5.1/lib/active_record/transactions.rb:301:in `rollback_active_record_state!'
…skipping…
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/routing/mapper.rb:49:in `serve'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/journey/router.rb:43:in `block in serve'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/journey/router.rb:30:in `each'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/journey/router.rb:30:in `serve'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/routing/route_set.rb:815:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/static.rb:116:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/static.rb:116:in `call'
> /opt/theforeman/tfm/root/usr/share/gems/gems/apipie-rails-0.3.6/lib/apipie/static_dispatcher.rb:65:in `call'
> /opt/theforeman/tfm/root/usr/share/gems/gems/apipie-rails-0.3.6/lib/apipie/extractor/recorder.rb:132:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/static.rb:116:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/static.rb:116:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/static.rb:116:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/static.rb:116:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/static.rb:116:in `call'
> /opt/theforeman/tfm/root/usr/share/gems/gems/apipie-rails-0.3.6/lib/apipie/middleware/checksum_in_headers.rb:27:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/rack-1.6.2/lib/rack/etag.rb:24:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/rack-1.6.2/lib/rack/conditionalget.rb:38:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/rack-1.6.2/lib/rack/head.rb:13:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/params_parser.rb:27:in `call'
> /usr/share/foreman/lib/middleware/catch_json_parse_errors.rb:9:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/flash.rb:260:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/rack-1.6.2/lib/rack/session/abstract/id.rb:225:in `context'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/rack-1.6.2/lib/rack/session/abstract/id.rb:220:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/cookies.rb:560:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activerecord-4.2.5.1/lib/active_record/query_cache.rb:36:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activerecord-4.2.5.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:653:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/callbacks.rb:29:in `block in call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activesupport-4.2.5.1/lib/active_support/callbacks.rb:88:in `run_callbacks'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activesupport-4.2.5.1/lib/active_support/callbacks.rb:778:in `_run_call_callbacks'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activesupport-4.2.5.1/lib/active_support/callbacks.rb:81:in `run_callbacks'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/callbacks.rb:27:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/remote_ip.rb:78:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/debug_exceptions.rb:17:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/show_exceptions.rb:30:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/railties-4.2.5.1/lib/rails/rack/logger.rb:38:in `call_app'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/railties-4.2.5.1/lib/rails/rack/logger.rb:22:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/request_id.rb:21:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/rack-1.6.2/lib/rack/methodoverride.rb:22:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/rack-1.6.2/lib/rack/runtime.rb:18:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/activesupport-4.2.5.1/lib/active_support/cache/strategy/local_cache_middleware.rb:28:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/actionpack-4.2.5.1/lib/action_dispatch/middleware/static.rb:116:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/rack-1.6.2/lib/rack/sendfile.rb:113:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/railties-4.2.5.1/lib/rails/engine.rb:518:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/railties-4.2.5.1/lib/rails/application.rb:165:in `call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/railties-4.2.5.1/lib/rails/railtie.rb:194:in `public_send'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/railties-4.2.5.1/lib/rails/railtie.rb:194:in `method_missing'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/rack-1.6.2/lib/rack/urlmap.rb:66:in `block in call'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/rack-1.6.2/lib/rack/urlmap.rb:50:in `each'
> /opt/rh/sclo-ror42/root/usr/share/gems/gems/rack-1.6.2/lib/rack/urlmap.rb:50:in `call'
> /usr/share/passenger/phusion_passenger/rack/thread_handler_extension.rb:74:in `process_request'
> /usr/share/passenger/phusion_passenger/request_handler/thread_handler.rb:141:in `accept_and_process_next_request'
> /usr/share/passenger/phusion_passenger/request_handler/thread_handler.rb:109:in `main_loop'
> /usr/share/passenger/phusion_passenger/request_handler.rb:455:in `block (3 levels) in start_threads'
> /opt/theforeman/tfm/root/usr/share/gems/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `call'
> /opt/theforeman/tfm/root/usr/share/gems/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'
2016-11-11 09:52:26 [app] [I] Completed 201 Created in 3725ms (Views: 1.9ms | ActiveRecord: 194.0ms)
2016-11-11 09:52:26 [app] [W] Failed to communicate with graphite: Connection refused - connect(2) for "0.0.0.0" port 2003
2016-11-11 09:52:26 [app] [W] Failed to communicate with graphite: Connection refused - connect(2) for "0.0.0.0" port 2003
2016-11-11 09:52:26 [app] [W] Failed to communicate with graphite: Connection refused - connect(2) for "0.0.0.0" port 2003
2016-11-11 09:52:26 [app] [W] Failed to communicate with graphite: Connection refused - connect(2) for "0.0.0.0" port 2003

Hi Akash,

Yes, Reboot = true.

Reboot happens via an API call to a cut-down foreman-proxy running on the
discovered host. It looks like that service is returning a 500 when told to
reboot, but you'll need to get onto the host to see the logs of that service.

The Discovery docs cover enabling SSH on the Discovery Image, so you'll
want to do that and then see if you can get the proxy log from when it's
told to reboot.
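
A hedged sketch of that, assuming the image was booted with fdi.ssh=1 and fdi.rootpw=... so root SSH is open, and using 172.16.4.9 from the log above (the /power/reboot path and port 8443 are inferred from the power_service.rb frame in the trace, so treat them as assumptions):

  # SSH into the discovery image as root
  ssh root@172.16.4.9

  # From the Foreman server, re-trigger the reboot call by hand and watch the response
  # (-k skips certificate verification; fine for a quick test only)
  curl -k -X PUT https://172.16.4.9:8443/power/reboot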

Cheers
Greg

What Greg said. Also, you can configure the Smart Proxy as an HTTP/HTTPS
proxy for discovery calls, depending on whether you have selected a
Discovery Proxy for the subnet the host was discovered in. The call will
then be proxied; for that you need to enable the Discovery Smart Proxy
plugin. Check our documentation, it's all there.
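
If the plugin isn't enabled yet, that is usually a single installer run (a sketch; check foreman-installer --help for the exact option name on your version):

  # Enable the discovery plugin on the Smart Proxy
  foreman-installer --enable-foreman-proxy-plugin-discovery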

LZ


I have done all the configuration from the Foreman :: Plugin Manuals,
but it's still not working.
I am trying to use the discovery proxy for this and have made the following
configuration:

PXELinux global default:
ONTIMEOUT discovery

LABEL discovery
MENU LABEL Foreman Discovery
MENU DEFAULT
KERNEL boot/fdi-image/vmlinuz0
APPEND initrd=boot/fdi-image/initrd0.img fdi.ssh=1 fdi.rootpw=redhat rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 rd.debug=1 nomodeset proxy.url=https://katello.example.com:9090 proxy.type=proxy
IPAPPEND 2

The discovery proxy is active in the smart proxy settings, but when I go
into the Smart Proxy -> Services under Discovery I see Version 1.0.4 and
"Data not available for Discovery". Is that right?

If I change the APPEND line to proxy_type foreman, the automatic reboot is
successful.

I logged into the discovered host via SSH, but there are no log files
under /var/log/foreman-proxy/.

Any idea what's going wrong with the foreman-proxy?

Hey

> The discovery proxy is active in the smart proxy settings, but when I go
> into the Smart Proxy -> Services under Discovery I see Version 1.0.4 and
> "Data not available for Discovery". Is that right?

Correct

> If I change the APPEND line to proxy_type foreman, the automatic reboot is
> successful.

It must be proxy.type (note the dot character). Also make sure the
APPEND line is on one line without any breaks.
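
Written out with the dot and without breaks, the line from your earlier snippet would be (a sketch reusing your own values; only the proxy.* options matter here):

  APPEND initrd=boot/fdi-image/initrd0.img fdi.ssh=1 fdi.rootpw=redhat rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 rd.debug=1 nomodeset proxy.url=https://katello.example.com:9090 proxy.type=proxy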

> I logged into the discovered host via SSH, but there are no log files under
> /var/log/foreman-proxy/.

See journalctl for some relevant data.
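
For example (a sketch; on the FDI the proxy logs to the journal, and exact unit names may differ between image versions):

  # Follow the journal live while Foreman tries to reboot the host
  journalctl -f

  # Or search what is already there for proxy/discovery related lines
  journalctl --no-pager | grep -iE 'proxy|discover|power'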

> Any idea what's going wrong with the foreman-proxy?

Nope. Run foreman-tail while discovering, both on the Smart Proxy and on
the Foreman instance, and pastebin the output. Do you see requests hitting
the Smart Proxy? What HTTP code does it return? Does the request get
proxied through to the Foreman app? Again, what is the result?
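
A minimal way to watch both sides at once (foreman-tail ships with installer-based setups; the proxy log path is the packaged default):

  # On the Foreman server and on the Smart Proxy, each in its own terminal
  foreman-tail

  # Or tail just the Smart Proxy log
  tail -f /var/log/foreman-proxy/proxy.log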

LZ

Hi, I have much the same issue: a host running the FDI does not automatically reboot after discovery.

This is my PXELinux global default entry:

LABEL discovery
  MENU LABEL Foreman Discovery Image
  KERNEL boot/fdi-image/vmlinuz0
  APPEND initrd=boot/fdi-image/initrd0.img rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 nokaslr nomodeset proxy.url=https://ac-foreman.domain.local proxy.type=foreman
  IPAPPEND 2

The journalctl output in the FDI is a long list. Any hint what I should look for?

My settings look like this (screenshot):

Going to the Hosts > Discovered Hosts page and triggering Reboot from the Action menu works. But it would be awesome if it worked automatically.

What else can I check?

You need to dig out the logs in production.log; Foreman is probably unable to reach the host or something.
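
A quick way to dig, based on the error string from the first report in this thread (the log path is the packaged default):

  # Look for failed reboot attempts
  grep -i 'unable to reboot' /var/log/foreman/production.log

  # Or pull everything mentioning the discovered host
  grep -i 'mac70106f4d7aec' /var/log/foreman/production.log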

Here is a “discovery session” out of the production.log:

2022-03-31T05:57:34 [I|app|c0e16bbc] Started POST "/api/v2/discovered_hosts/facts" for 10.aaa.bbb.cc at 2022-03-31 05:57:34 +0200
2022-03-31T05:57:34 [I|app|c0e16bbc] Processing by Api::V2::DiscoveredHostsController#facts as JSON
2022-03-31T05:57:34 [I|app|c0e16bbc]   Parameters: {"facts"=>"[FILTERED]", "apiv"=>"v2", "discovered_host"=>{"facts"=>"[FILTERED]"}}
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on mac
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on ip
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on type Nic::Managed
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on name mac70106f4d7aec
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on host_id 79
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on subnet_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on domain_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on attrs {}
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on provider
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on username
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on password [redacted]
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on virtual false
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on link true
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on identifier
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on tag
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on attached_to
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on managed true
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on mode balance-rr
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on attached_devices
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on bond_options
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on primary true
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on provision true
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on compute_attributes {}
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on ip6
2022-03-31T05:57:34 [I|aud|c0e16bbc] Nic::Managed (181) create event on subnet6_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on name mac70106f4d7aec
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on last_compile
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on root_pass
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on architecture_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on operatingsystem_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on environment_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on ptable_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on medium_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on build false
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on comment
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on disk
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on installed_at
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on model_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on hostgroup_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on owner_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on owner_type
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on enabled true
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on puppet_ca_proxy_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on managed false
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on use_image
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on image_file
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on uuid
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on compute_resource_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on puppet_proxy_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on certname
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on image_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on organization_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on location_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on otp
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on realm_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on compute_profile_id
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on provision_method
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on grub_pass
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on global_status 0
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on lookup_value_matcher
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on pxe_loader
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on initiated_at
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on build_errors
2022-03-31T05:57:34 [I|aud|c0e16bbc] Host::Base (79) create event on discovery_rule_id
2022-03-31T05:57:35 [I|app|c0e16bbc] Import facts for 'mac70106f4d7aec' completed. Added: 373, Updated: 0, Deleted 0 facts
2022-03-31T05:57:35 [I|aud|c0e16bbc] Nic::Managed (181) update event on mac , 70:10:6f:4d:7a:ec
2022-03-31T05:57:35 [I|aud|c0e16bbc] Nic::Managed (181) update event on identifier , eno1
2022-03-31T05:57:36 [I|app|c0e16bbc] Detected IPv4 subnet: Deployer Network with taxonomy ["....."]
2022-03-31T05:57:36 [I|app|c0e16bbc] Assigned location: ###########
2022-03-31T05:57:36 [I|app|c0e16bbc] Assigned organization: ############
2022-03-31T05:57:36 [I|aud|c0e16bbc] Host::Base (79) update event on model_id , 8
2022-03-31T05:57:36 [I|aud|c0e16bbc] Host::Base (79) update event on owner_id , 1
2022-03-31T05:57:36 [I|aud|c0e16bbc] Host::Base (79) update event on owner_type , User
2022-03-31T05:57:36 [I|aud|c0e16bbc] Host::Base (79) update event on organization_id , 1
2022-03-31T05:57:36 [I|aud|c0e16bbc] Host::Base (79) update event on location_id , 3
2022-03-31T05:57:36 [I|aud|c0e16bbc] Nic::Managed (181) update event on subnet_id , 1
2022-03-31T05:57:36 [I|app|c0e16bbc] Completed 201 Created in 1335ms (Views: 0.9ms | ActiveRecord: 407.9ms | Allocations: 482213)

Should I use a different log level? I can’t find anything suspicious.

What irritates me a little is that my PXELinux global default entry does not include the server's port. In most documentation pages (including the Provisioning Guide), proxy.url always carries port 8443. Can this be the problem? I never changed it; it was set up by the automated setup of the plugin.
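
For comparison, a hedged sketch of the documented form with an explicit port. If I read the guides right, the :8443 examples pair with proxy.type=proxy (pointing at a Smart Proxy), while proxy.type=foreman normally points at the Foreman base URL on 443, so adjust to whatever your endpoint actually listens on:

  APPEND initrd=boot/fdi-image/initrd0.img rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force nokaslr nomodeset proxy.url=https://ac-foreman.domain.local:8443 proxy.type=proxy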

Can you test with a 3.x FDI instead of 4.x, to rule out a problem with the FDI itself?