SSH Errors - Foreman Master + Proxy hosts, clean install (Foreman 3.7 / Katello 4.9)

Problem:

Hello,

I’m setting up Foreman + Katello with several smart proxies as a patch management solution for my infrastructure and VMs.

Foreman Version 3.7.0
Katello Version 4.9.2
Operating System Rocky Linux / EL 8.8

The architecture generally consists of the following:

stackmgmt-master.common.tld
stackmgmt-proxy.location1.tld
stackmgmt-proxy.location2.tld

common.tld <> location1.tld can communicate freely (no restrictions)
common.tld <> location2.tld can communicate freely (no restrictions)

Foreman has the following hosts registered:

12 x Rocky / EL 9.2 hosts
3 x Rocky / EL 8.8 hosts (2x stackmgmt-proxy, 1x stackmgmt-master)

Rocky products, repositories, and activation keys are all set up.

I’m able to query and remotely apply patches on all hosts except the 3 acting as stackmgmt-proxy and stackmgmt-master hosts, which fail with the following error:

[DEPRECATION WARNING]: ANSIBLE_CALLBACK_WHITELIST option, normalizing names to
new standard, use ANSIBLE_CALLBACKS_ENABLED instead. This feature will be
removed from ansible-core in version 2.15. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
fatal: [stackmgmt-master.common.tld]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'stackmgmt-master.common.tld,10.0.0.1' (ECDSA) to the list of known hosts.\r\nroot@stackmgmt-master.common.tld: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactive).", "unreachable": true}

PLAY RECAP *********************************************************************
stackmgmt-master.common.tld : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0

Exit status: 1
StandardError: Job execution failed
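
For reference, the failure can be reproduced outside of Foreman by attempting the same connection manually from the node executing the job; a minimal check (hostname as in my setup):

ssh -vvv root@stackmgmt-master.common.tld true

The verbose output lists every public key the client offers, which makes it obvious when the expected key is missing on either side.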

Expected outcome:

This being a fresh install with no changes to the default configs, I’m struggling to understand why there’s an SSH key issue with the Foreman hosts (master and proxies) only; it does not affect any of the other registered hosts. I’m also unable to find any reference to the above error in the documentation.

Foreman and Proxy versions:

Foreman 3.7.0

Foreman and Proxy plugin versions:

Katello 4.9.2

Distribution and version:

Operating System: Rocky Linux / EL 8.8

Other relevant data:

Observations:

I’m not sure if this is standard or required, but I found that a recurring job / task needs to be set up in order to query registered hosts and retrieve updated facts / packages / etc. Setting up a recurring task every 20 minutes running the Ansible → Run Ansible Role(s) workflow does the trick. Otherwise the reporting period exceeds the threshold and hosts are flagged as “out of sync”.
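
For anyone who prefers the CLI, a rough hammer equivalent of that recurring job (a sketch only: the template name is the foreman_ansible default, the host collection name is illustrative, and the hammer remote execution plugin must be installed):

hammer job-invocation create \
  --job-template "Ansible Roles - Ansible Default" \
  --search-query 'host_collection = "patch-targets"' \
  --cron-line "*/20 * * * *"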

The recurring job is applied to a host collection. Every time a host is added to the host collection, the job must be recreated. I would have assumed host collection membership is re-evaluated on every execution of a recurring job.

Hi,
IIRC the keys are not deployed by default to Foreman and the proxies themselves, so if you want to use remote execution against Foreman and the smart proxies, you have to deploy the keys yourself. On managed hosts the situation is different, as they are, well, managed.

That sounds about right. This is similar to how the Puppet integration works: if the client doesn’t check in within a given time period, it is considered out of sync, because we don’t know whether it is in sync or not. For Ansible it is the same, except you have to drive the checks yourself; they don’t happen on their own as with Puppet, since there’s no agent running on the managed hosts.

The interval after which a host is considered out of sync is configurable in Administer > Settings > Ansible > Ansible report timeout. There is even an option to turn this behaviour off completely with Administer > Settings > Ansible > Ansible out of sync disabled, in case you don’t need it at all.
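
If you prefer the CLI, both can also be set with hammer; a sketch from memory (double-check the internal setting names, e.g. with hammer settings list | grep -i ansible):

hammer settings set --name ansible_report_timeout --value 60
hammer settings set --name ansible_out_of_sync_disabled --value true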

Not necessarily; you should be able to pick dynamic query as the targeting type, in which case the search query is indeed re-evaluated every single time the job runs, so you shouldn’t need to recreate the job every time you add or remove hosts to/from the host collection.
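
For example, targeting the collection with a dynamic query would use a search along these lines (collection name is illustrative):

host_collection = "patch-targets"

With a static query that search is resolved to a fixed list of hosts when the job is created; with a dynamic query it is resolved again on every run.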

Thanks for the reply. Can you suggest the supported method / path for the keys that should be deployed on the master and proxy hosts?

Regarding managed hosts: both the master and proxy nodes appear in Foreman, so I assumed they were managed.

I believe all you need to do is append the key located at
~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub to the authorized keys of your root user:

cat ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub >> /root/.ssh/authorized_keys

(Please double-check the commands and paths, as I am writing this on my mobile off the top of my head.)
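
A quick way to verify the key is in place afterwards (again a sketch, using the default key path of the remote execution SSH provider):

sudo -u foreman-proxy ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy -o BatchMode=yes root@stackmgmt-master.common.tld true && echo OK

If that prints OK without prompting for a password, remote execution should be able to connect as well.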

Thank you, appreciate the insights. For others reviewing this thread:

  • On the master host, running the command as prescribed sets up the key as expected:
cat ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub >> /root/.ssh/authorized_keys
  • On the proxy hosts, the key first needs to be transferred from the node executing the jobs before it can be added to root’s authorized_keys (see the sketch below):
~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub
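
A minimal sketch of that transfer step, run from the node executing the jobs (I ran it on the master; the proxy hostname is from my setup, and it assumes root SSH access is still available by some other means for the initial copy):

scp ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub root@stackmgmt-proxy.location1.tld:/tmp/
ssh root@stackmgmt-proxy.location1.tld 'cat /tmp/id_rsa_foreman_proxy.pub >> /root/.ssh/authorized_keys && rm /tmp/id_rsa_foreman_proxy.pub'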

Separately, I also realized that the master and, in some cases, the proxy hosts must be manually registered with Foreman for content management. I assumed this was part of the installer process, but it isn’t. Simply go to Hosts → Register Host in the GUI and walk through the prompts to generate a curl command to run on the host. I ended up doing this after adding the keys as per the above; however, I suspect the registration process also resolves the SSH key issue as originally reported.
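
For reference, the generated command has roughly this shape (placeholders instead of the real values, which are unique per instance and include a short-lived token):

curl -sS 'https://stackmgmt-master.common.tld/register?activation_keys=<AK_NAME>&organization_id=<ORG_ID>' -H 'Authorization: Bearer <TOKEN>' | bash

The registration form also offers a remote execution setup option, which appears to be what deploys the SSH key.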

Appreciate everyone’s help.