Problem with Foreman 2.1rc2 with Remote Execution on CentOS 8

https://projects.theforeman.org/issues/27184

With respect to /var/run sub-directories, I hit this same issue at a previous job when backporting the Fedora ISC DHCP package to RHEL 7: a race condition where one service expected another to have already created the /var/run sub-directory. I believe I added a snippet for the dhcpd service under the /etc/systemd/ tree, using ExecStartPre to create the directory we needed with the same permissions the other service would have used.

In this case we could simply put such commands in the main service definition file. It may also be worth creating an empty file with touch(1) once the directory is present, so the PID file exists without having to change the Ruby code. I have not looked at the Foreman code to determine the correct approach.

I did a little more reading, and the following (currently untested) should assist, if needed, in silencing this warning:

smart_proxy_dynflow_core.service: Can't open PID file /var/run/foreman-proxy/smart_proxy_dynflow_core.pid (yet?) after start: No such file or directory

Reference: man systemd.service

ExecStartPre=+/usr/bin/install -d -o foreman-proxy -g foreman-proxy -m 0755 /var/run/foreman-proxy
ExecStartPre=/usr/bin/touch /var/run/foreman-proxy/smart_proxy_dynflow_core.pid

The ‘+’ prefix, described in Table 1 (“Special executable prefixes”), lets the command ignore the User= setting and run with full privileges; this is required because /var/run is owned by root:root. With the newly created directory owned by foreman-proxy, the touch(1) does not need full privileges. (Absolute paths are used above because older systemd versions do not search $PATH for Exec* commands.)
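
For reference, instead of editing the packaged unit, the same lines could be delivered as a drop-in; an untested sketch (the file name is my own choosing):

# /etc/systemd/system/smart_proxy_dynflow_core.service.d/pidfile.conf
# Create the run directory with the right owner, then pre-create the PID file.
# Run `sudo systemctl daemon-reload` after creating this file.
[Service]
ExecStartPre=+/usr/bin/install -d -o foreman-proxy -g foreman-proxy -m 0755 /var/run/foreman-proxy
ExecStartPre=/usr/bin/touch /var/run/foreman-proxy/smart_proxy_dynflow_core.pid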

Isn’t tmpfiles.d a much cleaner solution to that?

After some more reading (thanks for the pointer), I think so, and in fact the directories are already created using this mechanism; see the snippets at the end of this message.

Without further research, this would imply to me that the message would be better rephrased as “Note: No pre-existing PID file detected at startup (/var/run/foreman-proxy/smart_proxy_dynflow_core.pid)”, or something else that leads the reader to conclude this is nothing to worry about, if not worrying is indeed the correct response to the message. Alternatively, generate a message when a PID file is found, instead of when it is missing.

Another consideration could be the need for additional entries on the After= (or Requires=) line in the systemd service definition files, possibly adding systemd-tmpfiles-setup.service. I also noted that none of the Foreman-related services (save foreman.service requiring foreman.socket) depend on each other; that's quite cool if it correctly represents reality.
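
A minimal sketch of such an ordering drop-in (untested; assuming smart_proxy_dynflow_core.service is the unit that needs it):

# /etc/systemd/system/smart_proxy_dynflow_core.service.d/ordering.conf
[Unit]
After=systemd-tmpfiles-setup.service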


Existing Foreman 2.1rc2 tmpfiles.d configuration on both CentOS 7 & 8:

more /usr/lib/tmpfiles.d/foreman*.conf
::::::::::::::
/usr/lib/tmpfiles.d/foreman.conf
::::::::::::::
d /run/foreman 0755 foreman foreman -
::::::::::::::
/usr/lib/tmpfiles.d/foreman-proxy.conf
::::::::::::::
d /run/foreman-proxy 0755 foreman-proxy foreman-proxy -
d /var/cache/foreman-proxy 0700 foreman-proxy foreman-proxy -

Question: Should I raise an issue about this problem, as no one has further commented on the core issue (save for Adam and ekohl, both 6 days ago)?

Also has anyone else seen this issue?

Or has anyone tried this combination: CentOS 8 & Remote Execution on 2.1rc2?

I had to deploy an EL8-based machine to get to the bottom of this. It turns out we missed something in packaging: on EL8 some files got deployed into the old SCL prefix and therefore weren't picked up by the running processes.

As a workaround, deploy a symlink putting the file into the right place with

ln -s /opt/theforeman/tfm/root/usr/share/smart_proxy_dynflow_core/bundler.d/foreman_remote_execution_core.rb /usr/share/smart_proxy_dynflow_core/bundler.d/

and restart smart_proxy_dynflow_core
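
A quick sanity check before restarting, assuming the paths above:

ls -l /usr/share/smart_proxy_dynflow_core/bundler.d/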


Adam, thanks for the reply - it has helped but not solved the problem. The job now sits in the “Pending” rather than the “Failed” state on the CentOS 8 system; see below. Not sure why, so I rebooted the VMs, and the problem persists for both the existing and new attempts on the CentOS 8 system. journalctl --unit smart_proxy_dynflow_core adds nothing to the picture. The Foreman log can see a problem, but the dynflow log with debugging on is a bit of a mess, so I'm going to turn debug off and get a simpler starting point to investigate from.

Not having a lot of success in finding the source of the problem… So I'll report a smaller issue:

On the page: https://FQDN/job_invocations?.… you get a documentation link, this points to https://theforeman.org/plugins/foreman_remote_execution/3.2/index.html#3.2ExecutingaJob

This seems reasonable, as the output of rpm -qa | grep remote would suggest:

rubygem-foreman_remote_execution-3.2.1-1.fm2_1.el8.noarch
rubygem-smart_proxy_remote_execution_ssh-0.3.0-3.fm2_1.el8.noarch
rubygem-foreman_remote_execution_core-1.3.0-1.el8.noarch

But the Foreman :: Plugin documentation index only shows versions 1.7, 1.3, and 0.x as having manuals for Remote Execution.

To ensure I was not reporting false errors related to the earlier problem, I snapshotted the VM and rolled it back to before the first foreman-installer command in the previously attached write-up (the “Basics Completed” snapshot), reinstalled Foreman, changed the firewall as previously described, and installed the remote execution plugin, as follows:

	sudo foreman-installer \
	    --enable-foreman-plugin-remote-execution \
	    --enable-foreman-proxy-plugin-remote-execution-ssh

# BEGIN: Update 2020/06/12 16:23

	# Fix package issue
	sudo ln -s \
	    /opt/theforeman/tfm/root/usr/share/smart_proxy_dynflow_core/bundler.d/foreman_remote_execution_core.rb \
	    /usr/share/smart_proxy_dynflow_core/bundler.d/

	sudo systemctl restart smart_proxy_dynflow_core

# END:   Update 2020/06/12 16:23

I verified by systemctl status that the service had restarted.

I tried two tasks: Package Check and Command - both tasks can be seen in the Foreman tasks list:

I looked through the logs in /var/log/httpd and /var/log/foreman*, and even the dynflow_steps table in postgres where I had previously seen errors (Ruby stack traces) recorded → NOTHING of interest.
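
For reference, this is roughly the query I used against the dynflow_steps table (column names from memory, so treat it as a sketch):

sudo -u postgres psql foreman -c "SELECT id, state, error FROM dynflow_steps WHERE error IS NOT NULL LIMIT 10;"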

All logs are so clean it's off-putting, but this is at the default debugging levels.

I'm going to leave the system for a while to see whether any timeouts exist that clean up the tasks.

If no one has better suggestions overnight (Sydney time), I'll up the debug levels tomorrow and see what, if anything, can be gleaned.

I did a bit more debugging. The actual error does not show up in the logs (a bug); I had to run smart_proxy_dynflow_core with logging to the console to actually be able to see it.

E, [2020-06-15T03:20:30.722368 #7382] ERROR -- : OpenSSH keys only supported if ED25519 is available
net-ssh requires the following gems for ed25519 support:
 * rbnacl (>= 3.2, < 5.0)
 * rbnacl-libsodium, if your system doesn't have libsodium installed.
 * bcrypt_pbkdf (>= 1.0, < 2.0)
See https://github.com/net-ssh/net-ssh/issues/478 for more information
Gem::MissingSpecError : "Could not find 'rbnacl' (< 5.0, >= 3.2.0) among 177 total gem(s)
Checked in 'GEM_PATH=/usr/share/foreman-proxy/.gem/ruby:/usr/share/gems:/usr/local/share/gems', execute `gem env` for more information"
 (NotImplementedError)
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/ed25519_loader.rb:19:in `raiseUnlessLoaded'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/key_factory.rb:112:in `classify_key'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/key_factory.rb:52:in `load_data_private_key'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/key_factory.rb:43:in `load_private_key'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/key_manager.rb:142:in `sign'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/methods/publickey.rb:62:in `authenticate_with'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/methods/publickey.rb:20:in `block in authenticate'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/key_manager.rb:122:in `block in each_identity'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/key_manager.rb:119:in `each'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/key_manager.rb:119:in `each_identity'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/methods/publickey.rb:19:in `authenticate'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/session.rb:80:in `block in authenticate'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/session.rb:66:in `each'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/session.rb:66:in `authenticate'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh.rb:241:in `start'
/usr/share/gems/gems/foreman_remote_execution_core-1.3.0/lib/foreman_remote_execution_core/script_runner.rb:267:in `session'
/usr/share/gems/gems/foreman_remote_execution_core-1.3.0/lib/foreman_remote_execution_core/script_runner.rb:337:in `run_sync'
/usr/share/gems/gems/foreman_remote_execution_core-1.3.0/lib/foreman_remote_execution_core/script_runner.rb:426:in `ensure_remote_directory'
/usr/share/gems/gems/foreman_remote_execution_core-1.3.0/lib/foreman_remote_execution_core/script_runner.rb:403:in `upload_data'
/usr/share/gems/gems/foreman_remote_execution_core-1.3.0/lib/foreman_remote_execution_core/script_runner.rb:399:in `cp_script_to_remote'
/usr/share/gems/gems/foreman_remote_execution_core-1.3.0/lib/foreman_remote_execution_core/script_runner.rb:167:in `prepare_start'
/usr/share/gems/gems/foreman_remote_execution_core-1.3.0/lib/foreman_remote_execution_core/script_runner.rb:153:in `start'
/usr/share/gems/gems/foreman-tasks-core-0.3.4/lib/foreman_tasks_core/runner/dispatcher.rb:32:in `start_runner'

Looking into why it only happens on EL8.

Oh, I see.

The net-ssh library we use reads the first line of the private key to determine which type of key (RSA, ed25519, and so on) it is. The version of OpenSSH shipped with EL7 generates keys in the PEM format, where private keys contain the string BEGIN RSA PRIVATE KEY on the first line. The version shipped with EL8 generates keys in the newer OpenSSH native format, which contain BEGIN OPENSSH PRIVATE KEY on the first line and which are treated as ed25519 keys by the library. Hence the error fires, because we don't ship the dependencies needed for handling ed25519 keys.
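
To check which format a key is in, the first line is enough (using the key path from the command below):

sudo head -1 ~foreman-proxy/.ssh/id_rsa_foreman_proxy
# PEM (EL7-style) keys start with:     -----BEGIN RSA PRIVATE KEY-----
# OpenSSH (EL8-style) keys start with: -----BEGIN OPENSSH PRIVATE KEY-----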

If you don’t mind regenerating your keys, then it is easy. Just regenerate them in the right format with the following command and then redeploy the public key to the target hosts.

sudo -u foreman-proxy ssh-keygen -f ~foreman-proxy/.ssh/id_rsa_foreman_proxy -m PEM -N '' -t rsa -b 4096
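
If you then need to redeploy the public key to a target host, something like the following should work (hypothetical host name; the targets in this thread use root):

sudo -u foreman-proxy ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub root@target.example.com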

We will need to be explicit about the format we require in the installer.

Sorry for the delay in responding.

Recreated the SSH keys and updated the authorized keys for root.
Tested Remote Execution (again, just against the Foreman host itself) - No Problems :slight_smile:
sudo dnf upgrade moved the system from CentOS 8.1 → CentOS 8.2; rebooted.
Tested Remote Execution again (same caveat as above) - No Problems :slight_smile:

Many Thanks,
Peter

PS: On to the ansible …

Continuing the adventure into unknown territory…

The Good:

  • Remote Execution and Tasks Hammer CLI are functional on both CentOS 7.8 and 8.2

    sudo foreman-installer --enable-foreman-cli-remote-execution --enable-foreman-cli-tasks
    hammer remote-execution-feature list
    hammer remote-execution-feature info --id 1
    hammer task list

  • Ansible features install on CentOS 7.8 without issue

    sudo foreman-installer --enable-foreman-cli-ansible --enable-foreman-plugin-ansible --enable-foreman-proxy-plugin-ansible

The Bad: Ansible features install on CentOS 8 fails (same command line as above)

Execution of '/bin/dnf -d 0 -e 1 -y install rubygem-smart_proxy_ansible' returned 1: Error: 
Problem: cannot install the best candidate for the job
  - nothing provides ansible >= 2.2 needed by rubygem-smart_proxy_ansible-3.0.1-5.fm2_1.el8.noarch
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/execution.rb:297:in `execute'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/provider.rb:101:in `execute'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/provider/package/yum.rb:303:in `install'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/type/package.rb:98:in `block (3 levels) in <module:Puppet>'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/property.rb:490:in `set'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/property.rb:570:in `sync'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:241:in `sync'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:136:in `sync_if_needed'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:82:in `perform_changes'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:21:in `evaluate'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction.rb:267:in `apply'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction.rb:287:in `eval_resource'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction.rb:191:in `call'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction.rb:191:in `block (2 levels) in evaluate'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:521:in `block in thinmark'
/opt/puppetlabs/puppet/lib/ruby/2.5.0/benchmark.rb:308:in `realtime'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:520:in `thinmark'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction.rb:191:in `block in evaluate'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/graph/relationship_graph.rb:122:in `traverse'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction.rb:178:in `evaluate'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/resource/catalog.rb:240:in `block (2 levels) in apply'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:521:in `block in thinmark'
/opt/puppetlabs/puppet/lib/ruby/2.5.0/benchmark.rb:308:in `realtime'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:520:in `thinmark'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/resource/catalog.rb:239:in `block in apply'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/log.rb:161:in `with_destination'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction/report.rb:146:in `as_logging_destination'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/resource/catalog.rb:238:in `apply'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:185:in `block (2 levels) in apply_catalog'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:521:in `block in thinmark'
/opt/puppetlabs/puppet/lib/ruby/2.5.0/benchmark.rb:308:in `realtime'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:520:in `thinmark'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:184:in `block in apply_catalog'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:233:in `block in benchmark'
/opt/puppetlabs/puppet/lib/ruby/2.5.0/benchmark.rb:308:in `realtime'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:232:in `benchmark'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:183:in `apply_catalog'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:391:in `run_internal'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:227:in `block in run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/context.rb:62:in `override'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet.rb:314:in `override'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:210:in `run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application/apply.rb:343:in `apply_catalog'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application/apply.rb:260:in `block (2 levels) in main'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/context.rb:62:in `override'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet.rb:314:in `override'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application/apply.rb:243:in `block in main'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/context.rb:62:in `override'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet.rb:314:in `override'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application/apply.rb:207:in `main'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application/apply.rb:177:in `run_command'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application.rb:382:in `block in run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:710:in `exit_on_fail'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application.rb:382:in `run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/command_line.rb:143:in `run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/command_line.rb:77:in `execute'
/opt/puppetlabs/puppet/bin/puppet:5:in `<main>'
 Problem: cannot install the best candidate for the job
  - nothing provides ansible >= 2.2 needed by rubygem-smart_proxy_ansible-3.0.1-5.fm2_1.el8.noarch
Preparing installation Done                                              
  Something went wrong! Check the log for ERROR-level output
  The full log is at /var/log/foreman-installer/foreman.log

On CentOS 7, EPEL provides ansible as can be seen in:

sudo yum list installed | grep ansible
ansible.noarch                          2.9.9-1.el7             @epel           
ansible-runner.noarch                   1.4.6-1.el7             @ansible-runner 
python2-ansible-runner.noarch           1.4.6-1.el7             @ansible-runner 
python2-daemon.noarch                   2.1.2-7.el7at           @ansible-runner 
python2-pexpect.noarch                  4.6-1.el7at             @ansible-runner 
python2-ptyprocess.noarch               0.5.2-3.el7at           @ansible-runner 
tfm-rubygem-foreman_ansible.noarch      5.0.1-1.fm2_1.el7       @foreman-plugins
tfm-rubygem-foreman_ansible_core.noarch 3.0.3-1.fm2_1.el7       @foreman-plugins
tfm-rubygem-hammer_cli_foreman_ansible.noarch
tfm-rubygem-smart_proxy_ansible.noarch  3.0.1-5.fm2_1.el7       @foreman-plugins

But EPEL cannot be installed on CentOS 8.x without causing problems (as noted in attachment CmdLineHostory…). I'll include the observed problem here, as I didn't include the details in the attachment.

The commands are:

sudo yum -y upgrade
sudo yum -y install epel-release
sudo yum -y upgrade
sudo yum -y erase epel-release
sudo yum -y upgrade

The output is:

[admin@virtualcentos8 ~] CentOS 8 $ sudo yum -y upgrade
Last metadata expiration check: 0:28:23 ago on Tue 16 Jun 2020 20:46:03 AEST.
Dependencies resolved.
Nothing to do.
Complete!
[admin@virtualcentos8 ~] CentOS 8 $ sudo yum -y install epel-release
Last metadata expiration check: 0:28:31 ago on Tue 16 Jun 2020 20:46:03 AEST.
Dependencies resolved.
================================================================================
 Package               Architecture    Version            Repository       Size
================================================================================
Installing:
 epel-release          noarch          8-8.el8            extras           23 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 23 k
Installed size: 32 k
Downloading Packages:
epel-release-8-8.el8.noarch.rpm                 299 kB/s |  23 kB     00:00    
--------------------------------------------------------------------------------
Total                                            22 kB/s |  23 kB     00:01     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1 
  Installing       : epel-release-8-8.el8.noarch                            1/1 
  Running scriptlet: epel-release-8-8.el8.noarch                            1/1 
  Verifying        : epel-release-8-8.el8.noarch                            1/1 

Installed:
  epel-release-8-8.el8.noarch                                                   

Complete!
[admin@virtualcentos8 ~] CentOS 8 $ sudo yum -y upgrade
Last metadata expiration check: 0:04:47 ago on Tue 16 Jun 2020 21:09:56 AEST.
Error: 
 Problem: package foreman-2.1.0-0.20.rc2.el8.noarch requires rubygem(net-ssh) = 4.2.0, but none of the providers can be installed
  - cannot install both rubygem-net-ssh-5.1.0-2.el8.noarch and rubygem-net-ssh-4.2.0-2.el8.noarch
  - cannot install both rubygem-net-ssh-4.2.0-2.el8.noarch and rubygem-net-ssh-5.1.0-2.el8.noarch
  - cannot install the best update candidate for package rubygem-net-ssh-4.2.0-2.el8.noarch
  - cannot install the best update candidate for package foreman-2.1.0-0.20.rc2.el8.noarch
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
[admin@virtualcentos8 ~] CentOS 8 $ sudo yum -y erase epel-release
Dependencies resolved.
================================================================================
 Package               Architecture    Version           Repository        Size
================================================================================
Removing:
 epel-release          noarch          8-8.el8           @extras           32 k

Transaction Summary
================================================================================
Remove  1 Package

Freed space: 32 k
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1 
  Running scriptlet: epel-release-8-8.el8.noarch                            1/1 
  Erasing          : epel-release-8-8.el8.noarch                            1/1 
  Running scriptlet: epel-release-8-8.el8.noarch                            1/1 
  Verifying        : epel-release-8-8.el8.noarch                            1/1 

Removed:
  epel-release-8-8.el8.noarch                                                   

Complete!
[admin@virtualcentos8 ~] CentOS 8 $ sudo yum -y upgrade
Last metadata expiration check: 0:28:53 ago on Tue 16 Jun 2020 20:46:03 AEST.
Dependencies resolved.
Nothing to do.
Complete!
[admin@virtualcentos8 ~] CentOS 8 $ 

This will be a much harder problem to solve as it affects even the most basic Foreman install on CentOS 8, if I recall correctly!

Following up from the other thread, EPEL shouldn't be required on EL8. @aruzicka, are we missing some requirement of rubygem-smart_proxy_ansible or a repo in the install instructions? Where do we get ansible from on EL8?

When checking repoclosure in packaging we’re using the repository at http://mirrorlist.centos.org/?release=8&arch=x86_64&repo=configmanagement-ansible-29.

When you install centos-release-ansible-29 on EL8 it will set things up for you to be able to pull ansible from there. This should probably be mentioned in the docs and maybe the installer should be able to deal with it?


This definitely needs to be in the documentation for CentOS 8. I just did a quick Google search (to confirm my suspicion), and the first half-dozen how-to links on Ansible on RHEL8/CentOS8 said to enable the EPEL repo and then either dnf install ansible, or install python3-pip and pip3 install ansible.

So documentation and/or installer changes will be required to ensure a good first experience - IMHO.

I can report that with the changes for remote execution (discussed above) and the CentOS configuration management ansible dnf repository (discussed below), I was able to install the Foreman Ansible features on CentOS 8.2 / Foreman 2.1rc2, as follows:

sudo vi /etc/yum.repos.d/CentOS-ansible.repo
cat     /etc/yum.repos.d/CentOS-ansible.repo
[ansible]
name=CentOS-$releasever - Ansible 2.9
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=configmanagement-ansible-29&infra=$infra
gpgcheck=1
enabled=1
gpgkey=https://www.centos.org/keys/RPM-GPG-KEY-CentOS-SIG-ConfigManagement

sudo yum list ansible

# Enable Foreman's Ansible Features

	sudo foreman-installer --enable-foreman-cli-ansible --enable-foreman-plugin-ansible --enable-foreman-proxy-plugin-ansible

Both the WebUI and CLI ansible features are present; I'll look into testing these once I have finished reading the relevant documentation :wink:

Sorry, I misunderstood part of Adam's earlier response (I'll blame the headache I had :frowning: ). The procedure should be as Adam suggested:

sudo dnf install centos-release-ansible-29
sudo dnf list ansible
sudo foreman-installer --enable-foreman-cli-ansible --enable-foreman-plugin-ansible --enable-foreman-proxy-plugin-ansible

There is no need to manually configure the dnf repository.

As the saying goes: if it is not tested, it doesn't work.
And as another saying goes: testing can only show the presence of bugs, never their absence.
:wink:

Summary: Ansible (“Run all ansible roles”) fails on CentOS 8

After installing the ansible features as described above on both the CentOS 7.8 and CentOS 8.2 VMs previously mentioned in this posting, we have the following installed:

CentOS 7:

sudo yum list installed | grep ansible

ansible.noarch                          2.9.9-1.el7             @epel
ansible-runner.noarch                   1.4.6-1.el7             @ansible-runner
python2-ansible-runner.noarch           1.4.6-1.el7             @ansible-runner
python2-daemon.noarch                   2.1.2-7.el7at           @ansible-runner
python2-pexpect.noarch                  4.6-1.el7at             @ansible-runner
python2-ptyprocess.noarch               0.5.2-3.el7at           @ansible-runner
tfm-rubygem-foreman_ansible.noarch      5.0.1-1.fm2_1.el7       @foreman-plugins
tfm-rubygem-foreman_ansible_core.noarch 3.0.3-1.fm2_1.el7       @foreman-plugins
tfm-rubygem-hammer_cli_foreman_ansible.noarch
tfm-rubygem-smart_proxy_ansible.noarch  3.0.1-5.fm2_1.el7       @foreman-plugins

CentOS 8:

sudo yum list installed | grep ansible

ansible.noarch                                     2.9.9-2.el8                                      @ansible
ansible-runner.noarch                              1.4.6-1.el8                                      @ansible-runner
centos-release-ansible-29.noarch                   1-2.el8                                          @extras
python3-ansible-runner.noarch                      1.4.6-1.el8                                      @ansible-runner
python3-daemon.noarch                              2.1.2-9.el8ar                                    @ansible-runner
python3-lockfile.noarch                            1:0.11.0-8.el8ar                                 @ansible-runner
python3-pexpect.noarch                             4.6-2.el8ar                                      @ansible-runner
rubygem-foreman_ansible.noarch                     5.1.1-1.fm2_1.el8                                @foreman-plugins
rubygem-foreman_ansible_core.noarch                3.0.3-1.fm2_1.el8                                @foreman-plugins
rubygem-hammer_cli_foreman_ansible.noarch          0.3.2-1.fm2_1.el8                                @foreman-plugins
rubygem-smart_proxy_ansible.noarch                 3.0.1-5.fm2_1.el8                                @foreman-plugins
sshpass.x86_64                                     1.06-8.el8                                       @ansible

Unless stated otherwise, commands are being performed on both VMs.

sudo find /etc -name ansible.cfg
/etc/foreman-proxy/ansible.cfg
/etc/ansible/ansible.cfg

sudo egrep -v '^#' /etc/foreman-proxy/ansible.cfg
sudo egrep -v '^#' /etc/ansible/ansible.cfg  | uniq

As a section heading can only appear once in ansible.cfg, and there is no (actual) default configuration for ansible, let's just replace the package-manager-provided configuration with the one foreman-installer generated.

sudo cp /etc/ansible/ansible.cfg /etc/ansible/ansible.cfg.orig
sudo cp /etc/foreman-proxy/ansible.cfg /etc/ansible/ansible.cfg
ls -l   /etc/foreman-proxy/ansible.cfg /etc/ansible/ansible.cfg

# Verify command line ansible is functional
ansible -m ping localhost
localhost | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

ansible -a 'cat /etc/redhat-release' localhost
localhost | CHANGED | rc=0 >>

CentOS 7: CentOS Linux release 7.8.2003 (Core)
CentOS 8: CentOS Linux release 8.2.2004 (Core)

I then selected the WebUI → Hosts → All Hosts → Select Action: Run all ansible roles
(which is running against the Foreman host itself)
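
(As an aside, hammer_cli_foreman_ansible should offer a CLI equivalent, something along the lines of hammer host ansible-roles play --name <host fqdn>, though I have not verified the exact invocation.)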

This completed successfully after a few seconds on CentOS 7 but failed on CentOS 8.

Given I had not updated either system for a few days, I did a sudo yum upgrade on both systems. Only CentOS 7 had updates to apply; they are listed at the end of this posting. I followed this with a sudo foreman-maintain service restart on BOTH systems.

The test was repeated with the same results (CentOS 7: success; CentOS 8: failed). I further tested just remote execution, and it was functional, as can be seen in the following screen snap (from the CentOS 8 WebUI):

I reviewed the backtrace / stacktrace from each failure; they are identical (no differences at all), and one is attached here:
CentOS8-Run1.stacktrace.log (9.4 KB)

The following log files don't add any value that I can see:

/var/log/foreman-proxy/smart_proxy_dynflow_core.log
/var/log/foreman-proxy/proxy.log
/var/log/foreman/production.log

Regards,
Peter

The CentOS 7 packages updated

 Package                                              Arch                   Version                           Repository                       Size
=====================================================================================================================================================
Updating:
 tfm-rubygem-foreman-tasks                            noarch                 2.0.0-1.fm2_1.el7                 foreman-plugins                 2.2 M
 tfm-rubygem-foreman_ansible                          noarch                 5.1.1-1.fm2_1.el7                 foreman-plugins                 2.0 M
 tfm-rubygem-foreman_remote_execution                 noarch                 3.3.1-1.fm2_1.el7                 foreman-plugins                 1.6 M

Transaction Summary
=====================================================================================================================================================
Upgrade  3 Packages

The stacktrace you provided doesn't contain anything interesting, as it will always contain exactly the same thing if at least one host in the job fails. Could you provide the output for the host that failed?