Problem with Foreman 2.1rc2 with Remote Execution on CentOS 8

Problem: Schedule Remote Job fails on CentOS 8 / Foreman 2.1rc2

Expected outcome: The job succeeds, as it does with the same system build procedure on CentOS 7 / Foreman 2.1rc2

Foreman and Proxy versions: 2.1rc2 repositories for CentOS 7 and CentOS 8

Foreman and Proxy plugin versions: 2.1rc2 repositories for CentOS 7 and CentOS 8

Distribution and version: CentOS 8.1 x86_64 (updated with all available patches)

NOTE: I have two files I’d like to attach to this posting (build notes and journalctl log), but new users are not permitted to upload attachments (a reasonable restriction). So I hope this post has enough details.

Greetings,

I'm new to Foreman, but not to programming or to system administration related to provisioning (though mostly automation); I have been doing that for over two decades :slight_smile:

I'm starting on the bleeding edge with CentOS 8.1 and Foreman 2.1rc{1,2}, and have managed to get a Foreman VM up and going for the provisioning aspects. I moved on to Remote Execution, as it is a prerequisite for the Ansible functionality I would like to explore. (I also have a Katello integration question as a postscript.)

I have created detailed notes on the build procedure used to reproduce the problem in a clean (and minimal) environment, but would like assistance in debugging the problem (I'm not a Ruby or web developer) before raising it as a bug.

On submitting a "Schedule Remote Job" request on CentOS 8 (but not on CentOS 7), the job fails. The key points as I see them are:

WebUI (CentOS 8) - clicking on the host name after the job failure:

   1: Initialization error: RestClient::NotFound - 404 Not Found
   2: Failed to initialize: TypeError - Value (ActiveSupport::TimeWithZone) '2020-06-05 15:10:46 +1000' is not any of: (Time | NilClass).

From journalctl (see details further down), instances of:

ERROR -- /client-dispatcher: Could not find an executor for Dynflow::Dispatcher::Envelope

/var/log/foreman-proxy/proxy.log (CentOS 8):

dateTtime hexcode Started GET /version
dateTtime hexcode Finished GET /version with 200 (0.35 ms)
dateTtime hexcode Started POST /dynflow/tasks/launch
dateTtime hexcode Finished POST /dynflow/tasks/launch with 404 (16.86 ms)    ***
dateTtime hexcode Started POST /dynflow/tasks/
dateTtime hexcode Finished POST /dynflow/tasks/ with 500 (34.28 ms)          ***
dateTtime hexcode Started GET /dynflow/tasks/status
dateTtime hexcode Finished GET /dynflow/tasks/status with 404 (21.94 ms)     ***
dateTtime hexcode Started POST /dynflow/tasks/launch
dateTtime hexcode Finished POST /dynflow/tasks/launch with 404 (13.92 ms)    ***
dateTtime hexcode Started POST /dynflow/tasks/
dateTtime hexcode Finished POST /dynflow/tasks/ with 500 (33.12 ms)          ***

/var/log/foreman-proxy/smart_proxy_dynflow_core.log (CentOS 8):

IP - - Date/Time "GET /tasks/count?state=running HTTP/1.1" 200 29
IP - - Date/Time "POST /tasks/launch? HTTP/1.1" 404 46              ***
IP - - Date/Time "POST /tasks/? HTTP/1.1" 500 153486                ***
IP - - Date/Time "GET /tasks/status? HTTP/1.1" 404 555              ***
IP - - Date/Time "POST /tasks/launch? HTTP/1.1" 404 46              ***
IP - - Date/Time "POST /tasks/? HTTP/1.1" 500 153525                ***
IP - - Date/Time "GET /tasks/count?state=running HTTP/1.1" 200 29
IP - - Date/Time "POST /tasks/launch? HTTP/1.1" 404 46              ***
IP - - Date/Time "POST /tasks/? HTTP/1.1" 500 153524                ***
IP - - Date/Time "GET /tasks/status? HTTP/1.1" 404 555              ***
IP - - Date/Time "POST /tasks/launch? HTTP/1.1" 404 46              ***
IP - - Date/Time "POST /tasks/? HTTP/1.1" 500 153524                ***

From journalctl | egrep -i 'dynflow|proxy|foreman' (CentOS 8):

timestamp hostname smart-proxy[9550]: IPv4Address - - [05/Jun/2020:15:02:31 AEST] "POST /dynflow/tasks/launch HTTP/1.1" 404 46
timestamp hostname smart-proxy[9550]: - -> /dynflow/tasks/launch
timestamp hostname smart-proxy[9550]: IPv4Address - - [05/Jun/2020:15:02:31 AEST] "POST /dynflow/tasks/ HTTP/1.1" 500 153525
timestamp hostname smart-proxy[9550]: - -> /dynflow/tasks/
timestamp hostname dynflow-sidekiq@worker[9431]: E, [2020-06-05T15:02:31.678240 #9431] ERROR -- /client-dispatcher: Could not find an executor for Dynflow::Dispatcher::Envelope[request_id: 118e2f8d-9b56-4063-844d-9e9a4d96fb37-4, sender_id: 118e2f8d-9b56-4063-844d-9e9a4d96fb37, receiver_id: Dynflow::Dispatcher::UnknownWorld, message: Dynflow::Dispatcher::Event[execution_plan_id: 2f734f26-5a69-4be1-bdcf-d82d2e288559, step_id: 3, event: #<Actions::ProxyAction::ProxyActionStopped:0x0000559e3a57cf00>, time: ]] (Dynflow::Error)  ***
timestamp hostname dynflow-sidekiq@worker[9431]: client_dispatcher.rb:147:in `dispatch_request'
timestamp hostname dynflow-sidekiq@worker[9431]: client_dispatcher.rb:118:in `block (2 levels) in publish_request'
timestamp hostname dynflow-sidekiq@worker[9431]: client_dispatcher.rb:206:in `track_request'
timestamp hostname dynflow-sidekiq@worker[9431]: client_dispatcher.rb:117:in `block in publish_request'
timestamp hostname dynflow-sidekiq@worker[9431]: client_dispatcher.rb:248:in `with_ping_request_caching'
timestamp hostname dynflow-sidekiq@worker[9431]: client_dispatcher.rb:116:in `publish_request'
timestamp hostname dynflow-sidekiq@worker[9431]: [ concurrent-ruby ]

Note: client_dispatcher.rb is actually /usr/share/gems/gems/dynflow-1.4.4/lib/dynflow/dispatcher/client_dispatcher.rb

I then enabled ':log_level: DEBUG' in /etc/smart_proxy_dynflow_core/settings.yml and
/etc/foreman-proxy/settings.yml and included the output in the attached build procedure, but the above are the highlights.
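
For anyone reproducing this, the change in both files amounts to the following line, followed by restarting the services (the exact restart targets are my assumption):

    # In /etc/smart_proxy_dynflow_core/settings.yml and /etc/foreman-proxy/settings.yml
    :log_level: DEBUG

    sudo systemctl restart foreman-proxy smart_proxy_dynflow_core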

Key Points: Foreman 2.1rc2 / CentOS 8 / Remote Execution → Fails

I hope that the details provided in the attached text file will give anyone able to help a clear view of the steps taken to date and the results seen on CentOS 7 (working) versus CentOS 8 (not working).

Many Thanks in Advance,
Peter
Sydney, Australia

PS: One quick question, if you don't mind: It is my understanding from my reading to date that Katello and Pulp (which worked out of the box on CentOS 7) are intended to run on a separate system from the main Foreman instance. Is this correct? And if so, does anyone have a pointer to a write-up on the design of the larger system and the procedure to integrate Foreman and Katello/Pulp?

Build Procedure & Findings - Detailed - You have been warned :slight_smile:
CmdLineHistory-CentOS7and8-Foreman2.1rc2-RemoteExecution-Redacted.log (26.3 KB)
The journalctl logs related to foreman|dynflow|proxy
journalctl.CentOS8-redacted.log (139.6 KB)

It looks like you are missing some plugins for smart proxy dynflow core. Could you run "rpm -qa | grep _core" and post the output?

No, both systems have the _core modules:

CentOS 7: rpm -qa | grep _core
tfm-rubygem-smart_proxy_dynflow_core-0.2.5-1.fm2_1.el7.noarch
tfm-rubygem-foreman_remote_execution_core-1.3.0-1.el7.noarch

CentOS 8: rpm -qa | grep _core
rubygem-foreman_remote_execution_core-1.3.0-1.el8.noarch
rubygem-smart_proxy_dynflow_core-0.2.5-1.fm2_1.el8.noarch

A few days ago, on the original CentOS 8 system, I was wondering if the services were not getting created or started, but I just checked on both CentOS 7 & 8, and they are present:

CentOS 7: rpm -ql tfm-rubygem-smart_proxy_dynflow_core | egrep '^/usr/'
/usr/bin/smart_proxy_dynflow_core
/usr/lib/systemd/system/smart_proxy_dynflow_core.service

CentOS 8: rpm -ql rubygem-smart_proxy_dynflow_core | grep '^/usr' | grep -v share/gems
/usr/bin/smart_proxy_dynflow_core
/usr/lib/systemd/system/smart_proxy_dynflow_core.service
/usr/share/smart_proxy_dynflow_core/bundler.d

Both systems show that the service is running: sudo systemctl status smart_proxy_dynflow_core.service. Both report an issue with the PID file; here is the output on CentOS 8:

● smart_proxy_dynflow_core.service - Foreman smart proxy dynflow core service
   Loaded: loaded (/usr/lib/systemd/system/smart_proxy_dynflow_core.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/smart_proxy_dynflow_core.service.d
           └─90-limits.conf
   Active: active (running) since Fri 2020-06-05 16:26:07 AEST; 5h 6min ago
     Docs: https://github.com/theforeman/smart_proxy_dynflow
  Process: 11672 ExecStart=/usr/bin/smart_proxy_dynflow_core -d -p /var/run/foreman-proxy/smart_proxy_dynflow_core.pid (code=exited, status=0/SUCCESS)
 Main PID: 11678 (smart_proxy_dyn)
    Tasks: 7 (limit: 23599)
   Memory: 62.3M
   CGroup: /system.slice/smart_proxy_dynflow_core.service
           └─11678 /usr/bin/ruby /usr/bin/smart_proxy_dynflow_core -d -p /var/run/foreman-proxy/smart_proxy_dynflow_core.pid

Jun 05 16:26:06 virtualcentos8.dc.mydomain.net.au systemd[1]: Starting Foreman smart proxy dynflow core service...
Jun 05 16:26:07 virtualcentos8.dc.mydomain.net.au systemd[1]: smart_proxy_dynflow_core.service: Can't open PID file /var/run/foreman-proxy/smart_proxy_dynflow_core.pid (yet?) after start: No such file or directory
Jun 05 16:26:07 virtualcentos8.dc.mydomain.net.au systemd[1]: Started Foreman smart proxy dynflow core service.

Note: The PID file is present and its contents contain the correct PID on each system.
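
For the record, the check was along these lines, comparing the PID file contents with what systemd reports:

    # Contents of the PID file written by the daemon
    cat /var/run/foreman-proxy/smart_proxy_dynflow_core.pid
    # Main PID as systemd sees it - the two should match
    systemctl show -p MainPID smart_proxy_dynflow_core.service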

Any other thoughts / pointers are most welcome,
Peter

Are there any errors in SELinux?

@aruzicka perhaps we need to pick this up again:

Great minds think alike or …

I was just investigating SELinux and installed setroubleshoot-server; both systems came back with the following:

sudo sealert -a /var/log/audit/audit.log
100% done
found 0 alerts in /var/log/audit/audit.log

CentOS 8: sestatus – Same as the CentOS 7 system with one additional line:

SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)    [This line is NOT present on CentOS 7 system]
Max kernel policy version:      31

I'm not really familiar with SELinux, and have not changed anything from the default install w.r.t. SELinux.

Are there any other checks (SELinux or otherwise) I should be looking into?
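
For instance, would something along these lines be worth running (searching the audit log directly, then briefly toggling permissive mode to rule SELinux in or out)?

    # Look for recent AVC denials directly in the audit log
    sudo ausearch -m avc -ts recent
    # Temporarily switch SELinux to permissive mode, retry the job, then switch back
    sudo setenforce 0
    # ... re-run the Schedule Remote Job test here ...
    sudo setenforce 1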

https://projects.theforeman.org/issues/27184

With respect to /var/run sub-directories, I hit this same issue (a race condition where one service expects another to have created the /var/run sub-directory) when backporting the Fedora ISC DHCP packages to RHEL 7 at a previous job. I believe I added a snippet for the dhcpd service in the /etc/systemd/ tree, using ExecStartPre to create the directory we needed with the same permissions the other service would have used.

In this case we could simply put such commands in the main service definition file. It may be worth creating an empty file with touch(1) once the directory is present, to ensure the PID file exists without having to change the Ruby code. I have not looked at the Foreman code to determine what the correct approach would be.

I did a little more reading, and the following (currently untested) should assist, if needed, in silencing this warning:

smart_proxy_dynflow_core.service: Can't open PID file /var/run/foreman-proxy/smart_proxy_dynflow_core.pid (yet?) after start: No such file or directory

Reference: man systemd.service

ExecStartPre=+/usr/bin/install -d -o foreman-proxy -g foreman-proxy -m 0755 /var/run/foreman-proxy
ExecStartPre=/usr/bin/touch /var/run/foreman-proxy/smart_proxy_dynflow_core.pid

The '+', described in Table 1 (Special executable prefixes), allows the command to ignore the User= line and execute with full privileges - this is required as /var/run is owned by root:root. With the newly created directory owned by foreman-proxy, the touch(1) does not need to be executed with full privileges.
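
As a concrete (still untested) sketch, assuming a drop-in alongside the existing 90-limits.conf shown in the systemctl status output above, the whole thing could look like this (the file name is hypothetical):

    # /etc/systemd/system/smart_proxy_dynflow_core.service.d/91-pidfile.conf (hypothetical name)
    [Service]
    # '+' runs the command with full privileges, since /var/run is owned by root:root
    ExecStartPre=+/usr/bin/install -d -o foreman-proxy -g foreman-proxy -m 0755 /var/run/foreman-proxy
    # The directory is now owned by foreman-proxy, so no '+' is needed here
    ExecStartPre=/usr/bin/touch /var/run/foreman-proxy/smart_proxy_dynflow_core.pid

followed by a sudo systemctl daemon-reload and a service restart.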

Isn’t tmpfiles.d a much cleaner solution to that?

After some more reading (thanks for the pointer), I think so, and in fact the directories are already created using this mechanism; see the snippets at the end of this message.

Without further research, this would imply to me that the message would be better rephrased as "Note: No pre-existing PID file detected at startup (/var/run/foreman-proxy/smart_proxy_dynflow_core.pid)", or as something that leads the reader to conclude that this is nothing to worry about (assuming not worrying is the correct response to the message). Alternatively, on finding a pre-existing PID file, generate a message about that case instead of the missing-PID-file case.

Another consideration could be the need for additional entries on the After= (or Requires=) line in the systemd service definition files, possibly adding systemd-tmpfiles-setup.service (see the sketch below). I also noted that none of the Foreman-related services (save foreman.service requiring foreman.socket) depend on each other - that's quite cool, if it correctly represents reality.
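
For the ordering idea, a minimal (untested) drop-in would be along these lines, though ordinary services are normally already ordered after sysinit.target, which in turn waits for tmpfiles setup:

    # /etc/systemd/system/smart_proxy_dynflow_core.service.d/91-ordering.conf (hypothetical name)
    [Unit]
    After=systemd-tmpfiles-setup.service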


Existing Foreman 2.1rc2 tmpfiles.d configuration on both CentOS 7 & 8:

more /usr/lib/tmpfiles.d/foreman*.conf
::::::::::::::
/usr/lib/tmpfiles.d/foreman.conf
::::::::::::::
d /run/foreman 0755 foreman foreman -
::::::::::::::
/usr/lib/tmpfiles.d/foreman-proxy.conf
::::::::::::::
d /run/foreman-proxy 0755 foreman-proxy foreman-proxy -
d /var/cache/foreman-proxy 0700 foreman-proxy foreman-proxy -
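
As an aside, if these directories ever needed to be (re)created at runtime without a reboot, the standard invocation would be something like:

    # Apply just the foreman-proxy entries; the name is looked up in the tmpfiles.d directories
    sudo systemd-tmpfiles --create foreman-proxy.conf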

Question: Should I raise an issue about this problem, as no one has further commented on the core issue?
(Save Adam and ekohl, both 6 days ago.)

Also has anyone else seen this issue?

Or tried this combination of CentOS 8 & Remote Execution on 2.1rc2?

I had to deploy an EL8 based machine to get to the bottom of this. Turns out we missed something in packaging, and on EL8 some files got deployed into the old SCL prefix and therefore weren't picked up by the running processes.

As a workaround, deploy a symlink putting the file in the right place with

ln -s /opt/theforeman/tfm/root/usr/share/smart_proxy_dynflow_core/bundler.d/foreman_remote_execution_core.rb /usr/share/smart_proxy_dynflow_core/bundler.d/

and restart smart_proxy_dynflow_core
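
To confirm the workaround took effect after the restart, something like:

    # The foreman_remote_execution_core.rb symlink should now be listed here
    ls -l /usr/share/smart_proxy_dynflow_core/bundler.d/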


Adam, thanks for the reply - this has helped but not solved the problem. The job now sits in the "Pending" rather than "Failed" state on CentOS 8; see below. Not sure why, so I rebooted the VMs, and the problem persists for both the existing and new attempts on the CentOS 8 system. journalctl --unit smart_proxy_dynflow_core adds nothing to the picture. The Foreman log can see a problem, but the dynflow log with debugging on is a bit of a mess, so I'm going to turn debug off and get a simpler starting point to investigate from.

Not having a lot of success in finding the source of the problem… So I'll report a smaller issue:

On the page https://FQDN/job_invocations?.… you get a documentation link; this points to https://theforeman.org/plugins/foreman_remote_execution/3.2/index.html#3.2ExecutingaJob

This seems reasonable, as the following would suggest: rpm -qa | grep remote

rubygem-foreman_remote_execution-3.2.1-1.fm2_1.el8.noarch
rubygem-smart_proxy_remote_execution_ssh-0.3.0-3.fm2_1.el8.noarch
rubygem-foreman_remote_execution_core-1.3.0-1.el8.noarch

But the Foreman :: Plugin documentation index only shows versions 1.7, 1.3, and 0.x as having manuals for Remote Execution.

To ensure I was not reporting false errors related to the earlier problem, I took a snapshot of the VM and rolled it back to before the first foreman-installer command in the previously attached write-up (the "Basics Completed" snapshot), reinstalled Foreman, changed the firewall as previously described, and installed the remote execution plugin, as follows:

	sudo foreman-installer \
	    --enable-foreman-plugin-remote-execution \
	    --enable-foreman-proxy-plugin-remote-execution-ssh

# BEGIN: Update 2020/06/12 16:23

	# Fix package issue
	sudo ln -s \
	    /opt/theforeman/tfm/root/usr/share/smart_proxy_dynflow_core/bundler.d/foreman_remote_execution_core.rb \
	    /usr/share/smart_proxy_dynflow_core/bundler.d/

	sudo systemctl restart smart_proxy_dynflow_core

# END:   Update 2020/06/12 16:23

I verified by systemctl status that the service had restarted.

I tried two tasks, Package Check and Command - both tasks can be seen in the Foreman tasks list.

I looked through the logs in /var/log/httpd and /var/log/foreman*, and even the dynflow_steps table in Postgres where I had previously seen errors (Ruby stack traces) recorded → NOTHING of interest.

All logs are so clean it's off-putting, but this is at the default debugging levels.

I’m going to leave the system for a while to see if timeouts exist to clean up the tasks.

If no one has better suggestions over night (Sydney Time), I’ll up the debug levels tomorrow and see what if anything can be gleaned.

I did a bit more debugging. The actual error does not show up in the logs (a bug); I had to run smart_proxy_dynflow_core with logging to the console to actually be able to see it.

E, [2020-06-15T03:20:30.722368 #7382] ERROR -- : OpenSSH keys only supported if ED25519 is available
net-ssh requires the following gems for ed25519 support:
 * rbnacl (>= 3.2, < 5.0)
 * rbnacl-libsodium, if your system doesn't have libsodium installed.
 * bcrypt_pbkdf (>= 1.0, < 2.0)
See https://github.com/net-ssh/net-ssh/issues/478 for more information
Gem::MissingSpecError : "Could not find 'rbnacl' (< 5.0, >= 3.2.0) among 177 total gem(s)
Checked in 'GEM_PATH=/usr/share/foreman-proxy/.gem/ruby:/usr/share/gems:/usr/local/share/gems', execute `gem env` for more information"
 (NotImplementedError)
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/ed25519_loader.rb:19:in `raiseUnlessLoaded'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/key_factory.rb:112:in `classify_key'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/key_factory.rb:52:in `load_data_private_key'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/key_factory.rb:43:in `load_private_key'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/key_manager.rb:142:in `sign'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/methods/publickey.rb:62:in `authenticate_with'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/methods/publickey.rb:20:in `block in authenticate'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/key_manager.rb:122:in `block in each_identity'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/key_manager.rb:119:in `each'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/key_manager.rb:119:in `each_identity'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/methods/publickey.rb:19:in `authenticate'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/session.rb:80:in `block in authenticate'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/session.rb:66:in `each'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh/authentication/session.rb:66:in `authenticate'
/usr/share/gems/gems/net-ssh-4.2.0/lib/net/ssh.rb:241:in `start'
/usr/share/gems/gems/foreman_remote_execution_core-1.3.0/lib/foreman_remote_execution_core/script_runner.rb:267:in `session'
/usr/share/gems/gems/foreman_remote_execution_core-1.3.0/lib/foreman_remote_execution_core/script_runner.rb:337:in `run_sync'
/usr/share/gems/gems/foreman_remote_execution_core-1.3.0/lib/foreman_remote_execution_core/script_runner.rb:426:in `ensure_remote_directory'
/usr/share/gems/gems/foreman_remote_execution_core-1.3.0/lib/foreman_remote_execution_core/script_runner.rb:403:in `upload_data'
/usr/share/gems/gems/foreman_remote_execution_core-1.3.0/lib/foreman_remote_execution_core/script_runner.rb:399:in `cp_script_to_remote'
/usr/share/gems/gems/foreman_remote_execution_core-1.3.0/lib/foreman_remote_execution_core/script_runner.rb:167:in `prepare_start'
/usr/share/gems/gems/foreman_remote_execution_core-1.3.0/lib/foreman_remote_execution_core/script_runner.rb:153:in `start'
/usr/share/gems/gems/foreman-tasks-core-0.3.4/lib/foreman_tasks_core/runner/dispatcher.rb:32:in `start_runner'

Looking into why it only happens on EL8.

Oh, I see.

The net-ssh library we use looks at the first line of the private key to determine which type of key (RSA, ed25519, and so on) it is. The version of OpenSSH shipped with EL7 generates keys in the PEM format, where private keys contain the string BEGIN RSA PRIVATE KEY on the first line of the key. The version shipped with EL8 generates keys in the newer OpenSSH format, which contains BEGIN OPENSSH PRIVATE KEY and which the library considers to be ed25519 keys. Hence the error is fired, because we don't ship the dependencies it needs for handling ed25519 keys.
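
A quick way to see which format a given key is in is to look at its first line (the path below is the proxy key that the command further down regenerates):

    # PEM keys start with "-----BEGIN RSA PRIVATE KEY-----";
    # the newer OpenSSH format starts with "-----BEGIN OPENSSH PRIVATE KEY-----"
    sudo head -n 1 ~foreman-proxy/.ssh/id_rsa_foreman_proxy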

If you don’t mind regenerating your keys, then it is easy. Just regenerate them in the right format with the following command and then redeploy the public key to the target hosts.

sudo -u foreman-proxy ssh-keygen -f ~foreman-proxy/.ssh/id_rsa_foreman_proxy -m PEM -N '' -t rsa -b 4096
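
Redeploying the regenerated public key could then be done with something like the following (the target hostname is a placeholder, and this assumes root is the remote execution user on the targets):

    # Push the proxy's new public key to each target host
    sudo ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub root@target.example.com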

We will need to be explicit about the format we require in the installer.

Sorry for the delay in responding.

Recreated the SSH keys and updated the authorized keys for root.
Tested Remote Execution (again, just to the Foreman host itself) - No Problems :slight_smile:
sudo dnf upgrade moved the system from CentOS 8.1 → CentOS 8.2; rebooted.
Tested Remote Execution (ditto the above aside) - No Problems :slight_smile:

Many Thanks,
Peter

PS: On to the ansible …

Continuing the adventure into unknown territory…

The Good:

  • Remote Execution and Tasks Hammer CLI are functional on both CentOS 7.8 and 8.2

    sudo foreman-installer --enable-foreman-cli-remote-execution --enable-foreman-cli-tasks
    hammer remote-execution-feature list
    hammer remote-execution-feature info --id 1
    hammer task list

  • Ansible features install on CentOS 7.8 without issue

    sudo foreman-installer --enable-foreman-cli-ansible --enable-foreman-plugin-ansible --enable-foreman-proxy-plugin-ansible

The Bad: Ansible features install on CentOS 8 fails (same command line as above):

Execution of '/bin/dnf -d 0 -e 1 -y install rubygem-smart_proxy_ansible' returned 1: Error: 
Problem: cannot install the best candidate for the job
.- nothing provides ansible >= 2.2 needed by rubygem-smart_proxy_ansible-3.0.1-5.fm2_1.el8.noarch
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/execution.rb:297:in `execute'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/provider.rb:101:in `execute'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/provider/package/yum.rb:303:in `install'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/type/package.rb:98:in `block (3 levels) in <module:Puppet>'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/property.rb:490:in `set'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/property.rb:570:in `sync'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:241:in `sync'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:136:in `sync_if_needed'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:82:in `perform_changes'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:21:in `evaluate'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction.rb:267:in `apply'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction.rb:287:in `eval_resource'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction.rb:191:in `call'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction.rb:191:in `block (2 levels) in evaluate'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:521:in `block in thinmark'
/opt/puppetlabs/puppet/lib/ruby/2.5.0/benchmark.rb:308:in `realtime'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:520:in `thinmark'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction.rb:191:in `block in evaluate'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/graph/relationship_graph.rb:122:in `traverse'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction.rb:178:in `evaluate'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/resource/catalog.rb:240:in `block (2 levels) in apply'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:521:in `block in thinmark'
/opt/puppetlabs/puppet/lib/ruby/2.5.0/benchmark.rb:308:in `realtime'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:520:in `thinmark'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/resource/catalog.rb:239:in `block in apply'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/log.rb:161:in `with_destination'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction/report.rb:146:in `as_logging_destination'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/resource/catalog.rb:238:in `apply'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:185:in `block (2 levels) in apply_catalog'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:521:in `block in thinmark'
/opt/puppetlabs/puppet/lib/ruby/2.5.0/benchmark.rb:308:in `realtime'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:520:in `thinmark'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:184:in `block in apply_catalog'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:233:in `block in benchmark'
/opt/puppetlabs/puppet/lib/ruby/2.5.0/benchmark.rb:308:in `realtime'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:232:in `benchmark'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:183:in `apply_catalog'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:391:in `run_internal'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:227:in `block in run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/context.rb:62:in `override'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet.rb:314:in `override'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:210:in `run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application/apply.rb:343:in `apply_catalog'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application/apply.rb:260:in `block (2 levels) in main'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/context.rb:62:in `override'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet.rb:314:in `override'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application/apply.rb:243:in `block in main'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/context.rb:62:in `override'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet.rb:314:in `override'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application/apply.rb:207:in `main'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application/apply.rb:177:in `run_command'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application.rb:382:in `block in run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util.rb:710:in `exit_on_fail'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/application.rb:382:in `run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/command_line.rb:143:in `run'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/command_line.rb:77:in `execute'
/opt/puppetlabs/puppet/bin/puppet:5:in `<main>'
 Problem: cannot install the best candidate for the job
  - nothing provides ansible >= 2.2 needed by rubygem-smart_proxy_ansible-3.0.1-5.fm2_1.el8.noarch
Preparing installation Done                                              
  Something went wrong! Check the log for ERROR-level output
  The full log is at /var/log/foreman-installer/foreman.log

On CentOS 7, EPEL provides ansible as can be seen in:

sudo yum list installed | grep ansible
ansible.noarch                          2.9.9-1.el7             @epel           
ansible-runner.noarch                   1.4.6-1.el7             @ansible-runner 
python2-ansible-runner.noarch           1.4.6-1.el7             @ansible-runner 
python2-daemon.noarch                   2.1.2-7.el7at           @ansible-runner 
python2-pexpect.noarch                  4.6-1.el7at             @ansible-runner 
python2-ptyprocess.noarch               0.5.2-3.el7at           @ansible-runner 
tfm-rubygem-foreman_ansible.noarch      5.0.1-1.fm2_1.el7       @foreman-plugins
tfm-rubygem-foreman_ansible_core.noarch 3.0.3-1.fm2_1.el7       @foreman-plugins
tfm-rubygem-hammer_cli_foreman_ansible.noarch
tfm-rubygem-smart_proxy_ansible.noarch  3.0.1-5.fm2_1.el7       @foreman-plugins

But EPEL cannot be installed on CentOS 8.x without causing problems (as noted in the attachment CmdLineHistory…). I'll include the observed problem here, as I didn't include the details in the attachment.

The commands are:

sudo yum -y upgrade
sudo yum -y install epel-release
sudo yum -y upgrade
sudo yum -y erase epel-release
sudo yum -y upgrade

The output is:

[admin@virtualcentos8 ~] CentOS 8 $ sudo yum -y upgrade
Last metadata expiration check: 0:28:23 ago on Tue 16 Jun 2020 20:46:03 AEST.
Dependencies resolved.
Nothing to do.
Complete!
[admin@virtualcentos8 ~] CentOS 8 $ sudo yum -y install epel-release
Last metadata expiration check: 0:28:31 ago on Tue 16 Jun 2020 20:46:03 AEST.
Dependencies resolved.
================================================================================
 Package               Architecture    Version            Repository       Size
================================================================================
Installing:
 epel-release          noarch          8-8.el8            extras           23 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 23 k
Installed size: 32 k
Downloading Packages:
epel-release-8-8.el8.noarch.rpm                 299 kB/s |  23 kB     00:00    
--------------------------------------------------------------------------------
Total                                            22 kB/s |  23 kB     00:01     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1 
  Installing       : epel-release-8-8.el8.noarch                            1/1 
  Running scriptlet: epel-release-8-8.el8.noarch                            1/1 
  Verifying        : epel-release-8-8.el8.noarch                            1/1 

Installed:
  epel-release-8-8.el8.noarch                                                   

Complete!
[admin@virtualcentos8 ~] CentOS 8 $ sudo yum -y upgrade
Last metadata expiration check: 0:04:47 ago on Tue 16 Jun 2020 21:09:56 AEST.
Error: 
 Problem: package foreman-2.1.0-0.20.rc2.el8.noarch requires rubygem(net-ssh) = 4.2.0, but none of the providers can be installed
  - cannot install both rubygem-net-ssh-5.1.0-2.el8.noarch and rubygem-net-ssh-4.2.0-2.el8.noarch
  - cannot install both rubygem-net-ssh-4.2.0-2.el8.noarch and rubygem-net-ssh-5.1.0-2.el8.noarch
  - cannot install the best update candidate for package rubygem-net-ssh-4.2.0-2.el8.noarch
  - cannot install the best update candidate for package foreman-2.1.0-0.20.rc2.el8.noarch
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
[admin@virtualcentos8 ~] CentOS 8 $ sudo yum -y erase epel-release
Dependencies resolved.
================================================================================
 Package               Architecture    Version           Repository        Size
================================================================================
Removing:
 epel-release          noarch          8-8.el8           @extras           32 k

Transaction Summary
================================================================================
Remove  1 Package

Freed space: 32 k
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1 
  Running scriptlet: epel-release-8-8.el8.noarch                            1/1 
  Erasing          : epel-release-8-8.el8.noarch                            1/1 
  Running scriptlet: epel-release-8-8.el8.noarch                            1/1 
  Verifying        : epel-release-8-8.el8.noarch                            1/1 

Removed:
  epel-release-8-8.el8.noarch                                                   

Complete!
[admin@virtualcentos8 ~] CentOS 8 $ sudo yum -y upgrade
Last metadata expiration check: 0:28:53 ago on Tue 16 Jun 2020 20:46:03 AEST.
Dependencies resolved.
Nothing to do.
Complete!
[admin@virtualcentos8 ~] CentOS 8 $ 

This will be a much harder problem to solve, as it affects even the most basic Foreman install on CentOS 8, if I recall correctly!

Following up from the other thread, EPEL shouldn't be required on EL8. @aruzicka, are we missing some requirement of rubygem-smart_proxy_ansible, or a repo in the install instructions? Where do we get ansible from on EL8?
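
A quick way to check on a given EL8 box whether any enabled repository could satisfy that dependency (the output obviously depends on which repos are enabled):

    # Which enabled repository, if any, provides the required ansible version?
    sudo dnf repoquery --whatprovides 'ansible >= 2.2'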