Certificate verify failed (self signed certificate in certificate chain)

Problem:
We’ve had issues installing plugins recently - the logs never give much detail about the cause, but I’m thinking it may be related to this SSL error I see when running a health check.

# foreman-maintain health check
Running ForemanMaintain::Scenario::FilteredScenario
================================================================================
Check number of fact names in database:                               [OK]
--------------------------------------------------------------------------------
Check whether all services are running:                               [OK]
--------------------------------------------------------------------------------
Check whether all services are running using the ping call:           [FAIL]
Couldn't connect to the server: SSL_connect returned=1 errno=0 state=error: certificate verify failed (self signed certificate in certificate chain)
--------------------------------------------------------------------------------
Continue with step [Restart applicable services]?, [y(yes), n(no)] y
Restart applicable services:

Stopping the following service(s):
redis, postgresql, pulpcore-api, pulpcore-content, pulpcore-worker@1.service, pulpcore-worker@2.service, pulpcore-worker@3.service, pulpcore-worker@4.service, pulpcore-worker@5.service, pulpcore-worker@6.service, pulpcore-worker@7.service, pulpcore-worker@8.service, tomcat, dynflow-sidekiq@orchestrator, foreman, httpd, dynflow-sidekiq@worker-1, dynflow-sidekiq@worker-2, dynflow-sidekiq@worker-3, dynflow-sidekiq@worker-hosts-queue-1, foreman-proxy
\ stopping httpd
Warning: Stopping foreman.service, but it can still be activated by:
  foreman.socket
| stopping pulpcore-content
Warning: Stopping pulpcore-api.service, but it can still be activated by:
  pulpcore-api.socket

Warning: Stopping pulpcore-content.service, but it can still be activated by:
  pulpcore-content.socket
/ All services stopped

Starting the following service(s):
redis, postgresql, pulpcore-api, pulpcore-content, pulpcore-worker@1.service, pulpcore-worker@2.service, pulpcore-worker@3.service, pulpcore-worker@4.service, pulpcore-worker@5.service, pulpcore-worker@6.service, pulpcore-worker@7.service, pulpcore-worker@8.service, tomcat, dynflow-sidekiq@orchestrator, foreman, httpd, dynflow-sidekiq@worker-1, dynflow-sidekiq@worker-2, dynflow-sidekiq@worker-3, dynflow-sidekiq@worker-hosts-queue-1, foreman-proxy
| All services started
/ Try 1/5: checking status of hammer ping
Couldn't connect to the server: SSL_connect returned=1 errno=0 state=error: certificate verify failed (self signed certificate in certificate chain)
| Try 2/5: checking status of hammer ping
Couldn't connect to the server: SSL_connect returned=1 errno=0 state=error: certificate verify failed (self signed certificate in certificate chain)
\ Try 3/5: checking status of hammer ping
...
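
For what it’s worth, the verify error itself is generic OpenSSL behaviour rather than anything hammer-specific: it appears whenever the client isn’t handed the CA that actually signed the server certificate. A throwaway local sketch (all names below are made up purely to illustrate the mechanism):

```shell
# Reproduce the error class locally with throwaway certificates: verification
# fails whenever the verifier is not given the CA that signed the server cert.
set -e
dir=$(mktemp -d)
cd "$dir"

# A throwaway CA (standing in for katello-server-ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=test-ca" -days 1 2>/dev/null

# A server certificate signed by that CA
openssl req -newkey rsa:2048 -nodes -keyout srv.key -out srv.csr \
  -subj "/CN=foreman.test" 2>/dev/null
openssl x509 -req -in srv.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out srv.crt -days 1 2>/dev/null

# Verifying against the correct CA succeeds
openssl verify -CAfile ca.crt srv.crt

# Verifying without it fails, just as hammer's connection did
openssl verify srv.crt || true
```

So the question is really which CA file hammer (and the ping check) is being told to trust.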

Expected outcome:
I’d expect the health check to complete without errors.

Foreman and Proxy versions:
3.4.1/4.6.0

Foreman and Proxy plugin versions:

Installed Packages

  • ansible-collection-theforeman-foreman-3.5.0-2.el8.noarch
  • ansiblerole-foreman_scap_client-0.2.0-2.el8.noarch
  • candlepin-4.2.3-1.el8.noarch
  • candlepin-selinux-4.2.3-1.el8.noarch
  • foreman-3.4.1-1.el8.noarch
  • foreman-cli-3.4.1-1.el8.noarch
  • foreman-debug-3.4.1-1.el8.noarch
  • foreman-dynflow-sidekiq-3.4.1-1.el8.noarch
  • foreman-installer-3.4.1-1.el8.noarch
  • foreman-installer-katello-3.4.1-1.el8.noarch
  • foreman-ovirt-3.4.1-1.el8.noarch
  • foreman-postgresql-3.4.1-1.el8.noarch
  • foreman-proxy-3.4.1-1.el8.noarch
  • foreman-release-3.4.1-1.el8.noarch
  • foreman-selinux-3.4.1-1.el8.noarch
  • foreman-service-3.4.1-1.el8.noarch
  • foreman-vmware-3.4.1-1.el8.noarch
  • katello-4.6.0-1.el8.noarch
  • katello-certs-tools-2.9.0-1.el8.noarch
  • katello-client-bootstrap-1.7.9-1.el8.noarch
  • katello-common-4.6.0-1.el8.noarch
  • katello-debug-4.6.0-1.el8.noarch
  • katello-repos-4.6.0-1.el8.noarch
  • katello-selinux-4.0.2-2.el8.noarch
  • pulpcore-selinux-1.3.2-1.el8.x86_64
  • python39-pulp-ansible-0.13.2-2.el8.noarch
  • python39-pulp-certguard-1.5.2-3.el8.noarch
  • python39-pulp-cli-0.14.0-4.el8.noarch
  • python39-pulp-container-2.10.9-1.el8.noarch
  • python39-pulp-deb-2.18.0-3.el8.noarch
  • python39-pulp-file-1.10.2-2.el8.noarch
  • python39-pulp-python-3.7.1-1.el8.noarch
  • python39-pulp-rpm-3.18.9-1.el8.noarch
  • python39-pulpcore-3.18.10-1.el8.noarch
  • qpid-proton-c-0.37.0-1.el8.x86_64
  • rubygem-foreman-tasks-7.0.0-1.fm3_4.el8.noarch
  • rubygem-foreman_ansible-9.0.1-1.fm3_4.el8.noarch
  • rubygem-foreman_bootdisk-21.0.2-1.fm3_4.el8.noarch
  • rubygem-foreman_maintain-1.2.1-1.el8.noarch
  • rubygem-foreman_openscap-5.2.2-2.fm3_3.el8.noarch
  • rubygem-foreman_remote_execution-8.0.0-2.fm3_4.el8.noarch
  • rubygem-hammer_cli-3.4.0-1.el8.noarch
  • rubygem-hammer_cli_foreman-3.4.0-1.el8.noarch
  • rubygem-hammer_cli_foreman_ansible-0.4.0-1.fm3_4.el8.noarch
  • rubygem-hammer_cli_foreman_bootdisk-0.3.0-2.el8.noarch
  • rubygem-hammer_cli_foreman_remote_execution-0.2.2-1.fm3_0.el8.noarch
  • rubygem-hammer_cli_foreman_tasks-0.0.17-1.fm3_2.el8.noarch
  • rubygem-hammer_cli_katello-1.7.0-0.1.pre.master.20220802114853git2f16bef.el8.noarch
  • rubygem-katello-4.6.0-1.el8.noarch
  • rubygem-pulp_ansible_client-0.13.4-1.el8.noarch
  • rubygem-pulp_certguard_client-1.5.5-1.el8.noarch
  • rubygem-pulp_container_client-2.10.7-1.el8.noarch
  • rubygem-pulp_deb_client-2.18.1-1.el8.noarch
  • rubygem-pulp_file_client-1.10.5-1.el8.noarch
  • rubygem-pulp_ostree_client-2.0.0-0.1.a1.el8.noarch
  • rubygem-pulp_python_client-3.6.1-1.el8.noarch
  • rubygem-pulp_rpm_client-3.17.12-1.el8.noarch
  • rubygem-pulpcore_client-3.18.5-2.el8.noarch
  • rubygem-qpid_proton-0.37.0-1.el8.x86_64
  • rubygem-smart_proxy_pulp-3.2.0-3.fm3_3.el8.noarch

Distribution and version:
CentOS 8 Stream

Other relevant data:

Just to update - I found this…

I changed the server_ssl_ca value as suggested (server_ssl_ca: "/etc/pki/katello/certs/katello-server-ca.crt") on our test server, and it fixes the health check.
Is this the correct fix for the live server as well? I’m not sure whether it will break anything else…
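
For reference, the change on the test server written out as a config fragment. Note the file location is my assumption (an installer answers file carrying a foreman-level server_ssl_ca key); if the key lives elsewhere in your setup, only the value changes:

```yaml
# Assumed location: the foreman section of the installer answers file,
# e.g. /etc/foreman-installer/scenarios.d/katello-answers.yaml.
# Point the server CA at the cert that actually signed the Apache certificate.
foreman:
  server_ssl_ca: /etc/pki/katello/certs/katello-server-ca.crt
```

If the key does come from the installer, making the change through foreman-installer rather than hand-editing would keep it from being overwritten on the next installer run.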

Thanks
Rob