Fresh install of 2.5.2 with Katello failed

Problem:
I’m trying to do a fresh install of 2.5.2 on CentOS 8 and it fails.

foreman-installer --scenario katello \
--enable-foreman-cli \
--enable-foreman-cli-ansible \
--enable-foreman-cli-discovery \
--enable-foreman-cli-tasks \
--enable-foreman-cli-templates \
--enable-foreman-cli-openscap \
--enable-foreman-compute-openstack \
--enable-foreman-compute-vmware \
--enable-foreman-plugin-ansible \
--enable-foreman-plugin-discovery \
--enable-foreman-plugin-hooks \
--no-enable-foreman-plugin-monitoring \
--enable-foreman-plugin-openscap \
--enable-foreman-plugin-remote-execution \
--enable-foreman-plugin-snapshot-management \
--enable-foreman-plugin-tasks \
--enable-foreman-plugin-templates \
--enable-foreman-proxy \
--enable-foreman-proxy-content \
--enable-foreman-proxy-plugin-ansible \
--enable-foreman-proxy-plugin-discovery \
--no-enable-foreman-proxy-plugin-monitoring \
--enable-foreman-proxy-plugin-openscap \
--foreman-initial-organization "myOrg" --foreman-initial-location "home" \
--foreman-email-delivery-method smtp \
--foreman-email-smtp-address smtp.my.org --foreman-initial-admin-email admin@my.org  
2021-07-17 23:49:31 [NOTICE] [root] Loading installer configuration. This will take some time.
2021-07-17 23:49:37 [NOTICE] [root] Running installer with log based terminal output at level NOTICE.
2021-07-17 23:49:37 [NOTICE] [root] Use -l to set the terminal output log level to ERROR, WARN, NOTICE, INFO, or DEBUG. See --full-help for definitions.
2021-07-17 23:53:45 [NOTICE] [configure] Starting system configuration.
2021-07-17 23:56:23 [NOTICE] [configure] 250 configuration steps out of 2029 steps complete.
2021-07-17 23:59:02 [NOTICE] [configure] 500 configuration steps out of 2029 steps complete.
2021-07-17 23:59:59 [NOTICE] [configure] 750 configuration steps out of 2032 steps complete.
2021-07-18 00:01:54 [NOTICE] [configure] 1000 configuration steps out of 2037 steps complete.
2021-07-18 00:04:16 [NOTICE] [configure] 1250 configuration steps out of 2060 steps complete.
2021-07-18 00:12:52 [NOTICE] [configure] 1500 configuration steps out of 2060 steps complete.
2021-07-18 00:14:27 [NOTICE] [configure] 1750 configuration steps out of 2060 steps complete.
2021-07-18 00:16:09 [NOTICE] [configure] 2000 configuration steps out of 2060 steps complete.
2021-07-18 00:16:33 [NOTICE] [configure] System configuration has finished.
Executing: foreman-rake upgrade:run
`/usr/share/foreman` is not writable.
Bundler will use `/tmp/bundler20210718-15202-1dufw3k15202' as your home directory temporarily.
Rubocop not loaded.
=============================================
Upgrade Step 1/5: katello:correct_repositories. This may take a long while.
=============================================
Upgrade Step 2/5: katello:clean_backend_objects. This may take a long while.
Failed upgrade task: katello:clean_backend_objects, see logs for more information.
=============================================
Upgrade Step 3/5: katello:upgrades:4.0:remove_ostree_puppet_content. =============================================
Upgrade Step 4/5: katello:upgrades:4.1:sync_noarch_content. =============================================
Upgrade Step 5/5: katello:upgrades:4.1:fix_invalid_pools. I, [2021-07-18T00:16:47.911843 #15202]  INFO -- : Corrected 0 invalid pools
I, [2021-07-18T00:16:47.911884 #15202]  INFO -- : Removed 0 orphaned pools
  Success!

On login, it tells me that candlepin isn’t running.

I have SELinux enabled and I found these entries in the audit log:

type=AVC msg=audit(1626559484.836:737): avc:  denied  { create } for  pid=11098 comm="java" name=".pki" scontext=system_u:system_r:tomcat_t:s0 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0
type=AVC msg=audit(1626560160.846:763): avc:  denied  { search } for  pid=13183 comm="pulpcore-worker" name="httpd" dev="dm-7" ino=148914 scontext=system_u:system_r:pulpcore_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1626560161.614:765): avc:  denied  { search } for  pid=13289 comm="pulpcore-worker" name="httpd" dev="dm-7" ino=148914 scontext=system_u:system_r:pulpcore_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1626560162.368:767): avc:  denied  { search } for  pid=13396 comm="pulpcore-worker" name="httpd" dev="dm-7" ino=148914 scontext=system_u:system_r:pulpcore_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1626560163.071:769): avc:  denied  { search } for  pid=13503 comm="pulpcore-worker" name="httpd" dev="dm-7" ino=148914 scontext=system_u:system_r:pulpcore_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1626560163.894:771): avc:  denied  { search } for  pid=13614 comm="pulpcore-worker" name="httpd" dev="dm-7" ino=148914 scontext=system_u:system_r:pulpcore_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1626560164.641:773): avc:  denied  { search } for  pid=13724 comm="pulpcore-worker" name="httpd" dev="dm-7" ino=148914 scontext=system_u:system_r:pulpcore_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1626560165.436:774): avc:  denied  { search } for  pid=13845 comm="pulpcore-worker" name="httpd" dev="dm-7" ino=148914 scontext=system_u:system_r:pulpcore_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1626560166.093:775): avc:  denied  { create } for  pid=14193 comm="gunicorn" scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:system_r:pulpcore_server_t:s0 tclass=unix_dgram_socket permissive=1
type=AVC msg=audit(1626560166.093:776): avc:  denied  { connect } for  pid=14193 comm="gunicorn" scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:system_r:pulpcore_server_t:s0 tclass=unix_dgram_socket permissive=1
type=AVC msg=audit(1626560166.093:776): avc:  denied  { sendto } for  pid=14193 comm="gunicorn" path="/run/systemd/notify" scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=unix_dgram_socket permissive=1
type=AVC msg=audit(1626560166.267:778): avc:  denied  { search } for  pid=13957 comm="pulpcore-worker" name="httpd" dev="dm-7" ino=148914 scontext=system_u:system_r:pulpcore_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1626560167.976:779): avc:  denied  { search } for  pid=14428 comm="gunicorn" name="httpd" dev="dm-7" ino=148914 scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1626560168.103:780): avc:  denied  { create } for  pid=14428 comm="gunicorn" scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:system_r:pulpcore_server_t:s0 tclass=unix_dgram_socket permissive=1
type=AVC msg=audit(1626560168.103:781): avc:  denied  { connect } for  pid=14428 comm="gunicorn" scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:system_r:pulpcore_server_t:s0 tclass=unix_dgram_socket permissive=1
type=AVC msg=audit(1626560168.103:781): avc:  denied  { sendto } for  pid=14428 comm="gunicorn" path="/run/systemd/notify" scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=unix_dgram_socket permissive=1
type=AVC msg=audit(1626560168.618:783): avc:  denied  { search } for  pid=14198 comm="gunicorn" name="httpd" dev="dm-7" ino=148914 scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1626560470.746:32): avc:  denied  { create } for  pid=1222 comm="gunicorn" scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:system_r:pulpcore_server_t:s0 tclass=unix_dgram_socket permissive=1
type=AVC msg=audit(1626560470.746:33): avc:  denied  { connect } for  pid=1222 comm="gunicorn" scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:system_r:pulpcore_server_t:s0 tclass=unix_dgram_socket permissive=1
type=AVC msg=audit(1626560470.746:33): avc:  denied  { sendto } for  pid=1222 comm="gunicorn" path="/run/systemd/notify" scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=unix_dgram_socket permissive=1
type=AVC msg=audit(1626560471.470:36): avc:  denied  { search } for  pid=1224 comm="gunicorn" name="httpd" dev="dm-7" ino=148914 scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1626560472.286:58): avc:  denied  { create } for  pid=1224 comm="gunicorn" scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:system_r:pulpcore_server_t:s0 tclass=unix_dgram_socket permissive=1
type=AVC msg=audit(1626560472.286:59): avc:  denied  { connect } for  pid=1224 comm="gunicorn" scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:system_r:pulpcore_server_t:s0 tclass=unix_dgram_socket permissive=1
type=AVC msg=audit(1626560472.286:59): avc:  denied  { sendto } for  pid=1224 comm="gunicorn" path="/run/systemd/notify" scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=unix_dgram_socket permissive=1
type=AVC msg=audit(1626560480.817:98): avc:  denied  { search } for  pid=1459 comm="pulpcore-worker" name="httpd" dev="dm-7" ino=148914 scontext=system_u:system_r:pulpcore_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1626560483.726:100): avc:  denied  { search } for  pid=1294 comm="gunicorn" name="httpd" dev="dm-7" ino=148914 scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1
type=AVC msg=audit(1626560494.231:101): avc:  denied  { create } for  pid=1221 comm="java" name=".pki" scontext=system_u:system_r:tomcat_t:s0 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0
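For reference, the epoch timestamps inside audit(...) can be converted to wall-clock time to correlate the AVC records with other logs; a quick sketch (the sample value is from the first denial above, and the date invocation assumes GNU coreutils):

```shell
# Extract the epoch seconds from an AVC record and print them human-readable
# (ausearch -i does the same interpretation on a live system).
line='type=AVC msg=audit(1626559484.836:737): avc: denied { create } for pid=11098 comm="java"'
epoch=$(printf '%s\n' "$line" | sed -n 's/.*audit(\([0-9]*\)\..*/\1/p')
# GNU date: -d "@<epoch>" interprets the value as seconds since the epoch.
date -u -d "@$epoch" '+%Y-%m-%d %H:%M:%S UTC'
```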

Foreman and Proxy versions:
foreman-2.5.2-1.el8.noarch
foreman-proxy-2.5.2-1.el8.noarch

Foreman and Proxy plugin versions:

Distribution and version:
CentOS 8.4

repo id                                                                   repo name
ansible-runner                                                            Ansible runner
appstream                                                                 CentOS Linux 8 - AppStream
baseos                                                                    CentOS Linux 8 - BaseOS
centos-ansible-29                                                         CentOS Configmanagement SIG - ansible-29
extras                                                                    CentOS Linux 8 - Extras
foreman                                                                   Foreman 2.5
foreman-plugins                                                           Foreman plugins 2.5
katello                                                                   Katello 4.1
katello-candlepin                                                         Candlepin: an open source entitlement management system.
powertools                                                                CentOS Linux 8 - PowerTools
pulpcore                                                                  pulpcore: Fetch, Upload, Organize, and Distribute Software Packages.
puppet6                                                                   Puppet 6 Repository el 8 - x86_64

I will try without SELinux tomorrow.

Check the installer and production logs for the exact error message. The installer log of your last run is /var/log/foreman-installer/katello.log; the production log is /var/log/foreman/production.log.

SELinux must be enabled. See the system requirements: Installing Foreman 2.5 server with Katello 4.1 plugin on Enterprise Linux

It’s a bad idea to turn off security just because the installer doesn’t run correctly right away. And there is only a single potentially relevant message in the audit log you posted (the same denial, logged twice):

type=AVC msg=audit(1626559484.836:737): avc:  denied  { create } for  pid=11098 comm="java" name=".pki" scontext=system_u:system_r:tomcat_t:s0 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0
type=AVC msg=audit(1626560494.231:101): avc:  denied  { create } for  pid=1221 comm="java" name=".pki" scontext=system_u:system_r:tomcat_t:s0 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0
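Only entries with permissive=0 were actually blocked; the permissive=1 denials were allowed and merely logged, so they can be ignored here. A quick grep makes that obvious (sample data inline; on a real system point grep at /var/log/audit/audit.log instead, and the /tmp path is arbitrary):

```shell
# permissive=0 means the denial was enforced; permissive=1 means the domain
# runs permissive and the access went through anyway.
cat > /tmp/avc-sample.log <<'EOF'
type=AVC msg=audit(1626559484.836:737): avc: denied { create } for pid=11098 comm="java" name=".pki" permissive=0
type=AVC msg=audit(1626560160.846:763): avc: denied { search } for pid=13183 comm="pulpcore-worker" name="httpd" permissive=1
EOF
grep 'permissive=0' /tmp/avc-sample.log
```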

Which Katello version are you using? Foreman 2.5.2 must be paired with Katello 4.1.1; in particular, it uses pulpcore 3.14 instead of 3.11…

The installer log doesn’t say much more:

2021-07-18 00:16:33 [INFO  ] [post] Executing hooks in group post
2021-07-18 00:16:33 [DEBUG ] [root] Executing: foreman-rake upgrade:run
2021-07-18 00:16:47 [DEBUG ] [root] `/usr/share/foreman` is not writable.
2021-07-18 00:16:47 [DEBUG ] [root] Bundler will use `/tmp/bundler20210718-15202-1dufw3k15202' as your home directory temporarily.
2021-07-18 00:16:47 [DEBUG ] [root] Rubocop not loaded.
2021-07-18 00:16:47 [DEBUG ] [root] =============================================
2021-07-18 00:16:47 [DEBUG ] [root] Upgrade Step 1/5: katello:correct_repositories. This may take a long while.
2021-07-18 00:16:47 [DEBUG ] [root] =============================================
2021-07-18 00:16:47 [DEBUG ] [root] Upgrade Step 2/5: katello:clean_backend_objects. This may take a long while.
2021-07-18 00:16:47 [DEBUG ] [root] Failed upgrade task: katello:clean_backend_objects, see logs for more information.
2021-07-18 00:16:47 [DEBUG ] [root] =============================================
2021-07-18 00:16:47 [DEBUG ] [root] Upgrade Step 3/5: katello:upgrades:4.0:remove_ostree_puppet_content. 
2021-07-18 00:16:47 [DEBUG ] [root] =============================================
2021-07-18 00:16:47 [DEBUG ] [root] Upgrade Step 4/5: katello:upgrades:4.1:sync_noarch_content. 
2021-07-18 00:16:47 [DEBUG ] [root] =============================================
2021-07-18 00:16:47 [DEBUG ] [root] Upgrade Step 5/5: katello:upgrades:4.1:fix_invalid_pools. 
2021-07-18 00:16:47 [DEBUG ] [root] I, [2021-07-18T00:16:47.911843 #15202]  INFO -- : Corrected 0 invalid pools
2021-07-18 00:16:47 [DEBUG ] [root] I, [2021-07-18T00:16:47.911884 #15202]  INFO -- : Removed 0 orphaned pools
2021-07-18 00:16:47 [DEBUG ] [post] Hook /usr/share/foreman-installer/hooks/post/30-upgrade.rb returned nil
2021-07-18 00:16:47 [DEBUG ] [post] cdn_ssl_version already migrated, skipping
2021-07-18 00:16:47 [DEBUG ] [post] Hook /usr/share/foreman-installer/hooks/post/31-cdn_setting.rb returned true
2021-07-18 00:16:47 [DEBUG ] [post] Hook /usr/share/foreman-installer/hooks/post/34-pulpcore_directory_layout.rb returned nil
2021-07-18 00:16:47 [DEBUG ] [post] Hook /usr/share/foreman-installer/hooks/post/99-post_install_message.rb returned nil
2021-07-18 00:16:47 [DEBUG ] [post] Hook /usr/share/foreman-installer/hooks/post/99-version_locking.rb returned nil
2021-07-18 00:16:47 [INFO  ] [post] All hooks in group post finished
2021-07-18 00:16:47 [DEBUG ] [root] Exit with status code: 2 (signal was 2)
2021-07-18 00:16:47 [DEBUG ] [root] Cleaning /tmp/kafo_installation20210717-2221-16l5s9o
2021-07-18 00:16:47 [DEBUG ] [root] Cleaning /tmp/kafo_installation20210717-2221-1ww5wac
2021-07-18 00:16:47 [DEBUG ] [root] Cleaning /tmp/default_values.yaml
2021-07-18 00:16:47 [DEBUG ] [root] Installer finished in 1630.338564803 seconds

/var/lib/pulp is its own partition that gets “reused” between install attempts; I rm -rf all the files in it.
Do you think that could be a problem?

foreman-installer-katello-2.5.2-3.el8.noarch
katello-4.1.1-1.el8.noarch
katello-certs-tools-2.7.3-1.el8.noarch
katello-client-bootstrap-1.7.6-1.el8.noarch
katello-common-4.1.1-1.el8.noarch
katello-debug-4.1.1-1.el8.noarch
katello-default-ca-1.0-1.noarch
katello-repos-4.1.1-1.el8.noarch
katello-selinux-4.0.0-1.el8.noarch
katello-server-ca-1.0-1.noarch
rubygem-hammer_cli_katello-1.1.2-1.el8.noarch
rubygem-katello-4.1.1-1.el8.noarch
pulp-client-1.0-1.noarch
pulpcore-selinux-1.2.4-1.el8.x86_64
python3-pulp-ansible-0.8.0-1.el8.noarch
python3-pulp-certguard-1.4.0-1.el8.noarch
python3-pulp-container-2.7.0-1.el8.noarch
python3-pulpcore-3.14.1-1.el8.noarch
python3-pulp-deb-2.13.0-1.el8.noarch
python3-pulp-file-1.8.1-1.el8.noarch
python3-pulp-rpm-3.13.3-1.el8.noarch
rubygem-pulp_ansible_client-0.8.0-1.el8.noarch
rubygem-pulp_certguard_client-1.4.0-1.el8.noarch
rubygem-pulp_container_client-2.7.0-1.el8.noarch
rubygem-pulpcore_client-3.14.1-1.el8.noarch
rubygem-pulp_deb_client-2.13.0-1.el8.noarch
rubygem-pulp_file_client-1.8.1-1.el8.noarch
rubygem-pulp_rpm_client-3.13.3-1.el8.noarch
rubygem-smart_proxy_pulp-3.0.0-1.fm2_5.el8.noarch

If it’s not in the installer log, I think it should be in the production log.

Or run the rake task directly:

# foreman-rake --trace katello:clean_backend_objects

The production log is 19M…

The first error I can see is this:

2021-07-18T16:05:13 [I|app|] Rails cache backend: File
2021-07-18T16:14:41 [I|app|] Rails cache backend: File
2021-07-18T16:14:44 [W|app|] Failed to register foreman_remote_execution plugin (PG::UndefinedTable: ERROR:  relation "remote_execution_features" does not exist
 | LINE 8:  WHERE a.attrelid = '"remote_execution_features"'::regclass
 |                             ^
 | )
2021-07-18T16:14:44 [W|app|] Creating scope :exportable. Overwriting existing method Katello::Repository.exportable.
2021-07-18T16:14:45 [W|app|] Creating scope :completer_scope. Overwriting existing method Organization.completer_scope.
2021-07-18T16:14:45 [W|app|] Failed to register foreman_openscap plugin (PG::UndefinedTable: ERROR:  relation "hostgroups" does not exist
 | LINE 8:  WHERE a.attrelid = '"hostgroups"'::regclass
 |                             ^
 | )
2021-07-18T16:14:45 [W|app|] Failed to load Proxmox extension uninitialized constant ForemanFogProxmox
2021-07-18T16:14:45 [W|app|] Creating scope :completer_scope. Overwriting existing method Location.completer_scope.
2021-07-18T16:14:47 [W|app|] Setting is not initialized yet, requested value for lab_features will be always nil
2021-07-18T16:14:53 [I|app|] Rails cache backend: File
2021-07-18T16:14:56 [W|app|] Failed to register foreman_remote_execution plugin (PG::UndefinedTable: ERROR:  relation "remote_execution_features" does not exist
 | LINE 8:  WHERE a.attrelid = '"remote_execution_features"'::regclass
 |                             ^
 | )
2021-07-18T16:14:56 [W|app|] Creating scope :exportable. Overwriting existing method Katello::Repository.exportable.
2021-07-18T16:14:57 [W|app|] Creating scope :completer_scope. Overwriting existing method Organization.completer_scope.
2021-07-18T16:14:58 [W|app|] Failed to register foreman_openscap plugin (PG::UndefinedTable: ERROR:  relation "hostgroups" does not exist
 | LINE 8:  WHERE a.attrelid = '"hostgroups"'::regclass
 |                             ^
 | )

But the first error I believe to be relevant is this:

110718 2021-07-18T16:25:32 [I|app|b5360de6] Started GET "/api/v2/smart_proxies?search=name%3D%22foreman-app01-prod.dom.tld%22" for 192.168.116.13 at 2021-07-18 16:25:32 +0200
110719 2021-07-18T16:25:32 [I|app|b5360de6] Katello event daemon started process=11994
110720 2021-07-18T16:25:32 [W|app|b5360de6] 404 Not Found
110721  b5360de6 | /usr/share/gems/gems/rest-client-2.0.2/lib/restclient/abstract_response.rb:223:in `exception_with_response'
110722  b5360de6 | /usr/share/gems/gems/rest-client-2.0.2/lib/restclient/abstract_response.rb:103:in `return!'
110723  b5360de6 | /usr/share/gems/gems/rest-client-2.0.2/lib/restclient/request.rb:809:in `process_result'
110724  b5360de6 | /usr/share/gems/gems/rest-client-2.0.2/lib/restclient/request.rb:725:in `block in transmit'
110725  b5360de6 | /usr/share/ruby/net/http.rb:933:in `start'
110726  b5360de6 | /usr/share/gems/gems/rest-client-2.0.2/lib/restclient/request.rb:715:in `transmit'
110727  b5360de6 | /usr/share/gems/gems/rest-client-2.0.2/lib/restclient/request.rb:145:in `execute'
110728  b5360de6 | /usr/share/gems/gems/rest-client-2.0.2/lib/restclient/request.rb:52:in `execute'
110729  b5360de6 | /usr/share/gems/gems/rest-client-2.0.2/lib/restclient/resource.rb:51:in `get'
110730  b5360de6 | /usr/share/gems/gems/katello-4.1.1/app/models/katello/ping.rb:259:in `backend_status'
110731  b5360de6 | /usr/share/gems/gems/katello-4.1.1/app/models/katello/ping.rb:89:in `block in ping_candlepin_without_auth'
110732  b5360de6 | /usr/share/gems/gems/katello-4.1.1/app/models/katello/ping.rb:138:in `exception_watch'
110733  b5360de6 | /usr/share/gems/gems/katello-4.1.1/app/models/katello/ping.rb:88:in `ping_candlepin_without_auth'
110734  b5360de6 | /usr/share/gems/gems/katello-4.1.1/app/models/katello/ping.rb:230:in `ping_services_for_capsule'
110735  b5360de6 | /usr/share/gems/gems/katello-4.1.1/app/models/katello/ping.rb:25:in `ping!'
110736  b5360de6 | /usr/share/gems/gems/katello-4.1.1/app/services/katello/organization_creator.rb:14:in `create_all_organizations!'
110737  b5360de6 | /usr/share/gems/gems/katello-4.1.1/lib/katello/middleware/organization_created_enforcer.rb:12:in `call'
110738  b5360de6 | /usr/share/gems/gems/katello-4.1.1/lib/katello/middleware/event_daemon.rb:10:in `call'
110739  b5360de6 | /usr/share/gems/gems/actionpack-6.0.3.7/lib/action_dispatch/middleware/static.rb:126:in `call'
110740  b5360de6 | /usr/share/gems/gems/actionpack-6.0.3.7/lib/action_dispatch/middleware/static.rb:126:in `call'

`/usr/share/foreman` is not writable.
Bundler will use `/tmp/bundler20210718-15761-tv38df15761' as your home directory temporarily.
Rubocop not loaded.
** Invoke katello:clean_backend_objects (first_time)
** Invoke environment (first_time)
** Execute environment
** Invoke katello:check_ping (first_time)
** Invoke environment 
** Invoke dynflow:client (first_time)
** Invoke environment 
** Execute dynflow:client
** Execute katello:check_ping
{:services=>
  {:candlepin=>{:status=>"FAIL", :message=>"404 Not Found"},
   :candlepin_auth=>
    {:status=>"FAIL", :message=>"Katello::Errors::CandlepinNotRunning"},
   :foreman_tasks=>{:status=>"ok", :duration_ms=>"3"},
   :katello_events=>
    {:status=>"ok", :message=>"0 Processed, 0 Failed", :duration_ms=>"1"},
   :candlepin_events=>
    {:status=>"FAIL", :message=>"Not running", :duration_ms=>"0"},
   :pulp3=>{:status=>"ok", :duration_ms=>"134"}},
 :status=>"FAIL"}
rake aborted!
Not all the services have been started. Check the status report above and try again.
/usr/share/gems/gems/katello-4.1.1/lib/katello/tasks/reimport.rake:10:in `block (2 levels) in <top (required)>'
/usr/share/gems/gems/rake-13.0.1/lib/rake/task.rb:281:in `block in execute'
/usr/share/gems/gems/rake-13.0.1/lib/rake/task.rb:281:in `each'
/usr/share/gems/gems/rake-13.0.1/lib/rake/task.rb:281:in `execute'
/usr/share/gems/gems/rake-13.0.1/lib/rake/task.rb:219:in `block in invoke_with_call_chain'
/usr/share/gems/gems/rake-13.0.1/lib/rake/task.rb:199:in `synchronize'
/usr/share/gems/gems/rake-13.0.1/lib/rake/task.rb:199:in `invoke_with_call_chain'
/usr/share/gems/gems/rake-13.0.1/lib/rake/task.rb:243:in `block in invoke_prerequisites'
/usr/share/gems/gems/rake-13.0.1/lib/rake/task.rb:241:in `each'
/usr/share/gems/gems/rake-13.0.1/lib/rake/task.rb:241:in `invoke_prerequisites'
/usr/share/gems/gems/rake-13.0.1/lib/rake/task.rb:218:in `block in invoke_with_call_chain'
/usr/share/gems/gems/rake-13.0.1/lib/rake/task.rb:199:in `synchronize'
/usr/share/gems/gems/rake-13.0.1/lib/rake/task.rb:199:in `invoke_with_call_chain'
/usr/share/gems/gems/rake-13.0.1/lib/rake/task.rb:188:in `invoke'
/usr/share/gems/gems/rake-13.0.1/lib/rake/application.rb:160:in `invoke_task'
/usr/share/gems/gems/rake-13.0.1/lib/rake/application.rb:116:in `block (2 levels) in top_level'
/usr/share/gems/gems/rake-13.0.1/lib/rake/application.rb:116:in `each'
/usr/share/gems/gems/rake-13.0.1/lib/rake/application.rb:116:in `block in top_level'
/usr/share/gems/gems/rake-13.0.1/lib/rake/application.rb:125:in `run_with_threads'
/usr/share/gems/gems/rake-13.0.1/lib/rake/application.rb:110:in `top_level'
/usr/share/gems/gems/rake-13.0.1/lib/rake/application.rb:83:in `block in run'
/usr/share/gems/gems/rake-13.0.1/lib/rake/application.rb:186:in `standard_exception_handling'
/usr/share/gems/gems/rake-13.0.1/lib/rake/application.rb:80:in `run'
/usr/share/gems/gems/rake-13.0.1/exe/rake:27:in `<top (required)>'
/usr/bin/rake:23:in `load'
/usr/bin/rake:23:in `<main>'
Tasks: TOP => katello:clean_backend_objects => katello:check_ping

Is there something about the flags I pass to foreman-installer that shouldn’t be done this way?

I just did a re-install, but without a dedicated partition for /var/lib/pulp.

It didn’t make a difference.

Candlepin isn’t running, or it isn’t responding.

You can run

# foreman-maintain health check

and

# foreman-maintain service status

to check the status of all required services.

I guess your tomcat isn’t running correctly. That would match the SELinux audit message.

Check the tomcat service with

# systemctl status tomcat

and possibly the tomcat logs in /var/log/tomcat, or even the logs in /var/log/candlepin in case it gets that far.

Also check the package installation. In the rpm -V output, S means the size differs, 5 the digest, T the mtime, M the mode, and U/G the user/group; the leading c and g mark config and ghost files, so changes like these to candlepin.conf and the generated CA certificate are expected:

# rpm -V candlepin
S.5....T.  c /etc/candlepin/candlepin.conf
.M...UG..  g /etc/candlepin/certs/candlepin-ca.crt

Well, tomcat isn’t running correctly, yes:

[root@fm-server ~]# systemctl status tomcat
● tomcat.service - Apache Tomcat Web Application Container
   Loaded: loaded (/usr/lib/systemd/system/tomcat.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2021-07-19 10:17:39 CEST; 11min ago
 Main PID: 10286 (java)
    Tasks: 79 (limit: 205202)
   Memory: 1.2G
   CGroup: /system.slice/tomcat.service
           └─10286 /usr/lib/jvm/jre-11/bin/java -Xms1024m -Xmx4096m -Djava.security.auth.login.config=/usr/share/tomcat/conf/login.config -classpath /usr/share/tomcat/bin/bootstrap.jar:/usr/share/to>

Jul 19 10:29:02 fm-server.dom.tld server[10286]:         at java.base/java.lang.reflect.Method.invoke(Method.java:566)
Jul 19 10:29:02 fm-server.dom.tld server[10286]:         at org.jboss.logging.Slf4jLocationAwareLogger.doLog(Slf4jLocationAwareLogger.java:89)
Jul 19 10:29:02 fm-server.dom.tld server[10286]:         at org.jboss.logging.Slf4jLocationAwareLogger.doLog(Slf4jLocationAwareLogger.java:75)
Jul 19 10:29:02 fm-server.dom.tld server[10286]:         at org.jboss.logging.Logger.warn(Logger.java:1236)
Jul 19 10:29:02 fm-server.dom.tld server[10286]:         at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:47)
Jul 19 10:29:02 fm-server.dom.tld server[10286]:         at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31)
Jul 19 10:29:02 fm-server.dom.tld server[10286]:         at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:65)
Jul 19 10:29:02 fm-server.dom.tld server[10286]:         at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
Jul 19 10:29:02 fm-server.dom.tld server[10286]:         at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
Jul 19 10:29:02 fm-server.dom.tld server[10286]:         at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)

The package is there, though.

[root@fm-server ~]# rpm -V candlepin
S.5....T.  c /etc/candlepin/candlepin.conf
.M...UG..  g /etc/candlepin/certs/candlepin-ca.crt

As such, the health check fails as well.

Running ForemanMaintain::Scenario::FilteredScenario
================================================================================
Check number of fact names in database:                               [OK]
--------------------------------------------------------------------------------
Check to verify no empty CA cert requests exist:                      [OK]
--------------------------------------------------------------------------------
Check whether all services are running:                               [OK]
--------------------------------------------------------------------------------
Check whether all services are running using the ping call:           [FAIL]
Couldn't connect to the server: undefined method `to_sym' for nil:NilClass
--------------------------------------------------------------------------------
Continue with step [Restart applicable services]?, [y(yes), n(no)] no
Scenario [ForemanMaintain::Scenario::FilteredScenario] failed.                  

The following steps ended up in failing state:

  [server-ping]

Resolve the failed steps and rerun
the command. In case the failures are false positives,
use --whitelist="server-ping"

From /var/log/messages:

Jul 19 10:17:47 fm-server dbus-daemon[1137]: [system] Activating service name='org.fedoraproject.Setroubleshootd' requested by ':1.226' (uid=0 pid=1113 comm="/usr/sbin/sedispatch " label="system_u:system_r:auditd_t:s0") (using servicehelper)
Jul 19 10:17:47 fm-server dbus-daemon[1137]: [system] Successfully activated service 'org.fedoraproject.Setroubleshootd'
Jul 19 10:17:49 fm-server dbus-daemon[1137]: [system] Activating service name='org.fedoraproject.SetroubleshootPrivileged' requested by ':1.229' (uid=995 pid=10518 comm="/usr/libexec/platform-python -Es /usr/sbin/setroub" label="system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023") (using servicehelper)
Jul 19 10:17:49 fm-server dbus-daemon[1137]: [system] Successfully activated service 'org.fedoraproject.SetroubleshootPrivileged'
Jul 19 10:17:53 fm-server setroubleshoot[10518]: SELinux is preventing /usr/lib/jvm/java-11-openjdk-11.0.11.0.9-2.el8_4.x86_64/bin/java from create access on the directory .pki. For complete SELinux messages run: sealert -l 0358c251-64e8-41dd-9a4c-7f6524c99009
Jul 19 10:17:53 fm-server setroubleshoot[10518]: SELinux is preventing /usr/lib/jvm/java-11-openjdk-11.0.11.0.9-2.el8_4.x86_64/bin/java from create access on the directory .pki.#012#012*****  Plugin catchall_labels (83.8 confidence) suggests   *******************#012#012If you want to allow java to have create access on the .pki directory#012Then you need to change the label on .pki#012Do#012# semanage fcontext -a -t FILE_TYPE '.pki'#012where FILE_TYPE is one of the following: candlepin_var_cache_t, candlepin_var_lib_t, candlepin_var_log_t, pki_common_t, pki_tomcat_etc_rw_t, pki_tomcat_log_t, pki_tomcat_var_lib_t, tomcat_cache_t, tomcat_log_t, tomcat_tmp_t, tomcat_var_lib_t, tomcat_var_run_t.#012Then execute:#012restorecon -v '.pki'#012#012#012*****  Plugin catchall (17.1 confidence) suggests   **************************#012#012If you believe that java should be allowed create access on the .pki directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'java' --raw | audit2allow -M my-java#012# semodule -X 300 -i my-java.pp#012
Jul 19 10:17:53 fm-server server[10286]: 19-Jul-2021 10:17:53.403 SEVERE [main] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file
Jul 19 10:17:53 fm-server server[10286]: 19-Jul-2021 10:17:53.407 SEVERE [main] org.apache.catalina.core.StandardContext.startInternal Context [/candlepin] startup failed due to previous errors
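As an aside, syslog flattens the multi-line setroubleshoot messages above, encoding each newline as #012; substituting them back makes the sealert suggestion readable. A small sketch on a sample snippet (the sed \n replacement escape assumes GNU sed):

```shell
# Decode "#012" newline escapes in a setroubleshoot line from /var/log/messages.
msg='If you want to allow java to have create access on the .pki directory#012Then you need to change the label on .pki'
printf '%s\n' "$msg" | sed 's/#012/\n/g'
```

The same pipe works on whole lines pulled from the journal or /var/log/messages.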

Check the tomcat logs in /var/log/tomcat for errors during startup. Also check the journal:

# journalctl -u tomcat.service

Somewhere in there, there should be an error message explaining why it isn’t working…

19-Jul-2021 13:01:37.067 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/var/lib/tomcat/webapps/candlepin]
19-Jul-2021 13:01:40.660 INFO [main] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
19-Jul-2021 13:01:42.948 WARNING [main] com.google.inject.internal.ProxyFactory.<init> Method [public org.candlepin.model.Persisted org.candlepin.model.OwnerCurator.create(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@5b17838a]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
19-Jul-2021 13:01:43.006 WARNING [main] com.google.inject.internal.ProxyFactory.<init> Method [public org.candlepin.model.Persisted org.candlepin.model.ProductCurator.merge(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@5b17838a]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
19-Jul-2021 13:01:43.007 WARNING [main] com.google.inject.internal.ProxyFactory.<init> Method [public void org.candlepin.model.ProductCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@5b17838a]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
19-Jul-2021 13:01:43.007 WARNING [main] com.google.inject.internal.ProxyFactory.<init> Method [public org.candlepin.model.Persisted org.candlepin.model.ProductCurator.create(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@5b17838a]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
19-Jul-2021 13:01:43.064 WARNING [main] com.google.inject.internal.ProxyFactory.<init> Method [public void org.candlepin.model.EntitlementCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@5b17838a]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
19-Jul-2021 13:01:43.079 WARNING [main] com.google.inject.internal.ProxyFactory.<init> Method [public void org.candlepin.model.ConsumerCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@5b17838a]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
19-Jul-2021 13:01:43.079 WARNING [main] com.google.inject.internal.ProxyFactory.<init> Method [public org.candlepin.model.Persisted org.candlepin.model.ConsumerCurator.create(org.candlepin.model.Persisted,boolean)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@5b17838a]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
19-Jul-2021 13:01:43.147 WARNING [main] com.google.inject.internal.ProxyFactory.<init> Method [public void org.candlepin.model.CdnCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@5b17838a]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
19-Jul-2021 13:01:43.164 WARNING [main] com.google.inject.internal.ProxyFactory.<init> Method [public void org.candlepin.model.PoolCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@5b17838a]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
19-Jul-2021 13:01:43.258 WARNING [main] com.google.inject.internal.ProxyFactory.<init> Method [public void org.candlepin.model.RulesCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@5b17838a]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
19-Jul-2021 13:01:43.258 WARNING [main] com.google.inject.internal.ProxyFactory.<init> Method [public org.candlepin.model.Persisted org.candlepin.model.RulesCurator.create(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@5b17838a]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
19-Jul-2021 13:01:43.377 WARNING [main] com.google.inject.internal.ProxyFactory.<init> Method [public void org.candlepin.model.ContentCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@5b17838a]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
19-Jul-2021 13:01:43.423 WARNING [main] com.google.inject.internal.ProxyFactory.<init> Method [public void org.candlepin.model.EntitlementCertificateCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@5b17838a]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
19-Jul-2021 13:01:49.885 SEVERE [main] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file
19-Jul-2021 13:01:49.888 SEVERE [main] org.apache.catalina.core.StandardContext.startInternal Context [/candlepin] startup failed due to previous errors
19-Jul-2021 13:01:49.932 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesJdbc The web application [candlepin] registered the JDBC driver [org.postgresql.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
19-Jul-2021 13:01:49.932 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [C3P0PooledConnectionPoolManager[identityToken->1hgent9ai1gu53avtc0ehp|493c0ba8]-AdminTaskTimer] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 java.base@11.0.11/java.util.TimerThread.mainLoop(Timer.java:553)
 java.base@11.0.11/java.util.TimerThread.run(Timer.java:506)
19-Jul-2021 13:01:49.933 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [C3P0PooledConnectionPoolManager[identityToken->1hgent9ai1gu53avtc0ehp|493c0ba8]-HelperThread-#0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:683)
19-Jul-2021 13:01:49.933 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [C3P0PooledConnectionPoolManager[identityToken->1hgent9ai1gu53avtc0ehp|493c0ba8]-HelperThread-#1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:683)
19-Jul-2021 13:01:49.934 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [C3P0PooledConnectionPoolManager[identityToken->1hgent9ai1gu53avtc0ehp|493c0ba8]-HelperThread-#2] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:683)
19-Jul-2021 13:01:49.935 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [Thread-0 (-scheduled-threads)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 java.base@11.0.11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)
 java.base@11.0.11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
19-Jul-2021 13:01:49.936 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [activemq-buffer-timeout] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:885)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1039)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1345)
 java.base@11.0.11/java.util.concurrent.Semaphore.acquire(Semaphore.java:318)
 org.apache.activemq.artemis.core.io.buffer.TimedBuffer$CheckTimer.run(TimedBuffer.java:478)
 java.base@11.0.11/java.lang.Thread.run(Thread.java:829)
19-Jul-2021 13:01:49.936 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [Thread-0 (ActiveMQ-scheduled-threads)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 java.base@11.0.11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)
 java.base@11.0.11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
19-Jul-2021 13:01:49.937 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [Thread-1 (ActiveMQ-scheduled-threads)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 java.base@11.0.11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)
 java.base@11.0.11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
19-Jul-2021 13:01:49.938 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [Thread-0 (ActiveMQ-IO-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$7@7396d407)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 java.base@11.0.11/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 java.base@11.0.11/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 java.base@11.0.11/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1053)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
19-Jul-2021 13:01:49.939 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [Thread-0 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@4ad1ed3)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 java.base@11.0.11/java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:458)
 org.apache.activemq.artemis.utils.ActiveMQThreadPoolExecutor$ThreadPoolQueue.poll(ActiveMQThreadPoolExecutor.java:112)
 org.apache.activemq.artemis.utils.ActiveMQThreadPoolExecutor$ThreadPoolQueue.poll(ActiveMQThreadPoolExecutor.java:45)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1053)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
19-Jul-2021 13:01:49.940 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [Thread-1 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@4ad1ed3)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 java.base@11.0.11/java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:458)
 org.apache.activemq.artemis.utils.ActiveMQThreadPoolExecutor$ThreadPoolQueue.poll(ActiveMQThreadPoolExecutor.java:112)
 org.apache.activemq.artemis.utils.ActiveMQThreadPoolExecutor$ThreadPoolQueue.poll(ActiveMQThreadPoolExecutor.java:45)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1053)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
19-Jul-2021 13:01:49.941 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [Thread-2 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@4ad1ed3)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 java.base@11.0.11/java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:458)
 org.apache.activemq.artemis.utils.ActiveMQThreadPoolExecutor$ThreadPoolQueue.poll(ActiveMQThreadPoolExecutor.java:112)
 org.apache.activemq.artemis.utils.ActiveMQThreadPoolExecutor$ThreadPoolQueue.poll(ActiveMQThreadPoolExecutor.java:45)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1053)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
19-Jul-2021 13:01:49.942 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [Thread-3 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@4ad1ed3)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 java.base@11.0.11/java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:458)
 org.apache.activemq.artemis.utils.ActiveMQThreadPoolExecutor$ThreadPoolQueue.poll(ActiveMQThreadPoolExecutor.java:112)
 org.apache.activemq.artemis.utils.ActiveMQThreadPoolExecutor$ThreadPoolQueue.poll(ActiveMQThreadPoolExecutor.java:45)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1053)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
19-Jul-2021 13:01:49.943 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [activemq-failure-check-thread] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1079)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1369)
 java.base@11.0.11/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:278)
 org.apache.activemq.artemis.core.remoting.server.impl.RemotingServiceImpl$FailureCheckAndFlushThread.run(RemotingServiceImpl.java:764)
19-Jul-2021 13:01:49.943 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [Thread-2 (ActiveMQ-scheduled-threads)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 java.base@11.0.11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1177)
 java.base@11.0.11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
19-Jul-2021 13:01:49.944 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [Thread-3 (ActiveMQ-scheduled-threads)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 java.base@11.0.11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1177)
 java.base@11.0.11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
19-Jul-2021 13:01:49.945 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [Thread-4 (ActiveMQ-scheduled-threads)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 java.base@11.0.11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1177)
 java.base@11.0.11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
19-Jul-2021 13:01:49.946 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [Thread-0 (activemq-netty-threads)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/sun.nio.ch.EPoll.wait(Native Method)
 java.base@11.0.11/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)
 java.base@11.0.11/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124)
 java.base@11.0.11/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:141)
 io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
 io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
 io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
 io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
 io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
 org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
19-Jul-2021 13:01:49.947 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [Thread-0 (ActiveMQ-remoting-threads-ActiveMQServerImpl::serverUUID=f7d126e1-e87c-11eb-85e2-fa163e780169-1862512636)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 java.base@11.0.11/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 java.base@11.0.11/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 java.base@11.0.11/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1053)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
19-Jul-2021 13:01:49.947 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [Thread-0 (ActiveMQ-client-global-threads)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 java.base@11.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
 java.base@11.0.11/java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:458)
 org.apache.activemq.artemis.utils.ActiveMQThreadPoolExecutor$ThreadPoolQueue.poll(ActiveMQThreadPoolExecutor.java:112)
 org.apache.activemq.artemis.utils.ActiveMQThreadPoolExecutor$ThreadPoolQueue.poll(ActiveMQThreadPoolExecutor.java:45)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1053)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
19-Jul-2021 13:01:49.948 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [Thread-1 (ActiveMQ-remoting-threads-ActiveMQServerImpl::serverUUID=f7d126e1-e87c-11eb-85e2-fa163e780169-1862512636)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/jdk.internal.misc.Unsafe.park(Native Method)
 java.base@11.0.11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 java.base@11.0.11/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 java.base@11.0.11/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 java.base@11.0.11/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1053)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 java.base@11.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
19-Jul-2021 13:01:49.949 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_Worker-1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
19-Jul-2021 13:01:49.950 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_Worker-2] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
19-Jul-2021 13:01:49.950 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_Worker-3] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
19-Jul-2021 13:01:49.951 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_Worker-4] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
19-Jul-2021 13:01:49.951 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_Worker-5] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
19-Jul-2021 13:01:49.952 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_Worker-6] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
19-Jul-2021 13:01:49.952 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_Worker-7] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
19-Jul-2021 13:01:49.953 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_Worker-8] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
19-Jul-2021 13:01:49.953 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_Worker-9] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
19-Jul-2021 13:01:49.954 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_Worker-10] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
19-Jul-2021 13:01:49.954 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_Worker-11] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
19-Jul-2021 13:01:49.955 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_Worker-12] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
19-Jul-2021 13:01:49.955 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_Worker-13] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
19-Jul-2021 13:01:49.956 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_Worker-14] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
19-Jul-2021 13:01:49.956 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_Worker-15] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:568)
19-Jul-2021 13:01:49.957 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [C3P0PooledConnectionPoolManager[identityToken->1hgent9ai1gu53avtc0ehp|34028481]-AdminTaskTimer] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 java.base@11.0.11/java.util.TimerThread.mainLoop(Timer.java:553)
 java.base@11.0.11/java.util.TimerThread.run(Timer.java:506)
19-Jul-2021 13:01:49.957 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [C3P0PooledConnectionPoolManager[identityToken->1hgent9ai1gu53avtc0ehp|34028481]-HelperThread-#0] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:683)
19-Jul-2021 13:01:49.958 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [C3P0PooledConnectionPoolManager[identityToken->1hgent9ai1gu53avtc0ehp|34028481]-HelperThread-#1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:683)
19-Jul-2021 13:01:49.958 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [C3P0PooledConnectionPoolManager[identityToken->1hgent9ai1gu53avtc0ehp|34028481]-HelperThread-#2] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:683)
19-Jul-2021 13:01:49.959 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_QuartzScheduler-NON_CLUSTERED_MisfireHandler] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Thread.sleep(Native Method)
 org.quartz.impl.jdbcjobstore.JobStoreSupport$MisfireHandler.run(JobStoreSupport.java:4053)
19-Jul-2021 13:01:49.959 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [candlepin] appears to have started a thread named [QuartzScheduler_QuartzSchedulerThread] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 java.base@11.0.11/java.lang.Object.wait(Native Method)
 org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:427)
19-Jul-2021 13:01:49.961 SEVERE [main] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [candlepin] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@63a9df26]) and a value of type [org.hibernate.internal.SessionImpl] (value [SessionImpl(502488879<open>)]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
19-Jul-2021 13:01:49.961 SEVERE [main] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [candlepin] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@7665c205]) and a value of type [io.netty.util.internal.InternalThreadLocalMap] (value [io.netty.util.internal.InternalThreadLocalMap@45934182]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
19-Jul-2021 13:01:49.961 SEVERE [main] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [candlepin] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@7665c205]) and a value of type [io.netty.util.internal.InternalThreadLocalMap] (value [io.netty.util.internal.InternalThreadLocalMap@37ce54ef]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
19-Jul-2021 13:01:49.962 SEVERE [main] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [candlepin] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@7665c205]) and a value of type [io.netty.util.internal.InternalThreadLocalMap] (value [io.netty.util.internal.InternalThreadLocalMap@688e9cc5]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
19-Jul-2021 13:01:49.962 SEVERE [main] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [candlepin] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@7665c205]) and a value of type [io.netty.util.internal.InternalThreadLocalMap] (value [io.netty.util.internal.InternalThreadLocalMap@77c8e613]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
19-Jul-2021 13:01:49.962 SEVERE [main] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [candlepin] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@7665c205]) and a value of type [io.netty.util.internal.InternalThreadLocalMap] (value [io.netty.util.internal.InternalThreadLocalMap@6ed9ab0]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
19-Jul-2021 13:01:49.962 SEVERE [main] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [candlepin] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@7665c205]) and a value of type [io.netty.util.internal.InternalThreadLocalMap] (value [io.netty.util.internal.InternalThreadLocalMap@43c1dfae]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
19-Jul-2021 13:01:49.962 SEVERE [main] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [candlepin] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@7665c205]) and a value of type [io.netty.util.internal.InternalThreadLocalMap] (value [io.netty.util.internal.InternalThreadLocalMap@5f350ba7]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
19-Jul-2021 13:01:49.963 SEVERE [main] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [candlepin] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@7665c205]) and a value of type [io.netty.util.internal.InternalThreadLocalMap] (value [io.netty.util.internal.InternalThreadLocalMap@6f7d962c]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
19-Jul-2021 13:01:49.963 SEVERE [main] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [candlepin] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@7665c205]) and a value of type [io.netty.util.internal.InternalThreadLocalMap] (value [io.netty.util.internal.InternalThreadLocalMap@52fb969]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
19-Jul-2021 13:01:49.963 SEVERE [main] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [candlepin] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@7665c205]) and a value of type [io.netty.util.internal.InternalThreadLocalMap] (value [io.netty.util.internal.InternalThreadLocalMap@485e2acd]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
19-Jul-2021 13:01:49.969 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/var/lib/tomcat/webapps/candlepin] has finished in [12,902] ms
19-Jul-2021 13:01:49.972 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["https-jsse-nio-127.0.0.1-23443"]
19-Jul-2021 13:01:49.981 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [12,978] milliseconds
19-Jul-2021 13:01:50.951 INFO [activemq-failure-check-thread] org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading Illegal access: this web application instance has been stopped already. Could not load [org.apache.activemq.artemis.core.remoting.server.impl.RemotingServiceImpl$FailureCheckAndFlushThread$1]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
        java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [org.apache.activemq.artemis.core.remoting.server.impl.RemotingServiceImpl$FailureCheckAndFlushThread$1]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
                at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading(WebappClassLoaderBase.java:1385)
                at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1373)
                at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1226)
                at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1188)
                at org.apache.activemq.artemis.core.remoting.server.impl.RemotingServiceImpl$FailureCheckAndFlushThread.run(RemotingServiceImpl.java:731)
19-Jul-2021 13:01:50.953 INFO [activemq-failure-check-thread] org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading Illegal access: this web application instance has been stopped already. Could not load [ch.qos.logback.classic.spi.ThrowableProxy]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
        java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [ch.qos.logback.classic.spi.ThrowableProxy]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
                at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading(WebappClassLoaderBase.java:1385)
                at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1373)
                at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1226)
                at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1188)
                at ch.qos.logback.classic.spi.LoggingEvent.<init>(LoggingEvent.java:119)
                at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:419)
                at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:383)
                at ch.qos.logback.classic.Logger.log(Logger.java:765)
                at jdk.internal.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
                at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                at java.base/java.lang.reflect.Method.invoke(Method.java:566)
                at org.jboss.logging.Slf4jLocationAwareLogger.doLog(Slf4jLocationAwareLogger.java:89)
                at org.jboss.logging.Slf4jLocationAwareLogger.doLog(Slf4jLocationAwareLogger.java:75)
                at org.jboss.logging.Logger.logv(Logger.java:2226)
                at org.apache.activemq.artemis.core.server.ActiveMQServerLogger_$logger.errorOnFailureCheck(ActiveMQServerLogger_$logger.java:1198)
                at org.apache.activemq.artemis.core.remoting.server.impl.RemotingServiceImpl$FailureCheckAndFlushThread.run(RemotingServiceImpl.java:767)
19-Jul-2021 13:01:53.954 INFO [Thread-2 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@4ad1ed3)] org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading Illegal access: this web application instance has been stopped already. Could not load [java.nio.file.FileStore]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
        java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [java.nio.file.FileStore]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
                at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading(WebappClassLoaderBase.java:1385)
                at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1373)
                at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1226)
                at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1188)
                at org.apache.activemq.artemis.core.server.files.FileStoreMonitor.tick(FileStoreMonitor.java:102)
                at org.apache.activemq.artemis.core.server.files.FileStoreMonitor.run(FileStoreMonitor.java:93)
                at org.apache.activemq.artemis.core.server.ActiveMQScheduledComponent$2.run(ActiveMQScheduledComponent.java:306)
                at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42)
                at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31)
                at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:65)
                at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
                at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
                at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)

I tried installing 2.4 again, and then I hit another error about db:seed failing, which brought me back to this thread:

And lo and behold, with just a single / (root) partition and a tmpfs /tmp, it seems to work:

2021-07-19 16:34:31 [NOTICE] [root] Loading installer configuration. This will take some time.
2021-07-19 16:34:37 [NOTICE] [root] Running installer with log based terminal output at level NOTICE.
2021-07-19 16:34:37 [NOTICE] [root] Use -l to set the terminal output log level to ERROR, WARN, NOTICE, INFO, or DEBUG. See --full-help for definitions.
2021-07-19 16:39:03 [NOTICE] [configure] Starting system configuration.
2021-07-19 16:41:41 [NOTICE] [configure] 250 configuration steps out of 2022 steps complete.
2021-07-19 16:44:11 [NOTICE] [configure] 500 configuration steps out of 2022 steps complete.
2021-07-19 16:45:07 [NOTICE] [configure] 750 configuration steps out of 2025 steps complete.
2021-07-19 16:47:02 [NOTICE] [configure] 1000 configuration steps out of 2030 steps complete.
2021-07-19 16:49:21 [NOTICE] [configure] 1250 configuration steps out of 2053 steps complete.
2021-07-19 16:58:01 [NOTICE] [configure] 1500 configuration steps out of 2053 steps complete.
2021-07-19 16:59:38 [NOTICE] [configure] 1750 configuration steps out of 2053 steps complete.
2021-07-19 17:01:22 [NOTICE] [configure] 2000 configuration steps out of 2053 steps complete.
2021-07-19 17:01:49 [NOTICE] [configure] System configuration has finished.
Executing: foreman-rake upgrade:run
`/usr/share/foreman` is not writable.
Bundler will use `/tmp/bundler20210719-14406-1qcudl914406' as your home directory temporarily.
Rubocop not loaded.
=============================================
Upgrade Step 1/5: katello:correct_repositories. This may take a long while.
=============================================
Upgrade Step 2/5: katello:clean_backend_objects. This may take a long while.
0 orphaned consumer id(s) found in candlepin.
Candlepin orphaned consumers: []
=============================================
Upgrade Step 3/5: katello:upgrades:4.0:remove_ostree_puppet_content.
=============================================
Upgrade Step 4/5: katello:upgrades:4.1:sync_noarch_content.
=============================================
Upgrade Step 5/5: katello:upgrades:4.1:fix_invalid_pools.
I, [2021-07-19T17:02:05.194548 #14406]  INFO -- : Corrected 0 invalid pools
I, [2021-07-19T17:02:05.194597 #14406]  INFO -- : Removed 0 orphaned pools

I can log in, and Tomcat seems to have no issues.

[root@foreman-server ~]# foreman-maintain health check
Running ForemanMaintain::Scenario::FilteredScenario
================================================================================
Check number of fact names in database:                               [OK]
--------------------------------------------------------------------------------
Check to verify no empty CA cert requests exist:                      [OK]
--------------------------------------------------------------------------------
Check whether all services are running:                               [OK]
--------------------------------------------------------------------------------
Check whether all services are running using the ping call:           [OK]
--------------------------------------------------------------------------------

I really wonder where the problem is here.
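Given that the install only succeeded after switching to a single root partition with a tmpfs /tmp, one suspect is the mount options on the old /tmp: a separate /tmp mounted noexec is a known way to break installer steps that unpack and execute files there (the Bundler message above shows it using /tmp as a temporary home). A minimal sketch of a check, assuming a standard /proc/mounts layout; `check_tmp_opts` is a hypothetical helper, not part of foreman-installer:

```shell
# Hypothetical helper: inspect a /proc/mounts-style line for /tmp and
# flag the noexec option, which can break tools that run files from /tmp.
check_tmp_opts() {
  case "$1" in
    *" /tmp "*noexec*) echo "noexec" ;;   # separate /tmp with noexec set
    *)                 echo "ok" ;;       # no /tmp entry, or exec allowed
  esac
}

# Check the live system; an empty grep result means /tmp is not a
# separate mount, which check_tmp_opts reports as "ok".
check_tmp_opts "$(grep ' /tmp ' /proc/mounts || true)"
```

If this prints `noexec`, remounting /tmp with exec (or using a tmpfs /tmp, as above) before rerunning foreman-installer would be worth a try.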