Importing manifest taking forever

Hey,

Did this start happening recently, for example after an upgrade, or did it never work?

Do you see anything in your logs, @Balaji_Arun_Kumar_Sa?
Can you share anything from there that might help us find a solution?

Hello Mcorr,

Below are the relevant log messages:

Sep 13 11:00:57 hostname pulpcore-api: pulp [None]: django_guid:INFO: Header Correlation-ID was not found in the incoming request. Generated new GUID: 0474e410dcbb481c9d1ca59895419339
Sep 13 11:00:57 hostname pulpcore-api: - - [13/Sep/2021:16:00:57 +0000] “GET /pulp/api/v3/tasks/04343eaf-e544-41c7-b70f-737c36f12646/ HTTP/1.1” 200 630 “-” “OpenAPI-Generator/3.9.0/ruby”
Sep 13 11:00:58 hostname pulpcore-api: pulp [None]: django_guid:INFO: Header Correlation-ID was not found in the incoming request. Generated new GUID: 9aca8239af9449f2a66efe31a5d7d38a
Sep 13 11:00:58 hostname pulpcore-api: - - [13/Sep/2021:16:00:58 +0000] “GET /pulp/api/v3/tasks/f788f5be-42d9-40a1-bb6f-fe90aaab6e1b/ HTTP/1.1” 200 506 “-” “OpenAPI-Generator/3.9.0/ruby”
Sep 13 11:00:58 hostname pulpcore-api: pulp [None]: django_guid:INFO: Header Correlation-ID was not found in the incoming request. Generated new GUID: 8641cddc0cfe41c6af1b3162bf27c1f6
Sep 13 11:00:58 hostname pulpcore-api: - - [13/Sep/2021:16:00:58 +0000] “GET /pulp/api/v3/tasks/9dd7f0eb-da42-4fae-9fc6-4ce4c583bff3/ HTTP/1.1” 200 630 “-” “OpenAPI-Generator/3.9.0/ruby”

Today I tried cancelling the manifest import because it had been running for days. I had to hit cancel about four times; each time I hit cancel, the progress moved forward until it completed.
From the manifest history, I see the manifest import was reported successful on 9/10/2021, but I am not sure it really imported the manifest, because some of the repository syncs are failing. I have attached a screenshot.
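
In case it helps others, stuck tasks can also be listed and cancelled directly against the Pulp 3 API (a sketch; the certificate paths below are the usual Katello client-cert locations, so adjust if yours differ, and the task UUID is the one from the log above):

# list running Pulp tasks
curl -s --cert /etc/pki/katello/certs/pulp-client.crt \
     --key /etc/pki/katello/private/pulp-client.key \
     "https://$(hostname -f)/pulp/api/v3/tasks/?state=running"

# cancel a single task by UUID
curl -s --cert /etc/pki/katello/certs/pulp-client.crt \
     --key /etc/pki/katello/private/pulp-client.key \
     -X PATCH -H 'Content-Type: application/json' \
     -d '{"state": "canceled"}' \
     "https://$(hostname -f)/pulp/api/v3/tasks/04343eaf-e544-41c7-b70f-737c36f12646/"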

Hello Mcorr,

I forgot to mention: this started happening recently. I also tried restoring from a backup, and I see the same issue.

Best regards,
Balaji Sankaran

Thanks for bringing this to our attention. I will see if someone from the Katello team has insight into what might be happening here.

@Balaji_Arun_Kumar_Sa you may want to upgrade to Katello 4.1, as we think this might be a problem with the pulp tasking system.
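
If it helps, the upstream upgrade path is roughly the following (a sketch based on the Katello upgrade docs; the release-package URLs are illustrative, so match them to your target Foreman/Katello versions):

# point the machine at the new release repos (illustrative URLs)
yum update -y https://yum.theforeman.org/releases/2.5/el7/x86_64/foreman-release.rpm
yum update -y https://fedorapeople.org/groups/katello/releases/yum/4.1/katello/el7/x86_64/katello-repos-latest.rpm
# update all packages, then re-run the installer in upgrade mode
yum update -y
foreman-installer --scenario katello --upgrade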


Thank you, jjeffers!
I will upgrade and update you.

Best regards,
Balaji Sankaran

Great, thanks!

Hello jjeffers,

I am getting the following error when I run yum update. It looks like the package qpid-proton-c-0.34.0-1.el7.x86_64 is no longer available from EPEL.

Error: Package: tfm-rubygem-qpid_proton-0.34.0-3.el7.x86_64 (katello)
Requires: qpid-proton-c = 0.34.0
Removing: qpid-proton-c-0.34.0-1.el7.x86_64 (@epel)
qpid-proton-c = 0.34.0-1.el7
Updated By: qpid-proton-c-0.35.0-1.el7.x86_64 (epel)
qpid-proton-c = 0.35.0-1.el7
Error: Package: python2-qpid-proton-0.34.0-1.el7.x86_64 (@epel)
Requires: qpid-proton-c(x86-64) = 0.34.0-1.el7
Removing: qpid-proton-c-0.34.0-1.el7.x86_64 (@epel)
qpid-proton-c(x86-64) = 0.34.0-1.el7
Updated By: qpid-proton-c-0.35.0-1.el7.x86_64 (epel)
qpid-proton-c(x86-64) = 0.35.0-1.el7


yum can be configured to try to resolve such errors by temporarily enabling
disabled repos and searching for missing dependencies.
To enable this functionality please set 'notify_only=0' in /etc/yum/pluginconf.d/search-disabled-repos.conf


--> Running transaction check
---> Package kernel.x86_64 0:3.10.0-1160.25.1.el7 will be erased
---> Package qpid-proton-c.x86_64 0:0.34.0-1.el7 will be updated
--> Processing Dependency: qpid-proton-c = 0.34.0 for package: tfm-rubygem-qpid_proton-0.34.0-3.el7.x86_64
--> Processing Dependency: qpid-proton-c(x86-64) = 0.34.0-1.el7 for package: python2-qpid-proton-0.34.0-1.el7.x86_64
---> Package tfm-rubygem-qpid_proton.x86_64 0:0.34.0-3.el7 will be an update
--> Processing Dependency: qpid-proton-c = 0.34.0 for package: tfm-rubygem-qpid_proton-0.34.0-3.el7.x86_64
--> Finished Dependency Resolution
Error: Package: tfm-rubygem-qpid_proton-0.34.0-3.el7.x86_64 (katello)
Requires: qpid-proton-c = 0.34.0
Removing: qpid-proton-c-0.34.0-1.el7.x86_64 (@epel)
qpid-proton-c = 0.34.0-1.el7
Updated By: qpid-proton-c-0.35.0-1.el7.x86_64 (epel)
qpid-proton-c = 0.35.0-1.el7
Error: Package: python2-qpid-proton-0.34.0-1.el7.x86_64 (@epel)
Requires: qpid-proton-c(x86-64) = 0.34.0-1.el7
Removing: qpid-proton-c-0.34.0-1.el7.x86_64 (@epel)
qpid-proton-c(x86-64) = 0.34.0-1.el7
Updated By: qpid-proton-c-0.35.0-1.el7.x86_64 (epel)
qpid-proton-c(x86-64) = 0.35.0-1.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

Following is the yum repolist output:

repo id repo name status
centos-sclo-rh/x86_64 CentOS-7 - SCLo rh 7,650
*epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 13,665
foreman/x86_64 Foreman 2.5 712
foreman-plugins/x86_64 Foreman plugins 2.5 424
katello/x86_64 Katello 4.1 130
katello-candlepin/x86_64 Candlepin: an open source entitlement management system. 6
pulpcore/x86_64 pulpcore: Fetch, Upload, Organize, and Distribute Software Packages. 187
puppet6/x86_64 Puppet 6 Repository el 7 - x86_64 318
rhel-7-server-extras-rpms/x86_64 Red Hat Enterprise Linux 7 Server - Extras (RPMs) 1,406
rhel-7-server-optional-rpms/7Server/x86_64 Red Hat Enterprise Linux 7 Server - Optional (RPMs) 23,069
rhel-7-server-rpms/7Server/x86_64 Red Hat Enterprise Linux 7 Server (RPMs) 32,191

Best regards,
Balaji Sankaran

Check out the answer here:
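
In the meantime, two common ways around this kind of dependency conflict (a sketch; the package names are taken from the error above):

# Option 1: hold back the newer EPEL build so the qpid-proton-c = 0.34.0
# requirement from the katello repo can still be satisfied
yum update --exclude=qpid-proton-c --exclude=python2-qpid-proton

# Option 2: let yum search disabled repos for the missing dependency, as
# the yum message above suggests (assumes the config file ships notify_only=1)
sed -i 's/^notify_only=1/notify_only=0/' /etc/yum/pluginconf.d/search-disabled-repos.conf
yum update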

Hello jjeffers,

I downloaded the packages manually and was able to run yum update successfully, but when I run foreman-installer I get the following errors:

[root@hostname]# foreman-installer
2021-09-15 18:03:16 [NOTICE] [root] Loading installer configuration. This will take some time.
2021-09-15 18:03:22 [NOTICE] [root] Running installer with log based terminal output at level NOTICE.
2021-09-15 18:03:22 [NOTICE] [root] Use -l to set the terminal output log level to ERROR, WARN, NOTICE, INFO, or DEBUG. See --full-help for definitions.
2021-09-15 18:03:29 [NOTICE] [configure] Starting system configuration.
2021-09-15 18:03:45 [NOTICE] [configure] 250 configuration steps out of 2045 steps complete.
2021-09-15 18:04:00 [NOTICE] [configure] 500 configuration steps out of 2045 steps complete.
2021-09-15 18:04:01 [NOTICE] [configure] 750 configuration steps out of 2049 steps complete.
2021-09-15 18:04:03 [NOTICE] [configure] 1000 configuration steps out of 2052 steps complete.
2021-09-15 18:04:04 [NOTICE] [configure] 1250 configuration steps out of 2056 steps complete.
2021-09-15 18:04:55 [NOTICE] [configure] 1500 configuration steps out of 2057 steps complete.
2021-09-15 18:06:27 [NOTICE] [configure] 1750 configuration steps out of 2057 steps complete.
2021-09-15 18:07:01 [NOTICE] [configure] 2000 configuration steps out of 2057 steps complete.
2021-09-15 18:07:05 [ERROR ] [configure] /Stage[main]/Foreman_proxy::Register/Foreman_smartproxy[hostname]: Could not evaluate: Error making GET request to Foreman at https://hostname/api/v2/smart_proxies: Response: 422 Unprocessable Entity
2021-09-15 18:07:06 [ERROR ] [configure] /Stage[main]/Foreman_proxy::Register/Foreman_smartproxy[hostname]: Failed to call refresh: Error making GET request to Foreman at https://hostname/api/v2/smart_proxies: Response: 422 Unprocessable Entity
2021-09-15 18:07:06 [ERROR ] [configure] /Stage[main]/Foreman_proxy::Register/Foreman_smartproxy[hostname]: Error making GET request to Foreman at https://hostname/api/v2/smart_proxies: Response: 422 Unprocessable Entity
2021-09-15 18:07:09 [NOTICE] [configure] System configuration has finished.

There were errors detected during install.
Please address the errors and re-run the installer to ensure the system is properly configured.
Failing to do so is likely to result in broken functionality.

The full log is at /var/log/foreman-installer/katello.log

This may be related to Bug #32299: Installation of Katello 4 RC3 fails when --foreman-proxy-ssl-port is not set to default 9090 - SELinux - Foreman
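
If that bug is in play, it may be worth confirming the SELinux state and how the smart proxy SSL port is labelled before re-running the installer (a sketch; semanage comes from policycoreutils-python on EL7):

getenforce
semanage port -l | grep 9090    # how is the smart proxy SSL port labelled?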

Thank you so much jjeffers!
It was not working with SELinux set to permissive; I set SELinux to enforcing and was able to upgrade successfully.
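
For anyone hitting the same thing, the switch to enforcing looks like this (a sketch; run as root):

setenforce 1                                                   # enforce immediately
sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config  # persist across reboots
getenforce                                                     # should now print "Enforcing"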

Best regards,
Balaji Sankaran


Great!

Hello jjeffers,

I think I spoke too soon. After the upgrade, when I log in to the Foreman web GUI, I get an error. I have attached the error; please help.

Best regards,
Balaji Sankaran

Check that you have enough RAM available, and restart Candlepin with

foreman-maintain service restart --only tomcat
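
Once Tomcat is back up, hammer ping is a quick way to check that Candlepin is answering again (the exact output varies by version, but the candlepin services should report a status of ok):

hammer ping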

Hello jeremylenz,

I have 32 GB of RAM. I tried restarting Candlepin with the command you recommended, but I am getting the same error.

Best regards,
Balaji Sankaran

Hmm… what is the output of foreman-maintain service status?

All services are running OK; the following is what I see from the status command:

Sep 21 15:50:27 hostname systemd[1]: Starting Foreman Proxy…
Sep 21 15:50:29 hostname smart-proxy[1378]: /usr/share/foreman-proxy is not writable.
Sep 21 15:50:29 hostname smart-proxy[1378]: Bundler will use `/tmp/bundler20210921-1378-10jmmex1378’ as your home directory temporarily.
Sep 21 15:50:29 hostname smart-proxy[1378]: Your Gemfile lists the gem rsec (< 1) more than once.
Sep 21 15:50:29 hostname smart-proxy[1378]: You should probably keep only one of them.
Sep 21 15:50:29 hostname smart-proxy[1378]: Remove any duplicate entries and specify the gem only once.
Sep 21 15:50:29 hostname smart-proxy[1378]: While it’s not a problem now, it could cause errors if you change the version of one of them later.
Sep 21 15:50:30 hostname systemd[1]: Started Foreman Proxy.
Sep 21 16:05:21 hostname smart-proxy[1378]: 10.220.146.89 - - [21/Sep/2021:16:05:21 CDT] “GET /features HTTP/1.1” 200 73
Sep 21 16:05:21 hostname smart-proxy[1378]: - -> /features
\ All services are running [OK]

Running Status Services

Get status of applicable services:

Displaying the following service(s):
rh-redis5-redis, postgresql, pulpcore-api, pulpcore-content, qdrouterd, qpidd, rh-redis5-redis, pulpcore-worker@1.service, pulpcore-worker@2.service, pulpcore-worker@3.service, pulpcore-worker@4.service, pulpcore-worker@5.service, pulpcore-worker@6.service, tomcat, dynflow-sidekiq@orchestrator, foreman, httpd, puppetserver, dynflow-sidekiq@worker-1, dynflow-sidekiq@worker-hosts-queue-1, foreman-proxy

  • displaying rh-redis5-redis
    ● rh-redis5-redis.service - Redis persistent key-value database
    Loaded: loaded (/usr/lib/systemd/system/rh-redis5-redis.service; enabled; vendor preset: disabled)
    Drop-In: /etc/systemd/system/rh-redis5-redis.service.d
    └─90-limits.conf
    Active: active (running) since Tue 2021-09-21 15:50:27 CDT; 22min ago
    Main PID: 1424 (redis-server)
    Tasks: 4
    CGroup: /system.slice/rh-redis5-redis.service
    └─1424 /opt/rh/rh-redis5/root/usr/bin/redis-server 127.0.0.1:6379

Sep 21 15:50:27 hostname systemd[1]: Starting Redis persistent key-value database…
Sep 21 15:50:27 hostname systemd[1]: Started Redis persistent key-value database.
\ displaying postgresql
● rh-postgresql12-postgresql.service - PostgreSQL database server
Loaded: loaded (/usr/lib/systemd/system/rh-postgresql12-postgresql.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 15:50:27 CDT; 22min ago
Process: 1376 ExecStartPre=/opt/rh/rh-postgresql12/root/usr/libexec/postgresql-check-db-dir %N (code=exited, status=0/SUCCESS)
Main PID: 1453 (postmaster)
Tasks: 86
CGroup: /system.slice/rh-postgresql12-postgresql.service
├─1453 postmaster -D /var/opt/rh/rh-postgresql12/lib/pgsql/data
├─1665 postgres: logger
├─1703 postgres: checkpointer
├─1704 postgres: background writer
├─1705 postgres: walwriter
├─1706 postgres: autovacuum launcher
├─1707 postgres: stats collector
├─1708 postgres: logical replication launcher
├─2548 postgres: pulp pulpcore 127.0.0.1(59552) idle
├─2549 postgres: pulp pulpcore 127.0.0.1(59554) idle
├─2550 postgres: pulp pulpcore 127.0.0.1(59556) idle
├─2551 postgres: pulp pulpcore 127.0.0.1(59558) idle
├─2555 postgres: pulp pulpcore 127.0.0.1(59560) idle
├─2559 postgres: pulp pulpcore 127.0.0.1(59562) idle
├─2797 postgres: pulp pulpcore 127.0.0.1(59564) idle
├─2798 postgres: pulp pulpcore 127.0.0.1(59566) idle
├─2799 postgres: pulp pulpcore 127.0.0.1(59568) idle
├─2800 postgres: pulp pulpcore 127.0.0.1(59570) idle
├─2801 postgres: pulp pulpcore 127.0.0.1(59572) idle
├─2805 postgres: pulp pulpcore 127.0.0.1(59574) idle
├─2815 postgres: pulp pulpcore 127.0.0.1(59576) idle
├─2820 postgres: pulp pulpcore 127.0.0.1(59578) idle
├─2821 postgres: pulp pulpcore 127.0.0.1(59580) idle
├─2824 postgres: pulp pulpcore 127.0.0.1(59582) idle
├─2825 postgres: pulp pulpcore 127.0.0.1(59584) idle
├─2829 postgres: pulp pulpcore 127.0.0.1(59586) idle
├─2832 postgres: pulp pulpcore 127.0.0.1(59588) idle
├─2965 postgres: pulp pulpcore 127.0.0.1(59598) idle
├─2966 postgres: pulp pulpcore 127.0.0.1(59600) idle
├─2967 postgres: pulp pulpcore 127.0.0.1(59602) idle
├─2969 postgres: pulp pulpcore 127.0.0.1(59604) idle
├─2972 postgres: pulp pulpcore 127.0.0.1(59606) idle
├─2974 postgres: pulp pulpcore 127.0.0.1(59608) idle
├─2976 postgres: pulp pulpcore 127.0.0.1(59610) idle
├─2977 postgres: pulp pulpcore 127.0.0.1(59612) idle
├─2978 postgres: pulp pulpcore 127.0.0.1(59614) idle
├─2979 postgres: pulp pulpcore 127.0.0.1(59616) idle
├─3019 postgres: pulp pulpcore 127.0.0.1(59626) idle
├─3022 postgres: pulp pulpcore 127.0.0.1(59630) idle
├─3024 postgres: pulp pulpcore 127.0.0.1(59632) idle
├─3111 postgres: foreman foreman [local] idle
├─3208 postgres: foreman foreman [local] idle
├─3209 postgres: foreman foreman [local] idle
├─3210 postgres: foreman foreman [local] idle
├─3211 postgres: foreman foreman [local] idle
├─3212 postgres: foreman foreman [local] idle
├─3215 postgres: foreman foreman [local] idle
├─3218 postgres: foreman foreman [local] idle
├─3220 postgres: foreman foreman [local] idle
├─3221 postgres: foreman foreman [local] idle
├─3222 postgres: foreman foreman [local] idle
├─3224 postgres: foreman foreman [local] idle
├─3226 postgres: foreman foreman [local] idle
├─3231 postgres: foreman foreman [local] idle
├─3234 postgres: foreman foreman [local] idle
├─3235 postgres: foreman foreman [local] idle
├─3236 postgres: foreman foreman [local] idle
├─3241 postgres: foreman foreman [local] idle
├─3243 postgres: foreman foreman [local] idle
├─3245 postgres: foreman foreman [local] idle
├─3249 postgres: foreman foreman [local] idle
├─3251 postgres: foreman foreman [local] idle
├─3258 postgres: foreman foreman [local] idle
├─3268 postgres: foreman foreman [local] idle
├─3318 postgres: foreman foreman [local] idle
├─3326 postgres: foreman foreman [local] idle
├─3327 postgres: foreman foreman [local] idle
├─3328 postgres: foreman foreman [local] idle
├─3357 postgres: foreman foreman [local] idle
├─3361 postgres: foreman foreman [local] idle
├─3362 postgres: foreman foreman [local] idle
├─3377 postgres: foreman foreman [local] idle
├─3381 postgres: foreman foreman [local] idle
├─3382 postgres: foreman foreman [local] idle
├─3393 postgres: foreman foreman [local] idle
├─3398 postgres: foreman foreman [local] idle
├─3399 postgres: foreman foreman [local] idle
├─3542 postgres: foreman foreman [local] idle
├─4744 postgres: candlepin candlepin 127.0.0.1(59983) idle
├─4745 postgres: candlepin candlepin 127.0.0.1(59982) idle
├─4746 postgres: candlepin candlepin 127.0.0.1(59986) idle
├─4929 postgres: candlepin candlepin 127.0.0.1(60134) idle
├─4930 postgres: candlepin candlepin 127.0.0.1(60136) idle
├─4931 postgres: candlepin candlepin 127.0.0.1(60138) idle
├─4932 postgres: candlepin candlepin 127.0.0.1(60140) idle
└─4933 postgres: candlepin candlepin 127.0.0.1(60142) idle

Sep 21 15:50:27 hostname systemd[1]: Starting PostgreSQL database server…
Sep 21 15:50:27 hostname sh[1453]: 2021-09-21 15:50:27 CDT LOG: starting PostgreSQL 12.7 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit
Sep 21 15:50:27 hostname sh[1453]: 2021-09-21 15:50:27 CDT LOG: listening on IPv4 address “127.0.0.1”, port 5432
Sep 21 15:50:27 hostname sh[1453]: 2021-09-21 15:50:27 CDT LOG: listening on Unix socket “/var/run/postgresql/.s.PGSQL.5432”
Sep 21 15:50:27 hostname sh[1453]: 2021-09-21 15:50:27 CDT LOG: listening on Unix socket “/tmp/.s.PGSQL.5432”
Sep 21 15:50:27 hostname sh[1453]: 2021-09-21 15:50:27 CDT LOG: redirecting log output to logging collector process
Sep 21 15:50:27 hostname sh[1453]: 2021-09-21 15:50:27 CDT HINT: Future log output will appear in directory “log”.
Sep 21 15:50:27 hostname systemd[1]: Started PostgreSQL database server.
Warning: rh-postgresql12-postgresql.service changed on disk. Run 'systemctl daemon-reload' to reload units.
\ displaying pulpcore-api
● pulpcore-api.service - Pulp API Server
Loaded: loaded (/etc/systemd/system/pulpcore-api.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 15:50:29 CDT; 22min ago
Main PID: 1396 (gunicorn)
Status: “Gunicorn arbiter booted”
Tasks: 2
CGroup: /system.slice/pulpcore-api.service
├─1396 /usr/bin/python3 /usr/bin/gunicorn pulpcore.app.wsgi:application --timeout 90 -w 1 --access-logfile - --access-logformat pulp [%({correlation-id}o)s]: %(h)s %(l)s %(u)s %(t)s “%(r)s” %(s)s %(b)s “%(f)s” “%(a)s”
└─1903 /usr/bin/python3 /usr/bin/gunicorn pulpcore.app.wsgi:application --timeout 90 -w 1 --access-logfile - --access-logformat pulp [%({correlation-id}o)s]: %(h)s %(l)s %(u)s %(t)s “%(r)s” %(s)s %(b)s “%(f)s” “%(a)s”

Sep 21 15:50:27 hostname systemd[1]: Starting Pulp API Server…
Sep 21 15:50:29 hostname pulpcore-api[1396]: [2021-09-21 15:50:29 -0500] [1396] [INFO] Starting gunicorn 20.1.0
Sep 21 15:50:29 hostname pulpcore-api[1396]: [2021-09-21 15:50:29 -0500] [1396] [INFO] Listening at: unix:/run/pulpcore-api.sock (1396)
Sep 21 15:50:29 hostname pulpcore-api[1396]: [2021-09-21 15:50:29 -0500] [1396] [INFO] Using worker: sync
Sep 21 15:50:29 hostname systemd[1]: Started Pulp API Server.
Sep 21 15:50:29 hostname pulpcore-api[1396]: [2021-09-21 15:50:29 -0500] [1903] [INFO] Booting worker with pid: 1903
\ displaying pulpcore-content
● pulpcore-content.service - Pulp Content App
Loaded: loaded (/etc/systemd/system/pulpcore-content.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 15:50:30 CDT; 21min ago
Main PID: 1443 (gunicorn)
Status: “Gunicorn arbiter booted”
Tasks: 40
CGroup: /system.slice/pulpcore-content.service
├─1443 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --timeout 90 --worker-class aiohttp.GunicornWebWorker -w 13 --access-logfile -
├─2061 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --timeout 90 --worker-class aiohttp.GunicornWebWorker -w 13 --access-logfile -
├─2062 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --timeout 90 --worker-class aiohttp.GunicornWebWorker -w 13 --access-logfile -
├─2066 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --timeout 90 --worker-class aiohttp.GunicornWebWorker -w 13 --access-logfile -
├─2072 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --timeout 90 --worker-class aiohttp.GunicornWebWorker -w 13 --access-logfile -
├─2078 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --timeout 90 --worker-class aiohttp.GunicornWebWorker -w 13 --access-logfile -
├─2081 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --timeout 90 --worker-class aiohttp.GunicornWebWorker -w 13 --access-logfile -
├─2086 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --timeout 90 --worker-class aiohttp.GunicornWebWorker -w 13 --access-logfile -
├─2096 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --timeout 90 --worker-class aiohttp.GunicornWebWorker -w 13 --access-logfile -
├─2105 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --timeout 90 --worker-class aiohttp.GunicornWebWorker -w 13 --access-logfile -
├─2108 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --timeout 90 --worker-class aiohttp.GunicornWebWorker -w 13 --access-logfile -
├─2109 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --timeout 90 --worker-class aiohttp.GunicornWebWorker -w 13 --access-logfile -
├─2110 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --timeout 90 --worker-class aiohttp.GunicornWebWorker -w 13 --access-logfile -
└─2111 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --timeout 90 --worker-class aiohttp.GunicornWebWorker -w 13 --access-logfile -

Sep 21 15:50:30 hostname pulpcore-content[1443]: [2021-09-21 15:50:30 -0500] [2072] [INFO] Booting worker with pid: 2072
Sep 21 15:50:30 hostname pulpcore-content[1443]: [2021-09-21 15:50:30 -0500] [2078] [INFO] Booting worker with pid: 2078
Sep 21 15:50:30 hostname pulpcore-content[1443]: [2021-09-21 15:50:30 -0500] [2081] [INFO] Booting worker with pid: 2081
Sep 21 15:50:31 hostname pulpcore-content[1443]: [2021-09-21 15:50:31 -0500] [2086] [INFO] Booting worker with pid: 2086
Sep 21 15:50:31 hostname pulpcore-content[1443]: [2021-09-21 15:50:31 -0500] [2096] [INFO] Booting worker with pid: 2096
Sep 21 15:50:31 hostname pulpcore-content[1443]: [2021-09-21 15:50:31 -0500] [2105] [INFO] Booting worker with pid: 2105
Sep 21 15:50:31 hostname pulpcore-content[1443]: [2021-09-21 15:50:31 -0500] [2108] [INFO] Booting worker with pid: 2108
Sep 21 15:50:31 hostname pulpcore-content[1443]: [2021-09-21 15:50:31 -0500] [2109] [INFO] Booting worker with pid: 2109
Sep 21 15:50:31 hostname pulpcore-content[1443]: [2021-09-21 15:50:31 -0500] [2110] [INFO] Booting worker with pid: 2110
Sep 21 15:50:31 hostname pulpcore-content[1443]: [2021-09-21 15:50:31 -0500] [2111] [INFO] Booting worker with pid: 2111
\ displaying qdrouterd
● qdrouterd.service - Qpid Dispatch router daemon
Loaded: loaded (/usr/lib/systemd/system/qdrouterd.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/qdrouterd.service.d
└─90-limits.conf
Active: active (running) since Tue 2021-09-21 15:50:27 CDT; 22min ago
Main PID: 1383 (qdrouterd)
Tasks: 7
CGroup: /system.slice/qdrouterd.service
└─1383 /usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf

Sep 21 15:50:28 hostname qdrouterd[1383]: 2021-09-21 15:50:28.210018 -0500 SERVER (notice) Process VmSize 279.91 MiB (31.26 GiB available memory)
Sep 21 15:50:28 hostname qdrouterd[1383]: 2021-09-21 15:50:28.218340 -0500 SERVER (notice) Listening on :5647
Sep 21 15:50:28 hostname qdrouterd[1383]: 2021-09-21 15:50:28.218499 -0500 SERVER (notice) Listening on :5646
Sep 21 15:50:28 hostname qdrouterd[1383]: 2021-09-21 15:50:28.242089 -0500 SERVER (info) [C1] Connection to localhost:5671 failed: proton:io Connection refused - disconnected localhost:5671
Sep 21 15:50:33 hostname qdrouterd[1383]: 2021-09-21 15:50:33.368790 -0500 ROUTER_CORE (info) [C2] Connection Opened: dir=out host=localhost:5671 vhost= encrypted=TLSv1/SSLv3 auth=EXTERNAL user=(null) container_id=989d5fb8-96d3-414a-804e-89403879a1e2 props={:product=“qpid-cpp”, :version=“1.39.0”, :platform=“Linux”, :host=“hostname”}
Sep 21 15:50:33 hostname qdrouterd[1383]: 2021-09-21 15:50:33.369481 -0500 ROUTER_CORE (info) Link Route Activated ‘linkRoute/0’ on connection broker
Sep 21 15:50:33 hostname qdrouterd[1383]: 2021-09-21 15:50:33.369512 -0500 ROUTER_CORE (info) Link Route Activated ‘linkRoute/1’ on connection broker
Sep 21 15:50:33 hostname qdrouterd[1383]: 2021-09-21 15:50:33.369526 -0500 ROUTER_CORE (info) Link Route Activated ‘linkRoute/2’ on connection broker
Sep 21 15:50:33 hostname qdrouterd[1383]: 2021-09-21 15:50:33.369537 -0500 ROUTER_CORE (info) Link Route Activated ‘linkRoute/3’ on connection broker
Sep 21 15:50:33 hostname qdrouterd[1383]: 2021-09-21 15:50:33.369548 -0500 ROUTER_CORE (info) Link Route Activated ‘linkRoute/4’ on connection broker
\ displaying qpidd
● qpidd.service - An AMQP message broker daemon.
Loaded: loaded (/usr/lib/systemd/system/qpidd.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/qpidd.service.d
└─90-limits.conf, wait-for-port.conf
Active: active (running) since Tue 2021-09-21 15:50:30 CDT; 21min ago
Docs: man:qpidd(1)
http://qpid.apache.org/
Process: 1380 ExecStartPost=/bin/bash -c while ! ss --no-header --tcp --listening --numeric sport = :5671 | grep -q “^LISTEN.*:5671”; do sleep 1; done (code=exited, status=0/SUCCESS)
Main PID: 1379 (qpidd)
Tasks: 8
CGroup: /system.slice/qpidd.service
└─1379 /usr/sbin/qpidd --config /etc/qpid/qpidd.conf

Sep 21 15:50:27 hostname systemd[1]: Starting An AMQP message broker daemon…
Sep 21 15:50:30 hostname systemd[1]: Started An AMQP message broker daemon…
\ displaying rh-redis5-redis
● rh-redis5-redis.service - Redis persistent key-value database
Loaded: loaded (/usr/lib/systemd/system/rh-redis5-redis.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/rh-redis5-redis.service.d
└─90-limits.conf
Active: active (running) since Tue 2021-09-21 15:50:27 CDT; 22min ago
Main PID: 1424 (redis-server)
Tasks: 4
CGroup: /system.slice/rh-redis5-redis.service
└─1424 /opt/rh/rh-redis5/root/usr/bin/redis-server 127.0.0.1:6379

Sep 21 15:50:27 hostname systemd[1]: Starting Redis persistent key-value database…
Sep 21 15:50:27 hostname systemd[1]: Started Redis persistent key-value database.
\ displaying pulpcore-worker@1.service
● pulpcore-worker@1.service - Pulp Worker
Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 15:50:27 CDT; 22min ago
Main PID: 1417 (pulpcore-worker)
CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker@1.service
└─1417 /usr/bin/python3 /usr/bin/pulpcore-worker

Sep 21 15:50:27 hostname systemd[1]: Started Pulp Worker.
Sep 21 15:50:49 hostname pulpcore-worker-1[1417]: pulp [None]: pulpcore.tasking.entrypoint:INFO: Starting distributed type worker
Sep 21 15:50:50 hostname pulpcore-worker-1[1417]: pulp [None]: pulpcore.tasking.worker_watcher:INFO: Worker ‘1417@hostname’ is back online.
\ displaying pulpcore-worker@2.service
● pulpcore-worker@2.service - Pulp Worker
Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 15:50:27 CDT; 22min ago
Main PID: 1427 (pulpcore-worker)
CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker@2.service
└─1427 /usr/bin/python3 /usr/bin/pulpcore-worker

Sep 21 15:50:27 hostname systemd[1]: Started Pulp Worker.
Sep 21 15:50:49 hostname pulpcore-worker-2[1427]: pulp [None]: pulpcore.tasking.entrypoint:INFO: Starting distributed type worker
Sep 21 15:50:50 hostname pulpcore-worker-2[1427]: pulp [None]: pulpcore.tasking.worker_watcher:INFO: New worker ‘1427@hostname’ discovered
\ displaying pulpcore-worker@3.service
● pulpcore-worker@3.service - Pulp Worker
Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 15:50:27 CDT; 22min ago
Main PID: 1418 (pulpcore-worker)
CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker@3.service
└─1418 /usr/bin/python3 /usr/bin/pulpcore-worker

Sep 21 15:50:27 hostname systemd[1]: Started Pulp Worker.
Sep 21 15:50:49 hostname pulpcore-worker-3[1418]: pulp [None]: pulpcore.tasking.entrypoint:INFO: Starting distributed type worker
Sep 21 15:50:50 hostname pulpcore-worker-3[1418]: pulp [None]: pulpcore.tasking.worker_watcher:INFO: Worker ‘1418@hostname’ is back online.
\ displaying pulpcore-worker@4.service
● pulpcore-worker@4.service - Pulp Worker
Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 15:50:27 CDT; 22min ago
Main PID: 1422 (pulpcore-worker)
CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker@4.service
└─1422 /usr/bin/python3 /usr/bin/pulpcore-worker

Sep 21 15:50:27 hostname systemd[1]: Started Pulp Worker.
Sep 21 15:50:49 hostname pulpcore-worker-4[1422]: pulp [None]: pulpcore.tasking.entrypoint:INFO: Starting distributed type worker
Sep 21 15:50:50 hostname pulpcore-worker-4[1422]: pulp [None]: pulpcore.tasking.worker_watcher:INFO: New worker ‘1422@hostname’ discovered
\ displaying pulpcore-worker@5.service
● pulpcore-worker@5.service - Pulp Worker
Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 15:50:27 CDT; 22min ago
Main PID: 1426 (pulpcore-worker)
CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker@5.service
└─1426 /usr/bin/python3 /usr/bin/pulpcore-worker

Sep 21 15:50:27 hostname systemd[1]: Started Pulp Worker.
Sep 21 15:50:49 hostname pulpcore-worker-5[1426]: pulp [None]: pulpcore.tasking.entrypoint:INFO: Starting distributed type worker
Sep 21 15:50:50 hostname pulpcore-worker-5[1426]: pulp [None]: pulpcore.tasking.worker_watcher:INFO: Worker ‘1426@hostname’ is back online.
\ displaying pulpcore-worker@6.service
● pulpcore-worker@6.service - Pulp Worker
Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 15:50:27 CDT; 22min ago
Main PID: 1415 (pulpcore-worker)
CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker@6.service
└─1415 /usr/bin/python3 /usr/bin/pulpcore-worker

Sep 21 15:50:27 hostname systemd[1]: Started Pulp Worker.
Sep 21 15:50:49 hostname pulpcore-worker-6[1415]: pulp [None]: pulpcore.tasking.entrypoint:INFO: Starting distributed type worker
Sep 21 15:50:50 hostname pulpcore-worker-6[1415]: pulp [None]: pulpcore.tasking.worker_watcher:INFO: Worker ‘1415@hostname’ is back online.
\ displaying tomcat
● tomcat.service - Apache Tomcat Web Application Container
Loaded: loaded (/usr/lib/systemd/system/tomcat.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 16:05:26 CDT; 7min ago
Main PID: 4616 (java)
Tasks: 69
CGroup: /system.slice/tomcat.service
└─4616 /usr/lib/jvm/jre-11/bin/java -Xms1024m -Xmx4096m -Djava.security.auth.login.config=/usr/share/tomcat/conf/login.config -classpath /usr/share/tomcat/bin/bootstrap.jar:/usr/share/tomcat/bin/tomcat-juli.jar:/usr/share/java/commons-daemon.jar -Dcatalina.base=/usr/share/tomcat -Dcatalina.home=/usr/share/tomcat -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/cache/tomcat/temp -Djava.util.logging.config.file=/usr/share/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager org.apache.catalina.startup.Bootstrap start

Sep 21 16:05:32 hostname server[4616]: Sep 21, 2021 4:05:32 PM com.google.inject.internal.ProxyFactory
Sep 21 16:05:32 hostname server[4616]: WARNING: Method [public void org.candlepin.model.ContentCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@77ad815]. This could indicate a bug. The method may be intercepted twice, or may not be intercepted at all.
Sep 21 16:05:32 hostname server[4616]: Sep 21, 2021 4:05:32 PM com.google.inject.internal.ProxyFactory
Sep 21 16:05:32 hostname server[4616]: WARNING: Method [public void org.candlepin.model.EntitlementCertificateCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@77ad815]. This could indicate a bug. The method may be intercepted twice, or may not be intercepted at all.
Sep 21 16:05:32 hostname server[4616]: Sep 21, 2021 4:05:32 PM com.google.inject.internal.ProxyFactory
Sep 21 16:05:32 hostname server[4616]: WARNING: Method [public java.lang.Iterable org.candlepin.resource.OwnerResource.createBatchContent(java.lang.String,java.util.List)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@77ad815]. This could indicate a bug. The method may be intercepted twice, or may not be intercepted at all.
Sep 21 16:05:40 hostname server[4616]: Sep 21, 2021 4:05:40 PM org.apache.catalina.startup.HostConfig deployDirectory
Sep 21 16:05:40 hostname server[4616]: INFO: Deployment of web application directory /var/lib/tomcat/webapps/candlepin has finished in 13,320 ms
Sep 21 16:05:40 hostname server[4616]: Sep 21, 2021 4:05:40 PM org.apache.catalina.startup.Catalina start
Sep 21 16:05:40 hostname server[4616]: INFO: Server startup in 13383 ms
\ displaying dynflow-sidekiq@orchestrator
● dynflow-sidekiq@orchestrator.service - Foreman jobs daemon - orchestrator on sidekiq
Loaded: loaded (/usr/lib/systemd/system/dynflow-sidekiq@.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 15:52:03 CDT; 20min ago
Docs: https://theforeman.org
Main PID: 1473 (sidekiq)
CGroup: /system.slice/system-dynflow\x2dsidekiq.slice/dynflow-sidekiq@orchestrator.service
└─1473 sidekiq 5.2.7 [0 of 1 busy]

Sep 21 15:50:27 hostname systemd[1]: Starting Foreman jobs daemon - orchestrator on sidekiq…
Sep 21 15:50:36 hostname dynflow-sidekiq@orchestrator[1473]: 2021-09-21T20:50:36.357Z 1473 TID-gl INFO: GitLab reliable fetch activated!
Sep 21 15:50:36 hostname dynflow-sidekiq@orchestrator[1473]: 2021-09-21T20:50:36.434Z 1473 TID-h5 INFO: Booting Sidekiq 5.2.7 with redis options {:id=>“Sidekiq-server-PID-1473”, :url=>“redis://localhost:6379/0”}
Sep 21 15:50:53 hostname dynflow-sidekiq@orchestrator[1473]: API controllers newer than Apipie cache! Run apipie:cache rake task to regenerate cache.
Sep 21 15:51:01 hostname dynflow-sidekiq@orchestrator[1473]: /opt/theforeman/tfm/root/usr/share/gems/gems/foreman_puppet-1.0.1/lib/foreman_puppet/register.rb:141: warning: already initialized constant Foreman::Plugin::RbacSupport::AUTO_EXTENDED_ROLES
Sep 21 15:51:01 hostname dynflow-sidekiq@orchestrator[1473]: /usr/share/foreman/app/registries/foreman/plugin/rbac_support.rb:5: warning: previous definition of AUTO_EXTENDED_ROLES was here
Sep 21 15:51:26 hostname dynflow-sidekiq@orchestrator[1473]: User with login admin already exists, not seeding as admin.
Sep 21 15:52:03 hostname systemd[1]: Started Foreman jobs daemon - orchestrator on sidekiq.
\ displaying foreman
● foreman.service - Foreman
Loaded: loaded (/usr/lib/systemd/system/foreman.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/foreman.service.d
└─installer.conf
Active: active (running) since Tue 2021-09-21 15:51:25 CDT; 21min ago
Docs: https://theforeman.org
Main PID: 1483 (ruby)
Tasks: 176
CGroup: /system.slice/foreman.service
├─1483 puma 5.3.2 (unix:///run/foreman.sock) [foreman]
├─3154 puma: cluster worker 0: 1483 [foreman]
├─3157 puma: cluster worker 1: 1483 [foreman]
├─3162 puma: cluster worker 2: 1483 [foreman]
├─3168 puma: cluster worker 3: 1483 [foreman]
├─3174 puma: cluster worker 4: 1483 [foreman]
├─3176 puma: cluster worker 5: 1483 [foreman]
├─3177 puma: cluster worker 6: 1483 [foreman]
├─3181 puma: cluster worker 7: 1483 [foreman]
└─3187 puma: cluster worker 8: 1483 [foreman]

Sep 21 15:51:25 hostname foreman[1483]: [1483] - Worker 7 (PID: 3181) booted in 0.24s, phase: 0
Sep 21 15:51:25 hostname foreman[1483]: [1483] - Worker 6 (PID: 3177) booted in 0.27s, phase: 0
Sep 21 15:51:25 hostname foreman[1483]: [1483] - Worker 5 (PID: 3176) booted in 0.28s, phase: 0
Sep 21 15:51:25 hostname systemd[1]: Started Foreman.
Sep 21 16:05:24 hostname foreman[1483]: warning: broker sent EOF, and connection not reliable
Sep 21 16:05:24 hostname foreman[1483]: #<Thread:0x0000000013ed5c98 /opt/theforeman/tfm/root/usr/share/gems/gems/logging-2.3.0/lib/logging/diagnostic_context.rb:471 run> terminated with exception (report_on_exception is true):
Sep 21 16:05:24 hostname foreman[1483]: /opt/theforeman/tfm/root/usr/share/gems/gems/stomp-1.4.9/lib/client/utils.rb:198:in `block (2 levels) in start_listeners': Received message is nil, and connection not reliable (Stomp::Error::NilMessageError)
Sep 21 16:05:24 hostname foreman[1483]: from /opt/theforeman/tfm/root/usr/share/gems/gems/stomp-1.4.9/lib/client/utils.rb:194:in `loop'
Sep 21 16:05:24 hostname foreman[1483]: from /opt/theforeman/tfm/root/usr/share/gems/gems/stomp-1.4.9/lib/client/utils.rb:194:in `block in start_listeners'
Sep 21 16:05:24 hostname foreman[1483]: from /opt/theforeman/tfm/root/usr/share/gems/gems/logging-2.3.0/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'
\ displaying httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 15:50:27 CDT; 22min ago
Docs: man:httpd(8)
man:apachectl(8)
Main PID: 1467 (httpd)
Status: “Total requests: 0; Current requests/sec: 0; Current traffic: 0 B/sec”
Tasks: 12
CGroup: /system.slice/httpd.service
├─1467 /usr/sbin/httpd -DFOREGROUND
├─1657 /usr/sbin/httpd -DFOREGROUND
├─1658 /usr/sbin/httpd -DFOREGROUND
├─1659 /usr/sbin/httpd -DFOREGROUND
├─1660 /usr/sbin/httpd -DFOREGROUND
├─1661 /usr/sbin/httpd -DFOREGROUND
├─1662 /usr/sbin/httpd -DFOREGROUND
├─1663 /usr/sbin/httpd -DFOREGROUND
├─1664 /usr/sbin/httpd -DFOREGROUND
├─3549 /usr/sbin/httpd -DFOREGROUND
├─3551 /usr/sbin/httpd -DFOREGROUND
└─3552 /usr/sbin/httpd -DFOREGROUND

Sep 21 15:50:27 hostname systemd[1]: Starting The Apache HTTP Server…
Sep 21 15:50:27 hostname systemd[1]: Started The Apache HTTP Server.
\ displaying puppetserver
● puppetserver.service - puppetserver Service
Loaded: loaded (/usr/lib/systemd/system/puppetserver.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 15:51:08 CDT; 21min ago
Process: 1377 ExecStart=/opt/puppetlabs/server/apps/puppetserver/bin/puppetserver start (code=exited, status=0/SUCCESS)
Main PID: 1827 (java)
Tasks: 52 (limit: 4915)
CGroup: /system.slice/puppetserver.service
└─1827 /usr/bin/java -Xms2G -Xmx2G -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger -XX:ReservedCodeCacheSize=512m -XX:OnOutOfMemoryError=“kill -9 %p” -XX:ErrorFile=/var/log/puppetlabs/puppetserver/puppetserver_err_pid%p.log -cp /opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar:/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/facter.jar:/opt/puppetlabs/server/data/puppetserver/jars/* clojure.main -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetserver/conf.d --bootstrap-config /etc/puppetlabs/puppetserver/services.d/,/opt/puppetlabs/server/apps/puppetserver/config/services.d/ --restart-file /opt/puppetlabs/server/data/puppetserver/restartcounter

Sep 21 15:50:27 hostname systemd[1]: Starting puppetserver Service…
Sep 21 15:51:08 hostname systemd[1]: Started puppetserver Service.
\ displaying dynflow-sidekiq@worker-1
● dynflow-sidekiq@worker-1.service - Foreman jobs daemon - worker-1 on sidekiq
Loaded: loaded (/usr/lib/systemd/system/dynflow-sidekiq@.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 15:51:29 CDT; 21min ago
Docs: https://theforeman.org
Main PID: 1481 (sidekiq)
CGroup: /system.slice/system-dynflow\x2dsidekiq.slice/dynflow-sidekiq@worker-1.service
└─1481 sidekiq 5.2.7 [0 of 5 busy]

Sep 21 15:50:27 hostname systemd[1]: Starting Foreman jobs daemon - worker-1 on sidekiq…
Sep 21 15:50:36 hostname dynflow-sidekiq@worker-1[1481]: 2021-09-21T20:50:36.363Z 1481 TID-gt INFO: GitLab reliable fetch activated!
Sep 21 15:50:36 hostname dynflow-sidekiq@worker-1[1481]: 2021-09-21T20:50:36.435Z 1481 TID-gx INFO: Booting Sidekiq 5.2.7 with redis options {:id=>“Sidekiq-server-PID-1481”, :url=>“redis://localhost:6379/0”}
Sep 21 15:50:53 hostname dynflow-sidekiq@worker-1[1481]: API controllers newer than Apipie cache! Run apipie:cache rake task to regenerate cache.
Sep 21 15:51:01 hostname dynflow-sidekiq@worker-1[1481]: /opt/theforeman/tfm/root/usr/share/gems/gems/foreman_puppet-1.0.1/lib/foreman_puppet/register.rb:141: warning: already initialized constant Foreman::Plugin::RbacSupport::AUTO_EXTENDED_ROLES
Sep 21 15:51:01 hostname dynflow-sidekiq@worker-1[1481]: /usr/share/foreman/app/registries/foreman/plugin/rbac_support.rb:5: warning: previous definition of AUTO_EXTENDED_ROLES was here
Sep 21 15:51:24 hostname dynflow-sidekiq@worker-1[1481]: User with login admin already exists, not seeding as admin.
Sep 21 15:51:29 hostname systemd[1]: Started Foreman jobs daemon - worker-1 on sidekiq.
\ displaying dynflow-sidekiq@worker-hosts-queue-1
● dynflow-sidekiq@worker-hosts-queue-1.service - Foreman jobs daemon - worker-hosts-queue-1 on sidekiq
Loaded: loaded (/usr/lib/systemd/system/dynflow-sidekiq@.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 15:51:27 CDT; 21min ago
Docs: https://theforeman.org
Main PID: 1478 (sidekiq)
CGroup: /system.slice/system-dynflow\x2dsidekiq.slice/dynflow-sidekiq@worker-hosts-queue-1.service
└─1478 sidekiq 5.2.7 [0 of 5 busy]

Sep 21 15:50:27 hostname systemd[1]: Starting Foreman jobs daemon - worker-hosts-queue-1 on sidekiq…
Sep 21 15:50:36 hostname dynflow-sidekiq@worker-hosts-queue-1[1478]: 2021-09-21T20:50:36.399Z 1478 TID-gi INFO: GitLab reliable fetch activated!
Sep 21 15:50:36 hostname dynflow-sidekiq@worker-hosts-queue-1[1478]: 2021-09-21T20:50:36.448Z 1478 TID-ha INFO: Booting Sidekiq 5.2.7 with redis options {:id=>“Sidekiq-server-PID-1478”, :url=>“redis://localhost:6379/0”}
Sep 21 15:50:53 hostname dynflow-sidekiq@worker-hosts-queue-1[1478]: API controllers newer than Apipie cache! Run apipie:cache rake task to regenerate cache.
Sep 21 15:51:01 hostname dynflow-sidekiq@worker-hosts-queue-1[1478]: /opt/theforeman/tfm/root/usr/share/gems/gems/foreman_puppet-1.0.1/lib/foreman_puppet/register.rb:141: warning: already initialized constant Foreman::Plugin::RbacSupport::AUTO_EXTENDED_ROLES
Sep 21 15:51:01 hostname dynflow-sidekiq@worker-hosts-queue-1[1478]: /usr/share/foreman/app/registries/foreman/plugin/rbac_support.rb:5: warning: previous definition of AUTO_EXTENDED_ROLES was here
Sep 21 15:51:21 hostname dynflow-sidekiq@worker-hosts-queue-1[1478]: User with login admin already exists, not seeding as admin.
Sep 21 15:51:27 hostname systemd[1]: Started Foreman jobs daemon - worker-hosts-queue-1 on sidekiq.
\ displaying foreman-proxy
● foreman-proxy.service - Foreman Proxy
Loaded: loaded (/usr/lib/systemd/system/foreman-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-09-21 15:50:30 CDT; 21min ago
Main PID: 1378 (ruby)
Tasks: 5
CGroup: /system.slice/foreman-proxy.service
└─1378 ruby /usr/share/foreman-proxy/bin/smart-proxy --no-daemonize

Sep 21 15:50:27 hostname systemd[1]: Starting Foreman Proxy…
Sep 21 15:50:29 hostname smart-proxy[1378]: /usr/share/foreman-proxy is not writable.
Sep 21 15:50:29 hostname smart-proxy[1378]: Bundler will use `/tmp/bundler20210921-1378-10jmmex1378’ as your home directory temporarily.
Sep 21 15:50:29 hostname smart-proxy[1378]: Your Gemfile lists the gem rsec (< 1) more than once.
Sep 21 15:50:29 hostname smart-proxy[1378]: You should probably keep only one of them.
Sep 21 15:50:29 hostname smart-proxy[1378]: Remove any duplicate entries and specify the gem only once.
Sep 21 15:50:29 hostname smart-proxy[1378]: While it’s not a problem now, it could cause errors if you change the version of one of them later.
Sep 21 15:50:30 hostname systemd[1]: Started Foreman Proxy.
Sep 21 16:05:21 hostname smart-proxy[1378]: 10.220.146.89 - - [21/Sep/2021:16:05:21 CDT] “GET /features HTTP/1.1” 200 73
Sep 21 16:05:21 hostname smart-proxy[1378]: - -> /features
\ All services are running [OK]

I guess SELinux is blocking something. We have another server in a different environment that was upgraded to Katello 4.2 with Foreman 3.0 with SELinux disabled, and that server works perfectly fine. I wonder why this server has issues with the Katello 4.2 upgrade when SELinux is disabled,
and also has issues after the upgrade when SELinux is set to enforcing?
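
One way to confirm whether SELinux is actually blocking something is to look for recent AVC denials (a sketch; sealert comes from the setroubleshoot-server package):

ausearch -m avc -ts recent             # recent denials from the audit log
sealert -a /var/log/audit/audit.log    # human-readable analysis, if installed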