Upgrade issue to 4.4.1 - Internal Server Error (and a notice about 4.5 RC2)

Problem:

I finally tried to upgrade from 4.4.0.2 to 4.4.1, and the upgrade didn’t go as smoothly as all the previous ones.
It started with this error:

2022-06-23 22:45:38 [ERROR ] [configure] /Stage[main]/Foreman::Register/Foreman_host[foreman-foreman.example.com]: Could not evaluate: Error making GET request to Foreman at https://foreman.fritz.box/api/v2/hosts: Response: 500 Internal Server Error: Check /var/log/foreman/production.log on foreman.fritz.box for detailed information

Several more errors that look much the same follow afterwards (full log at the end).

After the installer failed, I grabbed the URI from the log and accessed it myself, which gave me the following output:

{
  "error": {"message":"Cannot find rabl template 'katello/api/v2/content_facet/base_with_root' within registered ([\"/usr/share/foreman/app/views\", \"/usr/share/gems/gems/katello-4.4.0.2/app/views\", \"/usr/share/gems/gems/foreman_templates-9.1.0/app/views\"]) view paths!"}
}

More interestingly, I then rolled back the system and gave Katello 4.5 RC2 a shot,
which finished installing without any issues!
It looks like something has changed there.

One gotcha for 4.5, which is not in the docs yet: on EL8, the three module streams have to be enabled after updating the release package:

dnf module enable foreman
dnf module enable katello
dnf module enable pulpcore

If no one is working on making the upgrade documentation ready for EL8, I can give that a try in the next few days! (This would be my first contribution to the docs, so I don’t really know how it works here; am I allowed to just send PRs?)

Furthermore, I noticed that a bulk remove of CV releases fails with the error below. Apart from that, I haven’t seen any other issues so far (but as this is kind of a prod system, I rolled back for now):

2022-06-23T23:53:03 [E|app|916c7fd5] RuntimeError: There was an issue with the backend service pulp3: Pulp redis connection issue at https://foreman.example.com/pulp/api/v3.

Expected outcome:
As 4.4 is still supported, the upgrade to 4.4.1 should work; but since 4.5 doesn’t seem to have this problem for me, it’s a minor issue.

For 4.5, bulk CV removal should work.

Foreman and Proxy versions:
foreman 3.2.1-1.el8 → no upgrade / foreman-3.3.0-1.el8
katello 4.4.0-1.el8 → 4.4.1-1.el8 / katello-4.5.0-0.1.rc2.el8

Foreman and Proxy plugins:
VMware provider
foreman-tasks
foreman_ansible
foreman_bootdisk
foreman_puppet
foreman_remote_execution
foreman_snapshot_management
foreman_statistics
foreman_templates

Puppet 6 is still in use; I haven’t switched to 7 yet.

Distribution and version:
Rocky Linux 8.6

Other relevant data:
Visible output:

2022-06-23 22:45:00 [NOTICE] [configure] Starting system configuration.
2022-06-23 22:45:09 [NOTICE] [configure] 250 configuration steps out of 2009 steps complete.
2022-06-23 22:45:11 [NOTICE] [configure] 500 configuration steps out of 2009 steps complete.
2022-06-23 22:45:11 [NOTICE] [configure] 750 configuration steps out of 2013 steps complete.
2022-06-23 22:45:13 [NOTICE] [configure] 1000 configuration steps out of 2017 steps complete.
2022-06-23 22:45:14 [NOTICE] [configure] 1250 configuration steps out of 2038 steps complete.
2022-06-23 22:45:37 [NOTICE] [configure] 1500 configuration steps out of 2038 steps complete.
2022-06-23 22:45:38 [NOTICE] [configure] 1750 configuration steps out of 2038 steps complete.
2022-06-23 22:45:38 [ERROR ] [configure] /Stage[main]/Foreman::Register/Foreman_host[foreman-foreman.example.com]: Could not evaluate: Error making GET request to Foreman at https://foreman.fritz.box/api/v2/hosts: Response: 500 Internal Server Error: Check /var/log/foreman/production.log on foreman.fritz.box for detailed information
2022-06-23 22:45:43 [NOTICE] [configure] 2000 configuration steps out of 2038 steps complete.
2022-06-23 22:45:43 [ERROR ] [configure] /Stage[main]/Foreman_proxy::Register/Foreman_host[foreman-proxy-foreman.example.com]: Could not evaluate: Error making GET request to Foreman at https://foreman.example.com/api/v2/hosts: Response: 500 Internal Server Error: Check /var/log/foreman/production.log on foreman.example.com for detailed information
2022-06-23 22:45:43 [ERROR ] [configure] /Stage[main]/Foreman_proxy::Register/Foreman_smartproxy[foreman.example.com]: Could not evaluate: Error making GET request to Foreman at https://foreman.example.com/api/v2/smart_proxies: Response: 500 Internal Server Error: Check /var/log/foreman/production.log on foreman.example.com for detailed information
2022-06-23 22:45:46 [NOTICE] [configure] System configuration has finished.

And the log from the production.log:

2022-06-23 22:45:38 [DEBUG ] [configure] Class[Foreman::Service]: Starting to evaluate the resource (1911 of 2038)
2022-06-23 22:45:38 [DEBUG ] [configure] Class[Foreman::Service]: Evaluated in 0.00 seconds
2022-06-23 22:45:38 [DEBUG ] [configure] /Stage[main]/Foreman/Anchor[foreman::service]: Starting to evaluate the resource (1912 of 2038)
2022-06-23 22:45:38 [DEBUG ] [configure] /Stage[main]/Foreman/Anchor[foreman::service]: Evaluated in 0.00 seconds
2022-06-23 22:45:38 [DEBUG ] [configure] /Stage[main]/Foreman::Register/Foreman_host[foreman-foreman.example.com]: Starting to evaluate the resource (1913 of 2038)
2022-06-23 22:45:38 [DEBUG ] [configure] Foreman_host[foreman-foreman.example.com](provider=rest_v3): Making get request to https://foreman.fritz.box/api/v2/hosts?search=name%3D%22foreman.example.com%22
2022-06-23 22:45:38 [DEBUG ] [configure] Foreman_host[foreman-foreman.example.com](provider=rest_v3): Received response 500 from request to https://foreman.fritz.box/api/v2/hosts?search=name%3D%22foreman.example.com%22
2022-06-23 22:45:38 [ERROR ] [configure] /Stage[main]/Foreman::Register/Foreman_host[foreman-foreman.example.com]: Could not evaluate: Error making GET request to Foreman at https://foreman.fritz.box/api/v2/hosts: Response: 500 Internal Server Error: Check /var/log/foreman/production.log on foreman.fritz.box for detailed information
2022-06-23 22:45:38 [DEBUG ] [configure] /Stage[main]/Foreman::Register/Foreman_host[foreman-foreman.example.com]: Evaluated in 0.09 seconds
2022-06-23 22:45:38 [DEBUG ] [configure] /Stage[main]/Foreman::Register/Foreman_instance_host[foreman-foreman.example.com]: Starting to evaluate the resource (1914 of 2038)
2022-06-23 22:45:38 [INFO  ] [configure] /Stage[main]/Foreman::Register/Foreman_instance_host[foreman-foreman.example.com]: Dependency Foreman_host[foreman-foreman.example.com] has failures: true
2022-06-23 22:45:38 [DEBUG ] [configure] /Stage[main]/Foreman::Register/Foreman_instance_host[foreman-foreman.example.com]: Skipping because of failed dependencies
2022-06-23 22:45:38 [DEBUG ] [configure] /Stage[main]/Foreman::Register/Foreman_instance_host[foreman-foreman.example.com]: Resource is being skipped, unscheduling all events
2022-06-23 22:45:38 [DEBUG ] [configure] /Stage[main]/Foreman::Register/Foreman_instance_host[foreman-foreman.example.com]: Evaluated in 0.00 seconds
2022-06-23 22:45:38 [DEBUG ] [configure] Class[Foreman::Register]: Starting to evaluate the resource (1915 of 2038)
2022-06-23 22:45:38 [DEBUG ] [configure] Class[Foreman::Register]: Resource is being skipped, unscheduling all events
2022-06-23 22:45:38 [DEBUG ] [configure] Class[Foreman::Register]: Evaluated in 0.00 seconds
...
2022-06-23 22:45:43 [DEBUG ] [configure] Class[Foreman_proxy::Service]: Starting to evaluate the resource (2005 of 2038)
2022-06-23 22:45:43 [DEBUG ] [configure] Class[Foreman_proxy::Service]: Evaluated in 0.00 seconds
2022-06-23 22:45:43 [DEBUG ] [configure] Class[Foreman_proxy::Register]: Starting to evaluate the resource (2006 of 2038)
2022-06-23 22:45:43 [DEBUG ] [configure] Class[Foreman_proxy::Register]: Evaluated in 0.00 seconds
2022-06-23 22:45:43 [DEBUG ] [configure] /Stage[main]/Foreman_proxy::Register/Foreman_host[foreman-proxy-foreman.example.com]: Starting to evaluate the resource (2007 of 2038)
2022-06-23 22:45:43 [DEBUG ] [configure] Foreman_host[foreman-proxy-foreman.example.com](provider=rest_v3): Making get request to https://foreman.example.com/api/v2/hosts?search=name%3D%22foreman.example.com%22
2022-06-23 22:45:43 [DEBUG ] [configure] Foreman_host[foreman-proxy-foreman.example.com](provider=rest_v3): Received response 500 from request to https://foreman.example.com/api/v2/hosts?search=name%3D%22foreman.example.com%22
2022-06-23 22:45:43 [ERROR ] [configure] /Stage[main]/Foreman_proxy::Register/Foreman_host[foreman-proxy-foreman.example.com]: Could not evaluate: Error making GET request to Foreman at https://foreman.example.com/api/v2/hosts: Response: 500 Internal Server Error: Check /var/log/foreman/production.log on foreman.example.com for detailed information
2022-06-23 22:45:43 [DEBUG ] [configure] /Stage[main]/Foreman_proxy::Register/Foreman_host[foreman-proxy-foreman.example.com]: Evaluated in 0.11 seconds
2022-06-23 22:45:43 [DEBUG ] [configure] /Stage[main]/Foreman_proxy::Register/Datacat_collector[foreman_proxy::enabled_features]: Starting to evaluate the resource (2008 of 2038)
2022-06-23 22:45:43 [DEBUG ] [configure] Datacat_collector[foreman_proxy::enabled_features](provider=datacat_collector): Collected {"features"=>["Puppet", "Puppet CA", "Logs", "Pulpcore", "Dynflow", "Ansible", "SSH"]}
2022-06-23 22:45:43 [DEBUG ] [configure] Datacat_collector[foreman_proxy::enabled_features](provider=datacat_collector): Selecting source_key features
2022-06-23 22:45:43 [DEBUG ] [configure] Datacat_collector[foreman_proxy::enabled_features](provider=datacat_collector): Now setting field :features
2022-06-23 22:45:43 [DEBUG ] [configure] /Stage[main]/Foreman_proxy::Register/Datacat_collector[foreman_proxy::enabled_features]: Evaluated in 0.00 seconds
2022-06-23 22:45:43 [DEBUG ] [configure] /Stage[main]/Foreman_proxy::Register/Foreman_smartproxy[foreman.example.com]: Starting to evaluate the resource (2009 of 2038)
2022-06-23 22:45:43 [DEBUG ] [configure] Foreman_smartproxy[foreman.example.com](provider=rest_v3): Making get request to https://foreman.example.com/api/v2/smart_proxies?search=name%3D%22foreman.example.com%22
2022-06-23 22:45:43 [DEBUG ] [configure] Foreman_smartproxy[foreman.example.com](provider=rest_v3): Received response 500 from request to https://foreman.example.com/api/v2/smart_proxies?search=name%3D%22foreman.example.com%22
2022-06-23 22:45:43 [ERROR ] [configure] /Stage[main]/Foreman_proxy::Register/Foreman_smartproxy[foreman.example.com]: Could not evaluate: Error making GET request to Foreman at https://foreman.example.com/api/v2/smart_proxies: Response: 500 Internal Server Error: Check /var/log/foreman/production.log on foreman.example.com for detailed information
2022-06-23 22:45:43 [DEBUG ] [configure] /Stage[main]/Foreman_proxy::Register/Foreman_smartproxy[foreman.example.com]: Evaluated in 0.05 seconds
2022-06-23 22:45:43 [DEBUG ] [configure] Foreman::Rake[apipie:cache:index]: Starting to evaluate the resource (2010 of 2038)
2022-06-23 22:45:43 [DEBUG ] [configure] Foreman::Rake[apipie:cache:index]: Resource is being skipped, unscheduling all events
2022-06-23 22:45:43 [DEBUG ] [configure] Foreman::Rake[apipie:cache:index]: Evaluated in 0.00 seconds

Hi @lumarel,

Yep, docs PRs are always welcome! I created an issue to include the EL8 upgrade information a little while back: Katello upgrade docs only show repository options for EL7 · Issue #1317 · theforeman/foreman-documentation · GitHub

I’m not sure what’s going on with redis. Did you try foreman-maintain service restart --only redis? Also, it’s worth retrying the Pulp DB migrations (since we upgraded Pulpcore to 3.18 for Katello 4.5):

sudo -u pulp PULP_SETTINGS='/etc/pulp/settings.py' pulpcore-manager migrate
sudo systemctl restart pulpcore* --all

This error is really strange. If you see it again, I’d double-check that app/views/katello/api/v2/content_facet/base_with_root.json.rabl exists within those paths.
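If it helps, the check can be sketched as a loop over the registered view paths. This sketch builds a throwaway tree under /tmp to stand in for the real directories from the error message (/usr/share/foreman/app/views and the katello gem’s app/views); on a real system you would loop over those paths instead:

```shell
# Throwaway stand-in for the registered view paths (illustration only; on a
# real system substitute /usr/share/foreman/app/views and
# /usr/share/gems/gems/katello-*/app/views).
base=/tmp/rabl-check-demo
template=katello/api/v2/content_facet/base_with_root.json.rabl

mkdir -p "$base/katello-4.4.1/app/views/$(dirname "$template")"
touch "$base/katello-4.4.1/app/views/$template"
mkdir -p "$base/katello-4.4.0.2/app/views"   # deliberately missing the template

# Report which view paths actually contain the template rabl expects
for dir in "$base"/*/app/views; do
  if [ -e "$dir/$template" ]; then
    echo "found:   $dir"
  else
    echo "missing: $dir"
  fi
done
```

A "missing" line for the active gem directory would match the rabl error above.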

Hey @iballou,
thanks for looking into this!

Yep, docs PRs are always welcome! I created an issue to include the EL8 upgrade information a little while back

Okay cool, I’ll get that in shape in the next few days then :ok_hand:t2:

4.5

I’m not sure what’s going on with redis. Did you try foreman-maintain service restart --only redis? Also, it’s worth retrying the Pulp DB migrations (since we upgraded Pulpcore to 3.18 for Katello 4.5):

I wasn’t sure about the restart part, so I tried it; after the restart, the same behavior.

And after rerunning the pulpcore migration, also the same behavior.

Two more things to mention: it’s not only the bulk remove of CVs that fails, but also a normal single remove, as well as publishing a new CV.

There are also some SELinux errors while restarting Pulpcore, but they don’t affect the known issue. I switched the system to permissive mode after noticing them (and ran foreman-maintain service restart), with no change:

Jun 28 22:44:42 r8-foreman-prod dbus-daemon[1052]: [system] Successfully activated service 'org.fedoraproject.SetroubleshootPrivileged'
Jun 28 22:44:43 r8-foreman-prod /SetroubleshootPrivileged.py[23395]: failed to retrieve rpm info for /var/lib/selinux/targeted/active/modules/400/pulpcore
Jun 28 22:44:43 r8-foreman-prod setroubleshoot[23304]: SELinux is preventing /usr/libexec/platform-python3.6 from create access on the unix_dgram_socket labeled pulpcore_t. For complete SELinux messages run: sealert -l 424c087a-28c4-437d-8e38-a1375268088e
Jun 28 22:44:43 r8-foreman-prod setroubleshoot[23304]: SELinux is preventing /usr/libexec/platform-python3.6 from create access on the unix_dgram_socket labeled pulpcore_t.#012#012*****  Plugin catchall (100. confidence) suggests   **************************#012#012If you believe that platform-python3.6 should be allowed create access on unix_dgram_socket labeled pulpcore_t by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'pulpcore-worker' --raw | audit2allow -M my-pulpcoreworker#012# semodule -X 300 -i my-pulpcoreworker.pp#012
Jun 28 22:44:43 r8-foreman-prod /SetroubleshootPrivileged.py[23395]: failed to retrieve rpm info for /var/lib/selinux/targeted/active/modules/400/pulpcore
Jun 28 22:44:43 r8-foreman-prod setroubleshoot[23304]: SELinux is preventing /usr/libexec/platform-python3.6 from ioctl access on the unix_dgram_socket unix_dgram_socket. For complete SELinux messages run: sealert -l bb6347bc-4a04-4631-973a-9d64e3676877
Jun 28 22:44:43 r8-foreman-prod setroubleshoot[23304]: SELinux is preventing /usr/libexec/platform-python3.6 from ioctl access on the unix_dgram_socket unix_dgram_socket.#012#012*****  Plugin catchall (100. confidence) suggests   **************************#012#012If you believe that platform-python3.6 should be allowed ioctl access on the unix_dgram_socket unix_dgram_socket by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'pulpcore-worker' --raw | audit2allow -M my-pulpcoreworker#012# semodule -X 300 -i my-pulpcoreworker.pp#012
Jun 28 22:44:43 r8-foreman-prod /SetroubleshootPrivileged.py[23395]: failed to retrieve rpm info for /var/lib/selinux/targeted/active/modules/400/pulpcore
Jun 28 22:44:43 r8-foreman-prod setroubleshoot[23304]: SELinux is preventing /usr/libexec/platform-python3.6 from create access on the unix_dgram_socket labeled pulpcore_t. For complete SELinux messages run: sealert -l 424c087a-28c4-437d-8e38-a1375268088e
Jun 28 22:44:43 r8-foreman-prod setroubleshoot[23304]: SELinux is preventing /usr/libexec/platform-python3.6 from create access on the unix_dgram_socket labeled pulpcore_t.#012#012*****  Plugin catchall (100. confidence) suggests   **************************#012#012If you believe that platform-python3.6 should be allowed create access on unix_dgram_socket labeled pulpcore_t by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'pulpcore-worker' --raw | audit2allow -M my-pulpcoreworker#012# semodule -X 300 -i my-pulpcoreworker.pp#012
Jun 28 22:44:43 r8-foreman-prod /SetroubleshootPrivileged.py[23395]: failed to retrieve rpm info for /var/lib/selinux/targeted/active/modules/400/pulpcore
Jun 28 22:44:43 r8-foreman-prod setroubleshoot[23304]: SELinux is preventing /usr/libexec/platform-python3.6 from ioctl access on the unix_dgram_socket unix_dgram_socket. For complete SELinux messages run: sealert -l bb6347bc-4a04-4631-973a-9d64e3676877
Jun 28 22:44:43 r8-foreman-prod setroubleshoot[23304]: SELinux is preventing /usr/libexec/platform-python3.6 from ioctl access on the unix_dgram_socket unix_dgram_socket.#012#012*****  Plugin catchall (100. confidence) suggests   **************************#012#012If you believe that platform-python3.6 should be allowed ioctl access on the unix_dgram_socket unix_dgram_socket by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'pulpcore-worker' --raw | audit2allow -M my-pulpcoreworker#012# semodule -X 300 -i my-pulpcoreworker.pp#012
Jun 28 22:44:43 r8-foreman-prod /SetroubleshootPrivileged.py[23395]: failed to retrieve rpm info for /var/lib/selinux/targeted/active/modules/400/pulpcore
Jun 28 22:44:43 r8-foreman-prod setroubleshoot[23304]: SELinux is preventing gunicorn from ioctl access on the unix_dgram_socket unix_dgram_socket. For complete SELinux messages run: sealert -l 78fe2108-624f-487c-ab9a-b67aa6aee0f7
Jun 28 22:44:43 r8-foreman-prod setroubleshoot[23304]: SELinux is preventing gunicorn from ioctl access on the unix_dgram_socket unix_dgram_socket.#012#012*****  Plugin catchall (100. confidence) suggests   **************************#012#012If you believe that gunicorn should be allowed ioctl access on the unix_dgram_socket unix_dgram_socket by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'gunicorn' --raw | audit2allow -M my-gunicorn#012# semodule -X 300 -i my-gunicorn.pp#012

4.4.1

This error is really strange, if you see it again, I suppose double check that app/views/katello/api/v2/content_facet/base_with_root.json.rabl exists within the paths.

It looks like the installer is searching the wrong folder: /usr/share/gems/gems/katello-4.4.0.2 does not have an app sub-folder, but 4.4.1 does:

# ll /usr/share/gems/gems/katello-4.4.1/app/views/katello/api/v2/content_facet/
total 16
-rw-r--r--. 1 root root 926 Jun  1 16:10 base.json.rabl
-rw-r--r--. 1 root root 120 Jun  1 16:10 base_with_root.json.rabl
-rw-r--r--. 1 root root 110 Jun  1 16:10 erratum.json.rabl
-rw-r--r--. 1 root root 943 Jun  1 16:10 show.json.rabl

Okay a bit of news!

I didn’t stop at that point and tried upgrading Puppet from 6 to 7 as part of the 4.4.1 upgrade. And well, the foreman-installer run after dnf update went flawlessly, without the content_facet error!

This would be the whole upgrade process needed to go from 4.4.0.2 to 4.4.1:

dnf remove puppet6-release
dnf install https://yum.puppet.com/puppet7-release-el-8.noarch.rpm
dnf update
puppetserver ca migrate
foreman-installer

Glad to hear that upgrading to Puppet 7 helped! I’m a little surprised to hear there was no app sub-folder, since that’s where the source code should live. Maybe it was in an unexpected location? Otherwise that sounds like the Katello gem wasn’t properly installed.

As for the redis issue, are all of the Pulpcore services running properly? You can find the names with foreman-maintain service list.

I’ll try an upgrade to see if I can reproduce the problem.


4.4.1

So I checked it once again, and right now it looks like only the 4.4.1 directory contains an app directory. Maybe that gets cleaned up during some operation to save space? :thinking: (This system has several dirs, from 4.1.3 to 4.4.1.) What I don’t understand is why foreman-installer with Puppet 6 has a problem with this but not with Puppet 7.
Anyway, it’s a workaround for everybody who runs into this :slight_smile:

4.5

# foreman-maintain service list
Running Service List
================================================================================
List applicable services:
dynflow-sidekiq@.service                   indirect
foreman-proxy.service                      enabled
foreman.service                            enabled
httpd.service                              enabled
postgresql.service                         enabled
pulpcore-api.service                       enabled
pulpcore-content.service                   enabled
pulpcore-worker@.service                   indirect
puppetserver.service                       enabled
redis.service                              enabled
tomcat.service                             enabled

All services listed                                                   [OK]
--------------------------------------------------------------------------------

Looks good; I did a restart and another check as well, but still the same behavior :+1:

As in, the redis issue? Interesting… I couldn’t reproduce the problem, but of course our configs may differ. I’ll need more info from folks more knowledgeable about the redis setup.

Can we see the output of curl https://hostname/pulp/api/v3/status/ --cert /etc/pki/katello/certs/pulp-client.crt --key /etc/pki/katello/private/pulp-client.key ?

Also, in your /etc/pulp/settings.py, do you have something like REDIS_URL = "redis://localhost:6379/8" ?
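One quick way to check is to pull the value straight out of the file. As a sketch, this runs against a sample fragment under /tmp with an illustrative value (on a real system, point the sed at /etc/pulp/settings.py directly):

```shell
# Illustrative fragment only; the real file lives at /etc/pulp/settings.py
sample=/tmp/pulp-settings-sample.py
cat > "$sample" <<'EOF'
REDIS_URL = "redis://localhost:6379/8"
EOF

# Extract the URL and its host:port part, e.g. to feed into redis-cli -h/-p
url=$(sed -n 's/^REDIS_URL = "\(.*\)"/\1/p' "$sample")
hostport=${url#redis://}; hostport=${hostport%/*}
echo "URL:       $url"
echo "host:port: $hostport"
```

From there, something like `redis-cli -h <host> -p <port> ping` would confirm whether redis itself is reachable.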

Yeah, this system was set up with Katello 4.1.3 and has grown quite a lot since then.

And yes, exactly, that is how the REDIS_URL parameter looks in my settings.py file :+1:

@ekohl or @ehelms, do you know what else might make Pulp think Redis is offline? It seems to be a difference between Pulpcore 3.17 and 3.18, yet Pulpcore itself doesn’t seem to have had any major Redis changes since the very beginning of this year.

@lumarel

In /etc/pulp/settings.py, check the CACHE_ENABLED setting. After upgrading to 4.5 RC2, I had the same redis issue: CACHE_ENABLED was set to False. There’s a note in the Pulp documentation stating it should be set to True, however.

I changed the setting to True and restarted the pulpcore services. After that, repo syncing started working again.

After digging around in foreman-installer, I found that pulpcore_cache_enabled was set to false in /etc/foreman-installer/scenarios.d/foreman-answers.yaml. I changed it to true, ran foreman-installer again, and got the expected result ("CACHE_ENABLED = True") in /etc/pulp/settings.py.
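For anyone hitting the same thing, the check-and-flip can be sketched like this. It operates on a sample fragment under /tmp, and the key layout in the sample is illustrative; on a real system the file is /etc/foreman-installer/scenarios.d/katello-answers.yaml (or foreman-answers.yaml, depending on the scenario), and foreman-installer rewrites /etc/pulp/settings.py on the next run:

```shell
# Sample fragment standing in for the installer answers file (key nesting
# is illustrative, not the exact real layout).
answers=/tmp/katello-answers-sample.yaml
cat > "$answers" <<'EOF'
foreman_proxy_content:
  pulpcore_cache_enabled: false
EOF

grep 'pulpcore_cache_enabled' "$answers"   # before: false
sed -i 's/pulpcore_cache_enabled: false/pulpcore_cache_enabled: true/' "$answers"
grep 'pulpcore_cache_enabled' "$answers"   # after: true
```

After editing the real answers file, rerun foreman-installer and confirm that /etc/pulp/settings.py now contains "CACHE_ENABLED = True".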


Thank you very much, that was the solution!

So, if this is supposed to be enabled by default now, does something need to change there, or will this be in the release notes as something to check?

I definitely never touched that setting; this is the default, which might have been carried forward from 3.0/4.1.3. By the way, for me it was in /etc/foreman-installer/scenarios.d/katello-answers.yaml, as this is a Katello install.


Thank you! My problem was resolved with that simple change and a restart of services!


For folks hitting this issue, please post your foreman-installer version and a copy of /etc/foreman-installer/scenarios.d/katello-answers.yaml.

The installer should set the cache setting to True; this changed in the code around January: Refs #34325 - enable redis cache by default · theforeman/puppet-foreman_proxy_content@0ee91ab · GitHub

As a workaround until the proper fix lands, run foreman-installer --foreman-proxy-content-pulpcore-cache-enabled=true. We’ve identified the issue as a missing installer migration, so it’ll be fixed up soon.


In case you still want it, my foreman-installer and foreman-installer-katello versions are below. My katello-answers.yaml definitely had pulpcore_cache_enabled set to false prior to the recommended solution.
foreman-installer-3.3.0-1.el8.noarch
foreman-installer-katello-3.3.0-1.el8.noarch

Thank you! :slight_smile:

Thank you, I will try it.