Issue with using Leapp

Problem: I am following the guide located at Upgrading and Updating Foreman but running into issues with packages

Expected outcome: Leapp would resolve the needed packages

Foreman and Proxy versions: 3.3.1

Foreman and Proxy plugin versions:

Distribution and version: CentOS 7.9

Other relevant data:

STDERR:
Failed to create directory /var/lib/leapp/el8userspace//sys/fs/selinux: Read-only file system
Failed to create directory /var/lib/leapp/el8userspace//sys/fs/selinux: Read-only file system
No matches found for the following disable plugin patterns: subscription-manager
Warning: Package marked by Leapp to install not found in repositories metadata: rubygem-foreman_ansible_core python38-pulp-python rubygem-hammer_cli_katello rubygem-foreman-tasks-core rubygem-katello python38-pulp-certguard postgresql-evr python3-javapackages python38-pulpcore python38-pulp-cli ivy-local python38-pulp-container python38-pulp-ansible rubygem-foreman_remote_execution_core python38-pulp-rpm log4j12 python38-pulp-deb python38-pulp-file
Warning: Package marked by Leapp to upgrade not found in repositories metadata: gpg-pubkey
Transaction check:

 Problem 1: package katello-4.5.1-1.el7.noarch requires candlepin >= 2.0, but none of the providers can be installed
  - package candlepin-4.1.11-1.el7.noarch requires /usr/bin/python, but none of the providers can be installed
  - conflicting requests
 Problem 2: package tfm-pulpcore-python3-createrepo_c-0.20.1-1.el7.x86_64 requires createrepo_c-libs = 0.20.1-1.el7, but none of the providers can be installed
  - cannot install the best candidate for the job
  - createrepo_c-libs-0.20.1-1.el7.x86_64 does not belong to a distupgrade repository
 Problem 3: package foreman-3.3.1-1.el8.noarch requires rubygem(facter), but none of the providers can be installed
  - package rubygem-foreman_remote_execution-7.2.2-1.fm3_3.el8.noarch requires foreman >= 3.3.1, but none of the providers can be installed
  - package rubygem-facter-4.0.51-2.el8.x86_64 requires rubygem(thor) < 2.0, but none of the providers can be installed
  - cannot install the best candidate for the job
  - foreman-3.3.1-1.el7.noarch does not belong to a distupgrade repository
  - conflicting requests


============================================================
                       END OF ERRORS
============================================================

I do see in the documentation that I may come across package dependency issues; however, should I really remove the katello and foreman packages?

You should not remove foreman or katello packages, no.

But you might have old (unused) packages on the system that prevent the upgrade from finding a “good” path.

Would you mind posting a full rpm -qa?

Also, you’ve linked the Foreman Leapp guide, but you clearly have Katello installed, which could be the reason for the first two issues (you’re probably missing the Katello repos).

The right Katello guide is here: Upgrading and Updating Foreman – the only real difference is the list of repositories used.
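When collecting that package list, the kind of leftovers worth flagging are old el7 packages that no longer belong to any active repo. A minimal sketch — find_el7_leftovers is a hypothetical helper, and the exclude pattern is an illustrative assumption, not an official check:

```shell
# Sketch: filter a package list (from stdin) for el7 leftovers that may keep
# the dependency solver from finding a good upgrade path. The exclude pattern
# is an assumption -- adjust it to your installed plugin set.
find_el7_leftovers() {
    grep -E '\.el7' | grep -vE '^(foreman|katello|rubygem-|tfm-)' | sort
}

# On the server, capture the full list and flag candidates:
#   rpm -qa | tee /tmp/rpm-qa.txt | find_el7_leftovers
```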


Hi @evgeni. Thank you for that document; I was able to run Leapp without any major issues. However, while the system is now up and Leapp appears to be complete, I have no foreman/katello running. I checked /var/log/foreman-installer/katello.log and see that it ran into issues with repos, which would make sense since the server gets its repos from itself.

2023-04-20 11:06:52 [DEBUG ] [configure] Apache::Mod[headers]: Evaluated in 0.00 seconds
2023-04-20 11:06:52 [DEBUG ] [configure] Class[Postgresql::Globals]: Starting to evaluate the resource (153 of 2091)
2023-04-20 11:06:52 [DEBUG ] [configure] Class[Postgresql::Globals]: Evaluated in 0.00 seconds
2023-04-20 11:06:52 [DEBUG ] [configure] Class[Postgresql::Globals]: Starting to evaluate the resource (154 of 2091)
2023-04-20 11:06:52 [DEBUG ] [configure] Class[Postgresql::Globals]: Evaluated in 0.00 seconds
2023-04-20 11:06:52 [DEBUG ] [configure] Class[Postgresql::Params]: Starting to evaluate the resource (155 of 2091)
2023-04-20 11:06:52 [DEBUG ] [configure] Class[Postgresql::Params]: Evaluated in 0.00 seconds
2023-04-20 11:06:52 [DEBUG ] [configure] Class[Postgresql::Params]: Starting to evaluate the resource (156 of 2091)
2023-04-20 11:06:52 [DEBUG ] [configure] Class[Postgresql::Params]: Evaluated in 0.00 seconds
2023-04-20 11:06:52 [DEBUG ] [configure] Class[Postgresql::Client]: Starting to evaluate the resource (157 of 2091)
2023-04-20 11:06:52 [DEBUG ] [configure] Class[Postgresql::Client]: Evaluated in 0.00 seconds
2023-04-20 11:06:52 [DEBUG ] [configure] Class[Postgresql::Server]: Starting to evaluate the resource (158 of 2091)
2023-04-20 11:06:52 [DEBUG ] [configure] Class[Postgresql::Server]: Evaluated in 0.00 seconds
2023-04-20 11:06:52 [DEBUG ] [configure] Class[Postgresql::Dnfmodule]: Starting to evaluate the resource (159 of 2091)
2023-04-20 11:06:52 [DEBUG ] [configure] Class[Postgresql::Dnfmodule]: Evaluated in 0.00 seconds
2023-04-20 11:06:52 [DEBUG ] [configure] Prefetching dnfmodule resources for package
2023-04-20 11:06:52 [DEBUG ] [configure] Executing: '/usr/bin/dnf --version'
2023-04-20 11:06:53 [DEBUG ] [configure] Executing: '/usr/bin/dnf module list -d 0 -e 1'
2023-04-20 11:06:54 [ERROR ] [configure] Could not prefetch package provider 'dnfmodule': Execution of '/usr/bin/dnf module list -d 0 -e 1' returned 1: Error: Failed to download metadata for repo 'BCD_Travel_CentOS_CentOS_7_x86_64_Extras': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
2023-04-20 11:06:54 [DEBUG ] [configure] Storing state
2023-04-20 11:06:54 [DEBUG ] [configure] Pruned old state cache entries in 0.00 seconds
2023-04-20 11:06:54 [DEBUG ] [configure] Stored state in 0.05 seconds
2023-04-20 11:06:54 [ERROR ] [configure] Failed to apply catalog: Execution of '/usr/bin/dnf module list -d 0 -e 1' returned 1: Error: Failed to download metadata for repo 'BCD_Travel_CentOS_CentOS_7_x86_64_Extras': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried

The only services that are running/installed are the ones shown by foreman-maintain service restart:

Running Restart Services
================================================================================
Check if command is run as root user:                                 [OK]
--------------------------------------------------------------------------------
Restart applicable services:

Stopping the following service(s):
qdrouterd, qpidd, puppetserver
| All services stopped

Starting the following service(s):
qdrouterd, qpidd, puppetserver
| All services started

Is there a way to now get foreman/katello installed?

Here is a screenshot of the terminal when it came up

From /var/log/leapp/leapp-upgrade.log

2023-04-20 11:06:21.876 INFO     PID: 8448 leapp.workflow.FirstBoot: Executing actor satellite_upgrader
2023-04-20 11:06:21.946 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader: External command has started: ['foreman-installer']
2023-04-20 11:06:26.941 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader: ^[[34m2023-04-20 11:06:26^[[0m [^[[32mNOTICE^[[0m] [^[[36mroot^[[0m] Loading installer configuration. This will take some time.
2023-04-20 11:06:35.476 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader: ^[[34m2023-04-20 11:06:35^[[0m [^[[32mNOTICE^[[0m] [^[[36mroot^[[0m] Running installer with log based terminal output at level NOTICE.
2023-04-20 11:06:35.546 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader: ^[[34m2023-04-20 11:06:35^[[0m [^[[32mNOTICE^[[0m] [^[[36mroot^[[0m] Use -l to set the terminal output log level to ERROR, WARN, NOTICE, INFO, or DEBUG. See --full-help for definitions.
2023-04-20 11:06:39.631 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader: ^[[34m2023-04-20 11:06:39^[[0m [^[[32mNOTICE^[[0m] [^[[36mconfigure^[[0m] Starting system configuration.
2023-04-20 11:06:54.520 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader: ^[[34m2023-04-20 11:06:54^[[0m [^[[31mERROR ^[[0m] [^[[36mconfigure^[[0m] Could not prefetch package provider 'dnfmodule': Execution of '/usr/bin/dnf module list -d 0 -e 1' returned 1: Error: Failed to download metadata for repo 'BCD_Travel_CentOS_CentOS_7_x86_64_Extras': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
2023-04-20 11:06:54.570 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader: ^[[34m2023-04-20 11:06:54^[[0m [^[[31mERROR ^[[0m] [^[[36mconfigure^[[0m] Failed to apply catalog: Execution of '/usr/bin/dnf module list -d 0 -e 1' returned 1: Error: Failed to download metadata for repo 'BCD_Travel_CentOS_CentOS_7_x86_64_Extras': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
2023-04-20 11:06:55.208 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader: ^[[34m2023-04-20 11:06:55^[[0m [^[[32mNOTICE^[[0m] [^[[36mconfigure^[[0m] System configuration has finished.
2023-04-20 11:06:55.273 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader:
2023-04-20 11:06:55.279 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader:   ^[[1m^[[31mThere were errors detected during install.^[[0m
2023-04-20 11:06:55.283 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader:   Please address the errors and re-run the installer to ensure the system is properly configured.
2023-04-20 11:06:55.287 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader:   Failing to do so is likely to result in broken functionality.
2023-04-20 11:06:55.291 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader:
2023-04-20 11:06:55.294 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader:   The full log is at ^[[1m^[[36m/var/log/foreman-installer/katello.log^[[0m
2023-04-20 11:06:55.298 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader: Command ['foreman-installer'] failed with exit code 1.
2023-04-20 11:06:55.307 DEBUG    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader: External command has finished: ['foreman-installer']
2023-04-20 11:06:55.311 ERROR    PID: 9053 leapp.workflow.FirstBoot.satellite_upgrader: Could not run the installer, please inspect the logs in /var/log/foreman-installer!
2023-04-20 11:06:55.327 INFO     PID: 8448 leapp.workflow.FirstBoot: Executing actor enable_rhsm_target_repos
2023-04-20 11:06:55.488 DEBUG    PID: 10392 leapp.workflow.FirstBoot.enable_rhsm_target_repos: Skipping setting the RHSM release due to --no-rhsm or environment variables.
2023-04-20 11:06:55.494 DEBUG    PID: 10392 leapp.workflow.FirstBoot.enable_rhsm_target_repos: Skipping enabling repositories through subscription-manager due to --no-rhsm or environment variables.
2023-04-20 11:06:55.511 INFO     PID: 8448 leapp.workflow.FirstBoot: Starting stage After of phase FirstBoot
2023-04-20 11:06:55.516 INFO     PID: 8448 leapp.workflow.FirstBoot: Executing actor remove_systemd_resume_service
2023-04-20 11:06:55.563 DEBUG    PID: 10441 leapp.workflow.FirstBoot.remove_systemd_resume_service: External command has started: ['systemctl', 'disable', 'leapp_resume.service']
2023-04-20 11:06:55.588 DEBUG    PID: 10441 leapp.workflow.FirstBoot.remove_systemd_resume_service: Removed /etc/systemd/system/default.target.wants/leapp_resume.service.
2023-04-20 11:06:55.751 DEBUG    PID: 10441 leapp.workflow.FirstBoot.remove_systemd_resume_service: External command has finished: ['systemctl', 'disable', 'leapp_resume.service']
2023-04-20 11:06:55.756 WARNING  PID: 10441 leapp.reporting: Stable Key report entry not provided, dynamically generating one - 47ca952fd7eb5b1844a9a58b41003daf349cee74
2023-04-20 11:06:55.799 INFO     PID: 8448 leapp: Answerfile will be created at /var/log/leapp/answerfile

I work with Gary, and we were able to complete this leapp upgrade with a few small alterations to the documented steps. We used the documentation on the page @evgeni provided (silly us for not seeing that page before).
Fortunately, this was all done in our lab environment, so we were able to do it many times to get it right, reverting back to snapshots whenever we ran into major issues that we figured out how to fix.

The first change was after leapp upgrade and before the reboot. Since this server was previously subscribed to Foreman, we had to remove the redhat.repo file so the Rocky 8 upgrade wouldn’t fail looking for repos it would never find. We probably could have (should have?) just run subscription-manager remove, but we removed the repo file instead.
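One way to do that step a bit more safely is to move the repo file aside instead of deleting it, so it can be restored if needed. A sketch — disable_repo is a hypothetical helper, not a Foreman tool:

```shell
# Sketch: rather than deleting redhat.repo outright, rename it so dnf/yum
# stops reading it but it can still be restored later.
disable_repo() {
    if [ -f "$1" ]; then
        mv "$1" "$1.disabled"
    fi
}

# On the server, before rebooting into the upgrade:
#   disable_repo /etc/yum.repos.d/redhat.repo
```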

Next, the doc runs dnf module enable katello:el8 pulpcore:el8 long after the server comes back up from the reboot; however, without those modules enabled, the leapp upgrade would fail during foreman-installer.
Also, for some reason, when the upgrade copied files from /var/opt/rh/rh-postgresql12/lib/pgsql/data/ to /var/lib/pgsql/data/, they became owned by root instead of postgres.

So what we ended up doing was this: immediately after the server booted back up into Rocky 8, before leapp continued and began foreman-installer, I quickly logged in and ran:

chown -R postgres:postgres /var/lib/pgsql/data
dnf module enable katello:el8 pulpcore:el8 foreman:el8 -y

That allowed leapp upgrade to complete successfully.
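For anyone repeating this, it's worth confirming the ownership fix actually took before leapp reaches foreman-installer. A sketch — check_owner is a hypothetical helper; empty output means everything belongs to the expected user:

```shell
# Sketch: list files under a directory that are NOT owned by the given user.
check_owner() {
    find "$1" ! -user "$2" -print
}

# On the server, after the chown:
#   check_owner /var/lib/pgsql/data postgres
```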

I ran into an issue during runuser -u postgres -- reindexdb -a:

reindexdb: error: reindexing of database "foreman" failed: ERROR:  could not create unique index "index_fact_names_on_name_and_type"
DETAIL:  Key (name, type)=(ssh::rsa::key, PuppetFactName) is duplicated.

but that was quickly resolved by following this forum page: reindexdb: error
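For anyone hitting the same error, the duplicate rows can be inspected before fixing them. A sketch — dup_query is a hypothetical helper that just builds the SQL; the table (fact_names), key columns (name, type), and database (foreman) all come from the reindexdb error above:

```shell
# Sketch: build a query listing duplicate rows behind a
# "could not create unique index" error.
dup_query() {
    printf 'SELECT %s, count(*) FROM %s GROUP BY %s HAVING count(*) > 1;' "$1" "$2" "$1"
}

# On the server:
#   runuser -u postgres -- psql foreman -c "$(dup_query 'name, type' fact_names)"
```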

And lastly, though this doesn’t really have anything to do with the leapp upgrade, we had to leave and rejoin the FreeIPA domain.

But now, we have a fully functional Foreman/Katello server upgraded from CentOS 7 to Rocky 8. This is going to really save our bacon if we can get this to work on our Production servers!

Glad you got it sorted!

Yeah, self-subscribed systems are not recommended for exactly this reason: “where the heck do we get content during an upgrade?” :slight_smile:

For the postgres ownership – I know we had merged a patch for that, but it might not have landed in the copr repo; I shall update that!


Thanks for posting the updates @mshade. @evgeni again thank you very much for your help, we greatly appreciate it!