Rubygem-qpid-proton (and others) conflict on Rocky8

I have Katello 4.2.2 running nicely on Rocky 8 (one of the more popular OSes to replace the old CentOS 8…).

Lately when I try yum updates, I get the following, which appears to be caused by conflicts between the katello/foreman repos and EPEL 8.

(If I disable EPEL 8, I get a clean check and "Nothing to do", but then I am not able to update rubygem-qpid-proton, which comes from the katello repo?)
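(That clean check is just a one-off exclude on the command line, something like this, assuming the repo id is epel:)

    # one-off update attempt with the EPEL repo excluded; ends in "Nothing to do."
    dnf --disablerepo='epel*' update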

What is the uh, proper resolution here? Do I just skip using epel8? Or…?

[root@katello ~]# yum update
Last metadata expiration check: 1:08:03 ago on Fri 04 Feb 2022 12:04:10 PM CST.
Error:
 Problem 1: package rubygem-qpid_proton-0.36.0-1.el8.x86_64 requires libruby.so.2.5()(64bit), but none of the providers can be installed
  - cannot install the best update candidate for package rubygem-qpid_proton-0.32.0-3.el8.x86_64
  - package ruby-libs-2.5.9-107.module+el8.4.0+592+03ff458a.x86_64 is filtered out by modular filtering
 Problem 2: package python3-pulpcore-3.14.9-1.el8.noarch conflicts with python3-django-filter >= 2.5 provided by python3-django-filter-21.1-1.el8.noarch
  - cannot install the best update candidate for package python3-pulpcore-3.14.9-1.el8.noarch
  - cannot install the best update candidate for package python3-django-filter-2.4.0-1.el8.noarch
 Problem 3: package foreman-3.0.1-1.el8.noarch requires rubygem(net-ssh) = 4.2.0, but none of the providers can be installed
  - cannot install both rubygem-net-ssh-5.1.0-2.el8.noarch and rubygem-net-ssh-4.2.0-3.el8.noarch
  - cannot install both rubygem-net-ssh-4.2.0-3.el8.noarch and rubygem-net-ssh-5.1.0-2.el8.noarch
  - cannot install the best update candidate for package rubygem-net-ssh-4.2.0-3.el8.noarch
  - cannot install the best update candidate for package foreman-3.0.1-1.el8.noarch
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

Hey @caseybea!

Do I understand correctly that you have the EPEL 8 repo enabled on your Foreman/Katello machine?

Because if you take a short look at the docs you might notice that this is not supported on EL8 :thinking: (I was going to send you the docs for Katello 4.2, but those docs are missing right now; they are the same in that part.)

Yeah, after further checking: I messed up! I use Ansible to configure my servers, and adding EPEL is a regular thing. SHOOT. So now I have a system where some things came from EPEL and some did not. I am not sure that is fixable without causing great harm. (I might try just for giggles.)

Here's a SUPER related question: if I do a Katello backup (foreman-maintain backup), should I be able to just restore that onto a newly rebuilt server (same version, 4.2)?

No worries, I have also been there, looking for where to get Ansible for the Foreman server :sweat_smile:

You can actually do the following:
Enable the extras repo: dnf config-manager --enable extras
Install the CentOS Configmanagement key: dnf install centos-release-configmanagement
Install the CentOS Ansible repo: dnf install centos-release-ansible
And then you can install Ansible 2.9 :slight_smile:
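Put together, that's roughly the following (package and repo names exactly as in the steps above; I haven't verified them against current Rocky 8 mirrors):

    # the extras repo carries the centos-release-* packages
    dnf config-manager --enable extras
    # repo definitions for the config management SIG and its Ansible repo
    dnf install centos-release-configmanagement centos-release-ansible
    # then Ansible 2.9 comes from the newly enabled repo
    dnf install ansible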
If it didn't install any packages from EPEL up to now, it should be fine. If it did, it may be good to replace them, but I'm not sure how well that will work.

The last time I tried to migrate to a new machine using the foreman-maintain backup/restore, it broke, so I completely rebuilt it then. But yes, it should work that way.

You misunderstood; I am not talking about Ansible WITHIN Katello. I have a separate server where I use straight-up Ansible to configure all my servers the same way (no integration with Katello at this time).

So, my Ansible setup "added" the EPEL repo to my Katello server, before I even installed Katello.

As such, what I have now are the following packages that all came FROM EPEL during the install process. With the exception of the issue outlined in my OP, it works. (Clearly, I still have to fix it.)

Here's what has to be removed. I'm thinking of just rpm-removing these (without removing dependencies), and then re-running the yum install of foreman-installer-katello, which should HOPEFULLY bring in the right packages again from the right place, since I'll have EPEL turned off (rough sketch after the list below):

libssh2.x86_64 1.9.0-5.el8 @epel
libtomcrypt.x86_64 1.18.2-5.el8 @epel
libtommath.x86_64 1.1.0-1.el8 @epel
python-django-bash-completion.noarch 2.2.24-1.el8 @epel
python-idna-ssl.noarch 1.1.0-9.el8 @epel
python3-aiohttp.x86_64 3.7.4-1.el8 @epel
python3-async-timeout.noarch 3.0.1-8.el8 @epel
python3-bracex.noarch 2.1.1-2.el8 @epel
python3-dataclasses.noarch 0.8-3.el8 @epel
python3-django.noarch 2.2.24-1.el8 @epel
python3-django-prometheus.noarch 2.1.0-1.el8 @epel
python3-inflection.noarch 0.5.1-1.el8 @epel
python3-mccabe.noarch 0.6.1-11.el8 @epel
python3-prometheus_client.noarch 0.9.0-1.el8 @epel
python3-pycryptodomex.x86_64 3.10.1-1.el8 @epel
python3-pyrsistent.x86_64 0.17.3-6.el8 @epel
python3-redis.noarch 3.5.3-1.el8 @epel
python3-tablib.noarch 3.0.0-1.el8 @epel
python3-xlwt.noarch 1.3.0-1.el8 @epel
qpid-proton-c.x86_64 0.35.0-2.el8 @epel
rubygem-gssapi.noarch 1.3.0-2.el8 @epel
rubygem-locale.noarch 2.1.2-3.el8.2 @epel
rubygem-mail.noarch 2.7.1-3.el8 @epel
rubygem-mime-types-data.noarch 3.2019.0331-1.el8 @epel
rubygem-mini_mime.noarch 1.1.0-1.el8 @epel
rubygem-rb-inotify.noarch 0.10.0-1.el8 @epel
screen.x86_64 4.6.2-12.el8 @epel
sshpass.x86_64 1.06-9.el8 @epel
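A minimal sketch of that plan (my own outline, assuming the repo id is epel and that the @epel list can be rebuilt from dnf output; dnf wraps long lines, which can confuse the awk filter, so double-check the resulting list against the one above):

    # disable EPEL so replacements come from the foreman/katello/pulpcore repos
    dnf config-manager --set-disabled epel

    # collect the installed packages whose source repo is @epel
    dnf list installed | awk '$3 == "@epel" {print $1}' > epel-pkgs.txt

    # remove them without dependency resolution (this is the risky part)
    xargs rpm -e --nodeps < epel-pkgs.txt

    # reinstall the same names, now resolved from the remaining repos
    xargs dnf -y install < epel-pkgs.txt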

And… one of two things will happen. It could work, or I could end up with a brick. Which is why I asked about the backups. No time like the present to try it out.

NB: I am not gonna do anything until after next week, which is when my monthly patch night is scheduled. Once I'm past that I have tons of time to rebuild if needed. And I'm not angry about it at all; leaving EPEL enabled was my own stupid mistake. I remember EPEL was needed when Katello was installed on my old CentOS 7 host. I never really noticed EPEL was NOT part of the install process for "CentOS 8". Oops.

Oh… okay sorry for misunderstanding then.

Okay, I manually checked a few packages now, and there are at least a handful which should come from the pulpcore repo instead of the EPEL repo. So you could just try to disable the EPEL repo and see how well it works to replace the packages with the versions you can actually find in a repo while EPEL is disabled. (Snapshotting or backing up the system before this is definitely a good idea.)
There might even be a better way; maybe someone else will provide it later here.
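To check where a given package should come from once EPEL is out of the picture, something like this works (python3-django is just one sample name from your list):

    # with EPEL excluded, show which repo would now provide the package
    dnf --disablerepo='epel*' info python3-django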

Hope you find your way out of this in a few days then! :slight_smile:
(Yep, it was a pretty severe change that was needed, as far as I understood it.)

OK, so I more or less re-created the situation on a new Rocky 8 VM.

With EPEL enabled, I did the whole install (now that there are later versions of things, the install actually blows up. Not a problem).

But once I got that far, I had the same setup: pretty much all of the packages listed above had come from EPEL.

The fix is relatively easy: I just removed each of those packages with --nodeps (removing SOME of these with their dependencies would uninstall a crapload of things).

Then with EPEL gone, I just did a standard yum install of each of the packages. They all installed correctly, and they all come from @foreman, @katello, or @foreman-pulpcore.
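A quick way to confirm the new source repos after the reinstall (rubygem-gssapi, python3-django, and qpid-proton-c are just sample names from the list above):

    # the third column shows the repo each installed package came from
    dnf list installed rubygem-gssapi python3-django qpid-proton-c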

And with all of that re-done, the Katello install completed.

So I’m pretty confident that I can fix my production server in a week.

Stay tuned :slight_smile:


You should actually also be able to do a dnf replace (maybe also with the --nodeps parameter; maybe you need to use the whole NEVRA as the package name, or even the full link to the package in the correct repo), but if it works that way, also good :slight_smile:

Shouldn’t “dnf distro-sync” do exactly that?

       dnf distro-sync [<package-spec>...]
              As  necessary  upgrades,  downgrades or keeps selected installed
              packages to match the latest version available from any  enabled
              repository.  If  no package is given, all installed packages are
              considered.
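So in this case, with EPEL disabled first, a single command should pull every straggler back to whatever the remaining repos carry (repo id epel assumed):

    # disable EPEL, then align every installed package with the enabled repos
    dnf config-manager --set-disabled epel
    dnf distro-sync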

I didn't know about this dnf option until now!
Really sounds like the correct thing to use, thank you :slight_smile:

Interesting. Even Rocky 8 (the equivalent of CentOS 8.5) doesn't have "dnf replace". Must be relatively new. I think in my situation just removing the package(s), removing EPEL, and then reinstalling the affected packages seems to work.

By the way, you commented above about a foreman-maintain restore not working properly.

I was curious, so I built a big VM and restored a backup recently created on my production server.

It went surprisingly well. So, in 2 days when I try the “package replacement” bit, if that somehow goes south, I now know I can just rebuild and use the backup and I’ll be golden. I logged into my newly-restored server and everything was there. Nice.
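For reference, the backup being restored here was made with foreman-maintain as well; an offline backup to a directory looks roughly like this (offline mode is my assumption, and the target path just matches the restore below):

    # on the production server; offline mode stops services while it dumps
    foreman-maintain backup offline /backups/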

> [root@katello ~]# foreman-maintain restore /backups/katello-backup-2022-02-06-14-00-06/
> Running Restore backup
> ================================================================================
> Check if command is run as root user:                                 [OK]
> --------------------------------------------------------------------------------
> Validate backup has appropriate files:                                [OK]
> --------------------------------------------------------------------------------
> Confirm dropping databases and running restore:
> 
> WARNING: This script will drop and restore your database.
> Your existing installation will be replaced with the backup database.
> Once this operation is complete there is no going back.
> Do you want to proceed?, [y(yes), q(quit)] y
>                                                                       [OK]
> --------------------------------------------------------------------------------
> Validate hostname is the same as backup:                              [OK]
> --------------------------------------------------------------------------------
> Setting file security:
> / Restoring SELinux context                                           [OK]
> --------------------------------------------------------------------------------
> Restore configs from backup:
> / Restoring configs                                                   [OK]
> --------------------------------------------------------------------------------
> Run installer reset:
> - Installer reset                                                     [OK]
> --------------------------------------------------------------------------------
> Stop applicable services:
> 
> Stopping the following service(s):
> redis, postgresql, pulpcore-api, pulpcore-content, pulpcore-api.socket, pulpcore-content.socket, pulpcore-worker@1.service, pulpcore-worker@2.service, pulpcore-worker@3.service, pulpcore-worker@4.service, pulpcore-worker@5.service, pulpcore-worker@6.service, pulpcore-worker@7.service, pulpcore-worker@8.service, tomcat, dynflow-sidekiq@orchestrator, foreman, httpd, puppetserver, foreman.socket, dynflow-sidekiq@worker-1, dynflow-sidekiq@worker-hosts-queue-1, foreman-proxy
> | All services stopped                                                [OK]
> --------------------------------------------------------------------------------
> Extract any existing tar files in backup:
> - Extracting pgsql data                                               [OK]
> --------------------------------------------------------------------------------
> Migrate pulpcore db:
> / Migrating pulpcore database                                         [OK]
> --------------------------------------------------------------------------------
> Start applicable services:
> 
> Starting the following service(s):
> redis, postgresql, pulpcore-api, pulpcore-content, pulpcore-worker@1.service, pulpcore-worker@2.service, pulpcore-worker@3.service, pulpcore-worker@4.service, pulpcore-worker@5.service, pulpcore-worker@6.service, pulpcore-worker@7.service, pulpcore-worker@8.service, tomcat, dynflow-sidekiq@orchestrator, foreman, httpd, puppetserver, dynflow-sidekiq@worker-1, dynflow-sidekiq@worker-hosts-queue-1, foreman-proxy
> / All services started                                                [OK]
> --------------------------------------------------------------------------------
> Run daemon reload:                                                    [OK]
> --------------------------------------------------------------------------------
> 
> [root@katello ~]#

Maybe it was also just a dnf install that replaced it :thinking:
But dnf distro-sync should really be the correct method

And okay, good to know it was only an isolated failure on my side!