I recently noticed that dnf shows me a different upgrade result than the host details UI.
So far I've only seen it on EL 8 systems, because I've kept an especially close eye on the nginx packages.
After investigating, I think it has to do with a module stream metadata mismatch on the Rocky side, but shouldn't Katello ignore these packages the way dnf does (here the 1.20 stream is enabled)?
# dnf update
Updating Subscription Management repositories.
subscription-manager plugin disabled 3 system repositories with respect of configuration in /etc/dnf/plugins/subscription-manager.conf
Zabbix RHEL 8 - Non-Supported 37 kB/s | 1.5 kB 00:00
Zabbix RHEL 8 - Supported 37 kB/s | 1.5 kB 00:00
Foreman Client EL8 44 kB/s | 1.8 kB 00:00
Extra Packages for Enterprise Linux 8 49 kB/s | 2.3 kB 00:00
Puppet EL 8 37 kB/s | 1.5 kB 00:00
Rocky Linux 8 - PowerTools 63 kB/s | 2.6 kB 00:00
Rocky Linux 8 - AppStream 60 kB/s | 2.6 kB 00:00
Rocky Linux 8 - BaseOS 54 kB/s | 2.3 kB 00:00
Extra Packages for Enterprise Linux 8 Modular 54 kB/s | 2.3 kB 00:00
Dependencies resolved.
==============================================================================================================================================================
Package Architecture Version Repository Size
==============================================================================================================================================================
Upgrading:
libnghttp2 x86_64 1.33.0-5.el8_8 LUMA_rockycdn8_rockycdn8_baseos 77 k
platform-python x86_64 3.6.8-51.el8_8.2.rocky.0 LUMA_rockycdn8_rockycdn8_baseos 86 k
python3-libs x86_64 3.6.8-51.el8_8.2.rocky.0 LUMA_rockycdn8_rockycdn8_baseos 7.8 M
Transaction Summary
==============================================================================================================================================================
Upgrade 3 Packages
Total download size: 8.0 M
Is this ok [y/N]:
What made the module metadata issue apparent to me was that if I disable the module entirely, I do get some updates.
Or does Katello handle this a bit differently, so that the change only has to be made on the Rocky side?
This machine uses the nginx:1.20 stream, but in theory it should happen with any nginx stream, even 1.22: the newly built artifacts are simply not part of the application stream metadata, which makes dnf treat them as regular packages.
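To make the failure mode above concrete, here is a toy sketch (not dnf's actual implementation; all names and the simplified NEVRA format are made up for illustration) of dnf's modular filtering: once a package name is governed by a module, only versions listed in the enabled stream's artifact list stay visible to the solver, so a new build missing from the metadata is hidden rather than offered as an upgrade.

```python
def visible_packages(available, enabled_stream_artifacts, modular_names):
    """Return the package NEVRAs the solver may consider.

    available: candidate NEVRAs in the repos (toy format "<name>-0:<version>")
    enabled_stream_artifacts: NEVRAs listed in the enabled stream's metadata
    modular_names: package names governed by any module
    """
    visible = set()
    for nevra in available:
        name = nevra.split("-0:")[0]
        if name in modular_names:
            # modular package: only artifacts of the enabled stream count
            if nevra in enabled_stream_artifacts:
                visible.add(nevra)
        else:
            # regular package: always visible
            visible.add(nevra)
    return visible

# The situation described above: newer nginx builds exist in the repo,
# but their NEVRAs are missing from the nginx:1.20 stream metadata.
available = {"nginx-0:1.20.1-6", "nginx-0:1.20.1-14", "nginx-0:1.22.1-1"}
stream_artifacts = {"nginx-0:1.20.1-6"}  # new builds not yet listed
print(visible_packages(available, stream_artifacts, {"nginx"}))
# only nginx-0:1.20.1-6 remains visible -> no upgrade offered
```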
We should be able to reproduce this with our test repositories that we carry in the Katello codebase. It is odd though because we have logic checking if the installed RPMs exist in a module stream and to limit the search to that module stream if so. So I’m thinking, in this case, the modular RPMs might have been reported as “modular” by Pulp even though they existed in no module stream.
Could it be that the newer packages were accidentally added to the 1.20 stream by Rocky?
Tbh, I'm currently trying to figure out where nginx 1.20.1-6.module_el8+12928+992082b2.x86_64 (the currently installed version) came from, because Rocky never built that version; I suspect it was once part of the EPEL 8 application streams.
(=> oh yeah, that's really what happened: this version came from EPEL, glory to dnf history!)
But that's also basically irrelevant (at least for dnf): the application stream config for dnf is set, so it also applies to the nginx:1.20 stream that appeared later.
Only this exact package version/artifact is no longer present in the module metadata, since EPEL removed it. Hrm, maybe that's the case here: the metadata for 1.20 got removed from the EPEL repo.
I keep a pretty close eye on the Rocky module metadata anyway (something something QA), so I'm fairly sure everything was in there until about two weeks ago, when the new version was built; the new metadata was missing at first and has been added since.
(And yes, there is also a new build of 1.20 now, which has likewise been added to the metadata, but since neither the 1.20 nor the 1.22 packages were recognized as module packages, dnf wanted to upgrade to 1.22.)
Gotcha, so it really sounds like a missing-metadata case rather than a "1.22 packages made it into 1.20" case. Your point about the EPEL module metadata would certainly attest to that.
Without doing a self-refresher on applicability, that makes sense to me logically – the applicable content should be based on what's in Library / Default Content View because, if something is needed but not in the CVV, you can pull it in via an incremental update or a new CV publish.
So it looks like the applicability in your screenshot above is correct now? It's 1.20.1 → 1.20.1 instead of 1.20.1 → an impossibly newer version. It seems those packages are actually all in the same module stream now.
And then dnf doesn’t pick it up because the packages are applicable but not installable.
So, rambles aside, it sounds like with the correct metadata in Rocky everything is looking better?
We’ve put that issue on our backlog, hopefully we can get to it soon.
Took me a moment as well just now (actually multiple moments… sorry, I didn't want to scrap all of it again, because there is valuable information in there…)
(Basically a tl;dr:) I think the key is applicable (erratum/package available in the top-level Library) vs. installable/upgradable (erratum/package available directly to the host via the assigned CV). As far as I understand it so far, the Packages (upgradable filter) and Errata tabs in the new host UI show installable/upgradable. (And apart from this metadata thing that broke out of that scheme, my systems are also consistent with that assumption.)
So, as dnf only knows the end result, it also only shows installable/upgradable.
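The distinction described above can be sketched as a small model (an illustration, not Katello's actual code; the function and parameter names are invented): "applicable" is computed against the Library/Default Content View, "installable" against the content the host's assigned CV version actually delivers.

```python
def classify_upgrades(installed, library_versions, host_cv_versions):
    """Classify available upgrades for one package on one host.

    Versions are comparable tuples, e.g. (1, 20, 2) for 1.20.2.
    """
    applicable = {v for v in library_versions if v > installed}
    installable = {v for v in host_cv_versions if v > installed}
    return applicable, installable

applicable, installable = classify_upgrades(
    installed=(1, 20, 1),
    library_versions={(1, 20, 1), (1, 20, 2)},  # Library already has the new build
    host_cv_versions={(1, 20, 1)},              # host's CV version does not yet
)
# applicable contains (1, 20, 2) while installable is empty: the host UI can
# show the package as applicable even though dnf, which only sees the end
# result, has nothing to upgrade.
```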
Examples that show the behavior (uh… found another thing, but it's minor):
Unfortunately not; at the time I took that screenshot, the packages were only applicable, not upgradable. (And dnf only saw the built metadata, so it could only show the correct state → no nginx update.)
Yes, now that everything has been promoted through Rocky's and my staging, everything is correct and (seemingly) flawless again.
But okay, once again thank you (and sorry) for taking the time!
Good catch there, I totally hadn’t realized that yet either! @jeremylenz you might be interested in what was found above, we used to be able to filter a host’s applicable errata by different lifecycle environments.
Interesting, so the installability of a package is determined by the repositories that are "bound" to the host. If a package is applicable to a host but not in the bound repositories, then it's just applicable. If the package is in the bound repositories, then it's installable. That means those nginx packages were somehow in your host's bound repositories.
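As a quick sketch of that rule (again an illustration with made-up names, not Katello's implementation): installability is just the intersection of the host's applicable packages with the contents of its bound repositories.

```python
def installable_packages(applicable, bound_repos):
    """Filter applicable package NEVRAs down to the installable ones.

    applicable: set of NEVRAs applicable to the host
    bound_repos: mapping of repo label -> set of NEVRAs that repo carries
    """
    in_bound = set().union(*bound_repos.values()) if bound_repos else set()
    return applicable & in_bound

bound = {
    "rocky8-baseos": {"libnghttp2-1.33.0-5.el8_8"},
    "rocky8-appstream": set(),
}
print(installable_packages({"libnghttp2-1.33.0-5.el8_8", "nginx-1.20.1-14"}, bound))
# nginx isn't in any bound repo -> it stays merely applicable, not installable
```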
If this pops up again, you can try this:
# foreman-rake console
::Host.find(<host id here>).content_facet.bound_repositories
That will return the repositories Katello thinks your host is consuming from. If the list is out of date, a subscription-manager repos call will update it. We also have a new feature where you can trigger a remote execution job to send over the new bound repositories from the host details page. I can't remember if that made it into Katello 4.10…
For errata, if the host is assigned to a non-default content view and/or lifecycle environment, the new host UI will have a toggle with two options. "Applicable" is equivalent to "Library Synced Content", while "Installable" will show the installable errata in the host's environment(s). My memory isn't perfect, but if I had to guess, I'd say we decided that displaying errata based on environments other than the host's isn't relevant to that host, so it isn't needed on that page.