Patch Management Guidance

Greetings, I’m currently trying to develop a patching plan for my homelab, ideally following best practices.
I’ll be managing the following systems:

  • ~6 AlmaLinux 8.5 hosts
  • 2 Ubuntu 20.04 LTS hosts
  • 1 Ubuntu 18.04 LTS host
  • 1 Raspberry Pi Debian host

I’m in the process of manually adding the product repos currently in use on the systems to the Foreman Content Library, such as:

  • AlmaLinux
  • AlmaLinux: 8.X BaseOS x86_64
  • AlmaLinux: 8.X AppStream x86_64
  • AlmaLinux: 8.X Extras x86_64
  • EPEL
  • EPEL 8 x86_64
  • EPEL 8 Modular x86_64
  • ElasticSearch
  • ElasticSearch EL8
  • Chrome
  • VSCode
    – etc.
    (I’ve not started to work on the Debian/Ubuntu repos yet.)

Most of these are configured with the On-Demand (content-only) download policy to limit the amount of disk space consumed, as I’ll only need a fraction of the packages.
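For reference, this is roughly how I’ve been creating those repos (a sketch; the org/product names and URL are just examples from my setup):

  # Create a repo with the on_demand policy so packages are only fetched when a client requests them
  hammer repository create --organization "Homelab" --product "EPEL" \
    --name "EPEL 8 x86_64" --content-type yum \
    --url "https://dl.fedoraproject.org/pub/epel/8/Everything/x86_64/" \
    --download-policy on_demand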

Now here’s the part where things are a bit gray for me.
It looks like you can subscribe systems to the Default Organization View and the “Library” environment, in which case they’ll always see the most recent packages, but is it preferred to use content views instead?

It looks like content views are intended to be manually managed by default, and in order for systems to receive newer patches, a new content view version containing those patches must first be published and promoted to those systems’ lifecycle environment. This has a fair amount of overhead, though, and would require manually publishing the content views at least once a month or so.
There do appear to be some scripts floating around out there that can automate this via hammer or Ansible, although I’m curious why this isn’t designed into the product and whether automating it via scripts is the correct approach. Is there perhaps a better practice for doing this?
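Something like this hammer call from cron is, I assume, what such a script boils down to (org and CV names are placeholders):

  # Publish a new content view version picking up whatever has synced since the last publish
  hammer content-view publish --organization "Homelab" --name "os-main"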

“Ideally” I’d like to have 3 groups of systems.

  • High-risk systems that receive critical patches shortly after release (Library environment?)
  • Lower-risk systems that receive patches around the 7th of the month (Test environment)
  • Production systems that receive patches around the 14th of the month (Prod environment)

What I “think” I need to do is:

  • Set up Library/Test/Prod environments and activation keys.
  • Create an automated process for publishing content that does the following (rough cron sketch after this list).
  • Have two content views:
    – A “current” one that gets published once a week via script
    – A Test/Prod one with the following automation:
    – Day 1 of the month, a new CV version is published
    – Day 7 of the month, that version is promoted to Test
    – Day 14 of the month, that version is promoted to Prod
    – and repeat
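In cron terms I picture the Test/Prod half looking roughly like this (a sketch; names are placeholders and the promote step is wrapped in a small helper):

  # /etc/cron.d/katello-patching: rough sketch of the monthly cycle
  0 6 1 * *  root  hammer content-view publish --organization "Homelab" --name "os-main"
  0 6 7 * *  root  /usr/local/bin/promote-latest.sh os-main Test
  0 6 14 * * root  /usr/local/bin/promote-latest.sh os-main Prod

with promote-latest.sh being something like:

  #!/bin/bash
  # promote-latest.sh <content-view> <environment>: promote the newest CV version
  # (assumes a hammer recent enough to support --fields)
  ORG="Homelab"; CV="$1"; ENV="$2"
  # sort -rV puts the highest version number first regardless of list order
  VER=$(hammer --output csv --no-headers content-view version list \
        --organization "$ORG" --content-view "$CV" --fields version | sort -rV | head -n1)
  hammer content-view version promote --organization "$ORG" \
        --content-view "$CV" --version "$VER" --to-lifecycle-environment "$ENV"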

I’m not sure if this is a good or sustainable practice, though?
I also have the following questions:

  • Would patch management for Ubuntu require a full mirror of their repositories on the Foreman server/smart proxy?
  • What’s the best way to handle automatic product assignment for less common packages such as EPEL, Chrome, VSCode, etc.? I’ve been doing this manually via Content Hosts, but is there an easier, more dynamic way to do it, short of generating a large number of activation keys for each possible combination?
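The closest thing I’ve found so far is stacking several activation keys at registration time, since subscription-manager accepts a comma-separated list (key names are just examples), but that still means maintaining one key per product:

  # Combine a base key with small per-product keys instead of one key per combination
  subscription-manager register --org="Homelab" \
    --activationkey="ak-prod-base,ak-epel,ak-chrome"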


I would say that, in general, automatically generating content views for production on a specific date each month is a bad idea.
A content view should only be published to production once it has been tested against the application, and who knows whether that takes a day or a couple of months.
I instead make errata the driver for new CVs: if a critical patch is released that I want to deploy, I create a CV and start testing.
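As a rough illustration of that trigger (the hostname and names are placeholders, and grepping the errata titles is only a crude filter):

  # If a host has applicable critical errata, publish a new CV version and start testing
  hammer host errata list --host web01.example.com | grep -i critical \
    && hammer content-view publish --organization "Homelab" --name "os-main"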

2 Likes

If you want stricter control over which packages are available to your clients, go for content views. If you’re just using it to make the latest content available, the Library environment with the default content view is the simplest way to do it.

There do appear to be some scripts floating around out there that can automate this via hammer or Ansible, although I’m curious why this isn’t designed into the product and whether automating it via scripts is the correct approach. Is there perhaps a better practice for doing this?

I believe integrating some sort of sync-plan equivalent for CVs has been on the radar for a while, but it has always dropped out of releases for lack of man-hours to finish the task amid other priorities. A recurring publish, or an automated publish whenever new content is added to a CV’s component repositories, would be a very handy feature. +100 to that.
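Until then, the theforeman.foreman Ansible collection can fill the gap; a minimal sketch, assuming placeholder credentials and names, that publishes a new version each time it runs:

  ---
  # publish-cv.yml: publish a new content view version (run from cron or AWX on a schedule)
  - hosts: localhost
    gather_facts: false
    tasks:
      - name: Publish a new version of the content view
        theforeman.foreman.content_view_version:
          username: admin
          password: changeme
          server_url: https://foreman.example.com
          organization: Homelab
          content_view: os-main
          state: present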

  • Would patch management for Ubuntu require a full mirror of their repositories on the Foreman server/smart proxy?

I know some folks over at ATIX and other users in the community have been planning to add errata support for Debian content for a while. I am not sure where that is in the pipeline. I’d refer to them for best practices and limitations of Debian content management in Katello.
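For what it’s worth, deb repositories in Katello can be limited to specific releases/components/architectures at creation time, which keeps the mirror well short of a full one (URL and names are examples):

  # Sync only focal/main for amd64 instead of mirroring everything
  hammer repository create --organization "Homelab" --product "Ubuntu" \
    --name "focal-main" --content-type deb \
    --url "http://archive.ubuntu.com/ubuntu/" \
    --deb-releases focal --deb-components main --deb-architectures amd64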

The only content-view method of doing this I can think of is composite content views. You’d put the less-used products in separate CVs and the common set of packages in others, then create composite CVs from combinations of CVs instead of combinations of products. I don’t know if that suits your use case.
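Roughly like this, assuming component CVs named cv-base and cv-epel already exist (names are placeholders):

  # Create a composite CV and attach components that always follow their latest published version
  hammer content-view create --organization "Homelab" --name "ccv-web" --composite
  hammer content-view component add --organization "Homelab" \
    --composite-content-view "ccv-web" --component-content-view "cv-base" --latest
  hammer content-view component add --organization "Homelab" \
    --composite-content-view "ccv-web" --component-content-view "cv-epel" --latest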

Hopefully, other users in the community who have similar workloads on katello will have better ideas around this.

2 Likes

What I do (at work) is the following:

Each month, I create a new content view.

Once that content view is created, I promote its most recent version to “Development”.
Then the previous (now a month old) content view gets promoted to “Production”.

My idea is this: it gives our development team a chance to make sure that whatever new patches recently arrived didn’t break anything, and to do so before production systems are updated.

Development servers are therefore always reasonably current.
Production servers are, of course, a month behind on average.
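In hammer terms the monthly rotation amounts to something like this (I’m sketching it as versions of one CV; names and version numbers are placeholders):

  # This month's version goes to Development; last month's moves on to Production
  hammer content-view publish --organization "Homelab" --name "os-main"
  hammer content-view version promote --organization "Homelab" \
    --content-view "os-main" --version "5.0" --to-lifecycle-environment "Development"
  hammer content-view version promote --organization "Homelab" \
    --content-view "os-main" --version "4.0" --to-lifecycle-environment "Production"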

Everyone has their own strategy; that’s how I have my environment set up.

2 Likes

Having machines attached directly to Library is a matter of taste.

I would say the canonical workflow is to have three environments (Dev/Test/Prod) and not use Library directly. Whenever the developers want new packages in Dev, they create a new CV version and promote it. Once they are happy with some state, it can go to Test, and only if all tests are successful is a version promoted to Prod.

While this is the platonic ideal of how things are meant to work, it is ultimately up to you and your needs whether you want that level of stepwise stabilization. If it works better for you to attach hosts directly to Library, then that is fine too.

Regarding Debian/Ubuntu: if you only sync repos on_demand, you run the risk of packages no longer being available once they have been removed from the upstream repo (which I believe is something that happens).
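If that risk matters for a given repo, switching it to the immediate policy makes Katello download and keep every package (names are placeholders):

  # Keep local copies so packages survive removal from the upstream repo
  hammer repository update --organization "Homelab" --product "Ubuntu" \
    --name "focal-main" --download-policy immediate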

On an unrelated Debian/Ubuntu note, we recently found a horrible performance issue (Implementation of does_batch in DeclarativeContent is wrong · Issue #2557 · pulp/pulpcore · GitHub) affecting large APT repo syncs. You may want to wait for the relevant fix to release…

1 Like

Yeah, syncing the Ubuntu repo hogs the Foreman server’s CPU for an hour or so.

I’m going to have to think a bit about the approach. Compliance standards give us a pretty short deadline for applying patches. I’m having flashbacks to the days of manually approving updates on a WSUS server bi-weekly.

1 Like

The fix for Implementation of does_batch in DeclarativeContent is wrong · Issue #2557 · pulp/pulpcore · GitHub should speed up those syncs by a factor of three or so. It is literally just one line of code changed that effectively prevented batching. With the bug, instead of sending one DB query per 500 units, it sends 250 queries of two units each, which just kills the DB… The fix should be available with the next pulpcore bugfix release for Katello 4.3 and up.