Katello Patches via remote job execution and foreman_monitoring

Problem:
We would like to use Foreman + Katello for end-to-end patching of our hosts. Because each host's reboot requirement has to be evaluated, I anticipate this needs to be done via a (scheduled) remote execution task.

This process would include:

  • setting appropriate downtime for a host using foreman_monitoring;
  • executing any hostgroup-specific application shutdown tasks;
  • upgrading packages via yum;
  • evaluating whether a reboot is required, and rebooting if so;
  • performing application startup tasks;
  • ending downtime for a host via foreman_monitoring.

Expected outcome:

I would expect a job template to be able to cater to this task, using the host's hostgroup to include specific snippets that define application startup and shutdown tasks.
Host reboot requirements can be evaluated easily enough using needs-restarting.

Foreman would issue any host downtime as it iterates through each applicable host.
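A sketch of what such a job template could look like (ERB over a shell script, as remote execution job templates are rendered). The snippet-naming convention and the `@host.hostgroup` lookup are assumptions for illustration, not an existing template; the downtime steps are left out because, as noted later in the thread, foreman_monitoring actions are not currently exposed to remote execution:

```erb
<%#
name: Patch run with reboot evaluation
job_category: Maintenance
provider_type: SSH
%>
<%# hostgroup-specific shutdown tasks; snippet names are an assumed convention %>
<%= snippet("#{@host.hostgroup}_app_shutdown") if @host.hostgroup %>

yum -y update

# needs-restarting -r (yum-utils) exits 1 when a full reboot is required
if ! needs-restarting -r; then
  shutdown -r +1 "rebooting after patching"
fi

<%= snippet("#{@host.hostgroup}_app_startup") if @host.hostgroup %>
```

One caveat with this shape: if the host actually reboots, nothing after the shutdown command runs in the same job, so in practice the startup snippet would need to run from a boot-time unit or a follow-up job.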

Foreman and Proxy versions:
using Foreman 1.16
Katello 3.5

Foreman and Proxy plugin versions:

tfm-rubygem-foreman_hooks-0.3.14-1.fm1_16.el7.noarch
tfm-rubygem-foreman_openscap-0.8.3-2.fm1_16.el7.noarch
tfm-rubygem-foreman_remote_execution-1.3.3-1.fm1_16.el7.noarch
tfm-rubygem-foreman_graphite-0.0.3-3.fm1_11.el7.noarch
tfm-rubygem-foreman_ansible-1.4.5-1.fm1_16.el7.noarch
tfm-rubygem-foreman-tasks-core-0.2.0-1.fm1_16.el7.noarch
tfm-rubygem-foreman-tasks-0.10.0-2.fm1_16.el7.noarch
tfm-rubygem-foreman_remote_execution_core-1.0.5-1.fm1_16.el7.noarch
tfm-rubygem-foreman_docker-3.2.1-1.fm1_16.el7.noarch
tfm-rubygem-foreman_monitoring-0.1.1-1.fm1_16.el7.noarch
tfm-rubygem-foreman_ansible_core-1.1.1-1.fm1_16.el7.noarch

Other relevant data:
There appears to be an issue with foreman_monitoring and its ability to set downtimes, which has been logged in this GitHub issue: https://github.com/theforeman/foreman_monitoring/issues/25

While not specific to this use case, this functionality would need to work in order to integrate it into the job task.

Questions

  1. The remote execution tasks are executed on each remote system. While possible, I would like to avoid having the remote host issue its downtime to the Icinga2 API directly. My preference would be to have the Foreman host issue this downtime request locally, before it commences the remote execution.

  2. How can I find which functions of the foreman_monitoring plugin are available to be exposed and used via a job template?

  3. Is there a method to retrieve a before/after content host patching compliance report from existing data? The content hosts view is not really suitable. This report would be used to track progress and report patching compliance for a group of hosts.
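Regarding question 1, issuing the downtime from the Foreman host itself amounts to a direct call to the Icinga2 API's schedule-downtime action. A minimal sketch follows; the endpoint, credentials, target host and window length are placeholder assumptions, and the actual curl call is shown commented out (the script only prints the JSON payload it would send):

```shell
#!/bin/bash
# Sketch: schedule an Icinga2 downtime from the Foreman host before
# kicking off remote execution. Host, credentials and duration are
# placeholders, not values from this thread.
ICINGA_API="https://icinga.example.com:5665"   # assumed API endpoint
TARGET="client01.example.com"                  # host being patched
START=$(date +%s)
END=$((START + 3600))                          # one-hour patch window

PAYLOAD=$(cat <<EOF
{
  "type": "Host",
  "filter": "host.name == \"${TARGET}\"",
  "start_time": ${START},
  "end_time": ${END},
  "author": "foreman",
  "comment": "patch window"
}
EOF
)

# The actual call (commented out so the sketch runs without a live Icinga2):
# curl -k -s -u apiuser:apipass -H 'Accept: application/json' \
#   -X POST "${ICINGA_API}/v1/actions/schedule-downtime" -d "${PAYLOAD}"
echo "${PAYLOAD}"
```

Ending the downtime after the run would be the corresponding remove-downtime action against the same API.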

No doubt the discussion of this might raise further questions…

Thanks!

Andrew

I mentioned this as a possible workaround relating to the foreman_monitoring functionality integration. Should this not be possible, I would need to interact with the Icinga API directly.

Hi Andrew,

First of all, full disclosure: I am the author of the monitoring plugin but am short on time right now. I hope we’ll fix the bug soon.

Let me briefly describe the workflow we’re using to patch most of our systems. We actually prefer a pull approach over a push approach, so we basically have a cron job running on each system that does the patching for us. Foreman serves as a distributed lock manager (via a custom plugin that we’ll open source soon, probably in approximately two weeks’ time) and makes sure that only one host out of a cluster patches and reboots at a time. This plugin can interact with the foreman_monitoring plugin and issue a downtime when the host checks in.
What do you think about that approach?
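For readers unfamiliar with the pull model, a rough sketch of the per-host cron script might look like the following. This is an assumed shape, not Timo's actual implementation: the Foreman-based distributed lock comes from the not-yet-released plugin and is omitted here, so only a local flock guard is shown, commented out along with the invocation.

```shell
#!/bin/bash
# Pull-mode patch run, invoked from cron on each client host (sketch).
# The Foreman-based distributed lock (custom plugin, not yet released)
# is omitted; flock would only prevent overlapping runs on this host.

patch_run() {
  yum -y update || return 1
  # needs-restarting -r (yum-utils) exits 1 when a full reboot is required
  if ! needs-restarting -r; then
    shutdown -r +1 "rebooting after patching"
  fi
}

# Typical invocation under a local lock (commented out in this sketch):
# ( flock -n 9 || exit 0; patch_run ) 9>/var/lock/patch-run.lock
```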

Right now, foreman_monitoring actions are not available in remote execution at all. That would definitely be an RFE. Sorry.

Cheers,

Timo

All good Timo, appreciate the full disclosure and the update on your development roadmap. We have an alternative available to us via Ansible and Foreman, so it’s not a show stopper as we can skin this cat another way. I’m not sure a pull approach would work in our environment, but it’s food for thought.

Cheers,

Andrew