Good day folks!
I’m looking for some advice from those of you with large, established environments who have implemented the “Roles and Profiles” pattern using Foreman as the means to supply parameters. I’m working with Foreman 1.18/Katello 3.7 at $day_job and Foreman 1.19 at home (home prod/lab), using Puppet 5 in both places. At $day_job I’m building a reference implementation that will include Foreman/Katello, with Ansible integration for multi-node orchestration and some one-off tasks, while Puppet 5 handles the ongoing configuration and compliance management. I’m currently implementing the roles and profiles, and need to support multiple flavors of *nix as well as Windows. The git repo contains all Puppet modules used, so there will be consistent point-in-time deployments of the config management code. There would be a second git repo for Ansible playbooks, and I’d similarly like to leverage Foreman’s dashboard there for setting variables and parameters. I’m also trying to get the foundations in place to move towards containers.
I’m struggling with where to implement parameters, especially since I’m modernizing code that was originally developed in the Puppet Dashboard 1.3/Puppet 2 era. I want as much of the Puppet modules (and Ansible playbooks) as possible “fixed” in git-managed code, exposing only those parameters that may change between deployment environments. I want to leverage the Foreman ENC to supply the parameters, and to reuse data that already exists in the Foreman dashboard (like the DNS servers associated with a subnet). I also want to be able to set parameters by domain, location, or subnet for things like log servers, metrics collectors, etc. I’ve currently implemented a filter in ignored_environments.yaml to hide all classes from the Foreman dashboard except those in the role and profile modules.
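For anyone following along, such a filter can be expressed as a regex in ignored_environments.yaml on the Foreman server; this is a sketch using a negative lookahead to hide everything that is not a role or profile class (the exact file location and whether lookaheads suit your class naming will depend on your setup):

```yaml
# ignored_environments.yaml (sketch): hide every Puppet class from class
# import except those under the role:: and profile:: namespaces.
:filters:
  - !ruby/regexp '/^(?!role($|::)|profile($|::))/'
```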
The design pattern I’m currently working towards would look like the following:
An example role; the goal is that each system has exactly one assigned role:
class role::example inherits role {
  include profile::example
}
The role class itself would include the base profile (common core services configuration and OS baseline configuration):
class role {
  include profile::base
}
Profile base would include common dependency classes, then pull in OS-specific classes based on Facter data:
class profile::base (
  Array $nameservers = $::nameservers,
  String $domain = $::domain,
  $search = $::search,
  Array $ntpservers = $::ntpservers,
) {
  # Common for all OSes
  include ::stdlib
  # Include OS-specific customization
  include "profile::${downcase($::kernel)}::base"
}
There would be directories for linux, windows, solaris, aix, etc., each with its respective OS configuration, like this Linux example:
class profile::linux::base (
  Array $nameservers = $::nameservers,
  String $domain = $::domain,
  $search = $::search,
  Array $ntpservers = $::ntpservers,
  String $relayhost = $::relayhost,
  String $syslog_server = $::syslog_server,
  String $collectd_server = $::collectd_server,
  String $puppetmaster = $::puppetmaster,
) {
  # Include Linux distribution-specific configuration parameters
  include "profile::linux::${downcase($::operatingsystem)}"
  # Common includes for all Linux systems
  include profile::linux::puppet
  include profile::linux::dnsclient
  include profile::linux::time
  include profile::linux::logging
  # ...
}
The idea is that anything that needs to be customized for CentOS, RHEL, SuSE, Debian, Ubuntu, etc. lives in the down-cased $::operatingsystem file, and each core service carries some decision logic to configure either the systemd version of the service or the traditional one. You’ll note that I have nameservers and ntpservers parameters in both profile::base and profile::linux::base; part of my uncertainty is where to place parameters so that operations doesn’t have to scatter Smart Class parameter overrides all over the Foreman dashboard (and lose track of what is applied where). My hope was to put the core settings (DNS, NTP, domains, etc.) in profile::base and handle OS-specific configuration in the OS classes, but maybe I’m actually creating more sprawl that way and should just keep all parameters in each OS-specific class… At this point I’m still working on the infrastructure-level classes, and haven’t gotten to workload-specific classes yet, which will be a whole different pile of fun…
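One way to avoid declaring the same parameters twice (and needing matching overrides on both classes) would be to keep them only on profile::base and have the OS-specific classes read them by fully qualified reference instead of re-declaring them. A sketch of that variant, assuming profile::linux::dnsclient takes a nameservers parameter (it may not in your modules):

```puppet
# Sketch: shared parameters live only on profile::base; the OS profile
# references them rather than re-declaring them, so only one Smart Class
# override point exists in the Foreman dashboard.
class profile::linux::base {
  include profile::base

  # Qualified access to values already bound on profile::base.
  $nameservers = $profile::base::nameservers
  $ntpservers  = $profile::base::ntpservers

  # Hypothetical consumer class; substitute your real class/parameter names.
  class { 'profile::linux::dnsclient':
    nameservers => $nameservers,
  }
}
```

The trade-off is that the OS classes become resource-like declarations instead of simple includes, but operations only ever overrides parameters in one place.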
So, currently, I have Smart Class parameter overrides for things like nameservers, which I set to type ‘array’ with a default value of ["<%= @host.subnet.dns_primary %>","<%= @host.subnet.dns_secondary %>"]. Other values I set as parameters on subnets, domains, and locations as needed, because they are specific to a place/network/group. I also have some global parameters, like ‘hostgroup_name’ set to <%= @host.hostgroup.name %>, so there is a bit of scatter on that front as well. All of this is currently managed by scripts that drive hammer_cli to set the parameters/overrides/settings, and it can be documented properly once the design pattern is fully established.
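For reference, seeding one of those overrides with hammer looks roughly like this (option names vary a bit between hammer versions, so treat it as a sketch rather than a copy-paste command):

```
# Mark the nameservers parameter of profile::base as overridable and give
# it a templated default pulled from the host's subnet in Foreman.
hammer sc-param update \
  --puppet-class profile::base \
  --name nameservers \
  --override yes \
  --parameter-type array \
  --default-value '["<%= @host.subnet.dns_primary %>","<%= @host.subnet.dns_secondary %>"]'
```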
Puppet 4 and 5 seem to handle class inheritance differently than Puppet 2 and 3 did, and most of the examples out there reference Hiera lookups rather than pulling from Foreman parameters. Before I go too far down my proposed path (and maybe paint myself into a corner), I’m hoping someone can give me a sanity check on what I’m currently doing based on their own experience. This implementation has the potential to get big and spread to a number of disconnected environments, so I’m trying to put a solid procedural foundation and class framework in place to accommodate that, with a good flow from the engineering side to the ops side.
I would greatly appreciate any feedback you all would care to share! Thanks in advance for your time and comments!