Hi everyone! We’re looking at using Foreman for provisioning, so I’m trying to put together a design document for our infrastructure. The main concern I have is how to appropriately handle scaling. I’ve read through Foreman :: Journey to High Availability (and also watched the YouTube video), but since that was 8 years ago, I’m wondering how relevant it still is for the current version of Foreman. For example, in that document they used the memcached plugin with separate memcached servers for the Rails cache, but this PR adds support for Redis as a backend. Should that be used instead of memcached now? Also, is there a rule of thumb for determining how many Foreman nodes you need?
For a bit of background on our environment, we currently have about 3500 nodes that we need to manage. Most of those are in the same datacenter, but we have two other datacenters that Foreman will need to expand to in the near future. We’re almost exclusively an Ubuntu shop, so I don’t believe we will be using Katello (someone please correct me if this is wrong). We use SaltStack for our config management, so we’ll want to make use of the Salt plugin. Any use of Puppet will be strictly limited to whatever Foreman needs to manage itself (the docs say it’s possible to set up Foreman without Puppet, but all I’ve found online is people who tried and weren’t able to get it to work).
You can see a rough estimate in Tuning Performance of Foreman. If you manage 1000+ hosts, throwing an extra CPU core and a bit of memory at your Foreman instance and Smart Proxies is IMHO very reasonable.
Thank you very much for the reply, @maximilian and for pointing me to the correct documentation. Since I haven’t been using Katello, I didn’t think to look at the Katello-specific docs.
From reading what you’ve linked, it only mentions load-balancing the proxies and not the main Foreman server. Does this mean that the only real benefit of having multiple Foreman master servers would be redundancy/HA as opposed to increased performance?
Regarding Katello, the documentation shows that it must be installed on an EL operating system. Since we don’t have any EL systems, we would prefer installing Foreman on Ubuntu, but would be open to the idea of having a one-off EL system if Katello provides enough of a benefit. I’ve only found a few blurbs about what Katello adds, but I’m not sure I fully understand. Can you speak at all to what is gained by adding Katello?
I am not sure if you can/should run the Foreman server in HA. I believe multiple Smart Proxies suffice, because they can handle provisioning and configuration management, and deliver content. I don’t have any first-hand experience, though.
Yes, Foreman+Katello only runs on EL8. The benefit of Katello is lifecycle management. You can sync upstream repositories, version them, include or exclude specific packages, and make them consumable by your managed hosts (and more). You gain a lot of control and independence from public mirrors.
Example: If one of your Smart Proxies manages 100 hosts, then you’ll only have to sync packages from the internet/an (outside) mirror once, and therefore save time and network resources.
If you’re interested in security errata for Debian and Ubuntu, you should look into orcharhino. It’s an enterprise product based on Foreman and Katello and is able to provide security errata for Debian and Ubuntu. See orcharhino.com. Disclaimer: I work for ATIX and I’d be happy to connect you.
Making Foreman highly available is possible, and using Redis instead of memcached is the way to go, as Redis is already used for other components. But typically I do not need HA here: in most environments availability is high enough without it, and skipping it avoids making the environment more complicated. It also gets more and more complicated with every plugin you add, as not all of them are designed with HA in mind.
So I would also recommend scaling via Smart Proxies instead of HA.
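If you do end up switching the Rails cache to Redis, the installer should be able to do it in one step, roughly like this. I am quoting from memory, so treat the option name and syntax as a sketch and verify them against `foreman-installer --full-help` on your version:

```
# Switch Foreman's Rails cache to a Redis backend (local Redis on the Foreman host).
# Option name/syntax may differ slightly between Foreman versions.
foreman-installer --foreman-rails-cache-store type:redis
```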
I saw you asked about an external Redis in another thread, and yes, this was exactly why I added this for a customer; it is just a matter of the URL used.
The number after the slash is the Redis database number; /4 is the one chosen when making Redis the default Rails cache. You can use any number you want, as the database is created on the fly with the first write; just use different ones for different workflows. /4 was picked to avoid conflicts with other software, which are unlikely, but better safe than sorry. For other use cases in Foreman/Katello, other numbers are already in use, like Pulp (/8) and Dynflow (/6).
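To make the URL part concrete: the standard Redis URL format is `redis://<host>:<port>/<database>`, so pointing the cache at an external Redis is just a different host in that URL (the hostname below is made up; the database numbers are the ones mentioned above):

```
# Rails cache on an external Redis host, database /4
redis://redis01.example.com:6379/4

# Databases already used by other Foreman/Katello components
redis://localhost:6379/6   # Dynflow
redis://localhost:6379/8   # Pulp
```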
Have you experienced increased lag after setting up the Redis cache on another server?
Do you know how to check what the source of the lag is?
On which host should Redis ideally be configured?
The same host as the database? Can it be another host?
We did not see any lag, but the system was already using memcached before.
In this environment we had a dedicated server. Running it on the same host as the database is possible, of course, but then you potentially have two services competing for memory.
As for how to check and find the source of the lag, my knowledge of Redis is not that deep. What I would try is redis-cli: see whether there is already some lag when accessing Redis on the same system, to identify whether or not the lag is caused by the network in between.
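Something along these lines, for example (the hostname is a placeholder; `redis-cli --latency` keeps sampling the round-trip time until you stop it):

```
# Quick reachability check from the Foreman server to the Redis host
redis-cli -h redis01.example.com -p 6379 ping

# Baseline: latency measured locally on the Redis host itself
redis-cli -h 127.0.0.1 -p 6379 --latency

# Latency measured from the Foreman server across the network
redis-cli -h redis01.example.com -p 6379 --latency
```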
Thanks for the reply. It turned out that the lag was caused by the Redis server not being reachable (firewall). After adding an extra firewall rule, the problem was fixed!
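In case anyone else runs into this: the fix was essentially just allowing the Foreman server to reach the Redis port (6379/tcp by default) on the Redis host. Depending on the firewall in use, the rule looks something like this (the source IP is only an example):

```
# Ubuntu with ufw: allow only the Foreman server to reach Redis
sudo ufw allow from 192.0.2.10 to any port 6379 proto tcp

# EL with firewalld
sudo firewall-cmd --permanent --add-port=6379/tcp
sudo firewall-cmd --reload
```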