RFC: Foreman Production Installation via Containers and Podman Quadlets


Problem Statement

In order to run Foreman on a given operating system, hundreds of native packages need to be created and tested, and this effort must be repeated for every operating system the project wishes to support. Across the ecosystem of projects included in Foreman installations, multiple languages are involved: Ruby, Python, and Java, to name a few. For each new operating system, the Foreman project must ensure that each of these has a corresponding runtime and set of dependencies available, which makes onboarding new operating systems cumbersome.

The current installation, after many upgrades, often suffers from lingering packages that trigger security scanners. There have also been cases where projects sharing a runtime language but requiring different versions of it, for example Pulp and Ansible, have caused installation conflicts.

Proposal

Align on using containers as our deployment artifact, taking a build-once, deploy-many approach. This would cut down on our build overhead while opening up more deployment possibilities. To maintain continuity, the containers would initially be built from RPMs instead of from source. This lets the project keep the same build artifacts for both traditional and container-based installations while the container-based installation is developed.
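
As a rough illustration of the build-once-from-RPMs idea, a hedged sketch of what such a build could look like; the base image and build steps are assumptions for illustration, not the project's actual build definition:

# Hypothetical Containerfile: install the same Foreman RPMs into an image
# instead of onto the host, so one artifact serves many target systems.
cat > Containerfile <<'EOF'
FROM quay.io/centos/centos:stream9
# Enable the Foreman repositories here (release RPM omitted for brevity),
# then install from the same RPMs a traditional install would use:
RUN dnf install -y foreman && dnf clean all
EOF
podman build -t foreman:rpm-based .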

The approach will be to first encapsulate our current systemd services as containers running as Podman Quadlets, which can provide a common service runtime framework. This will make the deployment more operating system agnostic, avoid clashes with the host operating system, and provide a path to a production-level Kubernetes deployment.
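
For readers unfamiliar with Quadlets, here is a minimal sketch of the shape this takes; the unit name, image reference, port and volume are illustrative assumptions, not the repository's actual definitions:

# A hypothetical Quadlet file: Podman generates a systemd unit
# (foreman.service) from it at daemon-reload time.
cat > /etc/containers/systemd/foreman.container <<'EOF'
[Unit]
Description=Foreman application server (illustrative)

[Container]
Image=quay.io/foreman/foreman:nightly
PublishPort=443:443
Volume=foreman-data:/var/lib/foreman:Z

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload          # Quadlet generates foreman.service
systemctl start foreman.service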

There is ongoing research into this idea in the theforeman/foreman-quadlet repository on GitHub. That repository uses lightweight Ansible for orchestration, allowing development to focus on the container and Quadlet pieces.

Considerations

The research up to this point has identified some major areas of additional investigation and open questions that will need to be answered. These are tracked as issues; some major ones are worth highlighting here:

Installer

A container and Quadlet model introduces a new way of thinking about our deployment and the services running within it. This raises the question of whether the existing Puppet-based foreman-installer should be adapted, or whether we should start with a fresh installer focused on this deployment model and free of the past. Evolving the current installer allows for progressive enhancement, but at the cost of coupling multiple deployment methods together. A new installer lets us start with a fresh, container-native approach aimed at simplicity, but at the cost of having to upgrade existing installations and translate the current installer's behavior.

Upgrades of existing setups

Regardless of the installer decision, we will need to research and plan what the upgrade path for existing installations will look like.

Development Environment

This needs to be built with developers in mind, so that equivalent environments can be created for both production and development. The goal is for the two to mirror each other closely while providing a pleasant and efficient environment for developers.

Packaging

The containers used for research were built from our RPM packages, but they can run on other operating systems (such as Debian or even SUSE), as long as that operating system can run containers. One could argue this makes dedicated (Debian) packaging obsolete, saving time and effort. Later on, RPM packaging could also be reconsidered.
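
To make the portability point concrete, a sketch of running the same image on a Debian host; the image reference is an assumption:

# Podman comes from apt here rather than dnf, but the image and the
# Quadlet files are identical to those used on an EL host.
sudo apt-get install -y podman
podman run --rm quay.io/foreman/foreman:nightly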

Smart Proxy

The initial container work will focus on Foreman and not the smart-proxy itself. The smart-proxy has many considerations around how it interacts with external services and handles plugins, so it will need its own dedicated exploration.


+1 to using containers as the distribution mechanism for Foreman bits.

Do we have a plan right now for how we will sign the images/artifacts? Can we expect to aim towards SLSA 3 during the RFC process?
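
For context on the question, one possible building block (not something the RFC has decided on) is Sigstore's cosign, which would cover the signing part; SLSA provenance attestations would be a separate concern:

# Sign a published image with a project-held key, and verify it on the
# consumer side before deployment (image reference is illustrative):
cosign sign --key cosign.key quay.io/foreman/foreman:nightly
cosign verify --key cosign.pub quay.io/foreman/foreman:nightly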


Oh god, yes please. This would make life much easier for both the maintainers and users.


I’ll say that I’m in favor of this idea, especially given the current Perforce & Puppet open source issues. While there will be a fork, I’m gradually coming around to using containers. The Quadlet model is IMHO very native for existing sysadmins used to systemd. I have some ideas around an installer.

We should learn from our existing installer. It had some great things and some not so great.

What I really liked is that the Puppet modules provided an explicit interface that could be introspected. The installer could then use these. It did end up with tons of installer parameters that were hard to navigate.

In our new environment I think the containers should provide that clear boundary. This isn’t about a specific technical implementation, but I’d like each container to clearly describe how you tune its behavior. Just dumping in some config files is IMHO not a well-defined interface, because it won’t work well when you upgrade components. For example, Tomcat’s server.xml might change between versions, so if you expose that as an “API” to users, you’re going to have a bad time in the future.
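
A minimal sketch of what such a boundary could look like: documented environment variables as the tuning interface, with the container translating them into whatever its internal config format is for that version. The variable names here are hypothetical, not an existing Candlepin interface:

# Tuning happens through a declared, stable set of variables rather than
# a bind-mounted server.xml that may change shape between versions.
podman run --rm \
  -e CANDLEPIN_DB_HOST=db.example.com \
  -e CANDLEPIN_DB_PORT=5432 \
  -e CANDLEPIN_LOG_LEVEL=debug \
  quay.io/example/candlepin:latest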

My suggestion is that now, after a breadth-first proof of concept (establishing all services in a minimal fashion), we look depth-first and deliver a single service at production-like quality. I’d start with Candlepin, because it’s already rather isolated.

To kick this off, I’ve opened Candlepin configurability · Issue #68 · theforeman/foreman-quadlet · GitHub to explore a well-defined interface.

Then there are 2 sets of installer options that we document: external DB and loggers. For the former I opened Idea: use PgBouncer to handle PostgreSQL connections · Issue #67 · theforeman/foreman-quadlet · GitHub so we separate out the concern from the container itself. The latter is in Handle debug logging for troubleshooting · Issue #69 · theforeman/foreman-quadlet · GitHub. I’d prefer to have the discussions in the issues to avoid exploding this thread.
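
For illustration only, the shape of the PgBouncer idea from issue #67: application containers connect to a local pooler, and the real database location becomes purely a deployment concern. All values below are made up:

# Minimal pgbouncer.ini sketch; Foreman/Candlepin would always point at
# 127.0.0.1:6432, wherever PostgreSQL actually lives.
cat > /etc/pgbouncer/pgbouncer.ini <<'EOF'
[databases]
foreman = host=db.example.com port=5432 dbname=foreman

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
EOF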


Thank you for putting this RFC together; a big +1, with some comments for consideration.

I really welcome the concept of a container-based deployment, and with a few external challenges such as Puppet/Perforce, now seems to be the right time to either prepare for the implementation or execute it.

I like the GitHub threads you’ve started and the concept of moving one build at a time. I look forward to contributing to this work, as it’s an area I’ve long tried to work on on my own, and it also opens the door to wider possibilities.

There are a few areas I’d welcome discussion on. Some of them may be too bold or too wide in scope, but moving to a containerised deployment seems the right time to either solve these issues or be a launchpad to solving them as a fast follow:

1.) How modular does/should the Foreman environment get, and how few/many containers make up the Foreman service? This would allow for scalability options, or even component-based upgrades/deployments in isolation.

2.) Allowing for models such as Kubernetes deployments. Some environments will have already picked a containerisation platform; which of these are supported, and why and how are they supported or rejected?

3.) Externalising configuration as much as possible into the database or something else. I already see references to some of the issues I’m thinking about in the RFC thread, such as the dumping of configuration files, and I look forward to contributing to the PoC you suggest in this thread. Externalising configuration and state to the right level is a key outcome and benefit of this.

4.) Solving the ephemeral cloud deployment pattern. Being able to run Foreman in a cloud-native environment and pattern would be a huge benefit to unlock, and aligning this with a container-based deployment seems the right place to do it. There are various patterns and issues with running Foreman in a cloud-native environment such as AWS, not all of them Foreman’s fault (such as maintaining Puppet CA certificate state in an ephemeral environment). But externalising configuration outside of the container, using approaches such as the PgBouncer suggestion to manage the database interface, and making the containers modular enough to deploy or not deploy a component, or to point at an endpoint rather than a script, opens the door to AWS and GCP (less so Azure and other more traditional virtualisation clouds) and to services such as AWS RDS, load balancers, Secrets Manager, Anthos, EKS, etc. It’s a big ask, but it’s a goal I’d like to see considered as part of the outcomes of a container-based deployment pattern, and being bold seems the right way. This has long been a complaint and concern about investing in the Foreman/Katello/even Satellite ecosystem for managing or reporting on cloud-based resources (AWS and GCP specifically).

5.) Multi-node deployment of components (a rough summary of all the points above, without the explicit challenges): the ability to distribute Foreman components across multiple nodes with ease.

6.) Is now the time to revisit how we maintain, deploy, configure and integrate plugins?

I don’t think time should be invested in upgrade methods for existing installs. This is a breaking change that comes with a huge amount of potential; I’d welcome an approach more along the lines of “you will need to re-deploy your Foreman environment to move to the new platform”.

While unlikely to matter for most, I’d also like to raise the possibility of keeping a non-container-based deployment pattern for a minority of use cases. Containers and container-based deployments are scary to some and come with overhead (technical, security or procedural); think of banks worried about being unable to monitor overlay networks of components talking to each other, that sort of thing. This may make adoption troublesome (more so if you consider an in-situ upgrade/migration process). It’s a niche situation, though.

In the same way the current installer does a lot of non-Foreman work for you (e.g. sets up a Puppet server and CA), should a bootstrap process also be considered? Some users will have a container platform they want to deploy to; others will have infrastructure with no container knowledge or capabilities. How will they get an environment up and running without some of the poor practices you see in other projects, such as:

curl http://someurl.com/prep-forman.sh | sudo su && wget -o docker_install.sh www.docker.com/getdocker.sh && sudo ./docker_install.sh

I would be sad to see all the professionalism of Foreman’s robust packaging and installer lost to the above style of work.


In general I think we all agree on this consideration. Are there specifics that come to mind for you?

I think we have to focus on supporting core components and interfaces to give flexibility. Then, where someone is willing to invest and maintain it, we can support specific mechanisms for a given platform that may not be as portable. I think this goes along with your fourth point: where do we pick and choose to support certain providers, and where do we draw the line at interfaces and leave the rest as an exercise for the deployer?

I’d love to get more details on this, or an example or two.

I’d put this in the future-looking but not immediate category. This one, for me, is a guard-rail: we shouldn’t make it harder to do, and should keep it in mind for any design work, but at the outset we should not aim to make it a reality at the cost of moving the underlying distribution to containers.

What did you have in mind?

Do you mean maintaining the package-based installation or via some other method?

I think we will continue to have an opinionated method, with an eye towards flexibility for advanced users. The analogy is our current installer versus those who choose to use the Puppet modules directly. Our focus will need to be on what we feel we can build and maintain for the community and for our personal and company use cases. Additional elements will need to come from across the community, with maintainability in mind and a focus on common interfaces that provide the ability to react to change.

Given the number of configuration options and the pieces and parts that can be deployed and orchestrated, I would find it hard to take this approach. Asking every user to re-deploy something that often sits in the middle of their infrastructure is a big, big ask. I strongly believe we will have to have an upgrade path. If you read through some of our debates, we are still asking ourselves whether we first adapt the Puppet-based installer to containers before we re-write the installer entirely.

We would appreciate as many thoughts in these areas as possible, and welcome you, and others, to come contribute and play with the PoC.


Since first posting the RFC there have been a number of updates as we’ve done research, had some direct discussions and made some additional decisions about direction.

First, the work and issues continue to be tracked in the foreman-quadlet repository.

Updates

CLI

We’ve decided to move forward with a CLI-based approach that uses Ansible under the hood, building upon our wrapping tool Obsah. This tool is also used by our packaging tool Obal. The initial work is being prototyped.
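
To sketch the shape of this, a hypothetical invocation; the command name is a placeholder for whatever the prototype settles on, with Obsah exposing the underlying Ansible playbooks as subcommands the way Obal does for packaging:

# Illustrative only: one CLI verb drives the Ansible under the hood.
foreman-quadlet-installer deploy
foreman-quadlet-installer deploy --help   # options derived from playbook metadata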

Parameters

The CLI approach is designed to give us control over which parameters are exposed to the end user, along with constraints for situations such as two parameters being required together. As parameters are defined they are being documented, starting with the database parameters. We are also defining how they map to foreman-installer parameters and will identify parameters that won’t carry forward. This will be useful input for how we handle migration.
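
As an illustration of the kind of constraint being defined (all flag names hypothetical): an external database host must be accompanied by credentials, enforced up front by the CLI rather than failing at runtime:

# Roughly corresponds to the foreman-installer's --foreman-db-* options;
# omitting --db-username or --db-password-file here would be a usage error.
foreman-quadlet-installer deploy \
  --db-host db.example.com \
  --db-username foreman \
  --db-password-file /root/.foreman-db-pass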

Builds

The approach to container image builds, versioning, and development has been captured in a document.

Installation Workflow

We have defined initial installation workflows, from the happy path to advanced scenarios, and captured them in a pending document.