Containers: Plugins

Absolutely, just to clarify: by “Foreman” in my initial post I meant purely the Rails app and its engines, not the whole ecosystem with backend systems. A natural break-up into containers (e.g. Dynflow, cron jobs, Candlepin, Pulp) does make much more sense.

Related to this, there is a part of the RFC related to container image build I did not send along yet. This can currently be found at: https://github.com/theforeman/forklift/pull/831/files#diff-e07f67c313ca192a3243f21b6b518fd4

This implies all plugins are enabled all the time, which could be a bit much for users and would put a greater burden on the community to ensure all plugins always work together. I do get the sentiment, though; I just worry about practicality compared to configuration options or users building their own images.

One key thing to remember: the Foreman application itself is just a Rails monolith at present, but the runtime requirements and ecosystem around it comprise a variety of services, everything from databases to the cache store, the async task runner, the smart proxy, and plugins that bring along external services. Microservices come with their own risks and challenges. I don’t want to disrupt the ecosystem for plugins, which is why I am glad we are discussing evolutionary patterns.


Apologies, I should have kept reading before fully replying!

It would be interesting to see what other Rails apps do for plugins in containerized deployments. Discourse is one example, and we have @Gwmngilfen, who has already played with it quite a bit. Another one that comes to mind is Redmine; while we don’t run it in containers AFAIK, I think it is possible, and it also has plugins. There are probably others we can investigate so we don’t have to reinvent the wheel here.


What if there were a service that could build your container according to your needs? A hosted SaaS that builds your container, like:

https://rom-o-matic.eu/

The main container thread suggested starting with a simpler approach: running containers via systemd. The first step towards this is the RFC and ramping up work to run Foreman as a systemd service behind an Apache proxy. This opens the door to replacing the system process with a container instead (a sketch of that swap is below). However, the discussions in this thread are still very relevant: how do we handle plugins?
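As a minimal sketch of the process-to-container swap, assuming Docker as the runtime; the unit name, image name, and port here are my assumptions, not anything from the RFC:

    # /etc/systemd/system/foreman.service (hypothetical)
    [Unit]
    Description=Foreman application container
    After=network.target docker.service
    Requires=docker.service

    [Service]
    # Clean up any stale container from a previous run, then start in the foreground
    ExecStartPre=-/usr/bin/docker rm -f foreman
    ExecStart=/usr/bin/docker run --name foreman -p 127.0.0.1:3000:3000 foreman/foreman
    ExecStop=/usr/bin/docker stop foreman
    Restart=always

    [Install]
    WantedBy=multi-user.target

Apache then proxies to 127.0.0.1:3000 exactly as it would to a plain Rails process, so swapping the process for a container is invisible to the proxy layer.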

Let’s assume we are dealing with a container running via Docker. We still have to ask ourselves how we enable plugins. Someone asked how Discourse handles this. @Gwmngilfen can correct me where I go wrong, but when we investigated this I believe we found that Discourse rebuilds the image locally, based upon the set of plugins enabled, via a command run by the operator.

Summarizing options so far:

  1. Single container for Foreman with all plugins installed but disabled by default
  2. Single core Foreman container that is rebuilt whenever a plugin is enabled
  3. Single core Foreman container; each plugin is a container that runs, adds its code to a shared directory on the host, then shuts down, making the plugin available to the Foreman container
  4. Operators rebuild the Foreman image per their own requirements, enabling the plugins they need and creating a custom image

The little research I have done seems to indicate that (3), as a sidecar, is the preferred model. (4) sounds nice if we deliver scripting to help rebuild the image and run the migrations; a sketch of what that could look like follows.
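For illustration, option (4) could be as small as a Dockerfile layered on top of a community core image; the image name, the bundler.d path, and the chosen plugin gem here are all hypothetical:

    # Hypothetical custom build on top of a community core image
    FROM theforeman/foreman:1.20

    # Foreman plugins are Ruby gems; register the ones this site needs
    # (assumes the image's workdir is the Foreman application root)
    RUN echo "gem 'foreman_ansible'" >> bundler.d/Gemfile.local.rb \
     && bundle install \
     && bundle exec rake assets:precompile

Database migrations would still have to run at container start (or as a one-off task), since the build has no database to talk to; that is the “do the migrations” part of the tooling mentioned above.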

@ehelms yes, (4) is what Discourse does. Their process is:

  • Add git clone https://<repo-url-for-plugin> to a container config file
  • Rebuild the main container via ./launcher rebuild app from /var/discourse
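From memory, the relevant bit of Discourse’s containers/app.yml looks roughly like this (treat the exact keys as approximate):

    hooks:
      after_code:
        - exec:
            cd: $home/plugins
            cmd:
              - git clone https://<repo-url-for-plugin>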

I think that probably gives us the least headaches (“only” some script tooling to make this smooth), provided we’re OK with asking users to rebuild containers. Based on my experience with Discourse, I don’t think it’s any more complex than yum install plugin - both involve a small amount of command line use, and a small amount of service downtime (Foreman has to restart to enable the plugin in any case).

Looking at the Discourse launcher script [link] might be a good start for such tooling? Apologies if that’s in progress, I’m out of the loop :wink:

I like this one, it makes sense to me. What are the pros and cons?

This has one big drawback - it nullifies one important aspect of container delivery: the immutable artifact. But I understand we need to enable users to install their own plugins somehow. Therefore I lean towards having this as a secondary option, with a visible flag in the UI/API/logs showing the container was custom-built, as a warning for us when providing help; one possible implementation is sketched below.
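One cheap way to implement such a flag, sketched here with made-up label names: have the rebuild tooling stamp user-built images with an image label, which support tooling (or the app itself) can then surface:

    # Hypothetical convention: the rebuild tooling marks its output as custom-built
    docker build --label "com.theforeman.image.tainted=true" -t foreman-custom .

    # Anyone providing support can then check for the flag
    docker inspect --format '{{ index .Config.Labels "com.theforeman.image.tainted" }}' foreman-custom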

What about plugins we’re not aware of - either 3rd-party plugins or locally-developed ones? Making it hard to install plugins other than the ones we’ve “sanctioned” in some way seems like it raises the barrier to entry for both users (who might need a plugin we haven’t included) and plugin authors (who’ll want to test their plugin on their own systems). That’s not a good idea.

Also, we have 90+ plugins that we know of - keeping them all up to date, given that each has its own release cadence, seems like a recipe for disaster.

I think focusing on a consistent runtime is more important from our point of view. Much of our time is spent dealing with inconsistencies between OS versions - with containers that all goes away, and users are highly unlikely to mess with the runtime. If we provide good tooling, they’ll have no reason to log in to the image, only to rebuild it via our tools (which should behave in predictable ways, since we wrote them).


It’s clear we still need to provide a way to deploy custom plugins, there is no doubt about it. But that does not rule out this option: we have the numbers from The Survey, and we can say with some degree of confidence (I am sure you even know how to measure it :slight_smile: ) that our container would cover 90% of users.

I don’t see value in trying to decide how to curate which plugins should or should not be in the container - we already have many discussions over what should be included in the installer, and I’m not convinced they add value. Let’s treat all plugins equally, and they can all have the same install method.

Actually, that’s quite hard to do, since we don’t collect stats from running Foreman instances. We certainly don’t have any view of locally developed plugins that are not released to the community.

It is reasonable to assume that users with larger deployments are also the ones with their own custom plugins, so they’re affected more strongly by this two-tier approach. I’d like to get rid of that. Our plugins, other plugins, in-development plugins - all should have the same deployment method.

I see huge benefit in covering a large portion of our users with an immutable, pre-populated container containing the most useful plugins. We also do something similar today - the RPM/DEB packages ship only a portion of the plugins. You like to bring up “90+ plugins” in these container discussions a lot, but it’s fair to mention that we ship only a fraction of those as Linux packages. The remaining 80+ plugins are DIY installations; sometimes people even deploy Foreman from source because of that. It is a very similar situation, yet the added value of having a bunch of plugins released in the same distribution format, at the same pace, as Foreman core is clear.

We are actually already opinionated, and if we choose a single opinionated container it will be no drastic change.

Yes we do, and I don’t agree with it there either. This is an opportunity to review how we do things, and moving to containers is already a drastic change. If there is ever a time to change this, it is now.

Yes, and they really shouldn’t have to - for a user to have to work from the raw source is, in my view, a failure on our part to both users and plugin authors. It’s a barrier I’d like to remove.

Yes I do, because it matters. If we act like the edges of our development community don’t exist, then pretty soon they won’t exist. I don’t believe that’s an outcome you’re aiming for, so it’s my duty to point out what I see as the consequences of your suggestion.

Don’t make life harder for new plugin developers (who’ll have a harder time developing their new plugin, a harder time getting users to use the new plugin, and see existing plugins getting preferential treatment) and don’t make life harder for core developers (who’ll have to maintain multiple ways to deploy plugins, and also decide which plugins are allowed into the core image). One method, all plugins, less work.

I’m unsure why you focus on “immutable”. Users will change things; the question is how much change we support. My reading of your use of “immutable” is that you would like to support zero changes. I would prefer to support small amounts of change, made via the tools we provide. In return, we get a much simpler set of systems to maintain.

Note, I’m not against pre-populating the system with things we consider “core” to the project. It’s fine to be opinionated about that. But opinionated selection of plugins is a different debate to the deployment method, and comes after we’ve figured out how we’re going to build this thing. Under option (4) this opinionated list becomes just some extra lines in the “list of plugins to include at build time” - which is, in fact, what Discourse also does.

Hopefully my talk next week will be recorded, since it’s on exactly this kind of thing :slight_smile: - in the meantime, I suspect neither of us is going to convince the other to change their mind. Let’s see what others think.


I am very open-minded, and this is not a black-or-white situation at all, like some we’ve had in the past (e.g. the Discourse migration). I am trying to sell immutability as the main feature of containers from the developer’s perspective. A single artifact that passes testing is a dream we can’t fully achieve, but the more of this I see in the design, the better.

Have a good one! Wrapping up here.

Some of the themes I can extract from the conversation are around ensuring support for plugins in all their facets and origins, and around “certified” images provided by the community.

From the notes so far, we have the core Foreman application itself, plugins that are “widely known”, less-known plugins, and private plugins. To mirror our current strategies, we want to ensure that whatever image we, the community of developers, produce with Foreman core has been tested and “certified” per our current and evolving standards. I would argue that “widely known” (or curated) plugins are those that have most of the following: a test job in Jenkins, documentation on theforeman.org, and packaging as an RPM or deb. Less-known plugins are those that may exist only as source, or may not be packaged for a Linux distro. Private plugins are more obvious: they are built in-house and not shared publicly.

For less-known plugins, there is a question of whether their authors would be more willing to be part of the broader community plugin set if they were not required to do any packaging. That is, if all they had to do was put a link to their plugin on our site, and the build included their source, would that be enough to provide a “Foreman core + all known plugins” source image?

Simply by accounting for private plugins, we need to provide a way to rebuild the Foreman image. I think we all agree that somewhere, somehow, a rebuild of the image will have to happen. The question is more around where that rebuild happens: on the system running Foreman, rebuilt in place? Or outside the system, pushed to a container registry and then pulled down onto the system? Both flows are sketched below.
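The two variants differ only in where the build runs; sketched here with hypothetical image names, paths, and registry:

    # Variant A: rebuild in place on the Foreman host
    docker build -t foreman-local:latest /etc/foreman-container
    systemctl restart foreman

    # Variant B: rebuild elsewhere, publish, then pull onto the host
    docker build -t registry.example.com/acme/foreman:1.20 .
    docker push registry.example.com/acme/foreman:1.20
    # ...and on the Foreman host:
    docker pull registry.example.com/acme/foreman:1.20
    systemctl restart foreman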

Would that also automatically happen when a new plugin version is released?

That depends on the overall tactic we take. What I meant in this context is: no matter which route we take, we need to provide the community with a rock-solid, easy path to rebuild the Foreman image to suit their needs.

Even with the “well known” plugins, different users will want to use different ones. I think that the separation into different classes of plugins doesn’t help us here - we should strive for one solution that works for all plugins, regardless of their class, and make setting up the environment with any combination of plugins as easy as possible. That will also make it easier to run CI tests with multiple plugins if we want, something that has been brought up multiple times in the past but never came to maturity.

I very much like this term, combined with an approach similar to the Linux kernel’s:

  • certified image - an image with a limited feature set that has been tested (like the kernel Fedora ships)
  • tainted image - an image customized and built by a user (like a kernel compiled from Fedora or vanilla sources with a custom module)

This approach would work even better if we provided some place where Foreman users could share their images. I am not thinking of the registry itself (the datastore of images), but rather a site where users would post their image URLs with deeper metadata (OS version, Foreman core version, plugins). We want to build a community around this.

You are not giving arguments for why we should do this, except the obvious one that we would build two things instead of just one. Building everything from scratch, or dynamically generating containers or content in containers, is a step back in quality assurance; a certified image is a huge benefit for the support we provide. Being able to say “can you test this with the latest stable certified image?” will pay off in the future.