Containers: Plugins

Yes we do, and I don’t agree with it there either. This is an opportunity to review how we do things, and moving to containers is already a drastic change. If there is ever a time to change this, it is now.

Yes, and they really shouldn’t have to - for a user to have to work from the raw source is, in my view, a failure on our part to both users and plugin authors. It’s a barrier I’d like to remove.

Yes I do, because it matters. If we act like the edges of our development community don’t exist, then pretty soon they won’t exist. I don’t believe that’s a goal you’re trying to get to, so it’s my duty to point out what I see as the consequences of your suggestion.

Don’t make life harder for new plugin developers (who’ll have a harder time developing their new plugin, a harder time getting users to use the new plugin, and see existing plugins getting preferential treatment) and don’t make life harder for core developers (who’ll have to maintain multiple ways to deploy plugins, and also decide which plugins are allowed into the core image). One method, all plugins, less work.

I’m unsure why you focus on “immutable”. Users will change things, the question is how much change we support - my interpretation of your use of the word “immutable” means you would like to support zero changes. I would prefer to support small amounts of change, made via the tools we provide. In return for that, we get a much simpler set of systems to maintain.

Note, I’m not against pre-populating the system with things we consider “core” to the project. It’s fine to be opinionated about that. But opinionated selection of plugins is a different debate to the deployment method, and comes after we’ve figured out how we’re going to build this thing. Under option (4) this opinionated list becomes just some extra lines in the “list of plugins to include at build time” - which is, in fact, what Discourse also does.
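
As a rough sketch of what that could look like (the file name and plugin layout here are hypothetical, not a settled design), the build step could simply clone every plugin on the list into the image, much as Discourse’s build hooks do with git clones:

```
# Hypothetical build-time step (sketch): clone each plugin repository
# listed in plugin-list.txt into the image's plugins/ directory.
while read -r repo; do
  git clone --depth 1 "$repo" "plugins/$(basename "$repo" .git)"
done < plugin-list.txt
```

An opinionated “core” selection would then be nothing more than a few pre-filled lines in plugin-list.txt.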

Hopefully my talk next week will be recorded, since it’s on exactly this kind of thing :slight_smile: - in the meantime, I suspect neither of us is going to convince the other to change their mind. Let’s see what others think.

I am very open-minded, and this is not a black-or-white situation at all, like some we’ve had in the past (e.g. the Discourse migration). I am trying to sell immutability as the main feature of containers from the developer’s perspective. A single artifact that passes testing is a dream we can’t fully achieve, for sure, but the more of it I see in the design, the better.

Have a good one! Wrapping up here.

Some of the themes I can extract from the conversation are around ensuring support for plugins of every kind and origin, and around “certified” images provided by the community.

From the notes so far, we have the core Foreman application itself, plugins that are “widely known”, lesser-known plugins, and private plugins. To mirror our current strategies, we want to ensure that whatever image we, the community of developers, produce with Foreman core has been tested and “certified” by our current and evolving standards. I would argue that “widely known” (or curated) plugins are those that have most of the following: a test job in Jenkins, documentation on theforeman.org, and packaging as an RPM or deb. Lesser-known plugins are those that may exist only as source, or may not be packaged for a Linux distro. Private plugins are more obvious: those built in-house by someone and not shared publicly.

For lesser-known plugins, there is a question of whether their authors would be more willing to be part of the broader community plugin set if they were not required to do any packaging. That is, if all they had to do was put a link to their plugin on our site, and the build included their source, would that be enough to produce a “Foreman core + all known plugins” source image?

Simply by accounting for private plugins, we need to provide a way to rebuild the Foreman image. I think we all agree that, somewhere, somehow, a rebuild of the image will have to happen. The question is more around where that rebuild happens. Is it on a system running Foreman, rebuilt in real time on that system? Or is it rebuilt outside of the system, pushed to a container registry, and then pulled down onto the system?

Would that also automatically happen when a new plugin version is released?

Depends on the overall tactic we take. What I meant in this context is: no matter which route we take, we need to provide the community with a rock-solid, easy path to rebuild the Foreman image to suit their needs.

Even with the “well known” plugins, different users will want to use different ones. I think that the separation into different classes of plugins doesn’t help us here - we should strive for one solution that works for all plugins, regardless of their class, and make setting up the environment with any combination of plugins as easy as possible. That will also make it easier to run CI tests with multiple plugins if we want, something that has been brought up multiple times in the past but never came to maturity.

I very much like this term, and would combine it with an approach similar to the Linux kernel’s:

  • certified image - an image that has limited features but has been tested (like a kernel from Fedora)
  • tainted image - an image customized and built by a user (like a kernel compiled from Fedora or vanilla sources with a custom module)

This approach would definitely be better if we provided some place where Foreman users could share their images. I am not thinking of the registry itself (the datastore of images), but rather a site where users would post their image URLs with deeper metadata (version of the OS, Foreman core, plugins). We want to build a community around this.

You are not giving arguments for why we should do this, except the obvious one that we would build two things instead of just one. Building everything from scratch, or dynamically generating containers or content inside containers, is a step back in quality assurance; a certified image is a huge benefit to the support we provide. The ability to write “can you test this with the latest stable certified image?” will pay off in the future.

(Back from conferences, thanks for the patience…)

Is this nothing more than a list of which plugins have been tested together? We could probably find a way to publish that from Jenkins. I don’t see a need to publish the image in that scenario - just the list of tested components (many of which I assume would be enabled by default anyway).

I was arguing we should drop the current strategy of curation. This is a great opportunity to do so. I can understand the concern when we give people the power to add anything to the image directly, but I think we do better with a level playing field and focussing our efforts on helping plugin authors to write good plugins instead.

Huge +1, but I doubt anyone is surprised I agree with this :slight_smile:

Depending on exactly how we build it, it’s possible to update plugins without rebuilding the image - that only needs to happen as a last resort. To borrow from Discourse (again, as it’s the closest in style), the plugins are git clones, so you can run git pull / db:migrate / touch tmp/restart within the container (and Discourse allows it from the UI), which avoids a rebuild. Image rebuilds are only necessary either at times of very large change, or as a last resort when something isn’t working.
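
For concreteness, here’s a sketch of that in-container update flow. The paths are illustrative (Foreman’s actual layout may differ), and touch tmp/restart.txt assumes a Passenger-style app server, which is what Discourse uses:

```
# Update a single plugin in place, without rebuilding the image.
cd /app/plugins/some_plugin     # hypothetical plugin checkout inside the container
git pull                        # pull the new plugin code
cd /app
bundle exec rake db:migrate     # apply any new database migrations
touch tmp/restart.txt           # signal Passenger to restart the app
```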

That said, I’m not a huge fan of using git-refs as “versions” so there’s work to do to refine that idea…

I don’t like that at all. We already discriminate against some plugins by making it harder for them to be a part of our ecosystem. Adding words like “tainted” implies they are also doing something bad, which will make it even harder to grow our ecosystem. Choice of language has a huge impact in this space.

(This comment to Eric too…) I’m unclear on why we’d need to share the images themselves (which are large) when we could share the config files / build definitions instead. Is that not a better approach? It seems like it would save a lot of storage/bandwidth.

As I’ve already said - we should do this to have a properly equal space for plugins to be developed in. I hear this from plugin authors (since talking to our community is part of what I do) - the two-level approach is actively harmful to participation. If our goal is to have more people contributing to our ecosystem (and I believe that’s a good thing), then making the rules the same for all plugins is the way to go.

While I was writing my presentation on exactly this topic for OSS-EU (which sadly wasn’t recorded, although I’d be happy to repeat it), I came across Kohsuke Kawaguchi’s talk on the plugin ecosystem for Jenkins (which was given at FOSDEM in 2013). I’d highly recommend watching it or at least checking out the slides. Much of what he says resonates with my own thoughts on this, so hopefully it helps to illustrate what I’m trying to get at.
