RFC: A new plugin for unified version control management


I would like to hear your opinion on a new plugin that adds centralized version-control integration.
This would be a spin-off of the proposed Git integration in foreman-ansible, but more generalized.

We have foreman_templates, which allows users to check out provisioning templates from a VCS repository.
I think this functionality would also be very useful elsewhere; some examples that come to mind are Puppet modules or Salt states.
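For reference, foreman_templates drives that sync via rake tasks. From memory (please check the plugin's README for the exact task names and options), the invocation looks roughly like this:

```shell
# Pull templates from the configured repository into Foreman
foreman-rake templates:import

# Push locally modified templates back to the repository
foreman-rake templates:export
```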

This new plugin would be built in a universal and modular way and would combine the syncing of different config-management objects behind a unified API and GUI.

This would have multiple advantages:

  • Deduplication: only one plugin would be needed to handle version-control management for all config-management objects.
  • Since only one plugin is needed, more time could be spent on features such as proper secret management for GitLab tokens and the like.
  • Adding new objects to be managed, for example Puppet modules, would only require a pull request.

This plugin would be built on foreman-tasks and would provide a main management UI and API, as well as inject appropriate buttons in their respective places.
For example: “Get States from VCS” under Configure > Salt > States.
The import action could then be executed automatically after the sync task has completed.
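To make the sync-then-import chaining concrete, here is a minimal plain-Ruby sketch. All class and method names are hypothetical and do not correspond to real foreman-tasks APIs; it only illustrates the flow of running an object-specific import once the checkout finishes:

```ruby
# Hypothetical sketch: a VCS sync step that triggers an object-specific
# import once the checkout has finished. Names are illustrative only.
class VcsSyncAction
  def initialize(repo_url, importer)
    @repo_url = repo_url
    @importer = importer # e.g. a Salt-state or Puppet-module importer
  end

  def run
    checkout = sync_repository(@repo_url) # clone/pull the repository
    @importer.import(checkout)            # then run the object import
  end

  private

  def sync_repository(url)
    # Real code would shell out to git or use a library such as rugged.
    { url: url, path: "/var/lib/foreman/vcs/#{File.basename(url, '.git')}" }
  end
end

# Example importer that just records what it was given.
class RecordingImporter
  attr_reader :imported

  def import(checkout)
    @imported = checkout
  end
end
```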

Here is an example of how this plugin might be used to add Git integration to foreman-ansible: [screenshot omitted]

Any feedback on this proposal is very much welcome!

Best Regards


No thoughts on this, but TIL the “foreman_templates” module lets me store my templates in git!

I am going to have to play with that, thanks!

There have been different ideas for this in the past.

For templates, there is also GitHub - dm-drogeriemarkt/foreman_git_templates: Foreman plugin to retrieve host templates from git, which took the approach of using templates directly from git.

For getting configuration management code onto smart proxies, we have already discussed the syncing problem in the past, probably using Pulp for this. Another idea was whether we could also bring some staging to Ansible this way, as some users miss that feature from Puppet.

For me, the generic approach always lacks a bit when it comes to handling best practices, like using requirements.txt/Puppetfile/whatever for dependency handling, but it can certainly solve some use cases.

So I hope we can get some discussion going here about the use cases and find ideas on how to solve them. (I would also be happy to have this discussion in person, for example at the Foreman birthday, as that could be even more productive.)


I share this concern. IMHO the best practice for Puppet is to use a Puppetfile and manage it in git together with your profile modules, then apply CI & CD to this. For example, use Jenkins to test that your modules all work together (including version constraints). Then, once merged, you deploy it (again, using Jenkins or some other tool).

Having said that, I can see some value in easily distributing this to various Smart Proxies. In the past Pulp with Puppet automatically wrote out Puppet environments. This was a poor implementation (I’ll spare you the details), but conceptually it can still be useful.

I can imagine that your CD’s final step is to upload the environment to Pulp. Then a Smart Proxy is somehow configured to deploy this.

On the Smart Proxy I can imagine an API call to deploy an environment (implemented in smart_proxy_pulp). This could become part of the Smart Proxy content sync: first you sync Pulp and once complete, you deploy.
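Purely as a sketch of what such a call could look like (the endpoint, port, and certificate handling are entirely hypothetical; smart_proxy_pulp provides no such API today):

```shell
# Hypothetical: ask the Smart Proxy to deploy a previously synced
# Puppet environment. Path and parameters are invented for illustration.
curl --cert client.pem --key client.key \
     -X POST https://proxy.example.com:8443/pulp/environments/production/deploy
```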

Within Pulp this could be its own content type that is smarter (for example, it could support Puppetfile and requirements.txt) but it’s probably easier to prototype with just a file type. Puppet calls these repositories control repositories.

Note that with Puppet you also want to hook into the server to flush caches, if enabled; that means using the environment cache API. Somehow reusing the Smart Proxy (which already has credentials to call this API) would be beneficial.

In short: in my mind, building belongs in your version control (Puppet: control repositories) with CI/CD. That should upload the artifact(s) into Pulp. Something (Katello or a new plugin) then makes sure it’s deployed to locations on your Smart Proxy. A bit simplified, but something like this: [diagram omitted]
In reality you’ll also have a Pulp on your Smart Proxies and they can sync, but I hope the idea is clear.

To be honest, I am not a fan of either solution (neither the initial one nor the pulp based solution).

I’ll start with feedback on the initial suggestion:

I am absolutely in the same boat. I do not know about Salt, but both the Puppet and the Ansible ecosystems already have tools to solve these problems: a Puppetfile with either r10k/g10k/librarian, and requirements.yml with ansible-galaxy (or whatever other tools might be out there), respectively. I have no experience with Salt, but even if Salt does not have tools like this, two out of the three main config-management systems already have solutions for these problems.
In my opinion, instead of reimplementing what is already there this effort would be better put towards integrating the existing solutions.
Even when I put this concern aside, I am absolutely not a fan of the UX suggested in the screenshot. I personally do not see the advantage of this UI over manually doing cd <ansible-path>; git clone <url>. To me, the UI in the screenshot even looks more complicated than just doing things on the CLI. From what the screenshot shows, I am also missing an easy way to update/pull existing repositories, both individually and in bulk, which imho would be a hard requirement to put it on even a similar level of usefulness to what ansible-galaxy and r10k already provide today.
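To illustrate the declarative dependency handling those existing tools already provide, here is what an r10k/g10k Puppetfile looks like (the module names and versions are just common examples, not taken from this thread):

```ruby
# Puppetfile consumed by r10k/g10k: pins module versions declaratively.
forge 'https://forge.puppet.com'

mod 'puppetlabs-stdlib', '9.4.1'
mod 'puppetlabs-concat', '9.0.0'

# Modules can also come straight from git, pinned to a tag or commit.
mod 'profiles',
  git: 'https://git.example.com/puppet/profiles.git',
  tag: 'v1.2.0'
```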

Regarding the idea to use pulp for this:
Aside from the horror that Puppet content in Pulp has been to me personally in the past, I do not think we should expect users to have/set up a full Katello stack just to be able to deploy config management code. Not only is that quite overkill imo, but you also still cannot add Katello to an existing Foreman afaik, and expecting users to set up a completely new stack just to deploy config management code is unrealistic to say the least. Again, I think a solution that integrates the already existing tools for the job would be the best-suited one.

Fundamentally the problem with this proposal is that it chooses the wrong unit to manage. Individual repositories are not interesting. Puppet has environments while for Ansible it looks like Execution Environments will become the equivalent.

Effectively you are building the UI to describe these. In my experience a Puppet environment will always have a Puppetfile and various locally managed classes in a profiles module. I never got used to the roles modules, but that’s also common. And I’ve also become used to using manifests/site.pp to manage hosts instead of providing this via the ENC. You can see that in foreman-infra/puppet at master · theforeman/foreman-infra · GitHub. If you only allow combining different modules, you only build a UI to write a Puppetfile, which is insufficient.

Applying the same to Ansible, you’d essentially build a UI to write your execution-environment.yml and requirements.yml.
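For illustration, a minimal ansible-builder definition of that shape could look like this (the file contents are a hedged example under the version 3 schema, not something from the proposal):

```yaml
# execution-environment.yml for ansible-builder (version 3 schema)
version: 3
images:
  base_image:
    name: quay.io/ansible/ansible-runner:latest
dependencies:
  galaxy: requirements.yml   # collections/roles
  python: requirements.txt   # pip packages
  system: bindep.txt         # OS packages
```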

For the deployment side, Puppet already has r10k and g10k that already solve this problem. For Ansible I don’t think there’s an equivalent tool, but I’ll also admit I haven’t looked at all for this.

Note that execution environments end up becoming containers, so Pulp can already sync that.

I thought about the same thing. Traditionally how we’ve solved this is to allow multiple providers in the Smart Proxy.

You may have recognized I spoke about artifacts. Ideally we’d have a standardized format (which could very well be a zip/tar file) that’s pulled from a certain location. If you provide a URL then you could implement pulp:///some/path/that/identifies/an/artifact. The Smart Proxy can parse that and look at the Pulp module to figure out the real URL to pull from. An HTTP(S) URL should also be supported (but other protocols like file:// should be rejected to prevent security issues).
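That scheme validation can be sketched in plain Ruby like this (the pulp:// scheme handling is hypothetical, as is the return shape; only the reject-unknown-schemes idea is from the post above):

```ruby
require 'uri'

# Resolve an artifact source for the Smart Proxy to pull from.
# Accepts http(s) directly and a hypothetical pulp:// scheme that a
# Pulp module would translate into a real URL; everything else
# (file://, ftp://, ...) is rejected to avoid reading arbitrary
# local files.
ALLOWED_SCHEMES = %w[http https pulp].freeze

def artifact_source(url)
  uri = URI.parse(url)
  unless ALLOWED_SCHEMES.include?(uri.scheme)
    raise ArgumentError, "unsupported scheme: #{uri.scheme}"
  end

  if uri.scheme == 'pulp'
    # A real implementation would ask the Pulp module for the backing URL.
    { type: :pulp, path: uri.path }
  else
    { type: :http, url: url }
  end
end
```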

The deployment target is a bit trickier with Puppet, because the Puppetserver can technically be remote. Smart Proxy only knows about a server URL, allowing you to split the Smart Proxy from a Puppet Server compile cluster behind a load balancer.

This isn’t an issue for Ansible because of its architecture.

I think the biggest reason nobody has implemented this is that it’s hard to make it generic enough to be useful to many people while also being flexible enough.

If you focus it only on Ansible, then implementing support for Execution Environments in foreman_ansible and smart_proxy_ansible would be great. Then if you can implement pulling containers from the local Pulp you would also solve the distribution issue.

If I may, I also had something in mind for how the whole Ansible part could work in the future.

Pretty similar to what ekohl has already written, only with a few key differences.

The biggest problem with Ansible environments right now is the updating part.
Because of that, the whole project (and also our company’s usage) has moved more and more to only using Execution Environments, because they can be built somewhat reliably.
Then you can either use these EEs directly, or use the container image as a base, i.e. as an init container. Using them directly is of course faster.

Then the whole execution part can be run either in AWX (AAP) or in ansible-navigator (which is basically a pretty advanced wrapper around podman/docker, several ansible-* commands running in the container, and a TUI that can be proud of itself).
Maybe there are even more options, but these two are at least the ones I’m aware of.

And then, if you want to run playbooks instead of directly attaching roles (i.e. Application-centric deployment does that, if I remember correctly), the code needs to be cloned somewhere first. That somewhere could be outside the container, to keep it cached, or inside.
If you are using roles (roles in collections), you don’t really need them cached; they just need to be part of the EE, either already in there or added in the init process.
(I guess this little part was the actual topic…)
When everything around it already takes care of the environment, cloning really comes down to just a <source-control> clone. Here we must not forget the ways to clone: without authentication, with basic authentication, or with SSH authentication (and whether these parameters are controlled by the clone or by the environment).
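As an illustration of those three authentication modes, here is a small plain-Ruby helper that assembles the clone command. The helper and its interface are hypothetical; note that a real implementation should not embed credentials in the URL but use a git credential helper or an askpass program:

```ruby
# Build a `git clone` argument vector for the three auth modes discussed:
# none, HTTP basic, and SSH. Purely illustrative.
def clone_command(url, auth: :none, user: nil, password: nil, key_file: nil)
  case auth
  when :none
    ['git', 'clone', url]
  when :basic
    # For illustration only: embedding credentials in the URL leaks them
    # into process lists and logs; real code should use a credential helper.
    authed = url.sub('://', "://#{user}:#{password}@")
    ['git', 'clone', authed]
  when :ssh
    ['git', '-c', "core.sshCommand=ssh -i #{key_file}", 'clone', url]
  else
    raise ArgumentError, "unknown auth mode: #{auth}"
  end
end
```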

This is where execution-environment.yml and requirements.yml (and maybe even a bindep.txt (system) and a requirements.txt (pip)) come into play; these would be necessary for the init-process path.
If these live in the repo of the whole execution, it will have to be cloned twice (if not always twice anyway, because Foreman needs to know the roles and variables…).

And then you can get the Execution Environment image either from the Smart Proxies or from an external Container Registry.

Finally, it has to run somewhere.
Personally, I would prefer if it could do all of that via AWX (AAP), because then the resource management gets handled by AWX: it could be in the same environment, in a different one, or on a remote agent connected to AWX. But then this whole thing needs to be able to talk to the AWX API.
The other two options are having Podman installed on the master/Smart Proxies and running it there, or letting REX talk to a different host which acts as an execution node.

Which tool is actually used could, or perhaps even should, work similarly to the normal prioritization schema: Global Settings → Smart Proxy → Organization → Location → Domain → Host Group → FQDN
(priority configurable if necessary), to keep the configuration overhead small.
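A plain-Ruby sketch of that first-match lookup (assuming the most specific level wins, as in Foreman's usual parameter precedence; the level names and data shape are illustrative):

```ruby
# Resolve a setting by walking the override chain from most specific
# (FQDN) to least specific (global settings). `overrides` maps a level
# name to an optional value; the first non-nil value wins.
PRIORITY = %i[fqdn host_group domain location organization smart_proxy global].freeze

def effective_value(overrides)
  PRIORITY.each do |level|
    value = overrides[level]
    return value unless value.nil?
  end
  nil
end
```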

So, this basically gives you multiple options that can be combined, a multi-dimensional matrix of:

  • optionally, cloning
  • a predefined EE, or an EE with content added in the init process
  • optionally, adding requirements for the init process
  • EE image source: Katello-managed or external
  • running the EE directly on the master/Smart Proxy, using REX to run the EE on another defined host, or sending commands to the AWX API to run the EE there

The most similar existing thing would be ansible-galaxy collection install ..., but everything more sophisticated can only be found built directly into ansible-builder or AWX itself (as far as I’m aware).

Totally agree, making it generic for all the different solutions is hard.
Speaking of our usage right now, we use Ansible and Salt.
Ansible gets pushed as a Galaxy artifact, which gets installed in a rather half-automated way via the aforementioned ansible-galaxy collection install command, and then an import is triggered in the UI to get the variables.
Salt, on the other hand, we push with a deployment agent to an NFS share which is directly hooked into the master (and then an import is triggered manually in the UI).
For us, Ansible needs its dependencies installed; Salt does not.
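The Galaxy side of that workflow boils down to a requirements file consumed by `ansible-galaxy collection install -r requirements.yml` (the collection names and versions below are placeholders):

```yaml
# requirements.yml for ansible-galaxy
collections:
  # From a Galaxy server, with a version constraint
  - name: example.mycollection        # placeholder name
    version: ">=1.2.0,<2.0.0"
  # Or straight from a git repository, pinned to a ref
  - name: https://git.example.com/ansible/infra.git
    type: git
    version: main
```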

(Also, I would have already done more than just write about it, but my coding skills haven’t reached the needed level yet :slight_smile:)
(I’m sorry, that kinda got to be a lot.)

This is actually what foreman ACD does: smart_proxy_acd/lib/smart_proxy_acd/acd_runner.rb at master · ATIX-AG/smart_proxy_acd · GitHub

There are a lot of discussions here regarding Ansible and Puppet, but this plugin aims to have files under version-control management on the Foreman. One use case would be to use this for Ansible or Puppet, but there are others too, like having a nice VCS view for foreman_acd playbooks, templates, or common files used in Foreman, such as ISO or shim files used during provisioning.

This “generic” plugin would deliver a UI to do VCS on Foreman, and other plugins, or even Foreman itself, could interact with this UI and perhaps run tasks after the sync has happened.

The point this misses is that a VCS is the source, but at least Puppet and Ansible then apply some build process to produce artifacts. Pulp is already an artifact store with built-in versioning that can be used.

As for ISOs and shim files: I’d consider those artifacts too and git is poorly suited to storing (large) binary files. You can purge old versions, but then what’s the point of a VCS in the first place?

That’s why I can see value in easily consuming artifacts from Pulp on Smart Proxies, but not in a unified version control management system.