I have started looking at tooling for automated image builds, and what I came up with mostly follows Eric’s RFC; the initial effort can be seen at https://github.com/theforeman/grapple/pull/2
A notable point is the build mechanism: after evaluating quay.io, I decided not to use it as an external build service because we need more customized builds than quay.io currently offers.
Reasons why quay.io is not the best choice for us as a build service:
Quay.io expects the Dockerfile to be present in a repository together with the code. In our case, the Dockerfile is a build artifact generated from a template based on which plugins will be included in an image, and we would need to commit it for quay.io to have access to it.
Even if we committed a generated Dockerfile, the application code does not live in grapple. This is not a concern now, as we leverage packaging in image builds, but it will become a problem once we move to building from source: we would essentially need to git clone inside the Dockerfile for core and every plugin we want inside the image. I’d rather go the other way around and prepare the build context so the Dockerfile stays as short as possible.
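To illustrate the "prepare the build context" approach, here is a minimal hypothetical sketch (not the actual grapple playbook; task names, the `Dockerfile.j2` template, and the plugin list are all assumptions) that renders the Dockerfile from a template and clones sources into the build context so the Dockerfile itself stays short:

```yaml
# Hypothetical sketch: render the Dockerfile and clone sources into the
# build context, instead of running git clone inside the Dockerfile.
- name: Prepare build context and build the image
  hosts: localhost
  vars:
    plugins: [foreman_ansible, foreman_remote_execution]  # example selection
  tasks:
    - name: Render Dockerfile from a template based on the plugin list
      ansible.builtin.template:
        src: Dockerfile.j2
        dest: build/Dockerfile

    - name: Clone each plugin into the build context
      ansible.builtin.git:
        repo: "https://github.com/theforeman/{{ item }}"
        dest: "build/{{ item }}"
      loop: "{{ plugins }}"

    - name: Build the image from the prepared context
      containers.podman.podman_image:
        name: quay.io/foreman/foreman
        path: build
```

With this layout the Dockerfile only needs to COPY from the context, and anyone can rebuild with a different plugin set by changing one variable.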
An additional concern is that by using quay.io as the builder, we present that as the way to build Foreman images. This significantly raises the barrier for anyone in the community who would like to quickly build an image with a custom set of plugins, as they would essentially need to replicate our build infrastructure - not to mention they would not be able to see how our builds are set up in quay.
With these points in mind, I opted for providing an Ansible playbook that builds an image locally, with an optional push to a registry of choice. This gives me a simpler workflow and better control over the whole build process. I have also included a GitHub action that runs the playbook and can be triggered manually. The intended workflow is that repo mashing in Jenkins triggers the GitHub action, which works as a builder and pushes the images into the registry on a successful build. To make this happen, the following things are needed:
- add a step into Jenkins release pipeline that triggers the github action
- make sure Jenkins is allowed to trigger that action
- set up secrets so that the GitHub action can push into the registry
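A hypothetical sketch of what such a workflow could look like (the workflow name, playbook path, and secret names are placeholders, not the actual setup): Jenkins would hit the `workflow_dispatch` REST endpoint (`POST /repos/OWNER/REPO/actions/workflows/WORKFLOW/dispatches`) with a token that has permission to trigger it.

```yaml
# Hypothetical workflow sketch; secret keys and file names are assumptions.
name: Build images
on:
  workflow_dispatch:  # allows manual runs and API triggers from Jenkins

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to the registry using repository secrets
        run: echo "${{ secrets.REGISTRY_PASSWORD }}" | podman login quay.io -u "${{ secrets.REGISTRY_USERNAME }}" --password-stdin
      - name: Run the build playbook with push enabled
        run: ansible-playbook build.yml -e push=true
```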
This brings me to the question of which registry to use for our images. We could use the existing foreman organization on quay.io, which would mean stopping the experimental image builds based on the Dockerfile in the foreman repo. Or should we use something else instead?
Once we have the images in a registry, we can move forward with using them for the demo in containers.
I’m not part of the development team, so feel free to take my comments with a grain of salt. Though for $DAYJOB I am working on automated builds for OS images for on-prem and multi-cloud environments, and use a tool called Packer to generate the builds.
Builders exist for Docker and multiple on-prem and cloud environments. Provisioners handle the customization aspects via shell script, Puppet, Ansible, etc., and post-processors handle the movement of the built artifacts to other stages in the process (Docker tag and push, convert VM to template, generate manifest files and checksums, etc.).
I use it in conjunction with an on-prem GitLab instance, with gitlab-runners as the CI/CD runners. I use Foreman container repositories as source material, commit built containers to GitLab’s container registry, and pull the built containers back into another Foreman repository as part of the supply-chain aspects.
Just thought I’d offer up some food for thought…
Thanks, I’ll take a look at Packer as well.
I’d be +1 for this. Even if we decide not to push to quay.io (I don’t have a better suggestion, though), it’s better to have one stream to avoid confusion on the consumer side.
I think it only makes sense to use quay.io, as we are already using it now, unless we have a better alternative. The process you’re describing leaves room to switch to another platform quite quickly.
Definitely +1 for Jenkins release step triggering the Docker release.
Just what images should we build? Should we settle on three containers (e.g. one with no plugins, one with the most used plugins, and a Katello build)?
What do we need to set up the build process? How can we help?
I think it’s key to keep the Dockerfile in the repo and CI-test it like we do with every other commit. At work we use containers quite a bit and have always struggled when the Dockerfile was stored in a separate repo.
Can we use the tooling to build the container in core with every PR?
We don’t have to push it in the first iteration, I just want to make sure the build does not break when doing changes in core.
Not yet, but it is definitely something I’d like to have. The tooling should be flexible enough for any plugin to build an image on PR if they wish to have such a job.
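A minimal sketch of what such a PR job could look like (the workflow name and playbook path are assumptions, not the actual tooling): run the same build playbook on every pull request, but skip the registry push.

```yaml
# Hypothetical PR check sketch: build the image on every pull request,
# but do not push it anywhere.
name: Container build check
on:
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the image without pushing
        run: ansible-playbook build.yml -e push=false
```

Reusing the same playbook for PR checks and releases would keep the two paths from drifting apart.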