One of Katello's main goals is to provide life-cycle management of
the content in your infrastructure, a.k.a. are my hosts
up-to-date? No? Make it so!
As part of Adam Ruzicka's bachelor's thesis, we are looking into
bringing Katello's patch management to Docker images.
We've now reached a point where we have something to show, and since
we are also looking for feedback and input on the further direction,
we would like to invite you to a short demo of the current
status, with Q&A and further discussion if needed.
The event will be tomorrow at 14:30 GMT via
Some background info:
The core of the functionality is built around content views: one
selects a content view and environment (or an activation key), a git repository containing the Dockerfile to build, and the base image to be used.
When the build is triggered, the container is set up to consume content from Katello using subscription-manager.
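To make the inputs above concrete, here is a hedged sketch of what a build request might carry; the field names, values, and the activation key are illustrative, not Katello's actual API:

```python
# Illustrative shape of a build request (not Katello's real schema):
build_request = {
    "content_view": "RHEL7-CV",
    "lifecycle_environment": "dev",                     # or an activation key instead
    "git_repository": "https://example.com/myapp.git",  # holds the Dockerfile
    "base_image": "registry.example.com/centos:7",
}

# Inside the build container, registration would point yum at the
# repositories of the selected content view (hypothetical values):
registration_cmd = [
    "subscription-manager", "register",
    "--org", "Example_Org",
    "--activationkey", "container-dev-key",
]
```

The key design point is that the container does not talk to arbitrary upstream repos: it consumes exactly the content the selected content view exposes.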
Also, the original FROM image is (optionally) replaced by the one specified in the
build configuration: this gives us full control over the
image. One could take a base image (let's say a CentOS one)
and move it through the dev->test->production lifecycle, basing
the rest of the images on the production version
of the base image. When the base image is updated, we then also
know which other images need rebuilding.
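The FROM replacement itself is a small text rewrite of the Dockerfile. A minimal sketch, assuming the first FROM line is the base image (the function name is ours, not the project's):

```python
def replace_base_image(dockerfile_text, new_base):
    """Rewrite the first FROM line so the build starts from the
    base image pinned in the build configuration."""
    lines = dockerfile_text.splitlines()
    for i, line in enumerate(lines):
        if line.strip().upper().startswith("FROM "):
            lines[i] = "FROM " + new_base
            break  # only the first FROM is the base image here
    return "\n".join(lines)

original = "FROM centos:7\nRUN yum -y update\n"
patched = replace_base_image(original,
                             "registry.example.com/centos:7-production")
```

After this rewrite the build no longer depends on whatever `centos:7` happens to be upstream; it starts from the lifecycle-managed production image.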
Once the image is produced, we can push the metadata about the
installed packages back to Katello and let Pulp compute the
applicable updates later, as we already do for traditional content hosts.
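The applicability computation boils down to comparing what an image has installed against the newest versions its content view offers. A toy sketch of the idea; Pulp's real logic uses proper RPM version comparison, for which plain string inequality stands in here:

```python
# Packages installed in the image vs. the latest in its content view
# (versions are made-up examples):
installed = {"bash": "4.2.46-12.el7", "openssl": "1.0.1e-42.el7"}
available = {"bash": "4.2.46-12.el7", "openssl": "1.0.1e-51.el7"}

# A package is "applicable" when a newer version is available.
applicable = sorted(name for name, ver in installed.items()
                    if available.get(name, ver) != ver)
```

Here `applicable` ends up listing just `openssl`, i.e. the one package with a pending update.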
For the build service itself, we've initially taken the approach of
building images inside a container. The core of it is the
Dock project, which provides a build container with a pluggable architecture
(with setting up the content view repositories and pushing the images
back to Katello implemented as plugins).
The reasons for this are:
- by its nature, we can expect that a compute resource providing
a docker runtime is already available
- other projects (such as OpenShift) take the same approach, so
we can share common code (as we already did with
- with further Kubernetes integration, we should get scale-out
functionality for free (fire-and-forget tasks seem like a perfect
match for a technology like Docker)
- the ability to easily test the builds on local infrastructure with minimal setup
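To give a feel for the pluggable architecture mentioned above, here is a toy sketch of a build pipeline in the spirit of Dock; the class names and plugin API here are made up, Dock's real interface differs:

```python
# Each build concern lives in its own plugin; the runner just walks them.
class SetContentViewRepos:
    """Point the build at the repositories of the chosen content view."""
    def run(self, build):
        build["repos"] = ["cv-" + build["content_view"]]

class PushMetadataToKatello:
    """Report the resulting image's metadata back after the build."""
    def run(self, build):
        build["metadata_pushed"] = True

def run_build(build, plugins):
    for plugin in plugins:
        plugin.run(build)
    return build

result = run_build({"content_view": "RHEL7-CV"},
                   [SetContentViewRepos(), PushMetadataToKatello()])
```

The point of the plugin split is that Katello-specific steps (repo setup, metadata push) stay isolated, so the same build container can serve other consumers of Dock.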
We went through the initial stage, where we are now able to get the
data about the build from the user, trigger the docker container to
build the image based on this data, track the progress of the build
and collect the metadata (such as the list of installed packages) afterwards.
This will be shown as part of the deep-dive.
The next steps are:
- mapping the Pulp consumers to the images and sending the image
package profiles to Pulp
- the UI/API around the updates applicability of the images, next to
the standard content hosts
- bulk updates of the images, a.k.a. Shellshock!!!
- the ability to rebuild the images without a Dockerfile provided (just yum
update or something like that)
We would like to use the deep-dive to help us move the project in the right direction.