Hi,
as part of my bachelor's thesis I have been looking into integrating
Katello and Docker, and a sneak peek of my efforts was demonstrated
in a deepdive roughly two months ago[1]. Now I would like to invite
you to a short demo showing its current status, with a Q&A session and
further discussion if needed.
Changes since last deepdive:
* automatic builds of images triggered by content view version publish
* bulk builds of images
* updates of images based on applicable errata or package version changes
* reworked UI
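To make the first item concrete, here is a minimal Python sketch of how a content view version publish could trigger image rebuilds. All names here (`ImageBuildScheduler`, `register_image`, and so on) are illustrative, not Katello's actual API; the real implementation hooks into Katello's publish workflow and enqueues build tasks.

```python
# Hypothetical sketch: map content views to the image definitions
# built from them, and queue rebuilds when a new version is published.

class ImageBuildScheduler:
    def __init__(self):
        # content view name -> list of image build definitions
        self.images_by_content_view = {}

    def register_image(self, content_view, image_name):
        self.images_by_content_view.setdefault(content_view, []).append(image_name)

    def on_version_published(self, content_view):
        """Called after a content view version publish; returns images queued."""
        queued = list(self.images_by_content_view.get(content_view, []))
        # a real implementation would enqueue actual build tasks here
        return queued

scheduler = ImageBuildScheduler()
scheduler.register_image("cv-base", "myorg/centos-base")
scheduler.register_image("cv-base", "myorg/app")
print(scheduler.on_version_published("cv-base"))
# -> ['myorg/centos-base', 'myorg/app']
```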
The deepdive will cover these features along with the necessary details about their internals.
Background info
---------------
> The core of the functionality is built around content views: one can
> select the content view and environment (or an activation key), a git
> repository containing the Dockerfile to build, and the base image to be used.
>
> When the build is triggered, the container is set up to consume content from Katello using subscription-manager.
>
> Also, the original FROM image is (optionally) replaced by the one specified
> in the build configuration: this gives us full control over the image.
> One could take the base image (let's say a CentOS), move it through
> the dev->test->production lifecycle, and base the rest of the images
> on the production version of the base image. We then also know,
> when the base image is updated, which other images need rebuilding as well.
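To illustrate the FROM replacement, here is a minimal Python sketch of rewriting a Dockerfile's FROM instruction. This is only an illustration of the idea; Dock implements this step as one of its build plugins, and its actual code differs.

```python
import re

def replace_base_image(dockerfile_text, new_base):
    """Replace the image in the first FROM instruction with `new_base`.

    Illustrative sketch only; Dock does this inside a build plugin.
    """
    return re.sub(r"(?im)^(FROM)\s+\S+", r"\1 " + new_base,
                  dockerfile_text, count=1)

original = "FROM centos:centos7\nRUN yum -y install httpd\n"
print(replace_base_image(original, "registry.example.com/prod/centos-base:7"))
# first line becomes: FROM registry.example.com/prod/centos-base:7
```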
>
> Once the image is produced, we can push the metadata about the
> installed images back to Katello and let Pulp compute the
> applicable updates later, as we already do for traditional
> hosts.
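Roughly, computing applicable updates amounts to comparing the recorded package versions against what the repositories now carry. The sketch below uses plain version tuples for simplicity; Pulp's real logic does full RPM epoch:version-release comparison and maps packages to errata, so treat this only as the shape of the idea.

```python
def applicable_updates(installed, repo_versions):
    """Return package names with a newer version available in the repo.

    `installed` and `repo_versions` map package name -> version tuple.
    Real RPM version comparison (epoch:version-release) is more involved;
    plain tuples keep the sketch simple.
    """
    return sorted(
        name for name, ver in installed.items()
        if repo_versions.get(name, ver) > ver
    )

installed = {"openssl": (1, 0, 1), "bash": (4, 2, 46)}
repo = {"openssl": (1, 0, 2), "bash": (4, 2, 46)}
print(applicable_updates(installed, repo))
# -> ['openssl']
```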
>
> For the build service itself, we've initially taken the approach of
> building images inside a container. The core of it is the project
> Dock [1], which provides a build container with a pluggable architecture
> (with setting the content view repositories, or pushing the images
> back to Katello, implemented as plugins).
>
> The reasons for this are:
>
> * by its nature, we can expect that a compute resource providing
> a docker runtime is already available
> * other projects (such as OpenShift) take the same approach, so
> we can share common code (as we already have with [1])
> * with further Kubernetes integration, we should get scale-out
> functionality for free (fire-and-forget build tasks seem like a
> perfect match for a technology like Docker)
> * the ability to easily test builds on a local infra with minimal
> dependencies.
>
> [1] - https://github.com/DBuildService/dock
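The pluggable build flow described above can be sketched in a few lines of Python. This is a generic pre-build/post-build pipeline, not Dock's actual plugin API; the class and method names are made up for illustration.

```python
class BuildPlugin:
    """Base class for pre/post-build steps (loosely modelled on the
    pluggable architecture described above; Dock's real API differs)."""
    def run(self, build):
        raise NotImplementedError

class SetRepositories(BuildPlugin):
    # pre-build: point the container at the content view repositories
    def run(self, build):
        build["repos"] = ["katello://example/cv-base/library"]

class PushToRegistry(BuildPlugin):
    # post-build: push the finished image back
    def run(self, build):
        build["pushed"] = True

def run_build(pre_plugins, post_plugins):
    build = {"repos": [], "pushed": False}
    for plugin in pre_plugins:
        plugin.run(build)
    build["image_built"] = True  # the actual `docker build` would run here
    for plugin in post_plugins:
        plugin.run(build)
    return build

result = run_build([SetRepositories()], [PushToRegistry()])
print(result["pushed"], len(result["repos"]))
```

The point of this shape is that Katello-specific behaviour (repository setup, pushing metadata back) stays in plugins, so the build container itself remains generic and shareable with other projects.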
Feel free to raise your questions/comments/suggestions directly in this mail thread.
– Ivan
----- Original Message -----
> Hi,
> as part of my bachelor's thesis I have been looking into integrating
> Katello and Docker, and a sneak peek of my efforts was demonstrated
> in a deepdive roughly two months ago[1]. Now I would like to invite
> you to a short demo showing its current status, with a Q&A session and
> further discussion if needed.
>
> The event will be on Wednesday 20.5.2015 at 15:00GMT/UTC via
> https://plus.google.com/events/cl2hfek9mthqorvtbag0m92mgak
>
> Changes since last deepdive:
> * automatic builds of images triggered by content view version publish
> * bulk builds of images
> * updates of images based on applicable errata or package version changes
> * reworked UI
>
> The deepdive will cover these features along with the necessary details
> about their internals.
>
>
>
> [1] - https://www.youtube.com/watch?v=e-LBOc7laAA
>
> -- Adam
>
> --
> You received this message because you are subscribed to the Google Groups
> "foreman-dev" group.
>