Deep dive: Docker Image Build Service with Foreman & Katello

Hi,

One of Katello's main goals is to provide life-cycle management
around the content in the infrastructure, a.k.a. are my hosts
up-to-date? No? Make it so!

As part of Adam Ruzicka's bachelor's thesis, we are looking into
bringing Katello's patch management to Docker containers.

We have now got to a point where we have something that can be shown [1],
and as we are looking for feedback and input on further direction,
we would like to invite you to a short demo of the current
status, with Q&A and further discussion if needed.

The event will be tomorrow at 14:30 GMT via
https://plus.google.com/u/0/events/chpovibe252hpu0d68sj9po6dj0

Some background info:

---------------------

The core of the functionality is built around content views: one can
select the content view and environment (or an activation key), a git
repository containing the Dockerfile to build, and the base image to be
used.

When the build is triggered, the container is set up to consume content from Katello using subscription-manager.
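
For illustration, a minimal sketch of the registration step the build
container performs before any packages are installed; the org name and
activation key below are placeholders, not values from the demo:

    import subprocess

    def register_to_katello(org, activation_key):
        # subscription-manager wires the container's yum transactions
        # to the Katello repositories selected by the activation key
        subprocess.check_call([
            "subscription-manager", "register",
            "--org", org,
            "--activationkey", activation_key,
        ])

    register_to_katello("Default_Organization", "docker-build-key")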

Also, the original FROM image is (optionally) replaced by the one
specified in the build configuration: this gives us the ability to have
full control over the image. One could take a base image (let's say a
CentOS), move it through the dev->test->production lifecycle and
base the rest of the images on the production version of the base
image. We then also know, when the base image is updated, which other
images need rebuilding as well.
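
A minimal sketch of the FROM replacement, assuming the build
configuration carries the registry path of the managed base image (the
registry URL below is a placeholder):

    import re

    def replace_base_image(dockerfile_text, base_image):
        # Swap whatever the Dockerfile declares in FROM for the image
        # selected in the build configuration, so the whole image chain
        # stays rooted in content we manage.
        return re.sub(r"(?im)^FROM\s+\S+", "FROM " + base_image,
                      dockerfile_text, count=1)

    print(replace_base_image("FROM centos:centos7\nRUN yum -y install httpd\n",
                             "katello.example.com:5000/prod-centos:7"))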

Once the image is produced, we can push the metadata about the
installed packages back to Katello and let Pulp compute the
applicable updates later, as we already do for the traditional
hosts.
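
As a sketch of what that push could look like against Pulp 2's consumer
API (the endpoint and payload shape are our reading of the Pulp 2 REST
API; the consumer id and credentials are placeholders):

    import json
    import requests  # third-party HTTP library

    def send_package_profile(pulp_url, consumer_id, rpm_profile, auth):
        # Pulp keeps one profile per content type per consumer and
        # computes applicability server-side, as it does for hosts.
        resp = requests.post(
            "%s/pulp/api/v2/consumers/%s/profiles/" % (pulp_url, consumer_id),
            data=json.dumps({"content_type": "rpm", "profile": rpm_profile}),
            headers={"Content-Type": "application/json"},
            auth=auth, verify=False)
        resp.raise_for_status()

    send_package_profile("https://pulp.example.com", "image-abc123",
                         [{"name": "bash", "version": "4.2.46",
                           "release": "12.el7", "epoch": 0,
                           "arch": "x86_64", "vendor": "CentOS"}],
                         ("admin", "admin"))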

For the build service itself, we've initially taken the approach of
building images inside a container. The core of it is the Dock
project [2], which provides a build container with a pluggable
architecture (setting up the content view repositories, or pushing the
images back to Katello, are implemented as plugins).
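
To give an idea of the plugin side, a hypothetical skeleton of a
post-build plugin; the base class, the key attribute and the run()
entry point are modeled on Dock's plugin framework, but the exact names
and signatures may differ from the real API:

    from dock.plugin import PostBuildPlugin  # assumed import path

    class ReportPackagesToKatello(PostBuildPlugin):
        # dock looks plugins up by their key in the build configuration
        key = "report_packages_to_katello"

        def __init__(self, tasker, workflow, katello_url):
            super(ReportPackagesToKatello, self).__init__(tasker, workflow)
            self.katello_url = katello_url

        def run(self):
            # Collect the installed-package list from the built image
            # and hand it over to Katello (left out of this sketch).
            pass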

The reasons for this are:

  • by its nature, we can expect that a compute resource providing
    a Docker runtime is already available
  • other projects (such as OpenShift) take the same approach, so that
    we can share common code (as we already did with [2])
  • with further Kubernetes integration, we should get the scale-out
    functionality for free (fire-and-forget tasks seem like a perfect
    match for a technology such as Docker)
  • the ability to easily test the builds on local infra with minimal
    dependencies

Current status
--------------

We went through the initial stage, where we are now able to get the
data about the build from the user, trigger the Docker container to
build the image based on this data, track the progress of the build
and collect the metadata (such as the list of installed packages)
afterwards. This will be shown as part of the deep-dive.

The next steps are:

  • mapping the Pulp consumer to the images and sending the image
    package profiles to Pulp
  • the UI/API around update applicability for the images, next to
    the standard content hosts
  • bulk updates of the images, a.k.a. Shellshock!!!
  • the ability to rebuild the images without a Dockerfile provided
    (just yum update or something like that)

We would like to use the deep-dive to help us move the project in the
right direction.

[1] - https://github.com/adamruzicka/dockerro
[2] - https://github.com/DBuildService/

– Ivan

Here is the recording of the deep-dive:

https://www.youtube.com/watch?v=e-LBOc7laAA

Let's use this thread for the follow-up discussion and feedback.

One of the questions we have right now is how to incorporate
the feature into the existing Katello model, so that one doesn't
have to use the "New image" form every time, but instead
predefines the behaviour at the content-view or repository level.

Also, what would be the best way to do the automatic builds, when it
makes sense to do them, etc.

– Ivan


My feedback:

  • dock is able to push images to Pulp; here's the plugin [1]. Either
    provide the url, username and password to the plugin in the build
    json, or set them as env vars [2] (see the sketch after the links
    below). If you prefer the plugin reading the Pulp config, or
    connecting to Pulp in some other way, just open up an issue and we
    can figure something out

  • it looks like you are using some task queue to run the build containers;
    if not, how do you orchestrate the build containers?

  • are you planning to reuse built images as base images?

  • elaborating on that: how about chain rebuilds? (I have image A which is
    the base image of image B; I rebuild A and would like the service to
    transitively rebuild B as well)

[1]
https://github.com/DBuildService/dock/blob/master/dock/plugins/post_push_to_pulp.py
[2]
https://github.com/DBuildService/dock/blob/master/dock/plugins/post_push_to_pulp.py#L284
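
For illustration, roughly how the plugin entry in the build json could
be shaped, written here as a Python dict; the argument names are
assumptions, the plugin source [1] has the authoritative list:

    # Hypothetical post-build plugin entry in dock's build json;
    # see [1] for the real argument names.
    build_json = {
        "postbuild_plugins": [{
            "name": "push_to_pulp",
            "args": {
                "url": "https://pulp.example.com",
                "username": "admin",
                "password": "secret",
            },
        }],
    }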

Tomas


> From: "Ivan Necas" <inecas@redhat.com>
> To: foreman-dev@googlegroups.com
> Sent: Wednesday, March 11, 2015 11:12:10 AM
> Subject: Re: [foreman-dev] Deep dive: Docker Image Build Service with Foreman & Katello
>
> Here is the recording of the deep-dive
>
> https://www.youtube.com/watch?v=e-LBOc7laAA
>
> let's use this thread for the followup discussion and feedback.
>
> One of the questions we have right now is how to incorporate
> the feature into existing Katello model, so that one doesn't
> have to use the "New image" form every time, but instead
> predefining the behaviour on the content-view or repository level?
>
> Also, what would be the best way to do the automatic builds, when it
> makes sense to do etc.
>
> – Ivan
Nice demo, and a nice ability to manipulate images. Given this, I wonder if it
would make sense to combine the "publishing a content view" and "building an
image" concepts.

For example, we could have a content view with a Dockerfile and yum repos. When
one publishes the content view, a new repository is created in the published
version holding the build of a new image. If the packages get updated in the
content view and a new version is desired, we can always republish the content
view. This gives a nice way to track history, and I think it fits better with
the Katello model.

Partha


> My feedback:

> * dock is able to push images to pulp, here's the plugin [1]. Either
> provide url, username and password to the plugin in build json, or set it
> as env vars [2]: if you prefer the plugin reading pulp config or connecting
> to pulp in some other way, just open up an issue and we can figure
> something out

We know about the plugin, but we aren't quite fond of giving access
credentials to external services. The ideal solution would be to
provide dock some sort of certificate which would allow it to
authenticate against Katello, and have Katello proxy the payload
to Pulp.
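
A sketch of what that could look like from the build container's side,
assuming Katello exposed such a proxying endpoint (the URL path below
is hypothetical, not an existing Katello API; the certificate paths are
the standard subscription-manager consumer certificate locations):

    import json
    import requests  # third-party HTTP library

    requests.put(
        # hypothetical Katello endpoint that forwards the profile to Pulp
        "https://katello.example.com/katello/api/containers/abc123/packages",
        data=json.dumps([{"name": "bash", "version": "4.2.46"}]),
        headers={"Content-Type": "application/json"},
        # authenticate with the consumer certificate issued at registration
        cert=("/etc/pki/consumer/cert.pem", "/etc/pki/consumer/key.pem"),
        verify="/etc/rhsm/ca/katello-server-ca.pem")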

> * it looks like you are using some task queue to run the build containers,
> if not, how you orchestrate build containers?

We use Dynflow [1] to orchestrate the builds.

> * are you planning to reuse built images as base images?

Yes, we intend to do that.

> * elaborating: how about chain rebuilds? (I have image A which is base
> image of image B, I rebuild A and would like the service to transitively
> also rebuild B)

Yes, we intend to provide this feature, no matter whether the base
image was built using this build service or imported from external
sources (e.g. CDN).
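
To illustrate the transitive part, a minimal sketch of computing the
rebuild order from a child -> base mapping (the image names are made
up):

    from collections import deque

    def rebuild_order(base_of, changed):
        # Invert the child -> base mapping and walk it breadth-first,
        # so every image is rebuilt after the image it is based on.
        children = {}
        for child, base in base_of.items():
            children.setdefault(base, []).append(child)
        order, queue = [], deque([changed])
        while queue:
            image = queue.popleft()
            for child in children.get(image, []):
                order.append(child)
                queue.append(child)
        return order

    # A is B's base and B is C's base: rebuilding A triggers B, then C.
    print(rebuild_order({"B": "A", "C": "B"}, "A"))  # ['B', 'C']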

> [1]
> https://github.com/DBuildService/dock/blob/master/dock/plugins/post_push_to_pulp.py
> [2]
> https://github.com/DBuildService/dock/blob/master/dock/plugins/post_push_to_pulp.py#L284
>
>
> Tomas
>

[1] https://github.com/Dynflow/dynflow

Adam

> Nice demo. Nice ability to manipulate images. Given this I wonder if it would
> make sense to combine "publishing a content view" and "building an image"
> concepts.
> [...]
> Think that fits better with the Katello model.

Yes, that's the way to go. When thinking about this, we hit one question we're
not sure about the answer to:

Should the content view map to the Dockerfile 1:1 or 1:N? When thinking about
this, I always imagine a content view representing, let's say, a WordPress.
On the other hand, if we wanted to do this for something more complicated,
such as Katello, we would probably have a Katello content view, but the
app itself runs multiple images, so there would be multiple Dockerfiles as well.

What do you think?

– Ivan
