[katello] Repo/Content View/Lifecycle Structuring

Just starting out with Katello (coming from an old mrepo setup) – really
like what I see, especially the slice-and-dice ability with content views
and lifecycle environments. At this point I'm only concerned with getting
my repositories up and running; provisioning through Foreman/Puppet is down
the road. Has anyone assembled a guide/howto/best practices for structuring
repos, content views and lifecycles? The flexibility in layering content
views feels like it could quickly lead to a mess as it scales.

I'm in a primarily vendor-app driven environment, where an application may
be tied to a specific RHEL major/minor version. Most production systems
have at least an associated test system, sometimes a dedicated development
system too. Applications may have other constraints so it seems
appropriate to keep everything separate.

In selecting the repositories to sync, the Katello guide suggests not
selecting individual point releases but instead syncing the {5,6,7}Server
repos, then using filtering via content views to weed out unwanted
packages. I like this as it will save a ton of disk space. How do I craft a
filter that gets me a specific point release, say RHEL 5.8? My guess is I
need to use errata, but I must confess I'm a little shaky in my
understanding of it. Can I actually lock a content view to a specific point
release? Maybe this boils down to my understanding of how releases are
managed. At this point I'm just syncing the {5,6,7}Server repos.

On content views, I've set up ones to mirror the individual repos, i.e.
rhel-6-x86_64, epel-rhel-6-x86_64, etc. I then set up composite content
views based on team/application/function/etc. that bundle the needed repos
by release and arch, i.e. appdev-composite-rhel-6-x86_64,
appname-composite-rhel-5-x86_64. I decided to use composite views as it
seems to be the only way to tie multiple repositories to an activation key.
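
For what it's worth, that tie looks roughly like the sketch below in
hammer; the view and key names are invented and the exact flags can differ
between Katello versions:

    ORG="ACME"

    # Build the composite from already-published component view versions
    # (look the version IDs up with `hammer content-view version list`).
    hammer content-view create --organization "$ORG" --composite \
      --name "appdev-composite-rhel-6-x86_64"
    hammer content-view update --organization "$ORG" \
      --name "appdev-composite-rhel-6-x86_64" --component-ids 12,15

    # An activation key takes exactly one content view, hence the composite.
    hammer activation-key create --organization "$ORG" \
      --name "ak-appdev-rhel-6-x86_64" \
      --lifecycle-environment "Library" \
      --content-view "appdev-composite-rhel-6-x86_64"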

Regarding lifecycles, I have a generic Dev > Test > Prod and application
specific ones that follow similar sequences. Maybe this only needs to be a
single Dev > Test > Prod lifecycle?

I then publish the individual repos to the Library env. Then, as needed, I
bump the versions used within the composite, followed by a publish and
promotion through the appropriate lifecycle.
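
In hammer terms that bump-publish-promote step is roughly the following
(names invented, flags may vary by version):

    ORG="ACME"
    CCV="appdev-composite-rhel-6-x86_64"

    # Publish a new version of the composite into Library...
    hammer content-view publish --organization "$ORG" --name "$CCV"

    # ...then walk the new version down the lifecycle path.
    hammer content-view version promote --organization "$ORG" \
      --content-view "$CCV" \
      --from-lifecycle-environment "Library" \
      --to-lifecycle-environment "Dev"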

Suggestions? Am I heading down the "right" path?

Thanks,
-Chris

Hi,

I was hoping there would be responses to the interesting questions raised
here. Unfortunately I am not qualified to respond with answers myself, but
I thought I would throw in a couple of observations and additional
questions inline below (hope that's okay).

>
> Just starting out with Katello (coming from an old mrepo setup) – really
> like what I see especially the slice-n-dice ability with content views and
> lifecycle environments. At this point I'm only concerned with getting my
> repositories up and running; provisioning through Foreman/Puppet is down
> the road. Has anyone assembled a guide/howto/best practices to structuring
> repos, content views and lifecycles? The flexibility in layering content
> views feels like this could quickly lead to a mess as it scales.
>

I'm worried by this myself.

>
> I'm in a primarily vendor-app driven environment, where an application may
> be tied to a specific RHEL major/minor version. Most production systems
> have at least an associated test system, sometimes a dedicated development
> system too. Applications may have other constraints so it seems
> appropriate to keep everything separate.
>
> In selecting the repositories to sync, the Katello guide suggests not to
> select individual point release but instead the {5,6,7}Server repos to
> sync, then use filtering via content views to weed out unwanted packages.
> I like this as it will save a ton of disk space. How do I craft a filter
> that gets me a specific point release, say RHEL 5.8? My guess is I need to
> use errata, but must confess I'm a little shaky in my understanding of
> it. Can I actually lock a content view to a specific point release? Maybe
> this boils down to my understanding of how releases are managed. At this
> point I'm just syncing the {5,6,7}Server repos.
>
>
Where did you see that in the Katello documentation? I'd not seen that.

What I had seen was section 3.3 of
this https://access.redhat.com/documentation/en-US/Red_Hat_Satellite/6.0/html/Provisioning_Guide/sect-Red_Hat_Satellite-Provisioning_Guide-Importing_Subscriptions_and_Synchronizing_Content-Enabling_Red_Hat_Repositories.html
that I had taken to mean it is possible to provision a 6.5 server from the
6.5 kickstart and RPM repos. I've just run a test of that and it seems to
work (the only caveat being that for some unknown reason I had to suppress
the 'yum update' in the provisioning template to prevent an immediate
upgrade to a higher minor version). Is there some reason why it might be
better to use 6Server repos with filters than this approach?

> On content views, I've set up ones to mirror the individual repos: i.e.,
> rhel-6-x86_64, epel-rhel-6-x86_64, etc. I then setup composite content
> views based on team/application/function/etc that bundles needed repos by
> release and arch: i.e. appdev-composite-rhel-6-x86_64,
> appname-composite-rhel-5-x86_64. I decided to use composite views as it
> seems to be the only way to tie multiple repositories to an activation key.
>
> Regarding lifecycles, I have a generic Dev > Test > Prod and application
> specific ones that follow similar sequences. Maybe this only needs to be a
> single Dev > Test > Prod lifecycle?
>

It is the same generic lifecycle here for me. What I'm finding is that it
takes far too long to publish and promote a content view through to
Production. I can envisage a scenario where a fix to production is required
immediately and directly, without the time spent in dev and test (this
thread would not be a good place to argue the rights or wrongs of that; it
is simply a fact that I must live with in this environment). With that in
mind I have been trying to find a way to reduce the size of content views
to make them more manageable.

Regards,
Rotty


Hello,

This is how I work: I have created CVs for my RHEL6 common rpms, RHEL6
rpms, and puppet modules, and then created a CCV from these called "RHEL 6
Stable". I test it in the DEV env, then release it to the stage and
production envs.

Every six months I create a snapshot of "RHEL 6 Stable" called "RHEL 6
Stable <DATE>", then update all my child CVs and publish repos from
dev -> stage -> production.

By doing this I keep the old repository around, so if I ever want to build
a host with the old repository to find out what was there before, I can do
it.

Now here is the issue I am facing: I have 50 application group CVs for yum
and puppet repositories, and I have to add my "RHEL 6 Stable" to all 50
application teams' CVs, since only a single CV is visible to a host. This
is a bad way to work: when I update my "RHEL 6 Stable" I have to go to all
50 and publish/promote. (They are looking at a "use latest" feature so that
I will not have to go to all 50 and promote/publish; if I publish the child
CV it will become available to the parent CCV automatically.) Even with
this "use latest" feature there is still the issue that I have to add my
RHEL repos to the application group CVs, which I don't want, so I have
requested a feature where users can assign multiple CVs to hosts.

Many issues come up when I design hostgroups. I have 3 lifecycle envs, 3
different regions, 20 datacenters, 50 application groups, and roughly 100
subnets; for each lifecycle env in each region I have a capsule, so 3 x 3 =
9 capsules in total. I am not sure how to lay out my hostgroups so that
when I create a host I only have to enter minimal information and the rest
pops up from the hostgroups. (The difficulty here is with CVs: as I said,
only a single CV can be assigned to a host, and the same goes for puppet
classes; when I add an application team's classes to my hosts I lose the
INFRA puppet classes, as they come from different CVs.)

Regards,
DJ


All;

This is a fairly old thread but I was hoping some people might still be
trawling it and have some general insight. I am standing up Satellite 6 for
a new datacenter. The only systems I am aware of are the core
infrastructure systems that will be ever present. Eventually, we will host
several applications but their use-cases can't be predicted at the moment.
I am not a developer so am curious on some general best practices for patch
management:

a) It would seem to me that at the start I only have static infrastructure
systems. I work in an environment where security patches are mandatory and
need to be applied in short order. Therefore, I would think that the default
'Library' environment would suffice to start. I don't have any development
or QA infrastructure systems on which to test patches first, so it seems to
make sense to allow those systems to have access to the latest patches at
all times.
b) I see how one might want to provide stability through an application
lifecycle by creating a static patch baseline (content view) in development
and have this same baseline follow the application into production.
However, different applications will have different patch/package
requirements. Does this generally mean that we might very well create
dedicated lifecycle paths for each unique application in our
infrastructure? (e.g. App1(dev->test->prod), App2(dev->test->prod))
c) Generally, is there an argument where one would always want to deploy
systems only into a particular lifecycle (other than the Library)?

Thanks!
-LJK


Thanks for that input, Eric. I was really thinking along the same lines as
what you stated. I found this document
(https://access.redhat.com/sites/default/files/pages/attachments/2015-10_Steps_to_Build_a_Standard_Operating_Environment.pdf)
wherein the authors seem to indicate that the purpose/power of content
views is really for filtering what packages are available to the hosts
using that view. I guess without filtering the content view really just
becomes a "snapshot" in time of the contained repositories. They seem to
discourage using content views as snapshots because there is overhead in
getting content views updated with the latest versions of packages if/when
one needs newer patches for security, etc. I guess my takeaway is to shoot
for a minimalist approach until something necessitates a more complex
approach.


Absolutely! The more questions and thoughts on this the better. Here's
some more…

>
> Where did you see that in the Katello documentation? I'd not seen that.
>
> What I had seen was section 3.3 of this
> https://access.redhat.com/documentation/en-US/Red_Hat_Satellite/6.0/html/Provisioning_Guide/sect-Red_Hat_Satellite-Provisioning_Guide-Importing_Subscriptions_and_Synchronizing_Content-Enabling_Red_Hat_Repositories.html
> that I had taken to mean it is possible to provision a 6.5 server from the
> 6.5 kickstart and RPM repos. I've just run a test of that and it seems to
> work (the only caveat being that for some unknown reason I had to suppress
> the 'yum update' in the provisioning template to prevent an immediate
> upgrade to a higher minor version). Is there some reason why it might be
> better to use 6Server repos with filters than this approach?
>

http://www.katello.org/docs//user_guide/red_hat_content/content.html under
notes in "Enable Red Hat Repositories":

> When enabling a RHEL repository, Red Hat recommends selecting the Server
> repo (e.g. 6Server, 5Server) versus a specific release (e.g. 6.2). When a
> specific release is necessary, the preferred way is to create a Content
> View with filters that narrow the content to the desired version (e.g. 6.2)

This makes sense, as you gain space savings by not mirroring the
individually accessible point-release repositories and instead cherry-pick
the packages that comprise each minor release from the big pool. However,
no solid examples are given on how to actually define the filters. My
google-fu has not turned up anything either.

> It is the same generic lifecycle here for me. What I'm finding is that it
> takes far too long to publish and promote a content view through to
> Production. I can envisage a scenario where a fix to production is
> required immediately and directly without the time spent in dev and test
> (this thread would not be a good place to argue the rights or wrongs of
> that - it is simply a fact that I must live with in this environment).
> With that in mind I have been trying to find a way to reduce the size of
> content views to make them more manageable.
>

I also see the long times to publish and promote. The recent bash
vulnerabilities and subsequent package updates are a good example of the
scenario you describe. It does not appear that one can push a single
package when using composite views.

My thinking was to automate using the API so I have more control over
the process:

  1. Updating the repositories nightly
  2. Automatically publishing each individual content view to the Library
  3. Then likely automatically publishing each composite view to the
    development lifecycle

This is akin to continuous integration and a build system like Jenkins.
It makes the Development lifecycle continuous, with promotion to Testing
and beyond happening only through a QA process.
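
As a rough illustration, the same flow could be driven against the Katello
REST API with something like the sketch below; the hostname, credentials
and IDs are all made up, and the exact paths are worth double-checking
against /apidoc on your own server:

    #!/bin/bash
    # Nightly sketch: sync a repo, then publish the content views that
    # consume it. All IDs and credentials are placeholders.
    KATELLO="https://katello.example.com"
    AUTH="admin:changeme"

    # 1. Kick off a repository sync (repeat per repository ID). The call
    #    returns a task; a real script would poll it until it finishes.
    curl -sk -u "$AUTH" -H "Content-Type: application/json" \
      -X POST "$KATELLO/katello/api/repositories/7/sync"

    # 2. Publish a new version of an individual content view into Library.
    curl -sk -u "$AUTH" -H "Content-Type: application/json" \
      -X POST "$KATELLO/katello/api/content_views/4/publish"

    # 3. Promote the freshly published composite version into Dev.
    curl -sk -u "$AUTH" -H "Content-Type: application/json" \
      -X POST "$KATELLO/katello/api/content_view_versions/9/promote" \
      -d '{"environment_id": 2}'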

Using the Bash scenario and my structure, there is no way to promote the
updated bash packages separately. I would have to promote all of the
packages in the Development lifecycle. So it seems the use of composite
views based on the Library lifecycle of the individual views is not a good
fit. I should probably also promote the individual views through the
lifecycle, then cross-publish/promote using the composite. That seems
redundant and a lot more time spent publishing/promoting. So strike the use
of composite views to bundle multiple repositories? How do I handle linking
multiple content views with activation keys then?

··· On Fri, Mar 25, 2016 at 4:06 PM, Lesley Kimmel wrote:

> All;
>
> This is a fairly old thread but I was hoping some people might still be
> trawling it and have some general insight. I am standing up Satellite 6 for
> a new datacenter. The only systems I am aware of are the core
> infrastructure systems that will be ever present. Eventually, we will host
> several applications but their use-cases can't be predicted at the moment.
> I am not a developer so am curious on some general best practices for patch
> management:
>
> a) It would seem to me that at the start I only have static infrastructure
> systems. I work in an environment where security patches are mandatory and
> need to be applied in short order. Therefore, I would think that the default
> 'Library' environment would suffice to start. I don't have any development
> or QA infrastructure systems on which to test patches first, so it seems to
> make sense to allow those systems to have access to the latest patches at
> all times.
>

This sounds reasonable, and is both the quickest and easiest way to get
fixes out to systems to begin with.

> b) I see how one might want to provide stability through an application
> lifecycle by creating a static patch baseline (content view) in development
> and have this same baseline follow the application into production.
> However, different applications will have different patch/package
> requirements. Does this generally mean that we might very well create
> dedicated lifecycle paths for each unique application in our
> infrastructure? (e.g. App1(dev->test->prod), App2(dev->test->prod))
>

Since you can't name a path currently (to easily know this is App1's path,
this is App2's), I would tend towards a single path if the environments
have the same names. An environment is lightweight: you give it a name, you
can create a chain of them, and then put whatever content views you want
into an environment with any set of systems. So you will always know your
system is attached to dev/App1 or dev/App2, and there shouldn't be any
clashes.
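
(Chaining environments really is just a few calls, e.g. with hammer; the
organization and environment names here are placeholders:)

    hammer lifecycle-environment create --organization "ACME" \
      --name "Dev" --prior "Library"
    hammer lifecycle-environment create --organization "ACME" \
      --name "Test" --prior "Dev"
    hammer lifecycle-environment create --organization "ACME" \
      --name "Prod" --prior "Test"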

> c) Generally, is there an argument where one would always want to deploy
> systems only into a particular lifecycle (other than the Library)?
>

For me, there are two perspectives to consider – content lifecycle and
system lifecycle. If you are doing custom content that revisions are being
generated for, then you can think about the lifecycle environment as the
release pipeline for that content. For example, if you have a custom
application that you build RPMs for, you can use the environments to
represent different stages of testing that it goes through before landing
in "production". You could, for example, use Jenkins to publish content
views for your applications whenever a particular job or test is complete
or gets approval from a group to be pushed along the pipeline.

When thinking about system lifecycle, consider whether you are just trying
to get patches out to your systems and whether there should be any test
phases before pushing out to production. For example, you might push a
package or erratum out to a set of test systems to ensure the update does
not break anything (either by running tests on the systems or by letting
them run actively for a time period), and then push the update out to your
production systems.

And since you can skip environments, or do incremental updates, it really
comes down to what your process is and where you want it to go eventually.
You can start easy and build out environments and paths as you go along, or
take your existing process and model it with the tools available.

As always, if you have feedback or ideas on how we can make things
better/easier please let us know.

Eric



Eric D. Helms
Red Hat Engineering
Ph.D. Student - North Carolina State University

The SOE document is good and sits on my desk. That being said, it's pretty
generic. We are planning Satellite for a large environment (20,000+
servers) over multiple regions with (currently) 20 capsules (expected to
grow!). We use the RBAC controls extensively to permission users. We have
some regions which need to be protected from others, etc. Authentication is
done via AD and we are looking to standardize on the AD group names. We
filed a BZ to allow locations to be set at the group level so users coming
in get assigned to the correct locations.

In terms of structuring data, what seems to be working for us is to heavily
use the nesting features throughout locations and hostgroups. It almost
goes without saying that we wrap a naming scheme around all of this too.

For locations, we structure these as:
<region>
>-- <country>
> >-- <city>
> > >-- <DMZ>

This enables us to tie the various resources to a location and restrict
users to specific locations. We also specify parameters for use in the
provisioning process (and for puppet consumption) at the various levels
(mostly region, some country, some city, e.g. countries with multiple
timezones).

For products and repositories we structure these as:

R_<product>_Yum, tied to a P_<product> product. This product can then be
permissioned to the product owner. We include the operating system as a
product by encapsulating the RHEL repos.

For content views we name CV_<os> and include the various products which
make up the OS. To contain applications, we create composite content views
which we name CCV_<os>_<product> and permission these out to the product
owner.

When we get to hostgroups, again we use the nesting functionality so:

<os>
>-- <region>
> >-- <lifecycle env>
> > >-- <product>

We enrich each level as required. Again, we permission out the product
hostgroup to the product team. Using the tree above, the CCV_<os>_<product>
is assigned to the <os> -> <region> -> <lifecycle env> -> <product>
hostgroup.

Activation keys follow similar principles, so AK_<os>_<product>_<lifecycle
env> ties to the appropriate content view and hostgroup.

We use the company standard lifecycles too, and again permission certain
teams to promote content to certain lifecycles and access certain
hostgroups.

We try and standardize on roles (these are more a work in progress than the
rest!) so a ROLE_<product> gives edit access to P_<product> and
CCV_<os>_<product>, view access to CV_<os>, and promote access to limited
lifecycles.

Yes, it means we end up with a LOT of objects, but we are looking to
programmatically manage these.

I hope this helps and any suggestions are of course welcome!


I'm in almost the same situation and rather than rehash everything you've
already mentioned, I'd like to bump this again to see what you've found in
the 2 months since your original posts and maybe gather some additional
attention from some others who are in the same situation or have been there
and can share their experiences.

I've used TheForeman for about 2 years for provisioning and have just
started experimenting with Katello. We have a good-sized existing
environment and I'd like to get it as right as possible before we start
moving hosts over.

Having seen this thread bumped by another user, I'll try to address each
area of interest. If I missed one, please let me know and I will follow up.
Since the original post, some changes have occurred that address some of
the workflows you are attempting to solve. Replies inline.

>
>
>>
>> Absolutely! The more questions and thoughts on this the better. Here's
>> some more…
>>
>>>
>>> Where did you see that in the Katello documentation? I'd not seen that.
>>>
>>> What I had seen was section 3.3 of this
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Satellite/6.0/html/Provisioning_Guide/sect-Red_Hat_Satellite-Provisioning_Guide-Importing_Subscriptions_and_Synchronizing_Content-Enabling_Red_Hat_Repositories.html
>>> that I had taken to mean it is possible to provision a 6.5 server from
>>> the 6.5 kickstart and RPM repos.
>>> I've just run a test of that and it seems to work (the only caveat being
>>> that for some unknown reason I had to suppress the 'yum update' in the
>>> provisioning template to prevent an immediate upgrade to a higher minor
>>> version). Is there some reason why it might be better to use 6server repos
>>> with filters than this approach?
>>>
>>
>>
>> http://www.katello.org/docs//user_guide/red_hat_content/content.html
>> under notes in "Enable Red Hat Repositories":
>>
>>> When enabling a RHEL repository, Red Hat recommends selecting the Server
>>> repo (e.g. 6Server, 5Server) versus a specific release (e.g. 6.2). When a
>>> specific release is necessary, the preferred way is to create a Content
>>> View with filters that narrow the content to the desired version (e.g. 6.2)
>>
>>
>> This makes sense as you gain space savings by not mirroring the
>> individually accessible repositories and cherry pick the packages that
>> comprise the individual minor releases from the big pool. However, no
>> solid examples are given on how to actually define the filters. My
>> google-fu has not resulted in anything either.
>>
>
> Interesting. My google-fu has resulted in this -
> https://www.redhat.com/archives/katello-devel/2012-October/msg00301.html,
> which I have only scan read so far, but does seem to work. I created a
> 6server only repo, with a 'date errata' filter, set it to the day before
> 6.5 was released and built a server with it. Ta da! A 6.4 server. I'm
> running out of time for experimentation today, but I assume if I create a
> similar repo with a filter for 12th Oct 2014 I will be able to move the
> server to that cv/repo, do a yum update. I'll let you know how I get on.
>

Exactly. Since 6Server always contains the latest and greatest, using
errata date filters to re-create minor releases is the recommended
strategy, as this also gives you more control if you wish to omit anything
or recreate different points in time based on the errata.
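
For anyone landing here later, such a filter can be sketched with hammer
roughly as below; the view name is invented, the date is the RHEL 6.5 GA
date, and the exact flag names may differ between Katello versions:

    ORG="ACME"
    CV="cv-rhel-6_4-x86_64"

    # Exclude packages from errata issued on or after the 6.5 GA date,
    # leaving the view looking roughly like a 6.4 tree.
    hammer content-view filter create --organization "$ORG" \
      --content-view "$CV" --name "post-6.4-errata" \
      --type erratum --inclusion false
    hammer content-view filter rule create --organization "$ORG" \
      --content-view "$CV" --content-view-filter "post-6.4-errata" \
      --date-type issued --start-date 2013-11-21 \
      --types enhancement,bugfix,security

    hammer content-view publish --organization "$ORG" --name "$CV"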

>
>
>
>>
>>
>> It is the same generic lifecycle here for me. What I'm finding is that
>>> it takes far too long to publish and promote a content view through to
>>> Production. I can envisage a scenario where a fix to production is
>>> required immediately and directly without the time spent in dev and test
>>> (this thread would not be a good place to argue the rights or wrongs of
>>> that - it is a simply a fact that I must live with in this environment).
>>> With that in mind I have been trying to find a way to reduce the size of
>>> content views to make them more manageable.
>>>
>>
>> I also see the long times to publish and promote. The recent bash
>> vulnerabilities and subsequent package updates are a good example of the
>> scenario you describe. It does not appear that one can push a single
>> package when using composite views.
>>
>> My thinking was to automate using the API so I have more control over
>> the process:
>> 1. Updating the repositories nightly
>> 2. Automatically publishing to the Library of each individual content
>> view
>> 3. Then likely automatically publishing to the development lifecycle of
>> each composite view
>>
>> This is akin to continuous integration and a build system like Jenkins.
>> This makes the Development lifecycle continuous, with promotion to
>> Testing and beyond happening only through a QA process.
>>
>
> I'm not sure about continuous integration. There have to be some brakes
> (binders?) preventing all my prod boxes getting updated in parallel.
> That's another thing that I don't have a clear view of at the moment.
>

The past 2 months there has been work on a feature that we are calling
"Incremental Updates". This feature allows the adding of errata, packages,
package groups or puppet modules to a content view version, publishing a
minor release of the version and then updating instances of said version
within environments of your choosing. See (
http://www.katello.org/docs/2.2/release_notes/release_notes.html) for more
information and a link to a user guide. Note that for 2.2, errata was our
big focus within the UI with respect to the incremental updates. The API
and CLI support the other types.
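
For the bash example discussed earlier, the CLI side of such an incremental
update might look like the sketch below; the version ID and environment
names are placeholders, and RHSA-2014:1293 is one of the Shellshock errata:

    hammer content-view version incremental-update \
      --content-view-version-id 42 \
      --errata-ids RHSA-2014:1293 \
      --lifecycle-environments Dev,Test,Prod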

··· On Fri, Jan 9, 2015 at 11:47 AM, David Evans wrote:

>> Using the Bash scenario and my structure, there is no way to promote the
>> updated bash packages separately. I would have to promote all of the
>> packages in the Development lifecycle. So it seems the use of composite
>> views based on the Library lifecycle of the individual views is not a
>> good fit. I should probably also promote the individual views through
>> the lifecycle, then cross-publish/promote using the composite. That
>> seems redundant and a lot more time spent publishing/promoting. So
>> strike the use of composite views to bundle multiple repositories? How
>> do I handle linking multiple content views with activation keys then?
>
> Yes, I hadn't realised the problem with AKs - the more content views you
> have the more AKs you need? Surely there is an elegant way around that,
> or moving a server from one content view to another will be problematic?
> Do composite CVs help in any way?



>
> For content views we name CV_<os> and include the various products which
> make up the OS. To contain applications, we create composite content views
> which we name CCV_<os>_<product> and permission these out to the product
> owner.

I meant to add that the CCV_<os>_<product> would include the CV_<os> and
CV_<product> content views. This gives us better control, as we don't allow
the product owner to modify the content views. Special cases (like product
owners who want an external repo etc.) are dealt with on an exception
basis.

··· On Mar 30, 2016, Andrew Schofield wrote:

> Yes, it means we end up with a LOT of objects, but we are looking to
> programmatically manage these.

I assume you are looking into this but don't have a strategy in place yet.
Have you investigated or do you have proposals on how you might do this? I
have been investigating solutions myself and am curious what others are
doing/trying.

Eric


Well, my original 2.0 installation went south after automating pieces of
the product sync and initial content view publishing. I was pretty
convinced it was something I did during installation, so I decided to wipe
the slate and start fresh with 2.1. Much easier installation this time. I
haven't implemented all of the automation yet, but so far so good.

I switched to a single lifecycle (dev > test > prod) and separate content
views based on application and/or team, and dropped the use of composite
views as this just added unneeded complexity. This seems to work really
well ATM. Publishing and pushing new versions through the lifecycle is
still a slow process, especially when a quick patch needs to be pushed. I
haven't tried the CLI methods in 2.1 for pushing single packages through
the LC, so I look forward to the GUI abilities in 2.2.

There doesn't seem to be any way to rename lifecycles after they are
created. I will be looking into this next week, as I discovered that, at a
corporate level, we use stage vs. test, so I would like to keep Katello in
step.

I followed David's link to the dev discussion
(https://www.redhat.com/archives/katello-devel/2012-October/msg00301.html)
and implemented two errata-based filters to restrain a particular content
view to a RHEL release plus bug/security fixes. I can provide details on
this come Monday if you're interested in exactly what I did. On the surface
it appears to work, but I haven't tested very thoroughly. I still haven't
ventured into provisioning with Katello, but it seems installation media
isn't created for the previous versions when using these errata filters.

I had a few quirks with Activation Keys that had both virtual and physical
RHEL subscriptions tied to them. The registration process would bomb out
complaining that no valid product was installed. Removing the other
subscription from the AK would allow the registration to complete. I can't
say I've had the same problem with Katello 2.1, so again, maybe something
with my original 2.0 install.

I'm pulling most of this from memory as I'm not in front of the server ATM
so let me know if I missed something or you'd like me to expand upon a
topic.

-Chris


Eric - apologies for the extended delay on this!

> I assume you are looking into this but don't have a strategy in place
> yet. Have you investigated or do you have proposals on how you might do
> this? I have been investigating solutions myself and am curious what
> others are doing/trying.
>
> Eric

We have more of a strategy now than we did a month ago :-). We have some
(rough) scripting in place which will generate the templates for products
(and publish the CVs to all LC envs etc.). We are now working to convert
all of these to API-based scripts, as hammer is proving to be less than
ideal for this; they've grown organically. We are likewise scripting the
creation of a new OS and the initial layout of locations (we will tie this
into an internal data source). So fundamentally we'd end up with roughly
the following:

  1. Base structure creation scripts
    1. Set up settings, lifecycles, base roles, auth sources, capsule
      mappings etc.
  2. Metadata sync scripting
    1. Synchronize metadata (locations etc.) from external data sources.
  3. OS creation / maintenance scripts
    1. Create / sync OS from RHN (RHEL 7.2 f.e.)
    2. Create CV_<os>, promote to all LCs.
    3. Create activation keys, gpg keys
    4. Create hostgroups structure according to above
  4. Product creation / maintenance scripts
    1. Create products, repositories
    2. Create CV_<product> and CCV_<os>_<product>, promote to all LCs
    3. Create activation keys, gpg keys
    4. Create hostgroups
    5. Create roles

The idea behind the maintenance scripts would be to keep Sat up to date. So
if a location was added we would create new objects etc.
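
To make the shape of those scripts concrete, the product creation category
might start out something like the sketch below in hammer; every name is a
placeholder following the scheme above:

    #!/bin/bash
    ORG="ACME"
    PRODUCT="AppX"

    # Product and repository, per the P_/R_ naming convention.
    hammer product create --organization "$ORG" --name "P_${PRODUCT}"
    hammer repository create --organization "$ORG" --product "P_${PRODUCT}" \
      --name "R_${PRODUCT}_Yum" --content-type yum \
      --url "https://repo.example.com/appx/el7/"

    # Content view wrapping the repo, published so a CCV can consume it.
    hammer content-view create --organization "$ORG" --name "CV_${PRODUCT}"
    hammer content-view add-repository --organization "$ORG" \
      --name "CV_${PRODUCT}" --product "P_${PRODUCT}" \
      --repository "R_${PRODUCT}_Yum"
    hammer content-view publish --organization "$ORG" --name "CV_${PRODUCT}"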

>

If you don't mind sharing, we'd be curious to know:

  1. Where does hammer fail you in building scripts for this work?
  2. When using the API, are you using your own custom API wrapper?
    Existing libraries?
  3. What programming language do you write your scripts in?
  4. What generally makes building a set of entity maintenance scripts
    hard? From the scripting perspective and from the "API makes this
    hard" perspective?
  5. Do you use any tools to automate this or manage it? e.g. Jenkins,
    Ansible, Puppet




Eric D. Helms
Red Hat Engineering
Ph.D. Student - North Carolina State University

So it looks like I'm going down a very similar route to Andrew. We have a
collection of bash wrappers around the hammer commands for building out the
base of everything. I'm just getting started on writing our publishing and
promotion bits, and doing that in bash with hammer is quite meh.

To add to the pool of answers for this:

> If you don't mind sharing, we'd be curious to know:
>
> 1) Where does hammer fail you in building scripts for this work?
>
Some of the data needed for certain tasks is difficult to gather
effectively. For instance…

To update a content view in a composite view and promote the new version,
you need to know (rough sketch below):

  • what version was cut
  • what that version's component id is
  • the component id of all the existing content views in the composite view

I am evaluating the hammer csv plugin to see if that negates some of this,
but I remember being told it doesn't cover several items.
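
As a rough illustration of that lookup dance, here's a sketch against the
Katello v2 REST API with python-requests. The hostname, credentials and
view names are made up, and the exact field names may vary by Katello
version, so treat it as a starting point rather than code from our repo:

    import requests

    BASE = 'https://satellite.example.com/katello/api/v2'
    AUTH = ('admin', 'changeme')

    def get(path, **params):
        r = requests.get(BASE + path, auth=AUTH, params=params, verify=False)
        r.raise_for_status()
        return r.json()

    # what version was cut: the newest version of the component CV
    cv = get('/content_views', search='name="cv-rhel-6-x86_64"')['results'][0]
    new_version = max(cv['versions'], key=lambda v: float(v['version']))

    # the component version ids currently in the composite view
    ccv = get('/content_views', search='name="ccv-app-rhel-6"')['results'][0]
    component_ids = [c['id'] for c in ccv['components']
                     if c['content_view']['id'] != cv['id']]
    component_ids.append(new_version['id'])

    # swap the new version in; publishing/promoting the composite follows
    r = requests.put('%s/content_views/%d' % (BASE, ccv['id']),
                     auth=AUTH, verify=False,
                     json={'component_ids': component_ids})
    r.raise_for_status()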

> 2) When using the API, are you using your own custom API wrapper? existing
> libraries?
>
We've been talking about writing an ansible module for this but it hasn't
been high enough on our priority list. This might mean writing a simple api
wrapper to include in ansible rather than depending on python-foreman, just
to simplify the dependency chain for the katello subset.

> 3) What programming language do you write your scripts in?
>
We prefer python, but have been writing these in bash since we're just
calling hammer.
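
As a toy illustration (not our actual scripts) of what those wrappers
reduce to, here's the publish/promote pair in python shelling out to
hammer; the org, view, version and environment names are made up:

    import subprocess

    def hammer(*args):
        # hammer supports machine-readable output (e.g. --output json)
        cmd = ['hammer', '--output', 'json'] + list(args)
        return subprocess.run(cmd, check=True,
                              capture_output=True, text=True).stdout

    # publish a new version of a content view, then promote it
    hammer('content-view', 'publish',
           '--organization', 'ACME',
           '--name', 'cv-rhel-6-x86_64')
    hammer('content-view', 'version', 'promote',
           '--organization', 'ACME',
           '--content-view', 'cv-rhel-6-x86_64',
           '--version', '2.0',  # the version just published (made up)
           '--to-lifecycle-environment', 'Dev')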

> 4) What generally makes building a set of entity maintenance scripts hard?
> From the scripting perspective and from the "API makes this hard"
> perspective?
>

Time. There are so many pieces to implementing this workflow that it's a lot
of poking and prodding. I found the composite view documentation lacking
for what I described above.

> 5) Do you use any tools to automate this or manage it? e.g. Jenkins,
> Ansible, Puppet
>
I'm planning on running most of the bits from inside Jenkins. Ansible would
only come in if we go down the module path.

-greg


Thanks for the info Greg, some thoughts and info below.

On Mon, May 9, 2016 at 1:33 PM, Greg Swift wrote:

> <snip>
>
>> If you don't mind sharing, we'd be curious to know:
>>
>> 1) Where does hammer fail you in building scripts for this work?
>>
> Some of the data needed for certain tasks is difficult to gather
> effectively. For instance…
>
> Update a content view in a composite view and promote to the next version
> you need to know:
> * what version was cut
> * what that version's component id is
> * the component id of all the existing content views in the composite view
>
> I am working on evaluating using the hammer csv plugin to see if that
> negates some of the things, but i remember being told it doesn't cover
> several items.
>

This has been a major area of feedback, and we will be making it a point of
focus for the 3.1+ releases to improve the content view experience. You'll
likely hear references to the SOE (standard operating environment) V2
effort that will be tackling these usability issues.

>> 2) When using the API, are you using your own custom API wrapper?
>> existing libraries?
>
> We've been talking about writing an ansible module for this but it hasn't
> been high enough on our priority list. This might mean writing a simple
> api wrapper to include in ansible rather than depending on
> python-foreman, just to simplify the dependency chain for the katello
> subset.
>

I have been playing around with an Ansible module and playbook to mirror
and easily recreate our release entities within the application. I have not
put it anywhere public yet, but if you are interested in collaborating, I
can work on putting it up somewhere.

>> 3) What programming language do you write your scripts in?
>
> We prefer python, but have been writing these in bash since we're just
> calling hammer.
>

The above-mentioned Ansible module makes use of a python library called
Nailgun [1], which the Satellite QE team works on and uses heavily. It
provides a python API wrapper around the Foreman and plugin ecosystem as
well as the Satellite ecosystem. Further, there is a side library that a
member of the oVirt team has been working on to create a DSL of sorts for
managing entities [2]. If you are interested in seeing this expanded more
into the community, or would like more information, please let me know and
I'd be happy to collaborate.

[1] https://pypi.python.org/pypi/nailgun/
[2] https://github.com/ifireball/python-satellite-dsl
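
To give a flavor of it, a minimal Nailgun sketch of the publish-and-promote
flow might look like the following. The server URL, credentials and names
are placeholders, and the exact promote parameters can vary by Katello
version, so check the docs before relying on it:

    from nailgun.config import ServerConfig
    from nailgun import entities

    cfg = ServerConfig(url='https://satellite.example.com',
                       auth=('admin', 'changeme'),
                       verify=False)

    org = entities.Organization(cfg).search(
        query={'search': 'name="ACME"'})[0]

    # publish a new version of a content view...
    cv = entities.ContentView(cfg, organization=org).search(
        query={'search': 'name="cv-rhel-6-x86_64"',
               'organization_id': org.id})[0]
    cv.publish()

    # ...then promote the newest version to the Dev environment
    cv = cv.read()
    newest = sorted(cv.version, key=lambda v: v.id)[-1]
    dev = entities.LifecycleEnvironment(cfg, organization=org).search(
        query={'search': 'name="Dev"', 'organization_id': org.id})[0]
    newest.promote(data={'environment_ids': [dev.id]})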

>
>> 4) What generally makes building a set of entity maintenance scripts
>> hard? From the scripting perspective and from the "API makes this hard"
>> perspective?
>>
>
> Time. There are so many pieces to implementing this workflow that it's a
> lot of poking and prodding. I found the composite view documentation
> lacking for what I described above.
>
>
>> 5) Do you use any tools to automate this or manage it? e.g. Jenkins,
>> Ansible, Puppet
>>
> I'm planning on running most of the bits from inside Jenkins. Ansible
> would only come in if we go down the module path.
>

The Satellite QE team also maintains all of their Jenkins jobs upstream [3]
in an effort to be open about the mechanisms, in case it helps inform user
workflows. These include installation jobs, test suite jobs run against
upstream, and release jobs that use a dogfood server to handle release
repositories (the hope is to mirror this for some upstream releasing too).
They might be useful to peruse, as they contain job configurations for
syncing, promoting and publishing content and for pipelining them together.

If there are workflows, tasks, etc. you'd be interested in learning about
or seeing more work put into for community use, we'd love to hear about
them.

[3] https://github.com/SatelliteQE/robottelo-ci





Eric D. Helms
Red Hat Engineering
Ph.D. Student - North Carolina State University

On Tue, May 10, 2016 at 9:42 AM Eric D Helms wrote:

> Thanks for the info Greg, some thoughts and info below.
>
> <snip>
>
> I have been playing around with an Ansible module and playbook to mirror
> and easily recreate our release entities within the application. I have not
> put it anywhere public yet, but if you are interested in collaborating, I
> can work on putting it up somewhere.
>

I'm always happier sending a PR than starting my own project. (so, yes
please)

> <snip>
>

Those are all very helpful references. Thank you.

-greg
