Unit test Jenkins job with all plugins enabled

Hello,

I would like to open a discussion about having a unit test job for
foreman with all major plugins enabled (katello, discovery, bootdisk,
rex, openscap). Some of the bugs we found in Discovery 9.0 would have
been found just by executing the unit tests with all the plugins enabled.

If this is a problem of compute time, let's simply schedule a daily
job, so at least we know whether nightly is stable with all the
plugins enabled.

--
Later,
Lukas @lzap Zapletal

What would be all the major plugins? Expire Hosts? Digitalocean? Omaha?

The list easily gets quite long. How about making it a plugin's
responsibility to make sure it actually works? If, as the maintainer of
the omaha plugin, I want a nightly job to test it with more plugins
enabled, nobody stops me from setting one up.
To be honest, I'd also disable the katello CI job in core. It should not
be the responsibility of core to make sure specific plugins work, but the
responsibility of the plugins.
I think it would be a better approach to have a nightly job that tests the
"Satellite 6" configuration to check if everything still plays nice
together.

  • Timo

> What would be all the major plugins? Expire Hosts? Digitalocean? Omaha?
>
> The list easily gets quite long. How about making it a plugin's
> responsibility to make sure it actually works? If, as the maintainer of
> the omaha plugin, I want a nightly job to test it with more plugins
> enabled, nobody stops me from setting one up.
> To be honest, I'd also disable the katello CI job in core. It should not
> be the responsibility of core to make sure specific plugins work, but
> the responsibility of the plugins.
>

This sounds good in theory, but I'll refresh the history of why we did
this. A change would be made to Foreman core; that change would go in and
then propagate out. A developer would update their foreman checkout or the
Katello pipeline would run and break. More developers start updating local
checkouts and now everyone is broken. The Katello PR tests begin to fail
and all PRs are frozen. So now a significant portion of developers are all
trying to figure out what point in time to roll back to, who is going to
fix the breakage and how long it is going to take. 1-5 days later, the
issue is fixed and PRs are able to start back up as well as the nightly
pipeline.

How much developer frustration was created? How much time was lost and
wasted? If we want core to have more freedom of movement, then core needs
to evolve to offer plugins a more reliable interface.

Not all plugins are equal in size, developer community or usage, and thus
they should not all be treated the same. It is my opinion that we need to
accept this formally and, as others have said, build our testing and
pipelines around it, or change our model for adding functionality.
Sometimes it is like we are averse to accepting that we have evolved to
have a common stack of "core" plugins that would benefit from being
treated as such to enhance reliability, testing and hardening.

Today, the only gating we have is that blessing that is the Katello unit
tests being run on Foreman PRs. The nightly pipeline has no notion of the
plugins and routinely sends out changes that break Katello and other
plugins, making the plugin repo and the Katello nightly repository a
crapshoot with respect to whether they will install.

I am all for more testing, but what good does testing do if there are no
consequences? A nightly test that runs and either passes or fails is
informative, and it requires someone to be looking and willing to act on
every change. If the test has no consequence on deployment, that
likelihood drops dramatically in my opinion. I think about this a lot,
and how it would be ideal if we came up with a gating and release system
that worked for each plugin: pushing new versions, or nightlies, if and
only if they pass some testing, so that we know everything in nightly
works. Or accepting that we are a stack, and having a system whereby
plugins could be accepted into a tier-1 type status that includes them in
stack testing and gating, with some agreement and understanding from the
developers that they have to address issues in a timely manner.

Maybe we need to get rid of this plugin model altogether and switch to a
services model that allows more independent control and flexibility from a
code-writing, testing and deployment standpoint. Or monolith all the
things, so we are a single community working on a common core.

Another thing we have to consider is that we have people who work on and
take care of the infrastructure, but we do not have an infrastructure or
CI/CD team or SIG that takes responsibility for, and thinks about, these
complex problems. I don't know how we reliably add more without addressing
this.

Half my two cents,
Eric



Eric D. Helms
Red Hat Engineering

I agree this has become a must. Since the 1.15 compose, foreman_templates
relies on the remote_execution template import method, which was changed in
the last version. foreman_openscap seeds remote execution job templates,
again affected by a recent change. The result was that the installer failed
with openscap activated, because the seed failed. Luckily we found it in
our dev setups before the release, but we should know whether plugins work
together. Otherwise, after a new stable core version is released, we're
fixing all the plugins for another month.

Based on the community survey [1], 89% of our users use plugins. We should
treat at least the most popular ones as part of the core. We already have a
way for plugins to disable tests that can't run when the plugin is active.
I know that e.g. Katello breaks ~200 Foreman tests, so we'd need to fix this
first, but I definitely support this and I'm happy to help with it.

[1] Foreman :: 2017 Foreman Survey Analysis



Marek

> This sounds good in theory, but I'll refresh the history of why we did
> this. A change would be made to Foreman core; that change would go in and
> then propagate out. A developer would update their foreman checkout or the
> Katello pipeline would run and break. More developers start updating local
> checkouts and now everyone is broken. The Katello PR tests begin to fail and
> all PRs are frozen. So now a significant portion of developers are all
> trying to figure out what point in time to roll back to, who is going to fix
> the breakage and how long it is going to take. 1-5 days later, the issue is
> fixed and PRs are able to start back up as well as the nightly pipeline.
>
> How much developer frustration was created? How much time was lost and
> wasted? If we want core to have more freedom of movement, then core needs
> to evolve to offer plugins a more reliable interface.

Then let's not block PRs. I propose a nightly job with an email so we
are at least aware when things do not work; we can then wait a bit
longer before the RC phase until things settle down, or even block
another RC until we are all green.

> I am all for more testing, but what good does testing do if there are no
> consequences? A nightly test that runs and either passes or fails is
> informative and requires someone to be looking and willing to act on every
> change. If the test has no consequence on deployment that likelihood drops
> dramatically in my opinion. I think about this a lot and how it would be

Of course, but we can add an item to the release process workflow to
check whether the nightly unit tests are all green across a given set of
plugins. The current way of releasing Foreman is to do two or three RCs
and then go green, with plugin authors trying to catch up later. So if
you are a beginner installing Foreman today, Discovery won't work as
expected.

It does not need to be a Jenkins job; does anybody have a script that
performs a unit test run of foreman+katello+some plugins?
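
I imagine something along these lines - a rough sketch only, where the
repo list and paths are examples and the plugins are wired in through
Foreman's bundler.d directory:

```bash
#!/bin/bash
# Sketch: check out foreman plus a set of plugins and run the unit tests
# with all of them loaded. Repos, branches and the plugin list are
# illustrative, and a working database for the Rails test env is assumed.
set -e

REPOS="Katello/katello theforeman/foreman_discovery
       theforeman/foreman_bootdisk theforeman/foreman_remote_execution
       theforeman/foreman_openscap"

git clone https://github.com/theforeman/foreman.git
for repo in $REPOS; do
  git clone "https://github.com/${repo}.git"
done

cd foreman
# Wire each checkout into Foreman; the Gemfile loads bundler.d/*.rb
for repo in $REPOS; do
  plugin="${repo##*/}"
  echo "gem '${plugin}', :path => '../${plugin}'" > "bundler.d/${plugin}.local.rb"
done

bundle install
bundle exec rake db:create db:migrate
bundle exec rake test   # core suite with all plugins loaded
```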



Later,
Lukas @lzap Zapletal

> Based on the community survey [1], 89% of our users use plugins. We
> should treat at least the most popular ones as part of the core. We
> already have a way for plugins to disable tests that can't run when the
> plugin is active. I know that e.g. Katello breaks ~200 Foreman tests, so
> we'd need to fix this first, but I definitely support this and I'm happy
> to help with it.

+1. I suggest that's the hard investment we would need to make upfront,
e.g. fix all plugins and not just blindly disable tests.

[1] Foreman :: 2017 Foreman Survey Analysis



Hello,

While it is the plugin maintainers' responsibility to keep their plugins up
to date, one of the most common frustrations they face is "core changed and
broke something AGAIN". I wouldn't want to slow down core's already slow
merge process, but at the same time core contributors should try to
minimize breaking plugins as much as possible - for example, by deprecating
functions before removing them completely. We could make these tests
optional - i.e. not block the merge if plugin tests fail, but just take
advantage of the tests to fix whatever is needed in the plugins sooner
rather than later. I know that in certain cases having the katello tests
run on every PR let us find and fix issues we wouldn't have known about
otherwise.
Right now contributors (and reviewers) don't have any visibility into
whether a PR will cause problems for other plugins. If breaking is
unavoidable, at least having plugin tests run on PRs will allow giving a
"heads up" to plugin maintainers that they need to fix something, rather
than realizing after the next release that something broke.
During the past couple of months we had several cases of core PRs leading
to broken plugins, which could have been prevented or at least handled
better had we known that a certain change requires changes in plugins.

As for which plugins to test - I think the best criterion would be the
5 or 10 most downloaded plugins, or the most used plugins according to the
community survey.



Have a nice day,
Tomer Brisker
Red Hat Engineering

> Hello,
>
> While it is the plugin maintainers' responsibility to keep their plugins
> up to date, one of the most common frustrations they face is "core changed
> and broke something AGAIN". I wouldn't want to slow down core's already
> slow merge process, but at the same time core contributors should try to
> minimize breaking plugins as much as possible - for example, by deprecating
> functions before removing them completely. We could make these tests
> optional - i.e. not block the merge if plugin tests fail, but just take
> advantage of the tests to fix whatever is needed in the plugins sooner
> rather than later. I know that in certain cases having the katello tests
> run on every PR let us find and fix issues we wouldn't have known about
> otherwise.
> Right now contributors (and reviewers) don't have any visibility into
> whether a PR will cause problems for other plugins. If breaking is
> unavoidable, at least having plugin tests run on PRs will allow giving a
> "heads up" to plugin maintainers that they need to fix something, rather
> than realizing after the next release that something broke.
> During the past couple of months we had several cases of core PRs leading
> to broken plugins, which could have been prevented or at least handled
> better had we known that a certain change requires changes in plugins.

Agree! If I had to sum it up, developer happiness and innovation at speed
(core and plugins) is key! One option we've mentioned before that goes
directly to Tomer's point is a stable plugin API. We as a community both
develop plugins and tell other developers: come, build a plugin. We have a
vibrant plugin community. We say plugins are central to the community and
project, yet CI/CD tells a different story. Do we have a defined contract
for plugins? That would solve a lot of the issues IMO. If we had a supported
interface (for ALL major use cases of extension, including the UI, since we
do have a Foreman::Plugin today) then core could make changes at will as
long as the interface passed testing. Any plugin using functionality not in
the interface would be SOL, but that would be explicitly stated and known.

Once we get to installation, running in production mode and scenario
testing, things get dicier, less stable and with less testing. That is a
good place to invest and to bring in other communities (hint: Satellite QE)
to bolster us. I'm writing a lot of words without action items, which I
typically try to avoid. I want to get our testing and matrices there and am
working to bring about more time for myself and others to do so. We need to
actively invest in this area, make it a priority and a first-class citizen
of our community to make an impact.

We have wiki pages on creating and releasing plugins. But maybe we need to
step back and define what a plugin is to our community. I think it would
also be a fruitful effort to examine the landscape of plugins and see how
and where they are creating "interfaces" to Foreman.

Eric

> Then let's not block PRs. I propose a nightly job with an email so we
> are at least aware when things do not work; we can then wait a bit
> longer before the RC phase until things settle down, or even block
> another RC until we are all green.

You lost me here. Blocking PRs is the only way we have reduced developer
frustration. Not blocking means broken code, broken installs, broken
developer environments. Say core merges something that removes a function
Katello relied on, but we did not run Katello unit tests or block at all.
Now a significant chunk of developers can't even boot their application.
Meanwhile, since we didn't block PRs due to failing tests, more tests start
failing and getting lost in the sea of brokenness. Now we've got a
compounding pile-up.

Ask any developer who has been on Katello bats duty (aka maintaining the
nightly pipeline). Because of a lack of gating, there have been times when
it took a developer two weeks to get the pipeline back up and running
because of compounding breakages. Two weeks lost.



Eric D. Helms
Red Hat Engineering

While I agree, one thing that a properly defined plugin DSL does not solve
is the fact that plugins now depend on each other. A few examples:

openscap and katello depend on remote execution
katello, remote execution, ansible, chef, … depend on foreman tasks
remote execution depends on foreman templates (to some degree)
virt-who-configure depends on katello

Recently, I was lucky to spot that one Katello PR was changing the Hypervisors
task API while the virt-who plugin was subscribed to it, which would have
caused a breakage. We managed to solve that before the PR was merged. Another
example is that foreman_templates changed its import method interface, which
caused remote execution to fail at importing templates, so it needed to be
fixed. The fix broke openscap and katello, because they seed job templates.
Openscap was fixed in time, but Katello 3.4 is now uninstallable if people
have remote execution enabled, and we'll need a 3.4.1 to fix it.

I think we need to treat some plugins as core. If we can't merge them into
core, we should make sure that their changes don't break other plugins.
Therefore I think we need some job that verifies that as many plugins as
possible are continuously green. In my opinion, people working on core can
usually avoid breaking changes if they know about them. If not, they should
provide fixes, or at least guidance, for affected plugins.

--
Marek


We currently run plugin develop tests only on Sunday; I assume this
has something to do with capacity planning. Today I stumbled upon
another taxonomy-related regression in develop which broke Discovery
tests. I had to git bisect it until I found the commit in core;
Jenkins was only able to tell me it was working on the Sunday one week
before. I'd love to see the develop plugin tests being executed at
least every day.
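
For the record, the dance this takes today looks roughly like this - a
sketch, where <sunday-sha> stands for the last revision Jenkins reported
green and the per-plugin rake task name is an assumption:

```bash
# Sketch of a bisect session; <sunday-sha> and the rake task name are
# placeholders for the last known-good revision and the plugin's suite.
cd foreman
git bisect start
git bisect bad develop        # today's develop breaks Discovery tests
git bisect good <sunday-sha>  # last revision Jenkins saw green
# Re-run the failing suite at each step until git names the bad commit:
git bisect run bundle exec rake test:foreman_discovery
git bisect reset
```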



Later,
Lukas @lzap Zapletal

Hi,

It would be best, in my opinion, if we were able to tell as soon as
possible that some change will break plugins, and for that we'd need to run
tests on PRs.
My suggestion is that we replace the katello test currently running on
every PR with a test that installs the top 5 plugins together and runs all
of their tests.
We won't block merging PRs in core if that test fails, but at least we will
know that the PR breaks plugins and be able to take action on it - either
by notifying maintainers, fixing the plugins directly, or making a change
to the PR so that they don't break.
This also has the added benefit of giving us a better indication that
plugins don't conflict with each other, which we currently don't test at
all afaik. Testing each plugin separately should be done by the plugins
themselves, not by core.
Most plugins' tests are fairly short, so this won't add a significant load
to Jenkins over the current amount.

Tomer



Have a nice day,
Tomer Brisker
Red Hat Engineering

+1 +1 +1

For the last week or two, Katello was red because of a trivial regression
in the testing framework; we did not block PRs at all, but eventually
Daniel stepped up and fixed it. That tells me this works! If we can
extend this to other plugins, it's a big improvement for them. For
discovery this would be a huge improvement in overall stability.

I agree that it should not make the Jenkins load much worse; discovery
and other plugins have only dozens of tests, so the slowdown should not
be significant, I hope. We can stick with what the Katello team does
today - only test PostgreSQL on a particular Ruby version (2.2, I think,
today). If we are able to run them all in one rake task, that would mean
just one Rails boot as well.
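
For example, something like this single invocation - the task names are
illustrative and assume each plugin's suite is exposed as a rake task in
core:

```bash
# One invocation, one Rails boot; suite/task names are illustrative.
bundle exec rake test:units test:katello test:foreman_discovery \
                 test:foreman_remote_execution test:foreman_openscap
```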

What do others say? Next steps would be:

  • nominate the plugins for this and vote on them (or we can simply take
    the most downloaded ones from our survey)
  • modify the katello job to test all of them
  • rename the katello job perhaps, to avoid confusion?


Later,
Lukas @lzap Zapletal

>> My suggestion is that we replace the katello test currently running on every
>> PR with a test that installs the top 5 plugins together and runs all of
>> their tests.
>
> +1 +1 +1
>
> For the last week or two, Katello was red because of a trivial
> regression in the testing framework; we did not block PRs at all, but
> eventually Daniel stepped up and fixed it. That tells me this works! If
> we can extend this to other plugins, it's a big improvement for them.
> For discovery this would be a huge improvement in overall stability.
>
> I agree that it should not make the Jenkins load much worse; discovery
> and other plugins have only dozens of tests, so the slowdown should not
> be significant, I hope. We can stick with what the Katello team does
> today - only test PostgreSQL on a particular Ruby version (2.2, I think,
> today). If we are able to run them all in one rake task, that would mean
> just one Rails boot as well.

+1 for testing a set of the most used plugins on PostgreSQL and one Ruby
version for each PR. Alongside that, we can still keep testing the full
matrix once a week.

Tomas


+1 to what was suggested, except for the limit of 5 :slight_smile: As long
as a plugin's tests take less than 1 minute (usually it's within seconds),
I think we can test more.

Plugins I'd like to see tested, since they are not tiny and hence more
prone to be broken by changes:

remote execution
discovery
openscap



Marek
