Testing plugins that go with a release

During the 2.3 testing week we (mostly @lzap :heart:) noticed that some of our plugins would not work properly on 2.3 – or even worse, break a working setup when installed.

While we do not guarantee that all shipped plugins are updated to be compatible with a release on day 0, we still try to achieve that, and it would be nice if we could at least include a list of known issues in the release announcement.

Today, we have no way to know this. The plugins pipeline is responsible for publishing the packages, but does not test them at all.

While talking about this during a recent release meeting, we came up with the idea of a “plugins” test pipeline that runs directly after the normal Foreman release pipeline finishes. It would not block the Foreman release (unlike failures in the Foreman pipeline itself), but it would at least give us an indicator whether the plugins we ship work (well, are installable and don’t break anything).

We have something similar in the luna pipeline (see Testing more plugins together regularly for the old discussion around this), but it currently only runs on nightly (easy to change) and depends on Katello (not so easy to change), which tends to lag a few days behind Foreman releases and therefore makes it less helpful for inclusion in Foreman release announcements.

So, TL;DR:

  • create a forklift pipeline that can test installation of plugins (that don’t require Katello); a rough sketch follows this list
  • trigger this pipeline when foreman-X.Y-release-pipeline finishes
  • this won’t block plugin publishing (so working plugins aren’t held back by broken ones)
  • the results can be used to update a list of known issues in the release announcement
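
To make this a bit more concrete, here is roughly what the triggered job could run. Everything in it (the plugin list, the installer option names, the bats file name) is an assumption for illustration, not the actual forklift implementation:

```bash
#!/usr/bin/env bash
# Hypothetical body of the post-release plugin job. It assumes a box that already has
# the freshly released X.Y repos configured by an earlier pipeline step.
set -u

# Shipped plugins that do not need Katello: illustrative list only.
# foreman-installer has --enable-foreman-plugin-<name> style options, but the exact
# option name differs per plugin, so treat these as placeholders.
PLUGIN_OPTS=(
  --enable-foreman-plugin-ansible
  --enable-foreman-plugin-remote-execution
  --enable-foreman-plugin-discovery
)

# One installer run with everything enabled ("all in one step").
foreman-installer "${PLUGIN_OPTS[@]}"
installer_rc=$?

# Run whatever smoke tests we have; the bats file name is a placeholder.
bats plugins.bats
bats_rc=$?

# Publishing already happened before this job started, so a red result here only
# feeds the "known issues" list in the release announcement.
exit $(( installer_rc + bats_rc ))
```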

I’m happy to own the implementation of this.


Overall :+1: for this, but of course I have some questions about implementation details.

  • Would it use staging or release repos?
    • Foreman
    • Foreman Plugins
  • Would it include a repoclosure check?
  • Will we actually write tests for plugin code? (e.g., run a REX job and verify the result)

I’d use release, as at this point both Foreman and the plugins have been published anyway.

The “normal” plugin pipeline already does that for RPM, and there is no pipeline for Debian – the plugins are published immediately when they are built. So probably “no”. But I wanted to add a Debian repoclosure-like check at some point.
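
Until such a check exists, even something as crude as asking apt to simulate the installation of each plugin package would already catch unresolvable dependencies. This is only a sketch of the idea and not something any pipeline does today; the package name prefix is a guess at what the Debian plugin repo ships:

```bash
#!/usr/bin/env bash
# Poor man's repoclosure for Debian: let apt resolve dependencies for every plugin
# package without installing anything, and collect the ones that cannot be resolved.
# The "ruby-foreman" prefix is an assumption about the package naming.
set -u

broken=()
for pkg in $(apt-cache pkgnames ruby-foreman | sort); do
  if ! apt-get install --simulate --yes "$pkg" >/dev/null 2>&1; then
    broken+=("$pkg")
  fi
done

if [ "${#broken[@]}" -gt 0 ]; then
  printf 'packages with unresolvable dependencies:\n'
  printf '  %s\n' "${broken[@]}"
fi
```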

I asked for that back in Testing more plugins together regularly, and so far this hasn’t happened.

So I think the answer is “we should, patches welcome”.
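
To make “run a REX job and verify the result” a bit more concrete: assuming hammer is configured and a host is already set up for remote execution, such a check could start out as small as this. The template name, the search query and the idea of running it against the Foreman host itself are assumptions, not something any pipeline does today:

```bash
#!/usr/bin/env bash
# Sketch of a functional check for the remote execution plugin: trigger a trivial job
# and fail if hammer reports an error. A real test would also inspect the job result
# afterwards (e.g. via hammer job-invocation info).
set -euo pipefail

hammer job-invocation create \
  --job-template "Run Command - SSH Default" \
  --inputs command="true" \
  --search-query "name = $(hostname -f)"
```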

I think we should have the framework and run the integration tests of Foreman and every included plugin. That would definitely require some work, and a shift in how developers think about the purpose of integration testing, towards “acceptance tests”, but it’s a good practice IMHO.

If we look at https://github.com/theforeman/forklift/tree/master/bats, the only “thing” that has reasonable integration tests today is Katello :frowning:
Foreman core just tests that the app is up and that Puppet works. And then there is a test for the PowerDNS plugin that I think nothing executes today.

We can argue whether bats is the right framework here, but that’s what we have today.
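
For comparison, a minimal smoke test in bats is not much code. The file below is only an illustration (it is not an existing forklift test) and assumes Foreman is already installed on the box:

```bash
#!/usr/bin/env bats
# Illustrative smoke tests, not part of forklift today.

@test "foreman service is active" {
  run systemctl is-active foreman
  [ "$status" -eq 0 ]
}

@test "login page answers over https" {
  # --insecure because the test box typically uses a self-signed / Puppet CA certificate
  run curl --fail --silent --insecure --output /dev/null "https://$(hostname -f)/users/login"
  [ "$status" -eq 0 ]
}

# Plugin-specific checks (is the plugin loaded, does its API answer, ...) would go here.
```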

You’re brave, man. You’re brave.

The other day I had a clever idea: add yum install foreman-* to my nightly installation script. I never got it working; we ship a lot of things that break too often. But if you manage to push us into fixing those earlier, that’s a huge win.

I wonder how much more testing we’d get during the test week if, as part of RC1, we installed it with plugins on some machine and asked everyone to test their area. I think that’s the biggest investment every developer makes (or should make) with every release or even RC, and one we could eliminate.

Is this a loop over plugins, passing that particular plugin’s enable option to the installer, or is this passing the whole set to the installer at one time and seeing what fails? I imagined the former but wanted to see what you were thinking implementation-wise.

As a first step I’d try “all in one step”, but the loop is actually an interesting idea too.
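
If the loop variant turns out to be useful, a first sketch of it would not be much more involved: enable one plugin per installer run and remember which run failed. The option names are again just illustrative:

```bash
#!/usr/bin/env bash
# Loop variant: enable plugins one installer run at a time so a single broken plugin is
# easy to identify. Without reverting the box in between, each run happens on top of the
# previously enabled plugins, so this is incremental rather than fully isolated.
set -u

broken=()
for opt in --enable-foreman-plugin-ansible \
           --enable-foreman-plugin-remote-execution \
           --enable-foreman-plugin-discovery; do
  foreman-installer "$opt" || broken+=("$opt")
done

if [ "${#broken[@]}" -gt 0 ]; then
  printf 'installer runs that failed:\n'
  printf '  %s\n' "${broken[@]}"
fi
```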

You mean a central instance, available to everyone? That should be doable. Didn’t you recently get something similar started for demos?

I’ve started working on the forklift part here:

and here is infra:

Yes, but this instance would be reinstalled with every RC or release and would be mostly blank. I have a demo box, but it’s only available on a private network and I can’t expose it to the wild. I could, however, seed the same data on the newly installed box; that would probably also help with testing.