Running plugin tests with GitHub Actions

We have noticed some missing functionality in the Jenkins jobs to CI test plugins and have created a CI definition for GitHub Actions that I’d like to share.

The advantages over the “official” Jenkins jobs are, in our opinion:

  • GitHub Actions can run tests against all Foreman versions that a plugin supports
  • GitHub Actions can run tests with several plugin combinations if that’s desired
  • GitHub Actions respects the RuboCop version defined in the plugin’s gemspec and does not enforce the version used in Foreman core. Plugins can upgrade RuboCop at their own pace.
  • the test config is part of the plugin repo, so all changes, e.g. dropping support for older Foreman versions or bumping the RuboCop version, can happen as part of one PR that is tested
  • PR authors can check out a feature branch of a dependency and have the tests run against this feature branch while they wait for a release
  • GitHub Actions is free to use for open source projects
  • GitHub Actions can use shared definitions. For example, we have created a reusable step that sets up a plugin.
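To illustrate the matrix point, a minimal workflow could look roughly like the sketch below. This is not our actual definition; the branch names, checkout paths, and the final test commands are placeholders:

```yaml
# .github/workflows/ci.yml -- illustrative sketch only
name: CI
on: [pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        # each entry spawns an independent job against that Foreman branch
        foreman: ['1.24-stable', '2.0-stable', 'develop']
    steps:
      # check out the Foreman core branch we test against
      - uses: actions/checkout@v2
        with:
          repository: theforeman/foreman
          ref: ${{ matrix.foreman }}
      # check out the plugin itself into a subdirectory (path is illustrative)
      - uses: actions/checkout@v2
        with:
          path: plugin
      # ... bundle install, register the plugin, run the test suite ...
```

Each matrix entry shows up as a separate check on the PR, so a failure against one Foreman version is immediately visible without digging through a combined log.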

The only disadvantage I see is that it’s yet another solution to the problem, which itself needs to be maintained, etc.

Let me know if that’s interesting and what you think about it in general.


Big :+1: let’s do it please.
I’d love to migrate to some modern CI with readable output and clear job definitions (living alongside the code!). I’d add one more advantage: we would save our infrastructure, which is a big plus (if we figure out what else to do with it).

We can probably get rid of Travis and use GitHub Actions instead, right? If we adopt it across all major plugins, I would say it can even be an advantage, as we would use one CI for plugins and two for core, so a better situation than today.

This sounds like the only real disadvantage to me xD Plugins should have the same style as core does IMHO, and if no one enforces the style, plugin maintainers usually don’t care :slight_smile:


Note that we have 2,000 minutes of GitHub Actions on a free plan. If we migrate all plugins now, I suspect we’ll burn through this quickly, while our Jenkins CI infra is sponsored.

This is IMHO a good thing, but note that a huge part of our CI infra relies on Jenkins now.


So everything you mentioned, Jenkins can do. I feel as if I am missing what about it is easier, cleaner, or more powerful. Or if this is simply a case of us needing changes in our Jenkins to support these community CI use cases. I find CI comes down to:

  • DSL for creating jobs
  • feature set available (e.g. plugins, handling of pull requests, interface for users)
  • amount of backing infrastructure to handle load

I believe there are no limits for public repositories, even on a free plan


I just double checked: public repos can use unlimited resources; the limit of 2,000 minutes applies to private repos on the free plan.


Ah TIL. That explains why it was at 0 in the profile page. That’s slightly confusing in their UI but I’m happy with the end result.


I’d agree Jenkins can do everything we need, but I feel these pain points:

  1. Our Jenkins is an old version, so we don’t have the nice UI. That boils down to us needing to invest effort to keep Jenkins up to date; effort that GitHub will invest for us for free.
  2. The jobs are defined outside of the tested repository, which makes it harder to adjust (at least for me).

On the other hand, we would lose the ability to adjust all plugin tests in one place. Currently, some new plugins actually decided to use Travis instead of Jenkins; I believe we would need to invest some effort into documenting the Jenkins process.
To me, all of that boils down to a simple thing: a lot of effort needed on the Jenkins side, for not much value compared to GitHub Actions.

Let’s use Jenkins for nightlies and packaging, but let’s move the simple tasks like testing to an engine that requires less maintenance (and other costs) from us.


Our Jenkins is up to date. However, not all jobs have been rewritten to pipelines.

On the other hand, we can enable Ruby 2.7 on all plugins in a single file with Jenkins. With GHA you need to submit a PR to every single repo.

+1 on that. I actually have a branch somewhere that rewrites the plugin testing to use pipelines, but there’s still a few rough edges that I need to work out.

Jenkins can process JUnit results and I haven’t found a nice reporter for that in GitHub Actions. Scrolling through lots of output to find the failed jobs is a pain and I hate Travis/GitHub Actions for it. I know you can provide annotations with GHA, but I haven’t found anything useful with Ruby test frameworks that actually benefits from that. I will note that currently not all of our jobs in Jenkins are configured to process this either.

… and that’s a good thing for the maintainers and developers of the plugin. They can make the change in a PR, the whole test suite runs, and they can spot and fix errors when it’s convenient for them in the scope of that PR. Stable branches might not need Ruby 2.7 support but Ruby 1.9, so they can be configured to use 1.9 and nothing else.
You don’t want to break all plugin tests just because there is some change in core.
As I mentioned before, GitHub Actions allows you to define reusable actions that can be parameterized. The action could take the range of supported core versions as an input parameter, and we could still maintain the list of Ruby versions in a central place if that’s important.
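As a rough sketch of how such a parameterized shared action could work (the action’s repository name and input are hypothetical, not an existing action):

```yaml
# action.yml of a hypothetical shared setup action
inputs:
  foreman-version:
    description: 'Foreman core branch to set up for testing'
    required: false
    default: 'develop'
---
# consuming step in a plugin repo's workflow: each plugin passes
# only the versions it actually supports via its own matrix
steps:
  - uses: theforeman/actions/setup-plugin@master   # hypothetical
    with:
      foreman-version: ${{ matrix.foreman }}
```

The central place would then be the shared action’s repo: updating it there changes the behaviour for every plugin that consumes it, while each plugin keeps control over its own matrix.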

I went ahead and took the opportunity of Trends and statistics extraction and added GitHub Actions to this plugin today. It was pretty straightforward (thanks to @TimoGoebel pointing me to @kamils-iRonin’s work). So far it runs smoothly and it’s quite quick, especially if you cache the gems.

Any suggestions for improvements are welcome. There is no really complicated test suite yet, so it’s too soon to judge, but I’d like to implement it in other plugins as well if it proves its worth.
Anyone who’s opposed to that, please share your thoughts.

Speaking on this topic in general, as a Katello developer I find it limiting and frustrating that it is so hard to configure and iterate on our CI service. Using services that I can run locally or on my own instance in my GitHub fork makes it much easier to test out changes and contribute them.

If you want a real example, I’m trying to make this change, which will require CI changes, but I have no idea how to go about iterating on the change and making sure an updated directory structure in our CI doesn’t break anything, other than making a best-guess PR.

I do really love the ease of use of GH Actions, but I realize it may not work for Katello since, afaict, it doesn’t support CentOS. However, I do wonder if there are any steps we could take to run a testing environment on Ubuntu.

If we stay with Jenkins, I think we should look for a way that changes can be tested by a developer easily either locally or some sort of hosted automation.

I think this is an area we could improve even if we find GH Actions doesn’t work for us; it feels quite limited now.


In this case you may be interested in self-hosted runners


… can’t you also simply spawn a CentOS container and run the tests inside that container?

Should work. I did that with katello-certs-tools and that’s certainly a lot easier than with Travis.
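For anyone curious, running a job inside a CentOS container is a one-line addition to the job definition. A minimal sketch (the final step is just an illustrative check, not a real test suite):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    container: centos:7              # all run steps execute inside this container
    steps:
      - uses: actions/checkout@v2
      # verify the environment; real jobs would yum-install deps and run tests
      - run: cat /etc/redhat-release
```

The runner host stays Ubuntu, but every `run` step executes inside the `centos:7` image, so the toolchain and packages match a CentOS environment.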

Adding a bit of background information to help with discussions.

Current Tests

As of today, there are three major sets of tests we run.

  1. Tests against a pull request
  2. Tests to verify a project branch and produce a verified source
  3. Tests to verify distribution packages and integration before releasing

Historically, #2 and #3 are only done for core and nightly packages. Further, #1 and #2 have been equivalent for nightly packages given we test the branches as thoroughly as a PR to ensure a high quality source is generated as input to package building.

If PR testing is moved to GitHub Actions, the biggest issue we face is that branch testing may diverge from PR testing and result in lower quality sources. If, for any project that has branch testing, we could move it to GH Actions as well, we could keep parity. This branch testing would have to either:

a) generate source and upload it somewhere
b) kick off a job to generate source in our Jenkins
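Option (a) could be sketched roughly as follows; the build command and artifact names are placeholders, and real source generation for a project may involve more steps:

```yaml
# sketch: branch-testing workflow that publishes the verified source
on:
  push:
    branches: [master, '*-stable']
jobs:
  source:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # ... run the full test suite here first ...
      - name: Build source gem
        run: gem build *.gemspec
      # make the verified source available for the packaging pipeline
      - uses: actions/upload-artifact@v2
        with:
          name: source-gem
          path: '*.gem'
```

“Somewhere” could equally be a release asset or an external archive; the artifact store above is just the simplest built-in option.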

Jenkins Theory

The theory behind Jenkins sharing is the move to the Groovy programming language backed by a DSL called Pipelines. The idea is you can create Groovy functions to capture logic and then share them via a library hosted on Jenkins, so that developers can build pipelines similar to GH Actions declarations. You can even store these as Jenkinsfiles in your projects, similar to a .travis.yml or GH Actions workflow.

Single Team vs. Developer Focused

The project has taken an approach to CI focused on community members within the infra team creating and managing jobs and helpers for projects’ testing. This helped, as there have not always been developer-friendly methods of adding CI/CD, and it allowed developers to focus on code and writing tests rather than on what ran said tests. As mentioned, there was also a heavy lean towards the effects on delivery and building, to have a full delivery pipeline.

As a project, we have not gone this route as of yet, preferring to store all of our Groovy in a single git instance and use Jenkins Job Builder to handle job definitions as well as the code that backs them. This is not as developer-friendly, but it is friendly to a centralized team managing all aspects of this.

I think Travis, GH Actions, and even Zuul (which just uses Ansible) all have simpler DSLs and processes for putting control back in developers’ hands. I think if projects would prefer to entirely control their own destiny in this regard, then we can certainly go that route as a community and reduce the usage of our Jenkins for delivery-focused jobs.

The sense I get is that we are moving towards developer ownership of the tests that run and how they run to verify their software. I think this can be good for the community. The part that I then think developers also have to take ownership of is the generation of their source and what checks are placed on it. This allows the team of developers focused on delivery to know there is a verified source for a given project out there for packaging, but not how it’s made. Which I do think could open up more projects to a nightly release style similar to the core projects and Katello.

Thanks for the summary. I think that one aspect that Jenkins does have is a unified experience for developers moving between projects. If you submit a PR to plugin X and plugin Y the experience and tooling is the same.

Things that the developer might care about are:

  • Supported Foreman versions (if more than just develop)
  • Exact style enforcement (Rubocop or similar tools)
  • Specific dependencies only used for testing

Other than that I’m not sure how much else there is to configure. The commands to run are rather static and IMHO that’s a feature since I don’t need to worry about how to work on a plugin. The Ruby/NPM versions are implied by Foreman (IMHO). We used to have databases as well, but that’s now limited to only PostgreSQL.

@ehelms thanks for the write-up and the context

I think we could live with PR testing being in GH Actions and the project branch (nightly) testing being in Jenkins for a while. There could be some environmental differences, but I imagine we would consider both CI systems when making major environment updates such as the Node.js or Ruby version. Ideally, though, we move to one system, and like you mention, for GH Actions we would either need GH Actions to generate a source or kick this off in Jenkins.

I’m introducing a non-blocking GH Actions test for Katello React tests in - So far it has been very fast (~5 mins for setup, linting, and tests) and I can see a lot of value in this quick feedback. If we can figure out how to run Rails tests as well in a separate action (sounds like we would need to do that in a CentOS 7 container), and even the assets precompile, I could see the time for the full test suite to run on a PR reduced significantly, something that I know developers would really appreciate.

As far as Katello goes, my thought is to introduce these non-blocking GH Actions and live with them a while to see if A) we can properly run all of our testing in GH Actions and B) they provide the consistent results we expect. If they meet our expectations, then we can discuss removing the PR Jenkins tests and how we can incorporate the RPM building, either in GH Actions or through Jenkins.

I think having one team own testing infrastructure and another owning development sounds great on paper, but we have seen too many times where this hasn’t served us well. If developers are blocked, they would like to be unblocked ASAP, and sometimes it’s hard to convey this urgency and frustrating when you can’t do much about it. The more developers own the PR testing, and maybe even nightly release testing, the more quickly they can get unblocked. It also leaves a lot of room for experimentation with testing methods as we consider more functional and e2e testing. Not to mention the confusion and annoyances when nightlies are broken and it’s not clear who is responsible, when it will be solved, and what exactly their being broken affects. Perhaps there are ways we can improve our testing process as a whole while testing out new CI/CD systems.

I think using GH Actions is worth the experimentation at least; we may run into areas where it’s not a fit for us, but our CI/CD hasn’t really changed in a long time and the landscape has evolved quite a bit, so it seems worth some investigation.