E2E and functional testing meeting notes

Hey all,

@amirfefer, @ekohl, @Roman_Plevka, and I met to discuss e2e and functional testing for Foreman and its ecosystem. This topic has been kicked around for a while, and we would like to take some actions to improve this area.

I want to share our discussions (and continue to do so with future meetings) with the community, so we can get feedback and input from developers to ensure that our goals around testing are aligned.

First we discussed what problems we are trying to fix. I think it’s important to establish this before delving into testing technology discussions, even if they do seem obvious. We came up with:

  • Releases are not stable overall
  • A large number of regressions are introduced into releases
  • Installer and platform changes can introduce regressions
  • UI testing is not comprehensive; a lot of bugs are found in the UI
  • A lot of issues arise when parts of the Foreman ecosystem are combined

So our goals with introducing new testing are:

  • Upstream stability
  • Testing plugins along with core
  • Testing actual user workflows as if the user is doing them (e2e testing)
  • Parity between UI and API
  • Testing multi-system setups (provisioning, capsule, content, subscription)
  • Make testing simpler for the developer
  • Quicker feedback on changes for developers

We started to discuss the logistics of how we will do this. A lot is left to be determined, but here is the general direction so far:

  • Using foreman-smoker to test Foreman instances and their plugins
    • This will be used to test the platform (installer and packaging) and the functionality of a Foreman install.
    • Run these tests as part of the nightly pipeline
    • Add a way to allow plugins to add tests to smoker
    • Test actual user workflows and multi-system setups.
    • Basic UI testing from a packaging perspective (did the page load?), but not UI workflow testing.
    • Smoker uses pytest and also Selenium. We were not sure if everyone would be OK with Python; the background here is that Satellite QE uses pytest, so choosing it makes it easier for us to reuse their existing work and collaborate with them. I think it’s not a huge ask for Ruby developers since the languages are so similar, but if you are strongly against using Python, speak now or forever hold your peace :wink:
    • See Ewoud’s post introducing the repository for more detailed information.
  • UI testing
    • This is an area we want to improve, and it seems to make more sense to add these tests outside of smoker. Smoker will be used to make sure pages load and to provide simple packaging assurances.
    • Some of us have investigated cypress.io and really enjoyed it, we will continue to investigate this and see where and when it would make the most sense to add these tests.
  • Nightly testing
    • The nightly pipeline, as it stands, is notorious for breaking and staying broken. The concern with adding more tests to this pipeline is that there will be even more breakages than there are already.
    • In general, we have found “everyone’s problem is no one’s problem” and when one person is responsible for something, problems are addressed. For example, we have had a lot of success having a rotation to triage Katello community support issues.
    • We are considering a nightly pipeline rotation, with one Foreman and one Katello member responsible for the nightly pipeline each week. The rotation would change weekly, and the members on duty would be responsible for triaging issues, assigning them to the responsible parties, and monitoring the fixes. So they would not necessarily do the fixing themselves, but would facilitate the fixes and make sure those responsible are aware.
  • Automation tool
    • We mostly use Jenkins currently and likely will for foreman-smoker, but there are some concerns there. It is not always clear who is responsible for monitoring and maintaining Jenkins, and developing new jobs can be hard because there is no way to run a job locally to test it. These are some areas we may look to improve as part of this effort.
    • Devs have had good experiences with Travis and GitHub Actions, though these don’t always provide enough for our testing needs, in particular testing full installs. There may be some room to use these tools in other areas, such as UI testing.
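
To make the smoker bullets above a bit more concrete, here is a hypothetical sketch of the kind of “did the page load?” check smoker is meant to cover. This is not the actual foreman-smoker API; the helper, paths, and hostname are all made up for illustration, using only the Python standard library:

```python
# Hypothetical "did the page load?" smoke check, in the spirit of
# foreman-smoker's packaging-level tests. Not the real smoker API.
import urllib.error
import urllib.request


def page_loads(base_url, path, timeout=10):
    """Return True if GET base_url + path answers 200 with a non-empty body."""
    try:
        with urllib.request.urlopen(base_url + path, timeout=timeout) as resp:
            return resp.status == 200 and len(resp.read()) > 0
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, 4xx/5xx, timeout, etc.
        return False


# In a pytest run this could be parametrized over the pages every
# Foreman install should serve (paths are examples only):
#
#   @pytest.mark.parametrize("path", ["/users/login", "/api/v2/status"])
#   def test_page_loads(path):
#       assert page_loads("https://foreman.example.com", path)
```

The point is that these checks stay shallow on purpose: they assert that the platform (installer and packaging) produced a working install, while real UI workflow testing lives elsewhere.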

This is a lot of information and we are still figuring out the approach we are taking, but the goal is to always communicate with the community and get feedback as we progress. Right now we are mostly getting organized and investigating the tools we want to use for testing, but we welcome any feedback or concerns!