New repository for smoke tests

Hello everyone,

As a follow-up to the RFC - Integration Tests & End to End tests thread, I want to create a new repository for this and for @rplevka’s provisioning tests.

My proposed name is theforeman-smoker. The reasoning: theforeman indicates it’s project-wide, including plugins, rather than just the single foreman project. Smoker, since it does smoke tests.

The project will be written in Python 3 and heavily rely on pytest.

We will include this in our release pipelines to verify functionality. That means we could port functionality from as well.

Are there objections to this? Other names?


So it will be theforeman/theforeman-smoker on GH? I might find this confusing, but not enough to tell you not to name it like this (ctrl-f exists) :slight_smile:

:+1: for having a central place for such tests and moving off bats.

cc @Stephen_Wadeley for awareness, he was planning to play with forklift/bats soon.

Yes, it does mean the name is a bit long. However, if you just name it theforeman/smoker, your git clone will end up with a directory named smoker. That might be a bit ambiguous. Naming is hard :frowning:

I think just plain smoker is the best option. Clean, short, and it gets to the heart of what you really want to call it without adding redundant fluff. I also tend to think that putting theforeman or foreman before any project name at this point too closely implies a plugin.

I avoided foreman because of that and thought theforeman was a bit more explicit. Perhaps plain smoker is good enough. It’s not as if we’re deploying this on clients.

You can always rename the directory locally when confused. So plain smoker sounds good.

Does this mean forklift should be named theforklift? :wink:

As stated before, I strongly believe these tests should go into the core repo and run with each PR, or it will be a pain to maintain this.
If we do it in a separate repository, I don’t ever want to see any core PR blocked because it breaks “smoker” tests. A separate repo means separate responsibility (read: somebody else’s problem). I know I’m probably not making a lot of friends by writing this, but I have seen in the past that the “definition of done” in core gets harder and harder to reach.

Will tests for every plugin live in this repo? How do you match tests to the foreman/plugin versions they are valid for?

Plain smoker sounds good.

No, but I didn’t want foreman-smoker because it sounded too much like a plugin which also prefixes with foreman.

The goal of these tests is to test a fully assembled installation. Foreman itself is only a piece of the puzzle.

I actually think that makes absolute sense, and I wouldn’t even want these smoke tests to run on the foreman repo itself. One goal we’re looking at is testing provisioning of machines using kickstarts; in the case of Katello, that includes a content sync. Ideally we’ll also make sure the DNS records are created. Those are details the main foreman repo shouldn’t even be aware of.

My current plan is to use marks and to configure the test suite to run only the tests for the plugins that are expected.
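A minimal sketch of how that could work with pytest marks (the plugin names and the enabled set here are hypothetical): a conftest.py hook skips tests marked for plugins that aren’t part of the current pipeline.

```python
import pytest

# Hypothetical set of plugins enabled for this pipeline run; in practice
# this would come from a CLI option or a configuration file.
ENABLED_PLUGINS = {"katello"}

# Marks we recognize as plugin markers (illustrative names).
KNOWN_PLUGIN_MARKS = {"katello", "remote_execution", "ansible"}


def should_skip(marks, enabled=ENABLED_PLUGINS):
    """True when a test carries a plugin mark that is not enabled."""
    plugin_marks = marks & KNOWN_PLUGIN_MARKS
    return bool(plugin_marks) and not (plugin_marks & enabled)


def pytest_collection_modifyitems(config, items):
    # conftest.py hook: skip tests marked for plugins not in this run.
    skip = pytest.mark.skip(reason="plugin not enabled in this pipeline")
    for item in items:
        if should_skip({m.name for m in item.iter_markers()}):
            item.add_marker(skip)
```

Unmarked tests always run; a test marked `@pytest.mark.katello` only runs in pipelines where Katello is expected.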

They could come in handy when testing the Docker images.


Are there any restrictions to what plugins can be tested?

What happens if the tests break? Who’s gonna fix them?

Thanks @ekohl +1

About naming, what about theforeman/smokerman?

Sure, but I see those as an oddity in the main repo. If you go that route, you could argue you need all plugins in the same repo so you can fix any related issues. However, some plugins live outside our organization and might even be internal to a company. We can always change this, but I’d like to avoid letting the design of a perfect solution hold us back from implementing a good one (even though I’ve probably been guilty of doing the same in the past).

The way I’m thinking about it now is that we need it to be run continuously because that’s the only way it’s useful. Currently we have 3 pipelines in forklift that we run: Foreman (without plugins), Katello and luna.

At this point I’d say that we can also add additional pipelines with other plugins. The current limitation is that they need to be packaged. Ideally they’d also be part of the installer, since it has all the logic for package names across the various distros and versions.

Ideally we’d have tests for all plugins that are installable via the installer.

Currently the pipelines I mentioned report to and there we would notice if tests break. Fixing is a harder question: shared responsibility is always hard, and that’s what this will be. Ideally the maintainer would notice it, but perhaps we need a ping. From there on we can decide how to fix it (as with all bugs). If no solution can be found and it’s in a plugin, that can be a reason to drop the plugin from the installer and packaging altogether.

I was more thinking of a meat smoker than the person who does the smoking.

@ekohl I think it’s a great idea to have release-based smoke testing automation, especially since it fits our immediate need to test for blank pages.

I wonder if we need another repository for these tests? Could this all live in forklift? We have so many repositories as it is; IMHO it can often be confusing and hard to find bits of code when things are so spread out. I think we should ask whether there is an existing place these smoker tests could live before creating a new repository.

@TimoGoebel From the RFC - Integration Tests & End to End tests thread, it sounds like there are two efforts going on:

  1. Create automated smoke tests that run on a release basis (the ones @ekohl is proposing here)
  2. Create automated UI smoke tests that run on a PR basis (the ones myself and @amirfefer have explored with cypress)

I think these can co-exist and serve different purposes. They also have different limitations: tests run on a release basis won’t have a strict time limit, unlike tests run on each PR, which we would want to keep under the existing ~30 minutes to run (at least in Katello’s case). I agree that the PR-level tests should live in the code itself. As a developer, if I break something in a PR, I don’t want to have to open up a separate PR to update tests; that is too hard to coordinate.

I think it’s fine if the per-release tests being proposed here live outside of the code, especially if they won’t be plugin-specific. Though we should do our best to keep these tests as flexible and forgiving as possible.

I’m glad to see testing efforts underway!

Update with the current progress. Now that 1.23.0-RC1 is out, I have some time to work on this. The repository has been created and there’s a PR up to add initial tests.

Then there’s also a PR to integrate this into forklift.

The next steps are to finish the forklift PR to also test on the upgrade pipeline. Then it needs to be verified on CentOS CI as well as integrating the debug logs into Jenkins so it’s easy to read the results.

Once this is done, I would like to ask for a REST endpoint that returns the menu structure that’s visible for the current user. This would allow me to remove the hardcoded list and automatically test all plugins as well. When the endpoint returns a 404 we can always fall back to the hardcoded list for compatibility with older versions.
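A rough sketch of that fallback logic (the endpoint path and the fallback list below are illustrative, since the endpoint doesn’t exist yet):

```python
# Hypothetical hardcoded fallback for older Foreman versions without
# the proposed menu endpoint.
FALLBACK_MENU = ["/hosts", "/dashboard", "/users"]


def menu_urls(fetch):
    """fetch(path) -> (status_code, parsed_json_or_None).

    Use the (proposed) menu endpoint when present; fall back to the
    hardcoded list when an older version returns 404.
    """
    status, payload = fetch("/api/menu")
    if status == 404 or payload is None:
        return FALLBACK_MENU
    return [item["url"] for item in payload]
```

The test suite would then iterate over `menu_urls(...)` regardless of which code path supplied the list.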


So it’s been another few months, so time for another status update. It took a while to find time and properly wire everything up, but it’s now integrated in the Foreman Nightly RPM Pipeline. If you click through to the actual job, you’ll find the report.

Currently there is one failure, and it appears to be a legitimate failure on the logout page. Copying it here since the output might be rotated later:

Error Message

assert not [' 0:194011 Uncaught TypeError: Cannot read property 'icon' of null"]


selenium = < (session="77958fae29a66c039021105372fef04a")>
user = User(username='admin', password='changeme', name='Admin User')
url = ''

    def test_menu_item(selenium, user, url):
        assert selenium.current_url.endswith('/users/login')
        login_field = selenium.find_element_by_name('login[login]')
        password_field = selenium.find_element_by_name('login[password]')
        account_menu = WebDriverWait(selenium, 10).until(
            EC.presence_of_element_located((By.ID, 'account_menu')))
        assert account_menu.text ==
        assert selenium.current_url == url
        if == 'firefox':
            print("Firefox hasn't implemented webdriver logging")
        logs = selenium.get_log('browser')
>       assert not [x['message'] for x in logs if x.get('level') == 'SEVERE']
E       assert not [' 0:194011 Uncaught TypeError: Cannot read property 'icon' of null"]

tests/ AssertionError

I’ve opened to improve the error reporting but this appears to be a legitimate failure.

Currently smoker test failures make a build UNSTABLE, which is treated as a success. Once this proves itself, we can make it fail the build. The reporting also needs to be added to the other pipelines, but I already saw that there’s an error on every page for Katello in the current staging repositories.
