Enhancing our tests with stable snapshots for UI testing

Recently we talked about our strategy around testing UI components.
@MariaAga was in favor of reducing the number of small unit tests and relying more on high-level integration tests that exercise the full stack, using Capybara or a similar framework to test whole scenarios from the ground up.
Since I am worried about the compute power such testing would need, and about the effort required to write such complex setups, I want to suggest a middle ground:

Test the UI stack as a whole, but use a mocking framework like MSW to isolate it from server requests.

To mitigate the issue of stale mocks, I would like to suggest the following strategy:

  1. Have a folder with recorded API responses, e.g. /test/api_snapshots. This folder will contain named responses, for example empty_hosts.json, three_hosts.json and host_with_content.json.
  2. The UI tests will use the files from that folder to mock the API requests needed for the specific UI test. For example, the host details component will use host_with_content.json, and the hosts list component may use empty_hosts.json and three_hosts.json for different scenarios.
  3. To make sure the files do not go stale, we will use a service similar to the snapshots_service we have for templates: it will create the prerequisites with FactoryBot.create factories and call the relevant controllers to generate the responses (a sketch follows this list).
  4. We will have tests that call the new service and compare its results against the files. If a file and the generated result differ, the API has changed, and either the file needs to change or the API needs to go back to its previous state.
  5. A task will be added that writes/updates the JSON files in case the change is desired.
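
To make the idea concrete, here is a minimal sketch of what the generation service could look like. Everything here is hypothetical: ApiSnapshotsService, the scenario table, and the factory names are placeholders rather than existing Foreman code; it only assumes Rails, FactoryBot, and a JSON /api/hosts endpoint.

```ruby
# Hypothetical sketch -- ApiSnapshotsService and the scenario table are
# placeholders, not existing Foreman code.
require 'json'

class ApiSnapshotsService
  SNAPSHOT_DIR = Rails.root.join('test', 'api_snapshots')

  # Each named snapshot pairs FactoryBot prerequisites with the request
  # whose response should be recorded.
  SCENARIOS = {
    'empty_hosts' => { setup: -> {},
                       request: [:get, '/api/hosts'] },
    'three_hosts' => { setup: -> { FactoryBot.create_list(:host, 3) },
                       request: [:get, '/api/hosts'] },
  }.freeze

  # Drives every scenario through the full Rails stack and writes the
  # response body into the snapshot folder.
  def self.generate!
    SNAPSHOT_DIR.mkpath
    SCENARIOS.each do |name, scenario|
      session = ActionDispatch::Integration::Session.new(Rails.application)
      scenario[:setup].call
      session.public_send(*scenario[:request])
      SNAPSHOT_DIR.join("#{name}.json")
                  .write(JSON.pretty_generate(JSON.parse(session.response.body)))
    end
  end
end
```

The task from step 5 would then be a thin rake wrapper calling ApiSnapshotsService.generate! in the test environment; a real implementation would also need to reset the database between scenarios and authenticate the requests.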

Advantages:

  1. We have tests that verify the stability of the API for the paths that are critical to the UI
  2. UI tests are independent and do not require running Ruby code
  3. UI tests are fast, as they just read the mocked responses

Disadvantages:

  1. Writing a UI test will require an existing JSON file. If the file does not exist, new code needs to be added to the response generation service, which may be tedious.
  2. Extra, non-trivial code to maintain (the service and the rake task).

Ideas, thoughts, comments?


I’m really not in favor of starting with mocking. You can easily spend more time creating a realistic mock than writing the test itself, and if you get the mocks wrong, what do your tests really prove anyway? Another concern is that it can make cherry-picking much harder: either you get conflicts, or the API may respond in a different way.

:-1: from my side.

I understand the mocking concern.
You can think about it as a high-level equivalent of a factory rather than as a mock, especially with the guardrail of tests that make sure the mocks are still relevant.
The idea is not to craft the JSON files manually, to avoid exactly the problem you are describing, but to use the actual controller’s response as the mock: the factory service will drive the controller and record the response.
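
To sketch that guardrail, reusing the hypothetical ApiSnapshotsService names from the earlier sketch, the comparison test from step 4 could look roughly like this:

```ruby
require 'test_helper'

# Hypothetical guardrail: fails when the live API drifts away from the
# committed snapshot files.
class ApiSnapshotFreshnessTest < ActionDispatch::IntegrationTest
  ApiSnapshotsService::SCENARIOS.each do |name, scenario|
    test "snapshot #{name} is still in sync with the API" do
      scenario[:setup].call
      public_send(*scenario[:request]) # IntegrationTest provides get/post/...
      recorded = JSON.parse(
        ApiSnapshotsService::SNAPSHOT_DIR.join("#{name}.json").read
      )
      assert_equal recorded, JSON.parse(response.body),
                   "#{name}.json is stale; rerun the update task if the API change is intended"
    end
  end
end
```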

One of the things I want to avoid is the complex setup needed to test a specific UI feature. For example, to test the host networking UI you have to make sure an operating system, domain, subnet, etc. are properly configured. It would be a huge waste of resources to create those objects through the API before we can even start testing the actual piece of UI.
Think about it: say you want to test some fields on the host index page. To get the index page into a realistic state, you would need to actually provision a host and let it report its facts back.

One complication I see with this approach is plugins: do we want to let plugins participate, and how would they generate their responses?
Since we shouldn’t have conflicting plugins, I would suggest generating the snapshots on a machine with a predefined superset of enabled plugins (say, everything we maintain under the foreman org, plus Katello of course), and have the factories ignore the plugin boundary.

As for your concern about cherry-picking, two things:

  1. I don’t see too much trouble coming from the template snapshots that we already have.
  2. If you cherry-pick things that can influence the API response, I think that is a big deal anyway: we will need to regenerate the JSON files and make sure the changes are desired.

It still feels like premature optimization. We will end up storing huge JSON files in our codebase that are effectively a cache. I remain a strong :-1: on this. If it turns out that it is too slow, then we can investigate it.

I didn’t even think of this part.

@ekohl Can you specify what “it” refers to here?

Testing against the API.

What would be your recommendation for testing the host’s content tab elements?
If we go the API-only way, you will need to:

  1. Sync a repo
  2. Create a host
  3. Register the host

Each of these steps is more than one API call and takes quite a lot of time. I think going the API-only route is problematic here.

I am generally a +1 on the generated-mocks idea, because we use mocked API responses in our front-end tests and they run the risk of becoming stale. We use something similar in our Ruby tests: VCR cassettes for our Katello-Pulp API integration tests, and those have proved very valuable in testing Pulp version upgrades.
Re-recording on older dot releases is a little complicated because we can only record the cassettes on dev boxes, and sometimes cherry-picking becomes a pain, but we have been able to manage it so far, mostly because dot releases don’t have too many API changes.
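
For context, this is roughly what a cassette-based test looks like. The configuration below is VCR’s documented API rather than a copy of Katello’s actual setup, and the Pulp URL is illustrative:

```ruby
require 'vcr'
require 'net/http'

VCR.configure do |config|
  config.cassette_library_dir = 'test/fixtures/vcr_cassettes'
  config.hook_into :webmock
end

# The first run performs the real request and records it into the
# cassette; later runs replay the recording, so no live Pulp is needed.
VCR.use_cassette('pulp_status') do
  response = Net::HTTP.get_response(URI('http://localhost/pulp/api/v3/status/'))
  puts response.code
end
```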

And in this case I suggest maintaining the responses and checking them regularly against an actual response.
Re-recording will need a system with the proper set of plugins, true, but it could be done automatically in CI.
The idea is to create those responses using ActionController::TestCase or, as Rails is moving away from that class, ActionDispatch::IntegrationTest. That will let us generate the responses from a state defined by our FactoryBot factories and simplify the whole process.
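
A minimal sketch of such a recorder, picking up the host networking example from above (the test class, factory names, and output path are all hypothetical; it assumes FactoryBot factories for domain, subnet, and host exist):

```ruby
require 'test_helper'

# Hypothetical recorder: builds the prerequisites with FactoryBot and
# records the controller's real response as a snapshot file.
class HostSnapshotRecorderTest < ActionDispatch::IntegrationTest
  test 'record a host with networking configured' do
    domain = FactoryBot.create(:domain)
    subnet = FactoryBot.create(:subnet_ipv4, domains: [domain])
    host   = FactoryBot.create(:host, domain: domain, subnet: subnet)

    get "/api/hosts/#{host.id}"
    assert_response :success

    Rails.root.join('test', 'api_snapshots', 'host_with_networking.json')
         .write(JSON.pretty_generate(JSON.parse(response.body)))
  end
end
```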

To add to the previous argument: if we want to start moving away from a UI-centric approach toward a more stable API plus a UI that consumes it, having the components separated feels more natural to me.