RFC - Integration Tests & End-to-End Tests

Hello everyone,
on this thread I would like to describe our concerns with Capybara, especially from a front-end developer perspective, and to suggest some alternatives.
Ideas and other thoughts will be more than welcome :slight_smile:

Present architecture

E2E

The release pipeline runs bats tests, but there is no client-edge to server-edge testing running in a browser

Integration

Foreman uses Rails minitest with Capybara (PhantomJS driver)

While this integration test architecture has some benefits:

  1. Mocking data with fixtures and FactoryBot
  2. Direct access to models/DB
  3. Works naturally with Rails

It definitely has some cons that affect us:

  1. It can be slow, caused by predefined timeouts

  2. Debugging is difficult and frustrating, for example:

    • Poor ability to see visually why a test failed
    • Unclear JS errors
    • Running a single test can run into errors (assets need to be precompiled, as does webpack)
  3. Random test failures

  4. Abusing tests - some tests are written from a unit perspective

  5. The current architecture seems to be incompatible with React: some tests fail due to timeout errors and bad handling of promises and async events (for instance, we have the wait_for_ajax workaround for AJAX requests)

Alternatives:

  1. Gradually moving to another JS driver (we have a couple of PRs for headless Chrome, e.g. #21592)
  2. Replacing Capybara with a different tool for React pages
  3. Gradually migrating from Capybara to a different tool
  4. Any other suggestions? :slight_smile:

While searching for a modern testing tool, one named Cypress caught my attention the most:

an open source project for full browser E2E testing

Pros:

  1. Integrates with CI systems such as Jenkins and Travis
  2. Test recording - automatically records failing tests with a video and a snapshot
  3. Optional dashboard
  • Cypress can be used as a SaaS with a dashboard - it grants an easy way to access recorded tests and fully integrates with Travis.
    This link will lead you to the dashboard of the demonstrated test I made.
  4. Great for debugging tests - there is a local tool for watching a test live in the browser, plus
    the ability to step through each command on demand.

  5. Clear API for writing tests

  6. Runs tests concurrently

  7. Allows executing server-side processes - there are a few gems that communicate with Rails, e.g. cypress-on-rails (see the sketch below)
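
A minimal sketch of what that server-side communication could look like with cypress-on-rails; the helper commands (cy.app, cy.appFactories) come from that gem's generated support code, and the factory, page and assertion here are made up for the example:

describe('Hosts page', () => {
  beforeEach(() => {
    cy.app('clean');                                               // reset the Rails test database
    cy.appFactories([['create', 'host', { name: 'test-host' }]]);  // seed data via FactoryBot
    cy.visit('/hosts');
  });

  it('lists the seeded host', () => {
    cy.contains('test-host').should('be.visible');
  });
});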

Cons:

  1. No “out of the box” server-side communication (besides 3rd-party gems)
  2. Different environment, JavaScript-based
  3. At the moment only Chrome and Electron are supported (support for Firefox and IE is in development)

Demonstration

For the demonstration I’ve created a Travis configuration with the Foreman test environment and Cypress

branch with these changes

Test example

Test recording, recorded in the Travis environment

Recorded Test

Test Code
import { FOREMAN_ADDR } from './config';

describe('Layout', () => {
  beforeEach(() => {
    cy.visit(FOREMAN_ADDR);
  });

  it('hover all menu items', () => {
    cy.get('.fa-tachometer').trigger('mouseover');
    cy.contains('Dashboard').should('be.visible');

    cy.get('.fa-server').trigger('mouseover');
    cy.contains('All Hosts').should('be.visible');

    cy.get('.fa-wrench').trigger('mouseover');
    cy.contains('Host Group').should('be.visible');

    cy.get('.pficon-network').trigger('mouseover');
    cy.contains('Smart Proxies').should('be.visible');

    cy.get('.fa-cog').trigger('mouseover');
    cy.contains('Users').should('be.visible');
  });

  it('taxonomies dropdown', () => {
    cy.contains('Any Organization').click();
    cy.contains('Manage Organizations').should('be.visible');

    cy.contains('Any Location').click();
    cy.contains('Manage Locations').should('be.visible');
  });

  it('notification drawer', () => {
    cy.get('#notifications_container').click();
    cy.contains('No Notifications Available').should('be.visible');
  });

  it('user dropdown', () => {
    cy.get('#account_menu').click();
    cy.contains('My Account').should('be.visible');
  });

  it('mobile view', () => {
    cy.viewport(550, 750);
    cy.contains('Toggle navigation').click();
    cy.get('.visible-xs-block .fa-user').click();
    cy.contains('My Account').should('be.visible');
  });
});

As you can see, this test looks clear, no JS skills are really required, and with gems like cypress-on-rails it would
be pretty easy to migrate a test from Capybara to Cypress.
I will take responsibility for such a gradual/partial migration, if decided.
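
To make the comparison concrete, here is a rough sketch of how a typical Capybara-style interaction would translate to Cypress; the path, labels, and assertion are invented for the example:

// Capybara (roughly):  visit hosts_path
//                      click_link 'Create Host'
//                      assert page.has_content?('New Host')
// Cypress equivalent:
describe('Hosts', () => {
  it('opens the new host form', () => {
    cy.visit('/hosts');
    cy.contains('Create Host').click();
    cy.contains('New Host').should('be.visible');
  });
});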

Plugins

It’s a pain to have a broken plugin that affects the entire application.
Katello, for instance, doesn’t have any integration tests with Foreman core, AFAIK.
With this tool, plugins would be able to create integration tests more easily with almost the same Travis configuration,
and by that, plugins would become more stable against Foreman core changes.

E2E

Besides integration tests, we can build on these advantages and create an E2E architecture for Foreman (within a production environment).

With it we can gain great benefits, for example discovering JS runtime errors that only happen after webpack compilation.
All in all, this would increase the stability of Foreman builds by far.


Am I correct that this is usable for black box testing (as we do in our release pipeline)? i.e., we could point this at a “random” Foreman URL and it would do verification for us?

https://docs.cypress.io/guides/getting-started/installing-cypress.html#System-requirements does not list CentOS 7 so if that doesn’t work it’d be a major limitation.

Another downside is that it would duplicate any efforts done by RH QE in https://github.com/SatelliteQE/robottelo

These don’t have to be blockers and I’d like to repeat that I’d love to have some real black box UI tests we can apply on any installation. That does mean we need a strategy to handle versioning.

While I’m all in for extending testing in this direction, I would rather see a tool like testcafe that’s not Chrome-only. While I did see the cypress issue about implementing cross-browser capabilities, there’s no estimated delivery date on that…

I think it will be a great addition to the testing stack.
At the moment we don’t have real E2E/black-box testing for upstream, only a bunch of integration tests.

@amirfefer didn’t you show me a demo on CentOS?

Good point @ekohl, one of the downsides of robottelo is the fact that it lives in a different repository, outside of the Foreman stack.
Benefits of having E2E testing in the Foreman source code are:

  1. You break it, you fix it - so you either update the tests or fix the issue the tests found.
  2. When you write a new feature, you write E2E tests around it.
  3. We can easily run it in the CI env and reproduce locally.
  4. We can deliver E2E testing infra for plugins.

testcafe looks promising as well; I think we should check it out and compare it to cypress.

didn’t you show me a demo on CentOS?

@sharvit, I’ve run Cypress desktop on centos-devel, and it worked as expected.

@mmoll - testcafe looks great! Thanks for sharing it.

But the downside is that you don’t have a central place where you can test the Foreman including plugins. This means your test suite also needs to become pluggable. Where do you store the tests of plugin combinations to see if they’re compatible?

I’m not in QE, but it’d be good to involve them in the discussion here.

I’m just wondering, does the foreman need to be responsible for testing itself together with a set of plugins?

I can think about 4 use-cases for tests:

  1. Foreman should be responsible for testing itself (and its pluggability functionality).
  2. Plugins should be responsible for testing themselves together with Foreman.
  3. If a plugin relies on other plugins, that plugin should be responsible for testing itself together with the plugins it relies on.
  4. Downstream projects that rely on a set of plugins should be responsible for testing themselves together with Foreman and the set of plugins they wish to use.

For all those use-cases, the foreman can potentially provide an infrastructure.

Would love to see QE join the discussion :+1:

Thanks for writing this up @amirfefer. Cypress looks good. It would be interesting to put down the requirements we have for such a tool/tests and compare them with other options, just to make sure it covers all our needs before putting larger effort into it.

The screenshots and videos are impressive. What I often see when something goes wrong in our redux code is a completely blank page. In such cases screenshots are useless and I think it’s critical to have a full log of errors and warnings from the browser’s console available.
(BTW Cypress can do that with some additional setup https://github.com/cypress-io/cypress/issues/300#issuecomment-321587149)
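
For reference, a minimal sketch of that kind of setup: stub the browser console on page load and assert it stayed quiet (the visited page, assertion, and alias name are just illustrative):

it('loads the page without console errors', () => {
  cy.visit('/', {
    onBeforeLoad(win) {
      // replace console.error with a stub we can assert against later
      cy.stub(win.console, 'error').as('consoleError');
    },
  });
  cy.contains('Dashboard').should('be.visible');   // illustrative assertion to let the page render
  cy.get('@consoleError').should('not.be.called');
});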

I have mixed feelings about that. A lower barrier for writing and updating tests when they’re in the same git repo is an obvious benefit to me. At the same time the point about testing plugins and their combinations is very valid I think. Maybe we need both.

I might be wrong but my impression is that most often breakages happen with a combination of plugins rather than in vanilla Foreman. I agree it’s the plugins’ responsibility to test that they’re compatible, simply because it’s not possible for Foreman to test each combination. On the other hand, if we provide only an infrastructure it will lead to duplication of tests at best. A solution could be to somehow provide the test suite from core for importing into a plugin’s tests.

How about defining a set of actively maintained plugins (or multiple sets) that we’ll test together?

QE in the house :slight_smile:

In no particular order:

  • E2E automated tests are supposed to take longer to execute, so you wouldn’t want to make that your first test suite to execute or necessarily block PRs

  • QE’s existing tests are either integration (of components) or system (E2E) tests, specifically designed to run against a Satellite instance and not necessarily a TheForeman instance, so your mileage will vary

  • I’d love to see PRs coming for QE’s tests but I understand that they’re a bit hard to run outside of QE

  • Having QE’s existing tests outside of TheForeman Github organization should not prevent folks from sending PRs but you all know that :slight_smile:

  • If you want to run E2E automated tests, which I highly recommend, perhaps an alternative would be to:

  • Add more tests to our existing UI tests using the AirGun library

  • Consider using a Dockerized version of Chrome or Firefox and Selenium, which in my opinion would be a better solution, since you could save all logs, take screenshots of failures, and run it all through Travis. I have a similar setup on Gitlab here using their CI/CD environment (aka Pipelines)

Hope this was helpful?

1 Like

It’s been around 6 months since this thread started; any way to restart it? Or perhaps start with testcafe/cypress for a couple of months and revisit?

From a release engineer perspective I’d prefer to have something. That also means I wouldn’t mind a temporary solution, even if we throw it away after a few months. My requirements:

  • Black box. It should need nothing on the actual system under test. That means providing it with a URL, username and password should be sufficient.
  • An easy traffic light functionality: red if it’s broken, green if it’s working

Nice to have:

  • An easy to consume test output. A common format is xUnit, but Jenkins can also easily consume TAP
  • Flexible. It should work on a vanilla Foreman, but also handle plugins.
  • Quick.
  • Have a multi-version strategy. I’d be ok with just using this for nightly, but it’d be great if we could also use it on releases.
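
As a sketch of how the black-box requirement and machine-readable output could look with a Cypress-style runner (the login field ids and labels are assumptions; the junit output comes from the Mocha reporters Cypress bundles):

// run with, e.g.:
//   cypress run --env FOREMAN_URL=https://foreman.example.com,FOREMAN_USER=admin,FOREMAN_PASSWORD=changeme \
//               --reporter junit --reporter-options mochaFile=results/smoke.xml
describe('black-box smoke test', () => {
  it('logs in and reaches the dashboard', () => {
    cy.visit(Cypress.env('FOREMAN_URL'));
    cy.get('#login_login').type(Cypress.env('FOREMAN_USER'));                        // assumed field id
    cy.get('#login_password').type(Cypress.env('FOREMAN_PASSWORD'), { log: false }); // assumed field id
    cy.get('form').submit();
    cy.contains('Dashboard').should('be.visible');                                   // illustrative assertion
  });
});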

I think that changing the name of what we are trying to accomplish here to “smoke tests” would be better. We aren’t trying to implement a way to test all the functionality of the application (and thus duplicate QE’s efforts); we just want some simple tests to make sure that our application isn’t dead on arrival with JS errors.

My requirements:

  • Run these with each PR against a production build and fail the PR if they fail
  • Code lives with the application itself
  • Less than 30 minutes to run

Today I tried my hand at my immediate need: testing whether the pages actually load and no “white pages” are shown because webpack failed. My minimal implementation is:

Sadly it’s limited to Chromium for now, because Firefox doesn’t implement the WebDriver logging, so you can’t get the browser console log which my test relies on:

The generated HTML log (838.7 KB) contains the details for all the pages with their HTML source and a screenshot. By default it only takes a screenshot on failed tests, but SELENIUM_CAPTURE_DEBUG=always can change that. Since it’s all pytest, you can also add --junitxml=report.xml.
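
The underlying mechanism, sketched here with Node’s selenium-webdriver rather than the actual pytest implementation (an illustration of the same idea only): load each page headlessly and check the browser log for SEVERE entries, which only Chromium exposes.

const { Builder, logging } = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');

// returns true if the page logged any SEVERE console entry
async function pageHasConsoleErrors(url) {
  const prefs = new logging.Preferences();
  prefs.setLevel(logging.Type.BROWSER, logging.Level.ALL); // browser logs: Chromium-only

  const driver = await new Builder()
    .forBrowser('chrome')
    .setChromeOptions(new chrome.Options().addArguments('--headless'))
    .setLoggingPrefs(prefs)
    .build();

  try {
    await driver.get(url);
    const entries = await driver.manage().logs().get(logging.Type.BROWSER);
    return entries.some((entry) => entry.level.name === 'SEVERE');
  } finally {
    await driver.quit();
  }
}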

Note that I manually made the tests fail by inserting a console.error('Test'), since no other pages logged any error (yay :partying_face:).

On my laptop this takes about 100 seconds.

If there are no objections, I intend to implement this in forklift so we can test our nightlies and solve my immediate need. After that I’m happy to accept improvements, or to trash it in favor of a more complete solution.

I’ll also look at differences between 1.20 and nightly. It should all work since the login process is the same and I’d expect URLs to remain the same. To support plugins (including katello) I’m wondering if I should extract URLs from the menu or use hardcoded files with paths to test.


Thanks for doing this. I want to have this, but I do have an objection: The target repo. I want to have this in core and run with every PR. Why would you want to have this in a separate repo?

Can we run the tests in a Docker container so a developer does not need a special environment to run the tests in?

Hardcoded files need maintenance. If we hardcode them, we should have tests in core that tell us when we miss a new route. But then again, we could just run a script to print all routes and feed your code with the script’s output.

Is there a ruby or javascript implementation for this? Despite personal preferences, I’m a little reluctant on adding tests in yet another language.

I guess many of us have been interested in this :smile: I’ve been testing out Cypress for Katello. I have a vested interest in making sure the UI works properly, as I have been moving our Angular pages to webpack and would like to make sure the pages work as expected in webpack. I’m hoping to include automated testing with the refactor; I think it’s a great opportunity to do so.

I created a PR here with some instructions on how to use it and a link to the video. It’s fully functional locally but I haven’t added any CI implementation yet.

Like @amirfefer, I was very impressed with cypress.io. It’s very simple to set up and provides a lot of tools that you can leverage for your testing strategy. Some things I like about it (sorry if this is repetitive from what others have said about it):

  • It interacts with the application as a user would
  • You can easily record network requests as fixtures and stub them to keep any calls away from the backend server (see the sketch after this list)
  • Can run in headless mode or in a very helpful GUI.
  • In headless mode it records video
  • No coupling with the implementation, it can test rails/angular/react pages
  • No coupling with how our application is packaged, if it can resolve a UI from a hostname, it can test it out.
  • Very robust API with many tools that we could potentially use.
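
A sketch of that request stubbing with Cypress’s cy.server/cy.route API; the route, fixture file, page, and expected content are invented for the example:

describe('Products page with a stubbed API', () => {
  it('renders data from a fixture instead of the real backend', () => {
    cy.server();
    // answer the (illustrative) API call with a recorded fixture
    cy.route('GET', '/katello/api/v2/products*', 'fixture:products.json').as('getProducts');

    cy.visit('/products');
    cy.wait('@getProducts');
    cy.contains('My Product').should('be.visible'); // value expected to be in the fixture
  });
});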

I was able to get it to communicate with the server, though I’m not sure that is something we want for PR-level testing, as that requires a full environment with our backend services. This is why I stubbed network requests. However, it does leave the possibility open for true E2E tests on a per-release or nightly basis.

It also can be used for API testing, which is something we may want to explore.
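
For example, an API check could be as small as the following sketch (the endpoint and credentials are placeholders):

it('checks the API status endpoint', () => {
  cy.request({
    url: '/api/v2/status',                     // illustrative endpoint
    auth: { user: 'admin', pass: 'changeme' }, // placeholder credentials
  })
    .its('status')
    .should('eq', 200);
});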

The cons I could see are:

  • No firefox support (as has been mentioned)
  • Additional packages needed to support cypress
  • Since we don’t always have good UI tag selectors, you are sometimes forced to use a styling class to select a DOM element, which could be brittle (though I don’t think this is unique to cypress)
  • Stubbing fixtures could break with API response changes

As far as using it for a Foreman plugin, the only part that I saw that may need to be shared is the login workflow. Because of the “black box” nature of Cypress, it doesn’t care if it’s working on a plugin/Rails engine/whatever. This is good news in that plugins can use their own implementation and don’t need to wait for changes from Foreman core (at least as far as I can tell).
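
That shared piece could be as small as a custom command; a sketch, assuming the usual Foreman login form field ids:

// e.g. in cypress/support/commands.js of core or a plugin
Cypress.Commands.add('login', (user, password) => {
  cy.visit('/users/login');
  cy.get('#login_login').type(user);                       // assumed field ids
  cy.get('#login_password').type(password, { log: false });
  cy.get('form').submit();
});

// usage in a spec:
// cy.login(Cypress.env('FOREMAN_USER'), Cypress.env('FOREMAN_PASSWORD'));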

My goals are similar to the ones @Walden mentioned:

  • Have an automated UI “smoke test” that lives with the application so it can be updated when breaking changes are introduced.
  • Run on every PR and not take longer than the existing tests.

My proposal for Katello:

  • Introduce cypress.io testing as a PR-level job (Travis or Jenkins, I’m not sure which is best yet) with this PR.
  • As part of moving Bastion pages to webpack, I would create a Cypress ‘smoke test’ for that page that runs against our current page.
  • This test would be merged and run in automation against future PRs.
  • Then the page would be moved to webpack, where it should pass the same smoke test (as no functionality should be impacted).

+1 to implementing something like this in forklift to use for a per-release basis while we work on PR-level UI automation and more comprehensive solutions.

Thanks for pushing it forward @ekohl

+1
I would want something I can run for each PR, so the developers who made the error have the tools to find, debug, and fix it before it gets merged.

+1
I would say JavaScript is ideal because this is mostly going to catch JavaScript errors.
Therefore, using a JavaScript tool will make it easier to debug those errors.


@John_Mitsch I tend to agree with you; IMO Cypress (or a similar tool) would be the best fit for Foreman and its plugins.