I think it will be a great addition to the testing stack.
At the moment we don't have real E2E/black-box testing for upstream, only a bunch of integration tests.
@amirfefer didn't you show me a demo on CentOS?
Good point @ekohl, one of the downsides of robottelo is the fact that it lives in a separate repository outside the Foreman stack.
Benefits of having E2E testing in the Foreman source code are:
You break it, you fix it: you either update the tests or fix the issue the tests uncovered.
You write a new feature, you write E2E tests around it.
We can easily run it in the CI environment and reproduce failures locally.
But the downside is that you don't have a central place where you can test the Foreman including plugins. This means your test suite also needs to become pluggable. Where do you store the tests of plugin combinations to see if they're compatible?
I'm not in QE, but it'd be good to involve them in the discussion here.
I'm just wondering, does the foreman need to be responsible for testing itself together with a set of plugins?
I can think of four use cases for tests:
Foreman should be responsible for testing itself (and its pluggability functionality).
Plugins should be responsible for testing themselves together with Foreman.
If a plugin relies on other plugins, that plugin should be responsible for testing itself together with the plugins it relies on.
Downstream projects that rely on a set of plugins should be responsible for testing themselves together with Foreman and the set of plugins they wish to use.
For all those use cases, Foreman can potentially provide an infrastructure.
Thanks for writing this up @amirfefer. Cypress looks good. It would be interesting to write down the requirements we have for such a tool/tests and compare them with other options, just to make sure it covers all our needs before putting larger efforts into it.
The screenshots and videos are impressive. What I often see when something goes wrong in our redux code is a completely blank page. In such cases screenshots are useless and I think it's critical to have a full log of errors and warnings from the browser's console available.
(BTW Cypress can do that with some additional setup https://github.com/cypress-io/cypress/issues/300#issuecomment-321587149)
I have mixed feelings about that. A lower barrier for writing and updating tests when they're in the same git repo is an obvious benefit to me. At the same time the point about testing plugins and their combinations is very valid, I think. Maybe we need both.
I might be wrong, but my impression is that most often breakages happen with a combination of plugins rather than in vanilla Foreman. I agree it's the plugins' responsibility to test that they're compatible, simply because it's not possible for Foreman to test each combination. On the other hand, if we provide only an infrastructure it will lead to duplication of tests at best. A solution could be to somehow provide the test suite from core for importing into the plugin's tests.
How about defining a set of actively maintained plugins (or multiple sets) that we'll test together?
E2E automated tests are supposed to take longer to execute, so you wouldn't want to make that your first test suite to execute or necessarily block PRs
QE's existing tests are either integration (of components) or system (E2E) tests, specifically designed to run against a Satellite instance and not necessarily a TheForeman instance, so your mileage will vary
I'd love to see PRs coming for QE's tests but I understand that they're a bit hard to run outside of QE
Having QE's existing tests outside of TheForeman Github organization should not prevent folks from sending PRs but you all know that
If you want to run E2E automated tests, which I highly recommend, perhaps an alternative would be to:
Add more tests to our existing UI tests using the AirGun library
Consider using a Dockerized version of Chrome or Firefox and Selenium, which in my opinion would be a better solution, since you could save all logs, take screenshots of failures, and run it all through Travis. I have a similar setup on Gitlab here using their CI/CD environment (aka Pipelines)
From a release engineer perspective I'd prefer to have something. That also means I wouldn't mind a temporary solution, even if we throw it away after a few months. My requirements:
Black box. It should require nothing on the actual system under test. That means providing it with a URL, username and password should be sufficient.
An easy traffic-light functionality: red if it's broken, green if it's working
Nice to have:
An easy-to-consume test output. A common format is xunit but Jenkins can also easily consume TAP
Flexible. It should work on a vanilla Foreman, but also handle plugins.
Quick.
Have a multi-version strategy. I'd be ok with just using this for nightly, but it'd be great if we could also use it on releases.
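To make the black-box contract concrete, here is a minimal sketch of what that traffic light could look like, assuming plain HTTP status checks are enough for a red/green verdict; the function names, example host and paths are hypothetical, and authentication is left out:

```python
# Hypothetical sketch of the black-box "traffic light" contract: given
# only a URL (plus credentials, omitted here), visit a few pages and
# reduce the results to red/green. Nothing runs on the system under test.
import urllib.request


def check_page(base_url, path, timeout=10):
    """True if the page answers with HTTP 200; no access to the host itself."""
    try:
        with urllib.request.urlopen(base_url.rstrip("/") + path, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, HTTP error, ...
        return False


def traffic_light(results):
    """Reduce per-page results to the single verdict a release engineer needs."""
    return "green" if results and all(results.values()) else "red"


# Usage against a real instance would look roughly like:
#   results = {p: check_page("https://foreman.example.com", p)
#              for p in ("/users/login", "/hosts", "/dashboard")}
assert traffic_light({"/hosts": True, "/dashboard": True}) == "green"
assert traffic_light({"/hosts": True, "/dashboard": False}) == "red"
```

The pure `traffic_light` reduction maps directly onto a CI exit code, which covers the "easy traffic light" requirement above.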
I think that changing the name of what we are trying to accomplish here to "smoke tests" would be better. We aren't trying to implement a way to test all the functionality of the application (and thus duplicate QE's efforts), we just want some simple tests to make sure that our application isn't dead on arrival with JS errors.
My requirements:
Run these with each PR against a production build and fail the PR if they fail
Today I tried my hand at my immediate need: testing whether the pages actually load and no "white pages" are shown because webpack failed. My minimal implementation is:
Sadly it's limited to Chromium for now because Firefox doesn't implement the WebDriver logging, so you can't get the browser console log which my test relies on:
The generated HTML log (838.7 KB) contains the details for all the pages with their HTML source and a screenshot. By default it only takes a screenshot on failed tests but SELENIUM_CAPTURE_DEBUG=always can change that. Since it's all pytest you can also add --junitxml=report.xml.
Note that I manually ensured the tests failed by inserting a console.error('Test') since no other pages logged any error (yay!).
On my laptop this takes about 100 seconds.
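The core of a check like this can be sketched as a small helper over the browser console log. In the real test the entries would come from Selenium's `driver.get_log('browser')` on Chromium; the helper below is a hypothetical illustration that only assumes Chrome's log entry shape (dicts with "level" and "message" keys), so it can run without a browser:

```python
# Sketch: fail a page when its browser console log contains error-level
# entries. The entry format mirrors what Chromium's WebDriver returns
# from driver.get_log('browser'); the helper name is illustrative.
def console_errors(entries, levels=("SEVERE", "ERROR")):
    """Return the messages of all error-level browser console entries."""
    return [e["message"] for e in entries if e.get("level") in levels]


# A page "passes" when the filtered list is empty.
entries = [
    {"level": "INFO", "message": "webpack compiled"},
    {"level": "SEVERE", "message": "console.error('Test')"},
]
assert console_errors(entries) == ["console.error('Test')"]
```

In a pytest test this would be a plain `assert not console_errors(driver.get_log('browser'))` after navigating to each page.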
If there are no objections, I intend to implement this in forklift so we can test our nightlies and solve my immediate need. After that I'm happy to accept improvements, or to trash it in favor of a more complete solution.
I'll also look at differences between 1.20 and nightly. It should all work since the login process is the same and I'd expect URLs to remain the same. To support plugins (including Katello) I'm wondering if I should extract URLs from the menu or use hardcoded files with paths to test.
Thanks for doing this. I want to have this, but I do have an objection: The target repo. I want to have this in core and run with every PR. Why would you want to have this in a separate repo?
Can we run the tests in a Docker container so a developer does not need a special environment to run the tests in?
Hardcoded files need maintenance. If we hardcode them, we should have tests in core that tell us that we missed a new route. But then again, we could just run a script to print all routes and feed your code with the script's output.
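That "did we miss a new route" guard could be a small set difference; a hypothetical sketch, assuming the routes script prints one path per line and both lists are available as plain sequences:

```python
# Sketch of the guard against forgetting new routes: compare the routes
# the application reports (e.g. output of a routes-printing script)
# against the hardcoded list the smoke test visits. Names and paths
# are illustrative assumptions.
def untested_routes(app_routes, tested_paths):
    """Return routes present in the app but absent from the test list."""
    return sorted(set(app_routes) - set(tested_paths))


app_routes = ["/hosts", "/dashboard", "/models"]  # from the routes script
tested_paths = ["/hosts", "/dashboard"]           # hardcoded in the test
assert untested_routes(app_routes, tested_paths) == ["/models"]
```

A core test asserting the result is empty would fail exactly when someone adds a route without adding it to the smoke-test list.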
Is there a Ruby or JavaScript implementation for this? Despite personal preferences, I'm a little reluctant on adding tests in yet another language.
I guess many of us have been interested in this. I've been testing out Cypress for Katello. I have a vested interest in making sure the UI works properly as I have been moving our Angular pages to webpack and would like to make sure the pages work as expected in webpack. I'm hoping to include automated testing with the refactor; I think it's a great opportunity to do so.
I created a PR here with some instructions on how to use it and a link to the video. It's fully functional locally but I haven't added any CI implementation yet.
Like @amirfefer, I was very impressed with cypress.io. It's very simple to set up and provides a lot of tools that you can leverage for your testing strategy. Some things I like about it (sorry if this is repetitive from what others have said about it):
It interacts with the application as a user would
You can easily record network requests as fixtures and stub them to keep any calls away from the backend server
Can run in headless mode or in a very helpful GUI.
No coupling with the implementation, it can test rails/angular/react pages
No coupling with how our application is packaged, if it can resolve a UI from a hostname, it can test it out.
Very robust API with many tools that we could potentially use.
I was able to get it to communicate with the server, though I'm not sure that is something we want for PR-level testing as that requires a full environment with our backend services. This is why I stubbed network requests. However, it does leave the possibility open for true e2e tests on a per-release or nightly basis
It also can be used for API testing, which is something we may want to explore.
The cons I could see are:
No firefox support (as has been mentioned)
Additional packages needed to support cypress
Since we don't always have good UI tag selectors, you are sometimes forced to use a styling class to select a DOM element, which could be brittle (though I don't think this is unique to cypress)
Stubbing fixtures could break with API response changes
As far as using it for a Foreman plugin, the only part that I saw that may need to be shared is the login workflow. Because of the "black box" nature of Cypress, it doesn't care if it's working on a plugin/rails engine/whatever. This is good news in that plugins can use their own implementation and not need to wait for changes from Foreman core (at least as far as I can tell).
My goals are similar to the ones @Walden mentioned:
Have an automated UI "smoke test" that lives with the application so it can be updated when breaking changes are introduced.
Run on every PR and not take longer than the existing tests.
My proposal for Katello:
Introduce cypress.io testing as a PR-level job (Travis or Jenkins, I'm not sure which is best yet) with this PR.
As part of moving Bastion pages to webpack, I would create a Cypress "smoke test" for that page that runs against our current page.
This test would be merged and run in automation against future PRs.
Then the page would be moved to webpack, where it should pass the same smoke test (as no functionality should be impacted).
+1 to implementing something like this in forklift to use for a per-release basis while we work on PR-level UI automation and more comprehensive solutions.
+1
I would want something I can run for each PR so the developer who made the error has the tools to find, debug and fix it before it gets merged.
+1
I would say JavaScript is ideal because it is mostly going to catch JavaScript errors.
Therefore using a JavaScript tool will make it easier to debug JavaScript errors.
My (selfish) intent is that I want to verify a (remote) server installation with zero interaction on the actual system. I want to verify every page at least loads without errors so I know that the packages we have in a repository are at least doing something for the end user. The aim is to verify that our build system works.
Possibly, but I don't intend this tool for developers. Its aim is automatically verifying the builds. The intended step for developers is to spin up a similar box, and reproducing it should be trivial: log in, navigate to the page, open your browser's console log and look.
Agreed, but this was a solid baseline. These URLs haven't changed in years and users might actually have them bookmarked. They should be stable, and breaking those URLs should be considered breaking the application.
Ideally there would be a JSON HTTP endpoint (so there's no local interaction on the system) that returns all the menu items so you can also automatically verify all plugins.
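Such an endpoint does not exist today; as a hedged sketch, consuming it could look like the following, where the JSON shape (nested items with `name`/`url`/`children` keys) is purely an assumption:

```python
# Sketch of consuming a hypothetical menu endpoint: given a nested menu
# structure, collect every URL so plugin pages get tested automatically
# along with core's. The key names are assumptions, not a real API.
def menu_urls(items):
    """Recursively collect 'url' values from a nested menu structure."""
    urls = []
    for item in items:
        if "url" in item:
            urls.append(item["url"])
        urls.extend(menu_urls(item.get("children", [])))
    return urls


menu = [
    {"name": "Hosts", "url": "/hosts"},
    {"name": "Monitor", "children": [{"name": "Dashboard", "url": "/dashboard"}]},
]
assert menu_urls(menu) == ["/hosts", "/dashboard"]
```

The test suite would then iterate over `menu_urls(...)` instead of a hardcoded path list, so a plugin adding a menu entry is covered for free.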
No. As I said: this is solving my need (RFC - Integration Tests & End to End tests). Since there hasn't been any progress in the past 6 months on an item I find extremely important in my daily job, I solved my own problem. Others are free to implement it in a way they can maintain, and if it solves my need, I'm happy to drop my implementation. It was just half a day so I'm not attached to it.
The forklift project is already a lot of Ansible and Python so it's not unusual there. QE also writes their tests in Python with pytest (@rplevka is working on end-to-end provisioning tests) so I'm also happy to move it there in the long run.
I thought we already had integration tests in the framework using Capybara. That can use Selenium, which in turn uses an actual browser. All the tooling for that exists today. If it's broken, we should fix it. It is written in Ruby. What would limit us from actually using this?
Again, this is not going to solve my personal use case.
Strongly disagree, at least for my goal. There should be no coupling between the system under test and the tooling. Browsers that implement the WebDriver API provide controls via a REST API; I'm only relying on what the browser is able to do to emulate an end user. Having to write JavaScript would be a huge negative for me, also because it means we would need JavaScript in our (production) test setup, which we currently don't.
I would love to see us continue working with Cypress and figuring out how we can set up tests on a PR level. Until then, let's use what works and make sure our users don't see blank pages.