All,
I had promised that I’d provide a more detailed reply to this thread, but the truth is that it is not a simple task to do this without providing context and background… I just spent over 1 hour talking about this with @Gwmngilfen on video, and that was because we were talking and not typing. Anyhow, here’s what I wanted to say to you:
Short version: I will ask that all existing pull requests adding a UUID to the test cases that QE donated to the upstream projects be closed without being merged. We will discuss internally and figure out a less painful way to map automation test results with our internal systems and move on. This does not diminish my commitment to continue donating more automated tests to the upstream projects in any way, form or shape and I look forward to hopefully adding value here.
Long-ish version: Everyone who has ever worked as a quality engineer will attest that, no matter what industry you work in, test cases in most cases need to live in a Test Case Management System for a variety of reasons. Some of these reasons are valid (e.g. for compliance purposes, folks may have to report a detailed test plan with all test cases, what requirements or features they verify, what parameters and systems are being used, etc.) but sometimes they are solely based on reasons that, to be quite honest, can be summed up in a few words: to cover our butts in case something breaks and we need to prove that we tested things. In the 11 years that I have been doing QE, I have yet to have someone ask me for my test cases (roughly 4,000 of them) and go through each and every one of them! It’s just too much information for a human being to parse! Showing reports based on these test cases, however, is a different ball game.
As a quality engineer for Red Hat, it is my job to make sure that I not only track every single test case that my teammates automate against TheForeman (and its plugins) but also provide some information about the execution of these tests against any given version that can provide useful information about the quality of the product. As I mentioned before, these tests are tracked on Polarion, which is an internal, Red Hat only test case management system but the test cases themselves are available upstream.
My QE team just donated 460 net new automated test cases to TheForeman and its plugins to help augment feature coverage and increase the level of confidence that regressions don’t sneak in whenever a new version of TheForeman (or one of its plugins, such as Katello) is pushed to the community! We strongly feel that this was a great move for all parties and that the benefit will yield lots and lots of “dividends” for everyone. I intend to continue adding value to the community (and Red Hat) by continuously encouraging more automated tests to be added directly upstream.
Back to these pesky UUIDs. As of right now I need to be able to generate a report that basically says: when I tested version X.Y.Z of TheForeman + Katello and other plugins, I executed <number_executed> automated tests, <number_failures> test cases failed because of <some_reasons>, and <number_skipped> test cases were skipped because of <other_reasons>. Then percentages of failed and skipped versus successful test cases are generated and provided along with some charts that show these same numbers.
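For anyone curious what those report numbers look like, here is a minimal sketch of the arithmetic described above. All counts are made-up examples; the real numbers come from the CI runs:

```ruby
# Hypothetical example counts for one test run of a given version.
# These values are illustrative, not real results.
executed = 460
failures = 23
skipped  = 12
passed   = executed - failures - skipped

# Percentages of failed and skipped versus successful test cases,
# as would feed the charts mentioned in the report.
failure_rate = (failures.to_f / executed * 100).round(2)
skip_rate    = (skipped.to_f  / executed * 100).round(2)
pass_rate    = (passed.to_f   / executed * 100).round(2)

puts "Passed: #{passed} (#{pass_rate}%), " \
     "failed: #{failures} (#{failure_rate}%), " \
     "skipped: #{skipped} (#{skip_rate}%)"
```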
Now that I no longer have those 460 test cases living inside our test repository (completely open sourced and available here, by the way), I can no longer generate these numbers and reports until I have a clever way to match them against our internal Polarion system. These tests are being executed right now and the results are publicly available to anyone who knows which job to look at (http://ci.theforeman.org/), so the community is getting the benefit of these tests, but there is no consolidated “dashboard” that I am aware of that would let you see these numbers.
When I set out to get the QE team to port these 460 test cases and instructed them to start writing any new test case directly upstream, I did meet with several people who are both Red Hatters and TheForeman/Katello community members and explained what I was attempting to do. I did get verbal acknowledgement and approval from everyone to proceed with the plan, but I do realize that: 1) this was not the proper process, and 2) the devil is in the details, and maybe the people I talked to about this plan did not understand or foresee all the implications. Having been around the open source world for over 20 years now, I can truly say: I should have researched more and looked into how to include people from the community outside Red Hat. For this I am sincerely sorry.
I planned and designed all the steps and tooling required to donate our test cases (they were written in Python and had to be converted to MiniTest), including training people to port them in such a way that it would not become a hindrance to the existing CI/CD systems used upstream. Everything was successfully executed and completed by our team, and the remaining piece, being able to identify these tests and report on their execution, required adding these UUIDs.
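To make the mechanism concrete, here is a sketch of what a UUID-tagged MiniTest case could look like. The class name, test, and UUID value are all invented for illustration; they do not come from the actual donated tests:

```ruby
require 'minitest/autorun'

# Hypothetical sketch: a ported test carrying a stable UUID so its result
# can later be matched to a record in an internal test case management
# system (Polarion, in our case).
class HostCreationTest < Minitest::Test
  # Assigned once when the case is imported; regenerating or editing it
  # would break the mapping, which is exactly why changes to these IDs
  # are so disruptive to reporting.
  UUID = '9d2c4f1e-0b7a-4e7c-8f3d-2a1b5c6d7e8f'

  def test_host_name_is_required
    # Real assertions against TheForeman would go here; this placeholder
    # just keeps the sketch self-contained and runnable.
    assert_match(/\A[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}\z/, UUID)
  end
end
```

You can see why a new contributor would find this puzzling: the UUID constant means nothing inside the upstream project itself and only pays off in an external system they have no access to.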
So as I mentioned much earlier in this reply, I am ready to change our tactics and no longer request that the UUIDs be added to the tests upstream. I do understand how they could be confusing to any new contributors who are automating test cases for TheForeman (out of curiosity, do we have any numbers around this?), not to mention that changes to existing UUIDs would throw a monkey wrench into our reporting. UUIDs are not intuitive either, so there is that too.
The QE team will get together and figure out a strategy that is not intrusive to the upstream community, and I will continue to strive to have the QE team write more automated tests directly in the upstream projects.
Thanks to everyone who took the time to chime in. I really appreciate the feedback and I look forward to continuing to work with you all for a really long time!