On UUIDs in tests

Hello,

I see many PRs adding weird statements to tests:

test_attributes :pid => '892b44d5-0f11-4e9d-8ee9-fd5abe0b0a9b'

These come without any further explanation. This smells like generated Java code from the 2000s, and believe me, I don't want to go back in time.

Now let me assume that these UUIDs are there to create a reference that does not change, for some reason. Then my concern is that this is an unnecessary burden:

  • each test already has a unique name
  • two tests with the same name sometimes happen (that needs to be fixed)
  • if a test is renamed, that's for a reason (and we do it rarely)
  • you can generate UUIDs from test names automatically

Now, here is the thing: if you can generate something, do not maintain a list of it manually. Therefore I suggest changing robottelo_reporter (assuming that is the component which needs this) so that it generates the cross-reference from the names in our codebase. Please explain, @ldjebran. Thanks.
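
To sketch the cross-reference idea (everything below is illustrative; the helper name, the namespace choice and the output file are my assumptions, not part of any existing tooling): the reporter could walk the test suite once and emit a lookup table keyed by test name, instead of anyone editing test files by hand.

    # Sketch only: build a name -> UUID cross-reference outside the test files,
    # deriving a stable name-based (version 5) UUID from each test name.
    require 'yaml'
    require 'active_support/core_ext/digest/uuid'

    def build_cross_reference(test_names, path = 'test_case_ids.yml')
      mapping = test_names.each_with_object({}) do |name, acc|
        # Same name in, same UUID out; the namespace just has to stay fixed.
        acc[name] = Digest::UUID.uuid_v5(Digest::UUID::URL_NAMESPACE, name)
      end
      File.write(path, mapping.to_yaml)
      mapping
    end

    build_cross_reference(['UserTest#test_create_user_with_long_email'])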

Hi, please refer to these PRs where it was implemented and first used:
foreman: https://github.com/theforeman/foreman/pull/5464
katello: https://github.com/Katello/katello/pull/7337

This is needed for Polarion reporting. These UUIDs already exist as TestCase IDs in Polarion, and they are needed (in foreman/katello) to bind the tests to the already existing Polarion TestCases. robottelo-reporter collects those tests and generates an XML report (one file) the same way ci_reporter does, except that it collects only the tests marked with test_attributes.
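
For anyone who has not looked at the PRs, this is roughly what a marked test looks like (a sketch only: the class, test name and assertion here are made up; the attribute line follows the pattern from the PRs above):

    class UserTest < ActiveSupport::TestCase
      # The UUID is the existing Polarion TestCase id; robottelo-reporter
      # includes only tests carrying such an attribute in its XML report.
      test_attributes :pid => '892b44d5-0f11-4e9d-8ee9-fd5abe0b0a9b'
      test 'creates user with a long email address' do
        assert true # placeholder body, for illustration only
      end
    end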

1 Like

Neither the PRs nor the Redmine issues explained this to me; that's why I asked. A good description of what you are trying to achieve is really necessary.

After some googling and a short chat with @mhulan, I understand that Polarion is some kind of tracking software you use and that these UUIDs are for test cases. You want to associate them with unit/functional tests. Although this should be doable in less intrusive ways, like mapping files, I think that having those UUIDs in the codebase is fine.

Will you share reports from Polarion with us? I assume that tracking individual test cases at this level of detail lets you say more about performance and functional regressions over time. Our Jenkins does a little of this, but not much.

@lzap I think I may have a better chance to explain this to you in person, but for those following along:

  • Polarion is Red Hat’s internal Test Case Management System, which has been in place for a few years
  • All of our existing test cases, automated or not, live in Polarion and use a UUID to map the source code for these tests against a Polarion ID
  • As we port our existing automated test cases to the upstream repositories, it is imperative that we keep the same UUIDs as currently found in Polarion (this has been discussed before with other team members and approved)
  • I do not expect this piece of information to be useful to anyone but Red Hat's QE team :confused:
  • We can share internally, via email, the reports that Polarion provides out of the box
  • As for dynamically generating UUIDs based on the test case name, path or any other artifact, there are a few problems with doing it this way. For example, if the name, path or $ARTIFACT ever changes, the UUID will also change and no longer map to the corresponding Polarion ID. I'll be happy to discuss alternatives, though

Og

I think we should not have this in a public community repository, then. How is the community supposed to review or support this? We at dm, for example, use Jira to track issues. That's why we currently open issues in both Redmine (public for everyone) and Jira, so we can a) track the issue like all other internal tasks and b) have all relevant information available publicly.

Just my two cents. I don't care enough to comment on this more, but I don't think this is the best approach. And I do want to have as many (useful) tests as possible.
Maybe a middle ground could be to write some documentation in the developer handbook to let new developers know about Foreman's specialties (compared to an ordinary Rails project).

1 Like

When I was thinking about this, I didn't see anything negative. It's just metadata which allows the Red Hat QE team to contribute tests upstream. Given the alternatives we discussed, it seemed like too much pain for them, and if I have to choose between having more tests running against upstream code with additional metadata versus no more tests, I prefer the first. I tried to set expectations clearly: we don't expect any dev to fill in such metadata or modify it. If a test needs to be removed, it will be removed together with its metadata.

It would be great if Red Hat QE could share some data gathered from Polarion with the community, e.g. what has been tested for every upstream release. That would, I think, be a great prize :slight_smile:

It might have been good to communicate this via an RFC. It wasn't clear to me that this would also be applied to plugins that are shipped downstream, for example.

2 Likes

I think this can actually be a good thing - see our git history; we usually don't change test names for nothing. If a name is changed, the semantics of the test probably changed too, and such a mapping would allow you to find those tests and investigate - maybe they actually need a new case in Polarion. I am just thinking out loud; don't take this as a strong argument against the current proposal.

Before we commit to spreading UUIDs across our test codebase, it would be good to be sure there is no better way. Is this the only argument you have? I am pretty sure it will also be more work to implement a custom mapping, but if it is worth it, why not. I don't know, there might be better ideas - we have many clever people here, much cleverer than I am.

While I am not against grabbing a coffee and having a private call with you, please be aware that this is a Foreman community space and I am bringing this topic (UUIDs) up for a reason. The Red Hat QA team does lots of things behind the scenes, and committing "mystery UUIDs" is not the best approach for increasing transparency and bringing QA and the community together. That's the overall goal of this effort, isn't it?

This is more about the way you contribute code than about the UUIDs themselves. Although Red Hat is the biggest contributor to Foreman, that does not put us in a position where we don't need to explain ourselves. I had no idea what these tests were all about or what the intentions were. If the plan is to commit those UUIDs into the test codebase (current PRs pending review), then we should probably see a discussion on these lists first, gathering feedback about it. That would be an opportunity to discuss long-term goals and plans. Is this the start of something big? I don't know myself, so how could community folks know?

I think those UUIDs are the kind of detail that matters; after this was explained to me, I have no problem with it. But there may be people who don't share my opinion. That's why I think it is important to discuss this, particularly when there are about a dozen PRs open with the same change.

This should be part of our discussion. I am pretty sure those UUIDs can be justified if there are further plans to tighten our cooperation in ways that benefit the community. More tests, different tests, end-to-end tests, professional reports or long-term regression reports - these are all gaps which we would love to see filled.

But it’s not me or you who should decide. It is the community, the most valuable asset there is.

4 Likes

Hi everyone, I will try to find time to properly reply to your comments, questions or concerns. I understand that this is not Red Hat and I hope that you’ll all understand that there is no ill intention here.

Og

3 Likes

I just stumbled across this test_attributes thing and was similarly confused. Can we standardize on a comment above these lines briefly explaining its purpose?

I'm not sure a comment above the line is best. IMHO it should be self-descriptive, so instead of pid we should have written polarion-id. Then you know to search for Polarion, and we can write something about it in our testing guide.
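
For illustration, the line from the first post would then read something like this (the exact key spelling is a detail to settle):

    test_attributes :polarion_id => '892b44d5-0f11-4e9d-8ee9-fd5abe0b0a9b'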

3 Likes

Just before we do anything stupid, could we merge what we already have and switch to polarion-id later on? I also agree with @timogoebel: ideally we should think of a way to add the test_attributes :pid automatically later on in the downstream, rather than keeping these cryptic lines in a community repo. Opening PRs with these changes seems mind-numbing to me too.

We already have 13 open PRs for these kinds of tests: https://github.com/theforeman/foreman/pulls?utf8=✓&q=is%3Apr+is%3Aopen+Robottelo

1 Like

I think the point of having labeled tests upstream is that QE can link upstream tests easily. That way they can contribute to the upstream test suite. Adding the labels downstream does not help. I'd prefer to get all PRs with pids merged so we move somewhere, unless someone explicitly disagrees. It seems a few people so far don't like it but don't block it. Changing pid to polarion-id is then a matter of one PR. Meanwhile we can think about how to get another unique identifier and keep the mapping externally.

So let's see in a week if there's strong pushback; if not, let's start merging.

1 Like

Sorry, I might have missed the point, but why can't we just rely on test names? They should be unique, and if a test name changes, I assume we would need to revisit the Polarion use case anyway?

TBH, I vote against adding support for this upstream. For example, what if the tooling they use changes? Wouldn't a simple table with a test case / pid mapping be sufficient?

3 Likes

To try to understand why this change was needed, I took a look at the robottelo reporter which uses it. I didn't dig too deeply, but it seems the gem generates XML reports for tests, which I presume allows pushing the results into Polarion, and only tests that have a pid set are included in the report.

Now, I’m very divided regarding this issue.
On one hand, I want to promote Red Hat QE’s involvement in the upstream testing process, and I think that we all agree that this brings added value to the project - better test coverage, more eyes looking over tests which may have not been touched in a while, etc.
On the other hand, pushing a ton of IDs that provide no context or meaning for anyone other than QE into the tests feels like the wrong way to go about it. It raises questions whenever any other developer touches such a test - should I change the pid? Should I remove it? Do I need to wait for QE to ack my change? The implementation itself also feels like an attempt to implement Python decorators in Ruby, which just feels strange to me.

As others have mentioned, the test names should be unique enough, and if a name changes, that is for a good reason and means the mapping in Polarion should likely change as well. In any case, that shouldn't be an issue the Foreman community cares about, especially since we have no access to the Polarion information and results.

So all in all, I think the right way forward is to remove the pids from Foreman core and have QE take care of mapping test names to Polarion pids separately. I don't mind keeping the robottelo-reporter gem installed to allow generating the XMLs, but adding hundreds of lines of pids to the test suites is not the solution I think we should take.

1 Like

The problem with test names is that every test also gets an index, e.g. 00001_test_a and 00002_test_b; when we inject one in the middle, they get reindexed and test_b becomes known as 00003_test_b. It gets worse with context/describe blocks, as they become part of the name. We could strip out the prefix (see the sketch below), though I'm not sure about context/describe, which is usually added later when we add more tests with a similar setup block. Therefore it's more straightforward to use the ID directly.
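
To make the prefix idea concrete, here is a rough sketch of stripping the positional index (the exact prefix format is an assumption, and context/describe nesting is not handled):

    # Sketch: drop the positional index that Minitest prepends to generated
    # test names, so the remaining string stays stable when tests are
    # added or reordered. Does not deal with context/describe blocks.
    def stable_test_name(raw_name)
      raw_name.sub(/\Atest_\d+_/, '')
    end

    stable_test_name('test_0002_creates user with long email')
    # => "creates user with long email"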

If the tooling ever changes, we could still use existing pids and create the mapping based on them.

@Og_Maciel @ldjebran As far as I see, it looks like it is possible to extend https://github.com/SatelliteQE/robottelo_reporter/blob/9191a19424557defe8a14e40aad8f3dd796fd154/lib/robottelo/reporter/attributes.rb to create UUIDs based on names.

However, you mentioned the UUIDs already exist in Polarion. Would it be possible to extend the module to do that, and change Polarion to use a UUID format based on names? I was thinking about using https://github.com/sporkmonger/uuidtools
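
For the record, a minimal sketch of what I have in mind with uuidtools (the method name and the namespace choice are mine, not something the reporter already provides):

    require 'uuidtools'

    # Name-based (SHA-1 / version 5) UUID: the same test name always maps to
    # the same UUID, so nothing has to be stored in the test files themselves.
    # Any fixed namespace UUID would do; the DNS namespace is used as an example.
    def polarion_uuid_for(test_name)
      UUIDTools::UUID.sha1_create(UUIDTools::UUID_DNS_NAMESPACE, test_name).to_s
    end

    polarion_uuid_for('UserTest#test_create_user_with_long_email')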

Sorry about that; I'll take full responsibility for not creating an RFC. The truth is that I did not know what the process was and went directly to Red Hatters to discuss this topic. Folks may not remember, but I did talk about this topic, and what we were trying to achieve, in person with @ohadlevy @tbrisker @Marek_Hulan @Ivan_Necas1. However, I realize that perhaps the full impact of what I was asking may not have been obvious to everyone outside of Red Hat QE. I promise that future requests will go through the proper channels and processes.

1 Like

@ohadlevy there are a couple of reasons why we cannot rely on test names alone, and they rest on some assumptions. For example, let's say you currently have a test case named "create user with long email". If we came up with a clever way to generate a unique ID for this test, we'd end up with something like "create_user_with_long_email" or some other value. Red Hat QE could then use it to map the test against a Polarion ID (e.g. POL-12345678), and from that point forward, whenever we run this test upstream, we could always match it against the Polarion test case by that ID.

However, if the name of the test case changes (e.g. to "create user with email address greater than 255 chars"), that clever, dynamically generated ID changes as well, and I can no longer match it against the Polarion ID.

In my experience, test cases can change in a variety of ways, and using a static UUID was the best solution we found that still allowed us the flexibility to change the test without losing the mapping.

All,

I had promised that I'd provide a more detailed reply to this thread, but the truth is that it is not a simple task without providing context and background… I just spent over an hour talking about this with @Gwmngilfen on video, and that was with us talking rather than typing :slight_smile: Anyhow, here's what I wanted to say to you:

Short version: I will ask that all existing pull requests adding a UUID to the test cases that QE donated to the upstream projects be closed without being merged. We will discuss internally, figure out a less painful way to map automated test results to our internal systems, and move on. This does not diminish in any way, shape or form my commitment to continue donating more automated tests to the upstream projects, and I look forward to hopefully adding value here.

Long-ish version: Everyone who has ever worked as a quality engineer will attest that, no matter what industry you work in, test cases usually need to live in a Test Case Management System for a variety of reasons. Some of these reasons are valid (e.g. for compliance purposes folks may have to report a detailed test plan with all test cases, what requirements or features they verify, what parameters and systems are being used, etc.), but sometimes they are based solely on reasons that, to be quite honest, can be summed up in a few words: to cover our butts in case something breaks and we need to prove that we tested things. In the 11 years that I have been doing QE, I have yet to have someone ask me for my test cases (roughly 4000 of them) and go through each and every one of them! It's just too much information to be parsed by a human being! Showing reports based on these test cases, however, is a different ball game.

As a quality engineer for Red Hat, it is my job not only to track every single test case that my teammates automate against TheForeman (and its plugins), but also to provide information about the execution of these tests against any given version that says something useful about the quality of the product. As I mentioned before, these tests are tracked in Polarion, which is an internal, Red Hat-only test case management system, but the test cases themselves are available upstream.

My QE team just donated 460 net new automated test cases to TheForeman and its plugins to help augment feature coverage and increase the level of confidence that regressions don't sneak in whenever a new version of TheForeman (or one of its plugins, such as Katello) is pushed to the community! We strongly feel that this was a great move for all parties and that it will yield lots and lots of "dividends" for everyone. I intend to continue adding value to the community (and Red Hat) by continuously encouraging more automated tests to be added directly upstream.

Back to these pesky UUIDs. As of right now, I need to be able to generate a report that basically says: when I tested version X.Y.Z of TheForeman + Katello and other plugins, I executed <number_of_tests> automated tests, <number_failures> test cases failed because of <reasons>, and <number_skipped> test cases were skipped because of <other_reasons>. Then percentages of failed and skipped versus successful test cases are generated and provided, along with some charts that show these same numbers.

Now that those 460 test cases no longer live inside our own test repository (completely open sourced and available here, by the way), I can no longer generate these numbers and reports until I have a clever way to match them against our internal Polarion system. These tests are being executed right now, and the results are publicly available to anyone who knows which job to look at (http://ci.theforeman.org/), so the community is getting the benefit of these tests, but there is no consolidated "dashboard" that I am aware of that would let you see these numbers.

When I set out to get the QE team to port these 460 test cases and instructed them to start writing any new test case directly upstream, I did meet with several people who are both Red Hatters and TheForeman/Katello community members and explained what I was attempting to do. I did get verbal acknowledgement and approval from everyone to proceed with the plan, but I do realize that: 1) this was not the proper process, and 2) the devil is in the details, and maybe the people I talked to about this plan did not understand or foresee all the implications. Having been around the open source world for over 20 years now, I can truly say I should have researched more and looked into how to include people from the community outside Red Hat. For this I am sincerely sorry.

I planned and designed all the steps and tooling that would be required to donate our test cases (they were written in Python and had to be converted to Minitest), including training people to port them in such a way that it would not become a hindrance to the existing CI/CD systems used upstream. Everything was successfully executed and completed by our team, and the remaining piece, being able to identify these tests and report on their execution, required adding these UUIDs.

So, as I mentioned much earlier in this reply, I am ready to change our tactics and no longer request that the UUIDs be added to the tests upstream. I do understand how they could be confusing to new contributors who are automating test cases for TheForeman (out of curiosity, do we have any numbers on this?), not to mention that changes to existing UUIDs would throw a monkey wrench into our reporting. UUIDs are not intuitive either, so there is that too :slight_smile:

The QE team will get together and figure out a strategy that is not intrusive to the upstream community, and I will continue to strive to have the QE team write more automated tests directly in the upstream projects.

Thanks to everyone who took the time to chime in; I really appreciate the feedback, and I look forward to continuing to work with you all for a really long time!

8 Likes