Updating host via API fails because compute information is not found

I have already posted this on matrix a few days ago, posting here too in case someone who’s not on there can help.

Problem:
We are testing lestejska’s patch from Fixes #37800 - Don't apply the compute profile when updating host by stejskalleos · Pull Request #10291 · theforeman/foreman · GitHub, since we are affected by the same issue. The patch worked flawlessly in our test environment, so we went ahead and tried it on the production instance, where we are now facing a new issue (which I am fairly certain is not caused by the patch, but rather surfaced now because the original issue is no longer present).
When trying to update a host provisioned to VMware, we get the following error: “Failed to find compute attributes, please check if VM was deleted”, which, as far as I could find, is only raised here: foreman/app/models/concerns/orchestration/compute.rb at develop · theforeman/foreman · GitHub. Following the chain of events that leads to this code, it seems the only way it can be executed is if vm_exists? returns false.
Yet when I query the compute_object and its persisted? attribute via the rake console, I get the object back and persisted? returns true. Modifying the host via the UI also works fine.
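To make the reasoning above concrete, here is a minimal, self-contained Ruby sketch of how I understand that guard. This is my reconstruction for illustration, not the actual Foreman code: FakeVm, check_compute_attributes! and the reduction of vm_exists? to compute_object.persisted? are my assumptions.

```ruby
# Hypothetical reconstruction of the orchestration guard (not Foreman source).
ComputeError = Class.new(StandardError)

# Stand-in for the fog compute object returned by compute_object.
FakeVm = Struct.new(:persisted) do
  def persisted?
    persisted
  end
end

# Assumption: vm_exists? boils down to "we got a compute object back and it is persisted".
def vm_exists?(compute_object)
  !compute_object.nil? && compute_object.persisted?
end

# Assumption: the error from the first post is raised only when vm_exists? is false.
def check_compute_attributes!(compute_object)
  unless vm_exists?(compute_object)
    raise ComputeError, "Failed to find compute attributes, please check if VM was deleted"
  end
end

check_compute_attributes!(FakeVm.new(true))   # passes silently, like our host in the console
begin
  check_compute_attributes!(FakeVm.new(false))
rescue ComputeError => e
  puts e.message  # prints the error we see on update
end
```

If this reading is right, the console result below (persisted? returning true) should make the error impossible, which is exactly why the behavior is confusing.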

Expected outcome:
Host updated without error through the API.

Foreman and Proxy versions:
3.9.3

Foreman and Proxy plugin versions:

  • foreman-tasks 9.0.4
  • foreman_expire_hosts 8.2.0
  • foreman_hooks 0.3.17
  • foreman_puppet 6.2.0
  • foreman_remote_execution 12.0.5
  • foreman_scc_manager 3.0.0
  • foreman_snapshot_management 3.0.0
  • foreman_templates 9.4.0
  • foreman_webhooks 3.2.2
  • katello 4.11.1
  • puppetdb_foreman 6.0.2

Distribution and version:
RHEL8.10

Other relevant data:

Here is how I checked that the compute info is present for one of the affected hosts through the console:

irb(main):003:0> Host::Managed.find_by(:id => 10267).compute_object.persisted?
=> true

So Foreman clearly has the correct association for the host and, as far as I understand, the compute orchestration should not be checking anything else.
Does anyone have an idea where this might come from, or how to debug it further?

So, after a lot of digging, testing and debugging, we at least found the culprit on our end.
We eventually realized that the only setting we could not change without hitting the error message from the first post was the hostgroup of certain hosts. Changing other options on a host, or changing the hostgroup of some other hosts, worked just fine, so I went digging into our hostgroups and found that some of them had inherited an incorrect compute resource. The host itself had the correct compute resource set, though.
For illustration, here is what some of those hostgroups and an affected host look like in the DB:

 id  | ancestry | compute_resource_id
-----+----------+---------------------
  31 | 1        |
 362 | 359/360  |                   4
 361 | 359/360  |                   4
 247 | 218/219  |
 219 | 218      |                   5
  33 | 1/31     |
   1 |          |

foreman=# select id, hostgroup_id, compute_resource_id from hosts where id=10267;
  id   | hostgroup_id | compute_resource_id
-------+--------------+---------------------
 10267 |          247 |                   4

If we now try to move this host from hostgroup 247 to hostgroup 362, Foreman tries to create a new VM on compute resource 4, even though the host is already on compute resource 4. Moving a host from hostgroup 361 to 362 works, though, as does moving the host to hostgroup 33, whose whole ancestry has no compute resource set. So my guess is: when updating the hostgroup of a host, Foreman does not check whether the compute resource of the host itself would change, but only compares the (inherited) compute resources of the old and new hostgroups?
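This guess can be illustrated with a small, self-contained Ruby model. It is an assumption for illustration, not Foreman’s actual implementation: I am modeling the effective compute resource of a hostgroup as the first compute_resource_id found on itself or its closest ancestor, using the ancestry values from the tables above.

```ruby
# Toy model of hostgroup compute-resource inheritance (assumption, not Foreman code).
Hostgroup = Struct.new(:id, :ancestry, :compute_resource_id)

# Data copied from the psql output above (groups 218, 359, 360 are not shown there).
GROUPS = {
  31  => Hostgroup.new(31,  "1",       nil),
  362 => Hostgroup.new(362, "359/360", 4),
  361 => Hostgroup.new(361, "359/360", 4),
  247 => Hostgroup.new(247, "218/219", nil),
  219 => Hostgroup.new(219, "218",     5),
  33  => Hostgroup.new(33,  "1/31",    nil),
  1   => Hostgroup.new(1,   nil,       nil),
}

# Walk from the group itself up through its ancestors (closest first) and
# return the first compute_resource_id that is set.
def inherited_compute_resource_id(id, groups)
  ancestors = groups[id].ancestry&.split("/")&.map(&:to_i)&.reverse || []
  ([id] + ancestors).each do |gid|
    g = groups[gid]
    return g.compute_resource_id if g && g.compute_resource_id
  end
  nil
end

puts inherited_compute_resource_id(247, GROUPS)  # => 5 (via ancestor 219)
puts inherited_compute_resource_id(362, GROUPS)  # => 4
```

In this model, hostgroup 247 effectively resolves to compute resource 5, so moving host 10267 to hostgroup 362 looks like a compute-resource change (5 to 4) from the hostgroup’s point of view, even though the host itself already lives on compute resource 4. That would explain the attempted VM creation.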
We can absolutely work around this with our hostgroup settings, but this was unexpected behavior, at least to me, and the error message also feels somewhat misleading (despite being technically correct for what it is intended to catch).

Can someone confirm whether this works as intended, or whether I have hit some obscure bug by having hosts on a compute resource that differs from the one configured in their hostgroup? Or maybe just by moving a host to a hostgroup with a different compute resource assigned?