VMs deployed with Ansible are not idempotent

Problem:
When provisioning a VM by using Ansible to talk to Foreman, the build status changes to “Installed” after provisioning completes. So far, so good, but if I re-run the same Ansible playbook, it changes the VM’s build status back to “Pending Installation”. The VM doesn’t actually get rebuilt, which is a good thing, but after several hours the VM goes into an error state indicating that its token has expired.
Expected outcome:
Upon re-running my playbook, if the VM is already present and all of its properties are correct, leave the VM alone and don’t change its build state.
Foreman and Proxy versions:
Both are on version 3.1.2
Foreman and Proxy plugin versions:
No plugins are installed.
Distribution and version:
All systems involved are Rocky Linux 8.5 (the VM I’m provisioning, the libvirt hypervisor, and the Foreman server).
Other relevant data:

Here’s what my Ansible playbook looks like:

- hosts: localhost
  vars:
    foreman_default_organization: Default Organization
    foreman_default_location: Default Location
    vm_interface_attributes:
      - type: "interface"
        name: "testvm.jnk.sys"
        domain: "jnk.sys"
        subnet: "Control Network"
        provision: yes
        primary: yes
        managed: yes
        mtu: 1500
      - type: "interface"
        name: "testvm-storage.jnk.sys"
        domain: "jnk.sys"
        subnet: "Storage Network"
        provision: no
        primary: no
        managed: yes
        mtu: 9000
  tasks:
    - name: "create the VM"
      theforeman.foreman.host:
        server_url: "https://foreman.jnk.sys"
        username: admin
        password: "{{ site_foreman_admin_password }}"
        validate_certs: no
        organization: "{{ foreman_default_organization }}"
        location: "{{ foreman_default_location }}"
        name: "testvm.jnk.sys"
        interfaces_attributes: "{{ vm_interface_attributes }}"
        hostgroup: "rocky85_hostgroup"
        compute_resource: "testhypervisor.jnk.sys"
        compute_profile: "rocky85_profile"
        build: true
        state: present
      register: buildret

    - name: Start the VM if we just created it.
      community.libvirt.virt:
        name: "testvm.jnk.sys"
        state: running
      delegate_to: "testhypervisor.jnk.sys"
      when: buildret.changed

I suppose I could add a check at the start of the playbook that skips the VM-creation task if the VM already appears in “virsh list”. But that would only skip the task when a VM with the same name exists; it wouldn’t consider whether I’d changed any of the VM’s other properties, such as its interface details. If I do change the VM’s other properties, I would still want the playbook to re-provision or modify the VM to match.
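For what it’s worth, one workaround I’ve been considering (untested sketch, and I’m not certain the host_info module behaves this way on 3.1.2) is to query Foreman itself rather than virsh, and only request a build when the host isn’t registered yet. The hostinfo register name is just my own:

    - name: "check whether the host already exists in Foreman"
      theforeman.foreman.host_info:
        server_url: "https://foreman.jnk.sys"
        username: admin
        password: "{{ site_foreman_admin_password }}"
        validate_certs: no
        search: 'name = "testvm.jnk.sys"'
      register: hostinfo

    - name: "create the VM"
      theforeman.foreman.host:
        # ...same parameters as in my playbook above...
        # only request a build when the host is not registered yet
        build: "{{ hostinfo.hosts | length == 0 }}"
        state: present
      register: buildret

That keeps the decision based on Foreman’s view of the host rather than libvirt’s, but it still doesn’t detect changed properties, so I don’t consider it a real fix.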

Is there a better way to do this?

What effect does “build: true” actually have on the system? The Ansible documentation for that parameter isn’t clear.