Foreman Compute Resources with Citrix Xen


I’d like to inquire about the current state of the foreman-xen plugin. The GitHub project seems active, but the most recently updated issue in the tracker concerns Foreman 1.17, with a recent comment suggesting the problem still persists in 1.20. The plugin doesn’t seem to have documentation on which Xen versions work, or on what is required and how to set things up on the Xen side.

I’m not a Xen guy, but I’m looking to integrate Foreman with our Xen platforms, and unfortunately our Xen guy doesn’t know anything about Foreman :slight_smile:. So are there resources I’m missing here beyond the git repo, the Foreman docs, and the Xen plugin’s Redmine page?


We’re currently using it in our lab. Note that the last time I looked, the RPM was very out of date and I had to install the plugin manually to get the latest version. We are running XCP-ng 7, but IIRC I also had it working fine with other versions of Xen.

This is on Foreman 1.20 (we will be upgrading to 1.21 soon).

We are also currently using the plugin, on 1.20.2. Even though I have also run into issues and needed to install it manually, we have successfully been using Foreman to manage our 10+ XCP-ng 7.6 hypervisors for the last few years.

The overall functionality is there, but I would love to see some more development. The only issue that really impacts productivity is not being able to refresh the compute resource’s cache. Example: I create a few new SRs and add a few new hosts to our Xen pool, but Foreman does not pick up the changes. I have noticed that after an upgrade the changes will be shown; another workaround is removing and re-adding the compute resource.

Is there another way to update it, or refresh the cache?
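In case it helps, here is a rough workaround sketch, assuming Foreman keeps the compute resource data in the Rails cache. I have not verified this against foreman_xen, so treat it as a guess rather than a documented procedure:

```shell
# Guesswork, not documented: clear Foreman's Rails cache on the
# Foreman server, then restart the service so the compute resource
# view is rebuilt the next time it is accessed.
echo 'Rails.cache.clear' | foreman-rake console
systemctl restart foreman
```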

Please also let me know if there is any way I can help.

Thank you!

One thing I have noticed, and I don’t know if it’s a Foreman issue, a Xen issue, or just us:

Any servers provisioned with Foreman cannot be migrated. If we try a live migration on our XCP-ng nodes, it just hangs at 0% until we do a toolstack restart.
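For reference, this is the sequence we run on the pool master (standard `xe` CLI on XCP-ng; the VM name label and target host below are placeholders):

```shell
# Look up the VM's UUID by its name label
xe vm-list name-label=myhost.example.com params=uuid

# Kick off a live migration within the pool; on Foreman-built VMs
# this is the step that sits at 0%
xe vm-migrate uuid=<vm-uuid> host=<target-host> live=true

# Run on the affected host to recover the stuck toolstack
xe-toolstack-restart
```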

Have you come across this?

Once we get Foreman set up and rolled out to production, I do want to see if I can make some improvements to the plugin (a better description on Foreman-provisioned hosts, for a start).

I am also experiencing the same issue after the XCP-ng 7.6.0 upgrade. I have been looking into it, and so far I can confirm that it is caused during the initial provisioning with Foreman. If I create a VM without Foreman, delete its disk, and then attach the disk from a VM that was provisioned with Foreman, migration works and does not get stuck at 0%.
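To narrow down what the plugin sets differently on the VM itself, a simple comparison on the pool master might help (the name labels `foreman-vm` and `manual-vm` are placeholders for one VM of each kind):

```shell
# Dump all VM parameters for a Foreman-built VM and a hand-built one,
# then diff them to spot fields the plugin sets differently
xe vm-param-list uuid="$(xe vm-list name-label=foreman-vm params=uuid --minimal)" > foreman-vm.txt
xe vm-param-list uuid="$(xe vm-list name-label=manual-vm params=uuid --minimal)" > manual-vm.txt
diff foreman-vm.txt manual-vm.txt
```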

Are you migrating to a different host in the same pool, or are you doing a cross-pool migration?
In my first tests with this plugin, migration within a pool works without problems. Maybe the problem is in the XenServer template used for the VM?
I am creating Debian 9 VMs with the Debian 9 (stretch) template (XCP-ng 7.6), and migration is fine.

I have a totally different problem: no matter what I do, my VMs are created with only one vCPU… Has anyone else had this experience?
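Not a fix for the plugin itself, but as an after-the-fact workaround the count can be raised with `xe` (the UUID placeholder and the count of 4 are just examples):

```shell
# VCPUs-max must be at least VCPUs-at-startup, so raise it first;
# the VM has to be halted to increase VCPUs-max
xe vm-shutdown uuid=<vm-uuid>
xe vm-param-set uuid=<vm-uuid> VCPUs-max=4
xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=4
xe vm-start uuid=<vm-uuid>
```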

I am using Foreman with the XenServer plugin to manage 30+ hypervisor hosts with hundreds of VMs.
Setting it up took some tinkering, but I got it working.
Some features don’t work (e.g. reboot on rebuild, the compute resource being unaware of a new iSCSI volume until it is recreated), but I can live with that.

Using Foreman 1.24 and XCP-ng 8.1.

What really drives me nuts is that I can’t provision hosts using the hammer CLI.
To be fair: I had it working about a year ago, at a different company, in a lab environment with a simple single-network setup and no shared storage, on XCP-ng 7.9 or so, with Foreman installed on CentOS 7.

Now I’m on XCP-ng 8.1, with Foreman installed on Ubuntu 18.04.

Any attempt to provision bumps into this error:

Failed to create a compute pool2 (Xenserver) instance api10.mydomain.local: VM.set_affinity: ["MESSAGE_PARAMETER_COUNT_MISMATCH", "VM.set_affinity", "2", "1"]
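For context, this is roughly the invocation that fails (the host name and domain match the error above; the hostgroup name and especially the compute-attribute keys are examples from my setup and may not match what the XenServer plugin actually expects):

```shell
hammer host create \
  --name api10 \
  --domain mydomain.local \
  --hostgroup "xen-hosts" \
  --compute-resource "pool2" \
  --compute-attributes "vcpus_max=2,memory_min=2GB,memory_max=2GB" \
  --interface "type=interface,managed=true,primary=true,provision=true" \
  --provision-method build
```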

Anyone had that issue?