Foreman Libvirt Compute Resource - volume-based storage pools supported or not?

Problem:
I’m looking at moving the provisioning process for some KVM hypervisors away from directory-based storage pools to logical-volume-based storage pools managed by libvirt.
Never having used this configuration before, I decided to test it in my home lab.

I set up a hypervisor on Rocky Linux 10 and created two storage pools backed by different disk devices, to understand the performance and configuration options (specifically volume block size and file system block size).

The two volume groups and their libvirt storage pool definitions look like this:

<pool type='logical'>
 <name>vgnvme</name>
 <uuid>8fcdd132-b356-4044-93b0-681f44a8fa3c</uuid>
 <capacity unit='bytes'>0</capacity>
 <allocation unit='bytes'>0</allocation>
 <available unit='bytes'>0</available>
 <source>
   <name>vgnvme</name>
   <format type='lvm2'/>
 </source>
 <target>
   <path>/dev/vgnvme</path>
 </target>
</pool>

and

<pool type='logical'>
  <name>vgstripe</name>
  <uuid>61329409-de04-48f2-bed8-035ad72ecfae</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <name>vgstripe</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/vgstripe</path>
  </target>
</pool>

These pools appear on the libvirt hypervisor and are active:

 Name       State    Autostart
--------------------------------
 vgnvme     active   yes
 vgstripe   active   yes
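As an aside, the attribute that distinguishes these from directory pools is `type='logical'` on the `<pool>` element. A quick Ruby sketch (stdlib REXML only; the XML is abridged from the vgnvme definition above) pulls out the fields that matter:

```ruby
require 'rexml/document'

# Abridged from the `vgnvme` pool definition above
xml = <<~XML
  <pool type='logical'>
    <name>vgnvme</name>
    <source>
      <name>vgnvme</name>
      <format type='lvm2'/>
    </source>
    <target>
      <path>/dev/vgnvme</path>
    </target>
  </pool>
XML

pool = REXML::Document.new(xml).root
puts "#{pool.attributes['type']} pool '#{pool.elements['name'].text}' " \
     "-> #{pool.elements['target/path'].text}"
# prints: logical pool 'vgnvme' -> /dev/vgnvme
```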

After verifying that the libvirt compute resource is reachable using the ‘test’ option in the Foreman interface, I am able to begin provisioning a host on this hypervisor.

When setting up the virtual machine’s specification in the Foreman interface, the storage pool drop-down that would normally let me select the storage destination for the guest contains no entries, and Foreman expects me to create one.

The storage pools I created are not displayed, and the only storage options offered are ‘raw’ and ‘qcow2’, which are directory/file-based storage options in libvirt.
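For context on why those two formats imply file-backed storage: in the guest definition, a qcow2 disk is a `type='file'` disk pointing at a file in a directory pool, while an LV from a logical pool is attached as a `type='block'` disk. Roughly like this (paths and names here are illustrative, not from my setup):

```xml
<!-- file-backed qcow2 disk, from a directory pool -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- LV-backed disk, from a logical pool such as vgnvme -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/vgnvme/guest-root'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

Note that ‘raw’ is just the driver format and applies equally to block devices, which arguably makes the raw/qcow2 wording in the drop-down a bit misleading.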

Expected outcome:

The storage pool drop-down would list the volume-based storage pools, in this example ‘vgnvme’ and ‘vgstripe’.

Foreman and Proxy versions:

Foreman 3.16
Foreman-Proxy 3.16

Foreman and Proxy plugin versions:
foreman-tasks 11.0.4
foreman_ansible 17.0.1
foreman_dhcp_browser 0.1.2
foreman_kubevirt 0.4.1
foreman_puppet 9.0.0
foreman_remote_execution 16.2.1
foreman_templates 10.0.9
foreman_webhooks 4.0.1

Other relevant data:

Satellite reference docs

These show no stated limitation on storage pools when provisioning KVM/libvirt hosts, although all examples in the doc do reference qcow2-format provisioning.

There is a link in this document describing how to set up a Red Hat virtualization host (libvirt/KVM) for use with Foreman.

Section 3.5.2 of that document shows support for volume-based storage pools. While this is not direct confirmation of support in Foreman, the implication is that if Satellite does not say ‘only raw or qcow2 on directory-based pools is supported’, and the linked document on setting up a KVM server for Foreman shows how to set up volume-based storage pools, then Foreman should support it.

From the research I’ve been able to do myself, I don’t believe pool-based storage is supported, only directory/file-based storage (I can see that my other KVM hosts, which use directory-based pools, offer their storage correctly on the Foreman host provisioning page).

Answered my own question; I’d missed a line in the doc:

Only directory pool storage types can be managed through Satellite.
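That would explain the empty drop-down: the limitation is on the Foreman/Satellite side, not in libvirt. Purely as an illustration of the symptom (the structs and names below are hypothetical, not Foreman’s actual code), a UI that only handles directory pools would filter like this:

```ruby
# Hypothetical pool records as a compute resource might report them;
# this is an illustrative sketch, not Foreman's real data model.
Pool = Struct.new(:name, :pool_type)

pools = [
  Pool.new('default',  'dir'),      # directory pool
  Pool.new('vgnvme',   'logical'),  # LVM-backed pool
  Pool.new('vgstripe', 'logical'),  # LVM-backed pool
]

# Selecting only directory pools reproduces the observed behaviour:
# the logical pools never reach the storage drop-down.
selectable = pools.select { |p| p.pool_type == 'dir' }
puts selectable.map(&:name).join(', ')
# prints: default
```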

FWIW - I have a setup running with libvirt-managed LVM storage, currently on Foreman 3.13.

Sounds like a change of behaviour with recent updates?

While I’m not great with RoR, I’m more than happy to spend time making this work better if the project would consider a PR.


When you say LVM-managed, you don’t mean a file system managed by LVM; you mean Foreman actually creates a logical volume per VM?

Yes - I meant LVM LVs as block devices managed by foreman-libvirt and fog-libvirt. I’ll no doubt test before the upgrade, but it’s good to know in advance if something is expected to break.

I would love to see this working, but I have no idea how it can work. The limitation looks like it’s been in the Satellite docs for many, many releases (which means this must have been the case in Foreman for many, many releases too).

What version of Foreman is this actually working in? I’ll get it into my lab straight away to check.

Ahhh, you did say 3.13. Deploying 3.13 to verify this now. I’d be surprised if it works (I’m not doubting you), as that’s a pretty recent release and is covered by the Satellite ‘directory only’ limitation documentation. I can’t see any changes in the Foreman libvirt compute provider between 3.13 and 3.16 either.

I saw that you’d been using this setup since 2023!

It looks like fog doesn’t care about the pool type; it just cares that a pool is presented. What backs it seems irrelevant to the fog library, so what is the limiting factor?

That post brought back some interesting memories… Yes, we have been using Foreman with LVM LV-backed storage for years, and while we have plans to move away from it, we are still using it today on version 3.13.

FWIW - we have been using it this way since version 1.3, and maybe longer, from memory!

Interesting. What do you plan to move to? I wonder why the docs show this as ‘not supported’ and why the storage pools didn’t show up in my Foreman 3.16 instance; I can’t see any changes in the libvirt compute resource or the fog provider.

We are looking at Proxmox and OKD virtualization with KubeVirt; Foreman supports both, it seems.
Still haven’t gotten the time to test the libvirt LVM route with the latest version. Did you get any further?

No, I can’t get it to work on the older version either! (I had a crash-and-burn 3.13 box, so I used that.) I actually can’t see how you’ve got this working (and I also don’t see a reason it shouldn’t work).

Interesting – as I said, it’s definitely working with our v3.13 install with CS9 compute resources. The right libvirt versions on the CRs matter too - do check yours.

I’m on the default versions on EL9 and EL10 (I’ve only just started testing with EL10 and have already found an incompatibility between Foreman running on EL9 and a hypervisor running on EL10).

So that’s:

libvirt-libs-10.10.0-7.7.el9_6.x86_64
rubygem-ruby-libvirt-0.8.4-1.el9.x86_64
rubygem-fog-libvirt-0.13.2-1.el9.noarch
foreman-libvirt-3.16.0-1.el9.noarch

on the foreman node

libvirt-libs-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-log-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-lock-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-plugin-lockd-10.10.0-7.7.el9_6.x86_64
libvirt-client-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-common-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-driver-storage-core-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-driver-nwfilter-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-driver-network-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-driver-qemu-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-driver-nodedev-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-driver-interface-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-driver-secret-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-proxy-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-config-network-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-config-nwfilter-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-driver-storage-iscsi-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-driver-storage-logical-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-driver-storage-disk-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-driver-storage-rbd-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-driver-storage-scsi-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-driver-storage-mpath-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-driver-storage-10.10.0-7.7.el9_6.x86_64
libvirt-ssh-proxy-10.10.0-7.7.el9_6.x86_64
libvirt-client-qemu-10.10.0-7.7.el9_6.x86_64
libvirt-10.10.0-7.7.el9_6.x86_64
libvirt-daemon-kvm-10.10.0-7.7.el9_6.x86_64

on the EL9 Hypervisor

and

libvirt-libs-10.10.0-8.4.el10_0.x86_64
libvirt-client-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-lock-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-log-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-plugin-lockd-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-proxy-10.10.0-8.4.el10_0.x86_64
python3-libvirt-10.10.0-1.el10.x86_64
libvirt-client-qemu-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-common-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-driver-storage-core-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-driver-network-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-driver-nwfilter-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-config-nwfilter-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-config-network-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-driver-storage-disk-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-driver-storage-iscsi-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-driver-storage-logical-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-driver-storage-mpath-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-driver-storage-rbd-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-driver-storage-scsi-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-driver-storage-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-driver-interface-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-driver-nodedev-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-driver-secret-10.10.0-8.4.el10_0.x86_64
libvirt-daemon-driver-qemu-10.10.0-8.4.el10_0.x86_64
libvirt-10.10.0-8.4.el10_0.x86_64

on EL10

all stock from the distro

I’m talking about the actual OS and libvirt version used on your hypervisor: the host running the qemu/libvirt process, NOT the Foreman server.

Both matter (as I’ve recently found out).

The Foreman server needs libvirt as a dependency for provisioning, as its client config is what Foreman uses to read configuration options from the hypervisor. (I believe there is an incompatibility, which I’m verifying at the moment, between the libvirt version on EL9 with the default config Foreman uses and an EL10 hypervisor, but that’s a different topic.)

The packages/versions I’ve listed above are mostly on the hypervisors, not the Foreman server; the only packages on the Foreman server are the ones marked ‘on the foreman node’. The rest are on a standalone EL9 hypervisor and a standalone EL10 hypervisor.
