Crashed my Foreman server twice!

Foreman 3.2 and Katello 4.4, then again on a brand-new 3.3 and 4.5.

Twice now I have come back to my Foreman server overnight only to find PostgreSQL in a state where it cannot start. The 3.2/4.4 one I just rebuilt and started over (which was painful, as I had migrated in about 40 CentOS systems), but the new install was 3.3/4.5, and it has been fine since the rebuild.

Now I've built another 3.3/4.5 so I can do some further playing around and testing without putting the other one at risk. Sure enough, I set up repos on it last night for CentOS 7/8/9 and Ubuntu 20, and I come back this morning and PostgreSQL is offline again. LOL, what the heck?

Does anyone have a documented procedure for recovery? It's been a long time since I had to do any pg_dumps and restores, and that assumes the server is online and available anyway.
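For reference, the closest thing to a documented procedure I know of is foreman-maintain, which ships with Katello and wraps the database dumps for you. A rough sketch (the backup path here is just an example, and the restore argument is whatever timestamped directory the backup step creates):

```shell
# Take an offline backup (stops services) while the server is still healthy
foreman-maintain backup offline /var/backups/foreman

# Later, restore from the directory that the backup step produced
foreman-maintain restore /var/backups/foreman/<timestamped-backup-dir>
```

That only helps if you have a backup from before the crash, of course; it doesn't resurrect a PostgreSQL data directory that's already corrupt.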

Right now I think the issue is caused by thin-provisioned LVM volumes. I rebuilt my Foreman without thin provisioning; hopefully that resolves it.
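For anyone else chasing the same symptom: if thin LVM is in play, the pool filling up during overnight repo syncs would explain PostgreSQL dying while you're away. A quick way to check (vg0 is just my volume group name; substitute your own):

```shell
# Show thin pool usage; Data% or Meta% near 100 means the pool is exhausted
lvs -o lv_name,lv_attr,data_percent,metadata_percent vg0

# Look for the telltale out-of-space messages from the thin target
dmesg | grep -i thin
```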

I personally never jumped on thin LVM due to the risk of losing track of the space actually remaining on vital volumes. I go with thin disks in VMware instead, one layer up; that way there's no need to worry about unused space in an LVM volume taking up physical space on the disks.
My current AlmaLinux Foreman server looks like:

Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg0-root          1014M  225M  790M  23% /
/dev/mapper/vg0-usr            6.0G  3.1G  3.0G  52% /usr
/dev/sda2                      914M  262M  653M  29% /boot
/dev/sda1                      100M  5.7M   95M   6% /boot/efi
/dev/mapper/vg0-opt            2.0G  190M  1.9G  10% /opt
/dev/mapper/vg0-tmp            2.0G   48M  2.0G   3% /tmp
/dev/mapper/vg0-var            6.0G  902M  5.2G  15% /var
/dev/mapper/vg0-var_tmp       1014M   40M  975M   4% /var/tmp
/dev/mapper/vg0-pgsql           10G  3.1G  7.0G  31% /var/lib/pgsql
/dev/mapper/vg0-home           4.0G   61M  4.0G   2% /home
/dev/mapper/vg0-qpidd          5.0G   68M  5.0G   2% /var/lib/qpidd
/dev/mapper/vg0-pulp           400G  248G  153G  62% /var/lib/pulp
/dev/mapper/vg0-var_log       1014M  219M  796M  22% /var/log
/dev/mapper/vg0-var_log_audit 1014M   73M  942M   8% /var/log/audit

I do thin in VMware as well. It must have just been an inadvertent click on the thin LVM option while I set up the first CentOS 8 image. Then I reused that same Anaconda kickstart file each time I rebuilt the system, reintroducing the same issue right from the start. Oops.
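In case it helps anyone audit their own kickstart file, the difference comes down to a couple of flags on the logvol line. These lines are just illustrative (my actual names and sizes will differ):

```text
# Thin-provisioned (the accidental version) -- carves the LV out of a thin pool
logvol /var/lib/pgsql --vgname=vg0 --name=pgsql --thin --poolname=pool0 --size=10240

# Plain LVM (what I rebuilt with) -- space is fully allocated up front
logvol /var/lib/pgsql --vgname=vg0 --name=pgsql --size=10240
```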

Running just fine now!