Foreman 2.1 test week

Foreman 2.1 RC1 was announced yesterday, and it's time to put in the effort to make sure the most important features are not badly broken. We need your help! It's easy to get your hands dirty and help us with testing:

  1. Install the Foreman RC version (the most up-to-date one)
  2. Pick a scenario from this post or add your own scenario
  3. If you find an issue, file it in Redmine and comment in this thread with a link to the issue
  4. Mark the scenario as checked in this OP (this is an editable wiki post). The syntax for checked and unchecked lines is shown after this list; you can also click the checkboxes directly with the mouse:
    • Unchecked
    • Checked
    • Checked (alternative syntax with no special semantics - both are equal)
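
For reference, a minimal sketch of what those checkbox lines look like in the raw wiki markup, assuming the standard Discourse checklist syntax (the exact markers in the original post may differ):

```
[ ] Unchecked
[x] Checked
[X] Checked (alternative syntax with no special semantics - both are equal)
```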

Installation

  • Install Foreman using an existing script / Forklift / Beaker
    • Forklift
    • My own install script
  • Install Foreman manually by following our installation guide
    • RHEL / CentOS stable
    • Debian / Ubuntu stable
  • Install Foreman manually by following our new installation guide
    • RHEL / CentOS
    • Debian / Ubuntu
  • Upgrade an existing Foreman deployment (if you encounter a bug, note in Redmine that it was an upgraded instance)
    • RHEL / CentOS
    • Debian / Ubuntu
  • Sanity checks
    • Installation on a Red Hat distro with SELinux enabled
    • The passenger and tfm-rubygem-passenger packages come from the same repo (foreman) and have the same version
    • Logging in with a user that has limited permissions works properly

Provisioning

  • Bare-metal or virtualized PXE provisioning (host exits build mode and reboots)
    • BIOS host with CentOS 8
    • BIOS host with CentOS 7
    • UEFI host with CentOS
    • BIOS host with Debian or Ubuntu
    • UEFI host with Debian or Ubuntu
    • BIOS host with Atomic OS
  • Compute Resources (VM is successfully created, finish or cloud-init is executed)
    • Create VMware host (Image Based/Network Based)
    • Create OpenStack host (Image Based)
    • Create oVirt host (Image Based/Network Based)
    • Create Libvirt host (Image Based/Network Based)
    • Create AWS host (Image Based)
    • Create GCE host
    • Create Azure host
  • Puppet manifest import (classes are imported, parameters recognized)
  • Puppet configuration (class is assigned to a host, agent performs changes, reports and facts appear correctly)
  • Log in using a user from LDAP (the user account is created from LDAP)
  • Log in using a user from FreeIPA (the user account is created from FreeIPA)

Foreman Discovery

  • Bare-metal or virtualized provisioning via Provision - Customize Host (host exits build mode and reboots)
    • BIOS with discovery from PXE
    • UEFI with discovery from PXE
    • BIOS with PXE-less discovery
    • UEFI with PXE-less discovery
  • Provision a host via discovery rule
  • Provision a host via Customize UI button
  • Provision a host without hostgroup via Customize UI button
  • Provision a host via hammer using a hostgroup
  • Provision a host via hammer using an auto-provisioning rule

Foreman Bootdisk

  • Bootdisk basic provisioning (host exits build mode and reboots)
    • Full host image BIOS
    • Host image BIOS
    • Generic image BIOS
    • Full host image EFI
    • Host image EFI
    • Generic image EFI

Foreman Ansible

  • Import Roles
    • With/From Smart-Proxy
  • Assign Roles
    • Hostgroup
    • Hosts
  • Play Roles
    • Hostgroup
    • Hosts
  • Run a shipped Ansible playbook (job), e.g. to install an Ansible role from Galaxy

Foreman Remote Execution

  • Run a job, e.g. ‘ls /etc’, on a system that was provisioned from Foreman; it should work out of the box
  • Run a job against the Foreman host itself; only key configuration should be needed

Foreman Puppet run

  • Trigger Puppet run on host through SSH

Foreman Openscap

  • Create a new content file, define a policy, assign it to a host, and deploy foreman_scap_client using Puppet
  • Verify that the ARF report gets uploaded when foreman_scap_client runs and that the full version of it can be rendered
  • Create a tailoring file, assign it to the policy, and rerun the client with the tailoring file

Foreman Virt Who Configure

  • Create a configuration definition and run it, e.g. through REX, on a provisioned host. It should succeed as long as the host has access to the Satellite Tools repo on RHEL, or EPEL (I think) on CentOS.
    note: the plugin works, but the configuration requires a newer virt-who that is currently in Fedora 30, not in EPEL

Foreman Templates

  • hammer import-templates --lock true # sync newest templates from community-templates repo, see audits
  • mkdir /repo; chown foreman /repo; hammer export-templates --repo /repo # may need setenforce 0

This page is a wiki; feel free to update it and add new scenarios as you test them. Thanks for your help!

Happy to put some time into this this weekend. How do you want the results or feedback, beyond the checkboxes?

Just use the checkboxes and feel free to drop any comments here. If you find a bug, create a Redmine ticket and link it from here so we can track these as priority targets.

Small thing in regard to VLAN provisioning: https://github.com/theforeman/community-templates/pull/736

Remote Execution SELinux denial when clicking on Web Console.

@tbrisker I pushed into 2.1-stable directly if you don’t mind.

Encountered an issue with running Puppet once on a fresh install.

https://projects.theforeman.org/issues/29950

A few safemode macros for bootdisk EFI support:

Updates for PXEGrub2 template for bootdisk EFI support:

Both are undergoing review and testing by @aruzicka, and we would like them to hit 2.1. These are small changes; I am late because of numerous bugs I found in grub2, which I had to report and work around. But the bootdisk FINALLY works in both BIOS and EFI, which is a feature of 2.1.

I upgraded my own server from 2.0 to 2.1 and noticed this:

Note that I ran Puma both before and after. It may be because we’ve changed the workers from 0 to 2:

I do believe that copy-on-write or preloading does not work. Can you compare 1, 2, and 5 workers? These should definitely not be N * 1, N * 2, and N * 5, but much less than that.

Passenger also used a preloading trick using curl: the app was brought up by visiting its main (welcome, dashboard) page so that some portion of the app would load, and only after that was a fork created. Do we do the same?

I can only see the preload_app! statement in the Puma configuration, and I wonder if it really loads all the resources Foreman has. We need to do more testing, probably counting loaded classes before and after this statement, but I doubt it finds all the libraries and dependencies we carry.
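
A rough way to do that counting, as a sketch (the file path is an assumption, and it compares the master process with a worker rather than literally before/after the statement):

```ruby
# config/puma/production.rb (path assumed) - quick instrumentation sketch
preload_app!

before_fork do
  # Runs in the master after the app has been preloaded, before workers fork.
  puts "Master #{Process.pid}: #{ObjectSpace.each_object(Class).count} classes, " \
       "#{$LOADED_FEATURES.size} loaded features"
end

on_worker_boot do
  # If preloading really pulls in most of the app, workers should not need to
  # load much on top of what the master already has.
  puts "Worker #{Process.pid}: #{ObjectSpace.each_object(Class).count} classes, " \
       "#{$LOADED_FEATURES.size} loaded features"
end
```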

I have restarted with a single worker. Since it takes a few hours to stabilize I’ll check back tomorrow.

I don’t know. I never really thought about it too much.

Perhaps it’s better to use the before_fork block in the Puma config to actively load things:

Additionally, Puma 5 will introduce a Fork-Worker Cluster Mode where workers are forked from worker 0 instead of the master, but that feels like a workaround.

Reading up on the docs and code suggests that Puma and Rails should just do this: Autoloading and Reloading Constants — Ruby on Rails Guides

However, I did find that there's load_defaults, which was introduced in Rails 5.1. We don't call this in our application, and now I'm wondering if we should. Not sure if it matters though, as the classic loader should also be able to eager load.
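
For context, calling it would be a one-liner in config/application.rb; a sketch (the version number is whatever Foreman targets, not something this thread specifies):

```ruby
# config/application.rb - sketch of opting into framework defaults
module Foreman
  class Application < Rails::Application
    # Load the default framework configuration for this Rails version;
    # on 6.0 this includes switching the autoloader to zeitwerk.
    config.load_defaults 6.0
  end
end
```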

Yeah, but all my research so far indicates everything should work out of the box with Rails 6. At least it preloads Rails classes (models, controllers, views). Definitely not all the dependencies we carry, though; there I see room for improvement.

Not to me; read-only pages remain in memory. It is a good solution to the rolling restart problem.

Yeah,

Apologies @tbrisker, but this SELinux update was bigger than I expected and testing took a while. Thanks to @aruzicka and @ekohl for their help with this one.

This is a cherry pick request into 2.1.

After running for a while with a single worker, memory usage is slightly higher than on 2.0, but not by a lot. This could also just be the difference between 0 workers and 1 worker.

Overall, my takeaway from this is that for small deployments there is a significant difference and scaling down makes sense. Sadly, I don't have any numbers for Passenger to compare against.

Can you comment a bit on the graph so we don’t misinterpret it? I see three spikes:

  • 1.4 GB (2 workers with 0:16?)
  • 1.2 GB (1 worker with 0:16?)
  • 0.8 GB (?)

Thanks.

So until Mon 01 Jun it ran Foreman 2.0 with Puma as a standalone service + Apache as a reverse proxy. The Puma default is to use 0 workers (non-clustered mode).

After that I upgraded to Foreman 2.1, and it started to use clustered mode with 2 workers. You can immediately see the jump in memory. Then I restarted a day later but kept 2 workers (a mistake on my side; I intended to go to 1 worker). It does show flatter growth, which can be explained: the first time, after upgrading, I started to click around in the UI, which likely triggered more code paths. After the second restart it only ran Puppet check-ins, so that's a more stable access pattern.

Then I restarted with 1 worker. You can see the memory usage is lower than with 2 workers, but still a little higher than Foreman 2.0. I'll try with 0 workers as well, since that's closer to the Foreman 2.0 setup.

Minor thing: we should probably optimize memory for multiple threads for Puma just like we do for Dynflow.

I have raised two small cherry picks:

Both have been tested and only add things (two new methods, two new Grub menu entries). The only change is that I am removing the BOOTIF= kernel parameter; the reason is that it was rendered twice, and the removed entry was actually incorrect in some cases (empty, thus breaking networking).

@rplevka found a bug in the installer: it carries a Grub2 copy that is too old and does not work with UEFI HTTP boot. This patch fixes it: https://github.com/theforeman/puppet-foreman_proxy/pull/598

@tbrisker I’d appreciate this in 2.1, it’s a small change, low risk, the solution now is actually to require users to deploy global templates first. @ehelms perhaps since @ekohl is not available? Thanks.
