It is that time again when we put our effort into making sure that all of the most important scenarios work. It’s easy to get your hands dirty and help us with testing:
Install Foreman RC version (the most up to date one)
Pick a scenario from this post or add your own scenario
If you find an issue, file it in Redmine and comment in this thread with a link to the issue
Mark the scenario as checked in this OP (this is an editable wiki post); here is the syntax for checked and unchecked lines (you can also click the checkboxes directly with the mouse):
[ ] Unchecked
[x] Checked
[*] Checked (alternative syntax with no special semantics - both are equal)
You can start right away; the ideal timing is from Monday January 28th until Sunday February 3rd, but feel free to put your effort in anytime before the final release comes out.
Installation
Install Foreman using existing script/forklift/beaker
[*] RHEL / CentOS latest stable version
Debian stable
Ubuntu stable LTS
Install Foreman manually by following our installation guide (see the command sketch after this list)
RHEL / CentOS latest stable version
Debian stable
Ubuntu stable LTS
Upgrade an existing Foreman deployment (mention in Redmine that it was an upgraded instance if you encounter a bug)
RHEL / CentOS latest stable version
Debian stable
Ubuntu stable LTS
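If you go the manual route on an EL7 machine, a rough sketch is below; the repository URLs and Puppet version are my assumptions for the 1.21 RC, so follow the installation guide for the authoritative steps:

```
# assumed repositories for Foreman 1.21 on CentOS 7 -- double-check against the installation guide
yum -y install https://yum.theforeman.org/releases/1.21/el7/x86_64/foreman-release.rpm
yum -y install https://yum.puppet.com/puppet6-release-el-7.noarch.rpm
yum -y install epel-release centos-release-scl-rh
# install and run the installer with default answers
yum -y install foreman-installer
foreman-installer
```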
Sanity checks
[*] Installation on Red Hat distro with SELinux turned on
Packages passenger and tfm-rubygem-passenger come from the same repo (foreman) and are the same version (see the commands after this list)
Logging in with a user that has limited permissions works properly
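A couple of shell commands that cover the first two checks above (the third one is easiest to verify in the web UI with a test user):

```
# SELinux should be enforcing on a Red Hat family install
getenforce
# both packages should come from the @foreman repo and carry the same version
yum list installed passenger tfm-rubygem-passenger
```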
Provisioning
Bare-metal or virtualized PXE provisioning (host exits build mode and reboots)
[*] BIOS host with CentOS
[*] UEFI host with CentOS
BIOS host with Debian or Ubuntu
UEFI host with Debian or Ubuntu
BIOS host with Atomic OS
Compute Resources (VM is successfully created, finish or cloud-init is executed)
Puppet manifest import (classes are imported, parameters recognized)
Puppet configuration (class is assigned to a host, agent performs changes, reports and facts appear correctly)
[*] Log in using user from LDAP (user account is created from LDAP)
[*] Log in using user from FreeIPA (user account is created from FreeIPA)
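For the provisioning scenarios above, a few hammer calls can help verify things from the CLI; the proxy id and host name below are placeholders:

```
# import Puppet classes from the Smart Proxy
hammer proxy import-classes --id 1
# confirm the host left build mode and that its facts arrived
hammer host info --name pxe-test.example.com
hammer fact list --search "host = pxe-test.example.com"
```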
Foreman Discovery
Bare-metal or virtualized provisioning via Provision - Customize Host (host exits build mode and reboots)
[*] BIOS with discovery from PXE
[*] UEFI with discovery from PXE
[*] BIOS with PXE-less discovery
UEFI with PXE-less discovery
[*] Provision a host via discovery rule
Provision a host via the Customize Host button in the UI
Provision a host without a hostgroup via the Customize Host button in the UI
Provision a host via hammer using a hostgroup (see the sketch after this list)
Provision a host via hammer using an auto-provisioning rule
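For the two hammer-based scenarios, something along these lines should work, assuming the hammer discovery plugin is installed; the discovered host and hostgroup names are placeholders:

```
hammer discovery list
# provision a discovered host into an existing hostgroup
hammer discovery provision --name mac525400aabbcc --hostgroup "CentOS base"
# or let the auto-provisioning rules pick it up
hammer discovery auto-provision --name mac525400aabbcc
```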
Foreman Bootdisk
Bootdisk basic provisioning (host exits build mode and reboots)
[*] Full host image
[*] Host image
[*] Generic image
[*] Subnet image
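The same images can also be generated from the CLI via the hammer bootdisk plugin; a quick sketch with placeholder host/subnet names:

```
hammer bootdisk generic                                     # generic image
hammer bootdisk host --host bios01.example.com              # host image
hammer bootdisk host --host bios01.example.com --full true  # full host image
hammer bootdisk subnet --subnet example-lan                 # subnet image
```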
Foreman Ansible
Import Roles
With/From Smart-Proxy
Assign Roles
Hostgroup
Hosts
Play Roles
Hostgroup
Hosts
Run a shipped Ansible playbook (job), e.g. to install an Ansible role from Galaxy
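If you prefer the CLI, I believe the hammer ansible plugin exposes role import as well; treat the exact subcommands below as an assumption and fall back to the web UI if they differ in your version:

```
# import roles detected on the Smart Proxy (proxy id 1 is a placeholder)
hammer ansible roles import --proxy-id 1
hammer ansible roles list
```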
Foreman Remote Execution
Run some job, e.g. ‘ls /etc’, on a system that was provisioned from Foreman; it should work out of the box (see the example after this list)
Run some job against the Foreman host itself; only SSH key configuration should be needed
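A minimal example of the first scenario from hammer, using the job template shipped with foreman_remote_execution (the host name is a placeholder):

```
hammer job-invocation create \
  --job-template "Run Command - SSH Default" \
  --inputs command="ls /etc" \
  --search-query "name = provisioned01.example.com"
# check the result afterwards
hammer job-invocation list
```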
Foreman Puppet run
Trigger a Puppet run on a host through SSH
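Assuming the ‘Puppet Run Once - SSH Default’ job template is present (it ships with the REX plugin, as far as I know), the run can also be triggered from hammer:

```
hammer job-invocation create \
  --job-template "Puppet Run Once - SSH Default" \
  --search-query "name = provisioned01.example.com"
```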
Foreman Openscap
Create a new content file, define a policy, assign it to a host, and deploy foreman_scap_client using Puppet
Verify that an ARF report gets uploaded upon a foreman_scap_client run and that the full version of it can be rendered
Create a tailoring file, assign it to the policy, and rerun the client with the tailoring file
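The objects from the steps above are easiest to create in the web UI, but the hammer OpenSCAP plugin can be used to verify they exist and that reports arrive (command names as I recall them, so treat this as a sketch):

```
hammer scap-content list
hammer policy list
hammer tailoring-file list
# ARF reports uploaded by foreman_scap_client should show up here
hammer arf-report list
```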
Foreman Virt Who Configure
Create a configuration definition and run it, e.g. through REX, on some provisioned host. It should succeed as long as the host has access to the Satellite Tools repo on RHEL, or EPEL (I think) on CentOS.
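If the hammer virt-who-configure plugin is installed, something like this should let you check the definition and grab the generated script; the subcommands are from memory, so treat them as assumptions:

```
hammer virt-who-config list
# print the generated configuration script for a definition (id 1 is a placeholder)
hammer virt-who-config fetch --id 1
```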
Foreman Templates
hammer import-templates --lock true # sync newest templates from community-templates repo, see audits
mkdir /repo; chown foreman /repo; hammer export-templates --repo /repo # may need setenforce 0
Image:
Normal VM creation with single HDD on Local Datastore - Passed
Normal VM creation with added HDD on Local Datastore - Passed
Normal VM creation with single HDD on Storage Pod - Passed
PXE:
Normal VM creation with single HDD on Local Datastore - Passed
Normal VM creation with added HDD on Local Datastore - Passed
Normal VM creation with single HDD on Storage Pod - Passed
Normal VM creation with added HDD on Storage Pod - Passed
Bootdisk:
Normal VM creation with single HDD on Local Datastore - Passed
Normal VM creation with added HDD on Local Datastore - Passed
Normal VM creation with single HDD on Storage Pod - Passed
Normal VM creation with added HDD on Storage Pod - Passed
Network:
Normal VM creation with NIC in portgroup on Standard Switch - Passed
Normal VM creation with NIC in portgroup on Distributed Switch - Passed
Normal VM creation with NIC in port group on Distributed Switch with VLAN - Passed
Normal VM creation with additional NIC - Passed
Friendly reminder: please share how RC testing went in this thread. The OP is an editable wiki, just mark what you have covered so far. I am installing today!
I could not find time today, but I will tomorrow. What is the schedule, @tbrisker? Do I still have time for testing and then coming up with discovery/bootdisk releases?
Yes, there is still some time. We haven’t discussed specifics, but with RC4 coming out just this week, I’d expect GA to be in the week of Feb 17th, assuming no major blockers are found.
After enabling ansible, Rails won’t come up, failing with: “ERF73-0602 [Foreman::PermissionMissingException]: some permissions were not found: [“play_roles_on_host”, “play_roles_on_hostgroup”, “view_ansible_roles”, “destroy_ansible_roles”, “import_ansible_roles”, :play_roles_on_host, :play_roles_on_hostgroup, :view_ansible_roles, :destroy_ansible_roles, :import_ansible_roles] (Foreman::PermissionMissingException)”. Not sure what is wrong; a ReX bug prevents me from seeding the database:
This looks like a fix that’s needed in the plugin, possibly fixed already in 2.3.1 - can you try updating the plugin to 2.3.1 and see if it works? In any case, plugins don’t have to follow the same cadence as core; we could do another ansible release if needed to get it working properly without blocking the 1.21 release on it (although it would be good to get it fixed before GA).
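For reference, on an EL install updating the plugin would look roughly like this (package name assumed from the usual tfm-rubygem naming; adjust for Debian/Ubuntu):

```
yum update tfm-rubygem-foreman_ansible
foreman-rake db:migrate
foreman-rake db:seed
systemctl restart httpd
```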
We are still missing a good way to associate templates via hammer; someone please pick this up. This has been a TODO for something like 7 years.
[root@rc ~]# hammer os set-default-template --help
Usage:
hammer os set-default-template [OPTIONS]
Options:
--config-template-id TPL ID Config template id to be set
--id OS ID Operatingsystem id
-h, --help Print help
Only IDs can be used; we need to allow users to pick them by name.
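To illustrate the current workaround, both IDs have to be resolved by hand before the call (names below are placeholders):

```
# look up the IDs first
hammer os list
hammer template list --search 'name ~ "Kickstart default"'
# then associate by ID only -- no name-based options are accepted yet
hammer os set-default-template --id 1 --config-template-id 42
```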
@stbenjam out of curiosity, I was trying to test UEFI HTTP Boot under libvirt (Fedora stable) and my VM was actually requesting a TFTP file named “http://rc.nat.lan:8443/httpboot/grub2/grubx64.efi” instead of doing a proper HTTP request. Is there anything I need to configure in the BIOS?
I do see Network Configuration - Enable IPv4 DHCP there. There is also an HTTP Boot entry in the same submenu, but after Save (F10) it does not remember the setting. This is the screen I am talking about: