How suitable is Foreman+Puppet for managing Laptops?

I have used Foreman several times for installing and managing servers, i.e. machines that “don’t move”. I recently installed a bunch of completely empty machines via the discovery mechanism over PXE, which works great.

Today I was talking with some colleagues about tools for installing and managing Linux (probably Ubuntu 20.04 LTS) on the “Desktop” … which really means: laptops that sometimes connect to the office network (currently usually via VPN), but not every day and not all the time.

My question is: how suitable is Foreman + Puppet for managing/monitoring systems that connect to the company network only infrequently (and have a new IP address every time they do)?

What I see in the dashboards is, for example, “Out of sync hosts”, which is defined as no updates for more than 35 minutes. This suggests to me that the default assumption is “servers”.

I’m confident the installation part will work (probably similar to what I now have with CentOS via PXE);
I’m mostly asking about the operational phase: installing updates, monitoring whether updates have been installed, whether the machine is still operational, etc.

Does anyone here have experience in doing this?
Is Foreman/Puppet suitable for this class of operations?
If not, what tool is?

Thanks.

Niels Basjes

Welcome back, Niels!

Our PXE workflow relies on always booting from the network, which is a bit clunky for laptops. I’d suggest ruling out PXE and leveraging the Foreman Bootdisk plugin. This way you can prepare either a generic or a per-host boot disk that can be manually put into a laptop and, usually with a simple key combination (e.g. F11), booted once. From there, the entire installation is unattended.

Just keep in mind that the generic bootdisk only works with BIOS; for EFI you can only use the Full Host Bootdisk, which needs to be generated per host. That’s a bit unfortunate, since EFI is the standard these days. It would be nice to have a single generic bootdisk that boots on all EFI systems and automatically grabs its configuration via MAC address. While this is technically possible, it hasn’t been implemented yet.
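For concreteness, here is roughly what generating both kinds of boot disk looks like from the CLI. This is a sketch: the host name is a placeholder, it assumes the foreman_bootdisk plugin plus its hammer CLI plugin are installed, and the exact flags are from memory, so verify with `hammer bootdisk --help`.

```shell
# Placeholder host, assumed to be already registered in Foreman.
HOST="laptop01.example.com"

if command -v hammer >/dev/null 2>&1; then
  # Generic boot disk: BIOS only; fetches its per-host config over the network.
  hammer bootdisk generic --file "./generic-boot.iso"
  # Full host boot disk: self-contained, per-host, the only variant usable on EFI.
  hammer bootdisk host --host "$HOST" --full true --file "./${HOST%%.*}-full.iso"
else
  echo "hammer CLI not found; the commands above are illustrative" >&2
fi
```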

An alternative workflow is to boot discovery from an ISO (USB stick, DVD). You can remaster the ISO so you don’t need to enter the Foreman URL every time, and you can even bake different facts into multiple USB sticks, so the one you pick decides how the system will be provisioned. However, this workflow relies on kexec, which we find unstable on some hardware.

You can always boot from the network and then use what’s called hostgroup provisioning, where you pick a hostgroup from a PXE menu and start provisioning. It’s a bit clunky to configure, though; hopefully this will help:

Full disclosure: I am a software engineer rather than an ops person, so I am quite biased. However, I know we have multiple users here on the forum maintaining laptops; hopefully they will speak up. Good luck, report back, and write a blog post about it!

We do use Foreman for some laptops, but in a different scenario. Our use case is laptops used as training setups for the different courses we give.

But from this I can say it works very well for provisioning the laptops. I would not expect to re-provision them in the same way as servers, so not having them always boot from PXE is probably fine.

Having them online only occasionally argues for a pull-based rather than a push-based model in general. So I would use Puppet, including things like out-of-sync reporting, and perhaps only add Remote Execution for additional, less critical tasks or to trigger Puppet runs.
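To make the pull-based idea concrete, here is a sketch of agent settings worth considering for laptops. The server name is a placeholder, and the file is written to /tmp purely for illustration; the real file is /etc/puppetlabs/puppet/puppet.conf.

```shell
# Illustration only: write a laptop-friendly agent config to a scratch path.
CONF="/tmp/puppet-laptop.conf"
cat > "$CONF" <<'EOF'
[agent]
server = foreman.example.com
# The 30-minute default interval is fine; splay spreads check-ins so a fleet
# of laptops joining the VPN at 9:00 does not hit the server all at once.
runinterval = 30m
splay = true
splaylimit = 10m
# Keep applying the cached catalog when the server is unreachable (off-network).
usecacheonfailure = true
EOF
```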

As systems differ, and having laptops and servers in the same view can be confusing (one is expected to be in sync and the other not), I would put the laptops in a different organization. Perhaps also allow the (power) users some access and control, such as adding an additional Puppet class to their device.


Thanks for the pointers.

What I did a few months ago was PXE-boot the unknown hosts with the discovery image; then I only had to assign a hostname and a host group.

The actual installation route (PXE/USB/ISO/…) for the laptops is something I’m confident I can get running. No worries there. Either an “installation subnet with PXE” or an “installation disk” should work fine (like in this experiment from several years ago: https://github.com/nielsbasjes/ipxe-boot-rom ).

I’m mostly worried about the steps after that: making sure the machines are still OK even if they only rarely check in.

Yes, I agree with the pull-based model.
Only when a machine is online can it pull the new configurations.
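One cheap way to watch “is it still OK” outside Foreman’s fixed out-of-sync window is to check the age of the agent’s last run summary on the machine itself. A sketch: the path is the default for puppet-agent AIO packages, and `check_stale` plus the 7-day threshold are my own invention, not a Foreman feature.

```shell
# check_stale FILE MAX_DAYS: print "ok" if FILE was modified within MAX_DAYS,
# otherwise print "stale" (or "never ran") and return non-zero.
check_stale() {
  summary="$1"
  max_days="$2"
  [ -f "$summary" ] || { echo "never ran"; return 1; }
  # find prints the file only if its mtime is newer than MAX_DAYS.
  if [ -n "$(find "$summary" -mtime -"$max_days" 2>/dev/null)" ]; then
    echo "ok"
  else
    echo "stale"
    return 1
  fi
}

# Default summary path for puppet-agent AIO packages; adjust for your distro.
check_stale /opt/puppetlabs/puppet/cache/state/last_run_summary.yaml 7 || true
```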

I think I’m just going to try it here at home with some VMs and see what breaks.

Thanks for the suggestions!

We implemented fully managing all laptops with Foreman+Puppet at a software development agency 3 years ago. This went really well!

  1. Whenever you wanted a fresh installation (e.g. because something broke, the system behaved weirdly, a new colleague arrived, or a presentation laptop had to be set up), you enabled the build flag on your machine in the Foreman user interface, booted from the network with PXE, and the rest of the workflow took over.
  2. The system rebooted twice (first after the base system installation, then after installing all packages including the graphical user interface), and after roughly half an hour you could log in to your freshly set up computer.
  3. Upon your first login, a startup script would configure common network shares and run a (user-customizable!) default desktop configuration setup. At the end, that script replaced itself with the script that would run at every subsequent login (a neat trick for having a custom one-off setup action for your desktop customization). You had your system back where you left off!
  4. The data was on a backup medium, but the initial syncing back could also be automated. Then nothing would be missing.
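The self-replacing login script from step 3 can be sketched like this. Everything below runs in a throwaway temp directory so it is safe to try; the file names are made up, and `mv` does the swap so the running copy is replaced atomically via a rename.

```shell
# Demo in a temp dir; in real life SELF is whatever your desktop session runs
# at login, and EVERY_LOGIN is the Puppet-managed recurring script.
DIR=$(mktemp -d)
SELF="$DIR/on-login.sh"
EVERY_LOGIN="$DIR/every-login.sh"

printf '%s\n' '#!/bin/sh' 'echo "recurring login tasks"' > "$EVERY_LOGIN"
chmod +x "$EVERY_LOGIN"

# First-login script: one-off setup, then swap itself for the recurring script.
cat > "$SELF" <<EOF
#!/bin/sh
echo "one-off first-login setup (shares, desktop defaults, data restore)"
# Swap via mv: the rename is atomic and the running copy keeps its old inode.
cp "$EVERY_LOGIN" "\$0.tmp" && chmod +x "\$0.tmp" && mv "\$0.tmp" "\$0"
EOF
chmod +x "$SELF"

"$SELF"   # first login: one-off setup, then self-replacement
"$SELF"   # every later login: only the recurring tasks
```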

When you have everything configured in Puppet, this gives you wonderful freedom (to start from scratch anytime!) and control. Everyone on the team benefits when a broken system setting is fixed by the first person to discover it. That’s really nice and builds great team spirit!

A few things were important, though, to have this become a success:

  • Full automation, end-to-end, for the entire setup. Even things like restoring your application tray, default applications, etc. (on your GNOME desktop environment, for example) should not have to be done manually. Otherwise even minuscule manual steps become reasons not to leverage the power of regularly upgrading your system through a full reinstallation.
  • Treat your machine as an ephemeral system. Make the entire team think of your desktop machines as a runtime for “docker containers”. You should only care about your “containers” – the data, your work artifacts – and those you need to sync away to a backup medium, continuously. This way you make yourself entirely independent of your physical machine. (Take this with a grain of salt, but take it!)
  • Allow for self-service customization, but as a team effort. We managed the entire Puppet setup in Git, naturally, and had a user-defaults script in /usr/local/bin containing custom setup code based on the user name (or machine name). All changes went through peer review via (GitLab) merge requests. Maintaining our machines was just like any other software development project.
  • Pick wisely what you sync back. The desktop and application settings in your home directory are sometimes the culprit behind system instabilities. Cleaning those out and picking the right pieces to either a) configure explicitly or b) treat as personal configuration data to be synced back is the engineering effort you have to invest. Also, don’t sync huge “dumb” data like virtual machine images that you can easily re-download. Agree as a team to regularly clean out (or move to a network share) the zillions of downloads, music files, and whatever else people store in their personal space, so this doesn’t slow down restoring machines to their original state. And don’t forget important things like SSH keys (e.g. back them up encrypted and/or keep important keys on a second-factor pen drive).
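As a sketch of the “pick wisely what you sync” idea, rsync filter rules work well: include the curated directories, exclude everything else. The directory names below are examples rather than a recommended policy, and the demo copies from a throwaway source tree so it is safe to run.

```shell
# Build a tiny throwaway home-directory stand-in.
SRC=$(mktemp -d)
DEST=$(mktemp -d)
mkdir -p "$SRC/Documents" "$SRC/Downloads" "$SRC/.ssh"
echo "keep me" > "$SRC/Documents/notes.txt"
echo "huge, re-downloadable" > "$SRC/Downloads/image.iso"

if command -v rsync >/dev/null 2>&1; then
  # Include the curated directories (and everything below them); drop the rest.
  rsync -a \
    --include='Documents/***' \
    --include='.ssh/***' \
    --exclude='*' \
    "$SRC/" "$DEST/"
fi
```

The trailing `--exclude='*'` is what makes this an allow-list: anything not explicitly included never reaches the backup medium.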

It’s a huge effort to get this going, but it really pays off.

I even gave a talk about this topic at PyCon 9 in 2018. (If you see “Ansible” in the slides, please ignore that a bit; I came to understand that Puppet is conceptually the better tool for enforcing a configuration – think: a moving system with configuration drift – while Ansible is better for one-off setups – think: a Bash script written in YAML that you run as root over SSH.)

Hint: take a look at the software Puppet module for installing your desktop software. That helps with quite a few tasks. And take advantage of the PDK to speed up your Puppet development endeavours.

Hope that helps.


@Niels_Basjes, we had not-so-good experiences. A couple of things to consider:
1 - You mentioned Ubuntu: deployment is supported, but you can’t see the ‘operational phase’ you mention (installing updates, monitoring whether updates have been installed, and so on). IIRC there are some missing pieces on the database side to finish that effort.
2 - Systems will not report back unless they are on the VPN, or unless you expose a well-secured Foreman Smart Proxy in your DMZ. The out-of-sync time should be set to something large to avoid false positives if you want a clean dashboard; I don’t see a real justification for the current default.
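If you do want the dashboard to stay meaningful, the out-of-sync window can be raised globally. A sketch, assuming the setting is still called `outofsync_interval` (verify under Administer > Settings; the value is in minutes):

```shell
# One day; pick whatever matches how often laptops realistically check in.
MINUTES=1440

if command -v hammer >/dev/null 2>&1; then
  hammer settings set --name outofsync_interval --value "$MINUTES"
else
  echo "hammer CLI not found; set 'Out of sync interval' in the web UI instead" >&2
fi
```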
Maybe, if you are still in time, use Fedora for the desktops?


Thanks for the clear info.