Adding Raspberry Pi (Raspbian 10 "Buster") Devices to Foreman

I’ve put together a document on the process I’ve used for adding my two Raspberry Pi (Raspbian 10 "Buster") devices to Foreman so that I can more easily manage and patch them.


Thanks @gawainxl !

I would be very interested in hearing why you took the time to put together this deployment. I’d say others would be too.
If you were willing to introduce your setup in a few sentences that explain the why, I think this might make a nice blog post as well :slight_smile:


Any thoughts on moving the text directly in here? We can make the tutorial a wiki so it can be updated later on.

Some comments on the Puppet side. I would actually recommend this (and just this):

puppet config set --section agent server <your-puppetserver-fqdn>
  • The certname setting defaults to the hostname, so no need to duplicate that.
  • I’d also not change the default runinterval from 30 to 40 minutes; it means you need to modify Foreman’s expected check-in interval too. If you do want to change it, I’d recommend setting it the same way and updating Foreman to match.
  • The environment defaults to production
  • The listen setting is gone since Puppet 4
  • The report setting defaults to true
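Putting those recommendations together: with all of the defaults left alone, the entire agent-side `/etc/puppetlabs/puppet/puppet.conf` can be as minimal as this (the server name below is a placeholder, not from the thread):

```ini
[agent]
server = puppet.example.com
```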

You also don’t need to create a puppet user/group. Since Puppet 4, only the server package creates it. The agent also runs as root, so there is no unprivileged user anymore.

As for /etc/puppetlabs/code/environments/, I think it doesn’t need to exist. It’s shipped by the packages for convenience, but I personally think it shouldn’t be. It only makes sense on a server, not an agent.

For the initial run, I’d recommend using the new `puppet ssl bootstrap` command. Puppet 6 introduced it, and it’s intended exactly for this use case.
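As a sketch of that bootstrap flow (the hostnames below are placeholders I’ve made up, and the commands assume the Pi can already reach the Puppet server):

```shell
# On the Pi: submit a certificate request and keep retrying
# (waiting up to 60 seconds between attempts) until it is signed
puppet ssl bootstrap --server puppet.example.com --waitforcert 60

# On the Puppet server (or Foreman Smart Proxy): sign the pending request
puppetserver ca sign --certname pi01.example.com
```

With autosigning configured on the server, the second step isn’t needed at all.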

Instead of copying the SSH key to the host manually, there’s a Puppet class which uses the Puppet ENC. That also means that if for some reason the key changes, Puppet can apply the new one.

Wow, thank you! I’ll need to wipe the RPi, attempt the modified steps, and update the documentation.

I have a couple of questions

This is just within my homelab, and my goal is to be able to more easily keep track of and perform patching on my various systems. I put together the doc because @Marek_Hulan had suggested it in a support thread I had started.


Actually, a good question. I haven’t used it myself, but there is a `package_provider` parameter, or you can set `manage_packages` to false. However, it may not detect all paths correctly due to AIO vs non-AIO. There may be some dragons.

You would deploy it on your Puppetserver and then assign the class to the host. Either via Foreman’s ENC or via site.pp, depending on how you actually use Puppet.

Thank you, I’ll look into this in the next couple of days. By any chance, could you please link me to a good tutorial for performing this type of task, in case I’m not able to find a good one on my own?

I really smell a blog post on our official blog. Nice writeup!


Thanks! I’ve tested revising the steps based on those recommendations. (I’m leaving the `cat << EOF > /etc/puppetlabs/puppet/puppet.conf` steps in place, though, as doing it that way ensures it will blow away any old, invalid configuration which may be present.)
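For anyone following along, the heredoc approach looks roughly like this (a sketch, not the exact doc: the server name is a placeholder, and `PUPPET_CONF` is a stand-in variable I added so it can be tried somewhere other than the real `/etc/puppetlabs/puppet/puppet.conf`):

```shell
# Regenerate puppet.conf wholesale so stale or invalid settings can't linger.
# PUPPET_CONF defaults to a temp file here; in the real steps the target is
# /etc/puppetlabs/puppet/puppet.conf (run as root in that case).
CONF="${PUPPET_CONF:-$(mktemp)}"
cat <<EOF > "$CONF"
[agent]
server = puppet.example.com
EOF
echo "wrote $CONF"
```

Because the redirection truncates the file before writing, any previously present settings are gone afterwards, which is the whole point here.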

I’m going to work on figuring out the ssh_user.pp stuff next

There was also a really weird issue I encountered after one of the times I deleted the host before re-adding it: it left the host in a broken, orphaned state, not visible anywhere within the WebUI, which I documented in troubleshooting.

Well, unfortunately that was a huge flop. It would appear that that particular module relies on the pick function from puppetlabs-stdlib; I installed stdlib, but I assume the pick function is missing on armhf, as it kept throwing errors whenever it tried to process the Puppet ENC.

Do you happen to have a recommendation for a better way to programmatically obtain the value of remote_execution_ssh_keys, other than looking at a device’s Puppet YAML report from within the Foreman WebUI?

The steps I have in the document are more difficult than simply running the ssh-copy-id command from the server itself, which works for small environments but doesn’t scale well for large orgs where only a few people SHOULD have root console access to a configuration management server.
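One option, purely an assumption of mine rather than something from this thread: Foreman can return a host’s ENC output, including parameters such as remote_execution_ssh_keys, over its API, so something like this could replace digging through the WebUI (hostname and credentials below are made up):

```shell
# Query Foreman's ENC output for one host and filter for the parameter.
# Requires a Foreman account with permission to view the host.
curl -sk -u admin:changeme \
  "https://foreman.example.com/api/hosts/pi01.example.com/enc" \
  | grep -A2 remote_execution_ssh_keys
```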

Any thoughts or suggestions would be much appreciated.

One thing I realized is that `gem install puppet` does not ship the various bundled modules. That means some functionality is broken. See the various `module-puppetlabs-*_core.json` files here for the versions:

Look for the tag you have installed for an exact match.

In your particular case you’re at least missing the sshkeys module, which is needed to manage authorized_keys files. Quite crucial for this use case.

LOL! I discovered that runinterval is an integer number of seconds, and I had it set to 30 seconds…
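For anyone else who hits this: a bare integer in puppet.conf is parsed as seconds, but the setting also accepts unit suffixes, so a 30-minute interval can be written either way:

```ini
[agent]
# 30 minutes, expressed in seconds
runinterval = 1800
# ...or equivalently, with a unit suffix:
# runinterval = 30m
```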

Not entirely sure where I poached that setting from… omitted now. I’m going to see if installing puppetlabs-stdlib on the master (Smart Proxy) resolves the error when calling the pick function.
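In case it helps anyone following along, installing stdlib on the server side is a one-liner (run on the Puppet server / Smart Proxy, not on the agent):

```shell
puppet module install puppetlabs-stdlib
```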


I discovered that stdlib was needed on the proxy server itself. But after satisfying that and getting rid of the pick error message, I ran into what you mentioned regarding the SSH key management functionality not being present in the gem install. I might revisit compiling it from the source you linked if I run into more shortcomings, but for now I’m content being able to monitor its status and remotely patch it via a single pane of glass.

I do wonder, however, why some of the times I added an RPi I had to manually set the org, OS, and location via `foreman-rake console`.