First Puppet Run Failing

I’m having the same issue with Foreman (Satellite) and Puppet 3.8 where Puppet fails on the first run every time. This causes some difficulty, since I expect Puppet to play a central role in system deployment/configuration but can’t rely on it to run predictably.

I have a feeling it may be related to my process (sketched below), but I’m not sure where to get more information.

- I use an automation platform to add a host to Satellite via a REST call
- Deploy a virtual machine
- Install Puppet and configure puppet.conf with the server URL
- Start Puppet
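A minimal sketch of that flow, assuming a Puppet 3-era layout; the hostnames, credentials, IDs, and payload fields are placeholders rather than my actual tooling:

```sh
# 1. Pre-create the host in Satellite/Foreman over the REST API
curl -s -u admin:changeme -H 'Content-Type: application/json' \
     -X POST https://satellite.example.com/api/hosts \
     -d '{"host": {"name": "web01.example.com", "hostgroup_id": 42}}'

# 2. Deploy the VM (happens out of band in the automation platform)

# 3. Install the agent and point it at the master (Puppet 3 config path)
yum -y install puppet
cat >> /etc/puppet/puppet.conf <<'EOF'
[agent]
server = puppet.example.com
EOF

# 4. First agent run - this is the one that intermittently fails
puppet agent --test
```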

MOST of the time the first run fails with the same error reported here (node.rb returned 1) but subsequent runs seem to always work.

Any ideas where I might look for answers?

Thanks!
L.Kimmel

UPDATE: Based on a support case with Red Hat, it seems that first-run failure is expected behavior because Puppet won’t run until facts have been gathered from the system, which apparently doesn’t happen until some time during the first run. Is there no way to pre-inform Foreman of the system so that the first run will succeed?

It’s generally not a great idea to reply to 4-year-old posts, as the people involved have frequently moved on. Accordingly, I’ve shifted your post to a new topic - hope you don’t mind :slight_smile:

So generally you’d expect the very first Puppet run to contain both one failed and one successful call to node.rb, because Puppet runs it twice. The first call is to get the Puppet environment only (since the server is authoritative on this point, by default). After this it uploads the facts (which is what creates the Host in Foreman), and then runs node.rb a second time for the class & parameter data.
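You can watch this from the master by invoking the ENC by hand; the script path and certname below are assumptions for a typical Puppet 3/Foreman install:

```sh
# Run the Foreman ENC script exactly as Puppet would, for a host
# Foreman has never heard of
/etc/puppet/node.rb web01.example.com
echo $?   # exits non-zero (the "node.rb returned 1" error) until the host exists
```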

Since the Host does not exist at the time of the first ENC run, you’d expect that to fail, and that’s harmless. Puppet will use the configured environment from the agent and continue on. So long as the first run as a whole is successful, you have no issues. If that’s not the case, we can dig some more.

As for pre-informing Foreman? Well yes, we can entirely handle provisioning the machine via PXE boot or image deployment, or you could use the API to create an unmanaged host with the requisite information (example below). It shouldn’t be needed though, as covered above.
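A hedged sketch of that API call; `managed: false` is the attribute that creates an unmanaged host, and everything else is a placeholder:

```sh
# Create an unmanaged host record so Foreman knows the host before first contact
curl -s -u admin:changeme -H 'Content-Type: application/json' \
     -X POST https://foreman.example.com/api/hosts \
     -d '{"host": {"name": "web01.example.com",
                   "managed": false,
                   "hostgroup_id": 42}}'
```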

Sure, that’s cool that you moved it. I just tied it to that thread because I didn’t want to rehash the same issue.

I do see that it appears to attempt to run a 2nd time. The issue is that it doesn’t do a pluginsync (download facts) on that 2nd run. I have several custom facts (facts.d) that are required by some of my modules. These don’t get downloaded on the 1st run because Foreman apparently doesn’t yet know which modules apply to the host (even though I already placed it in a valid hostgroup). The 2nd run then fails with what is effectively a ‘divide by null’ error, where the null is a value that should have been supplied by one of my custom facts.
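For context, these are plain external facts, i.e. executables dropped into facts.d that print key=value pairs; the fact below is hypothetical, and where it lands on the agent depends on the pluginfactdest setting:

```sh
#!/bin/sh
# Hypothetical external fact shipped in a module's facts.d/ and delivered by
# pluginsync. Until that sync happens, the fact simply does not exist, so any
# manifest arithmetic that uses it sees an empty value - the 'divide by null'.
echo "app_node_count=$(ls /etc/myapp/nodes 2>/dev/null | wc -l)"
```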

I’m currently working around the issue by doing a foreground ‘puppet agent --noop’ run to get the facts uploaded. Then, as part of provisioning, the system is rebooted, at which point Puppet runs again, correctly this time.
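That workaround amounts to the following (both flags are standard agent options; the reboot in my provisioning flow just stands in for the second command):

```sh
# Throwaway run: gathers and uploads facts so Foreman registers the host,
# but applies no changes
puppet agent --test --noop

# Second run: classification now succeeds and the custom facts are in place
puppet agent --test
```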

You also noted that I could “use the API to create an UNMANAGED host with the requisite information.” Indeed, I am using the API to pre-create the host, but there is still the initial failure, presumably because the fact files are absent. Does the ‘requisite information’ you allude to include some minimum set of facts that would allow node.rb to complete successfully on the first run?
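One thing I plan to test, as an assumption on my part rather than anything confirmed here: Foreman also exposes a fact-upload endpoint (POST /api/hosts/facts, the same one the fact importer uses), so pre-seeding a minimal fact set might satisfy node.rb before the agent ever runs. Placeholder values throughout:

```sh
# Pre-seed a minimal fact set for the host before its first agent run
curl -s -u admin:changeme -H 'Content-Type: application/json' \
     -X POST https://foreman.example.com/api/hosts/facts \
     -d '{"name": "web01.example.com",
          "facts": {"fqdn": "web01.example.com",
                    "operatingsystem": "RedHat"}}'
```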

Is the host being UNMANAGED important? I initially did that but switched to MANAGED, because I found that when we deleted an UNMANAGED host with an API call, the Puppet certificates were not cleaned off the Foreman server. They are cleaned up when the system is MANAGED.

If you click the YAML button on a host’s page, it’ll show what Puppet receives - I think you only need to ensure the environment is set, but test it yourself.
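The same check is available over the API if you’d rather script it; host lookup by name works, and environment_name is part of the host record (credentials and hostname below are placeholders):

```sh
# Fetch the host record and confirm the inherited environment actually
# landed on the host object
curl -s -u admin:changeme \
     https://foreman.example.com/api/hosts/web01.example.com \
  | python -c 'import sys, json; print(json.load(sys.stdin).get("environment_name"))'
```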

You are correct, only managed hosts can have orchestration done for them (which includes certificate cleanup) - I mention it because unmanaged hosts are what you’ll get from fact upload, so I assumed that was what you wanted. Obviously via the API you can decide what works for you.
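As an aside, if an unmanaged host is deleted and its certificate is left behind, it can be cleaned by hand on the Puppet CA (Puppet 3 syntax; hostname is a placeholder):

```sh
# Remove the stale client certificate from the Puppet CA
puppet cert clean web01.example.com
```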

When I create hosts I don’t set an environment directly on the host. However, I do add the host to a host group, from which it inherits an environment. Unfortunately, that doesn’t seem to be enough; Foreman appears to still require the host to upload facts first. It seems odd to me.

It would be odd, if it were true :wink: - but in that case managed hosts would never work. The issue is more about what else needs to be set first. Did you set the puppetmaster field too? I suspect that’s relevant. I’d test it myself but my system is somewhat broken at the moment :stuck_out_tongue:

Well, I think/hope you are correct and that is precisely why I posted this. I’d like to find out what that “thing” is which I need to add to the creation request. Currently, my API call only sets the following values:

```
build: false
organization_name: <org_name>
location_name: <loc_name>
hostgroup_id: <hg_id>
ip: <ip_addr>
mac: <mac_addr>
subnet_name: <subnet_name>
```

Everything else that I know I want to set on the host is inherited from its hostgroup. As you may know, there are hundreds of possible attributes to set on a host, so finding the one I’m missing is akin to finding a needle in a haystack. I was hoping some expert might just be able to tell me the right one (or more).
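For concreteness, here is roughly what that call looks like today, plus the two Puppet-related fields the previous reply suggests testing; environment_id and puppet_proxy_id are my guesses at the API names for the environment and “puppetmaster” fields, and all values are placeholders:

```sh
curl -s -u admin:changeme -H 'Content-Type: application/json' \
     -X POST https://satellite.example.com/api/hosts \
     -d '{"host": {"build": false,
                   "organization_name": "<org_name>",
                   "location_name": "<loc_name>",
                   "hostgroup_id": "<hg_id>",
                   "ip": "<ip_addr>",
                   "mac": "<mac_addr>",
                   "subnet_name": "<subnet_name>",
                   "environment_id": "<env_id>",
                   "puppet_proxy_id": "<proxy_id>"}}'
```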

Also, I’m not sure if it’s clear, but after the first failed run everything does start working, so I guess I don’t understand your comment that “in that case managed hosts would never work.” I’m just surprised that creating the host through the Foreman API and adding it to a hostgroup doesn’t automatically trigger Foreman to notice which Puppet modules (environment) apply to the host so that it can sync custom/external facts immediately.