Error while provisioning the Hosts

Hi All,

I have done a fresh install of Katello 3.5 along with Foreman 1.16 on CentOS 7.4.

While provisioning a guest machine, the host fails with the error shown below.

On further analysis, the following errors were found in /var/log/foreman-proxy/:

E, [2018-01-22T07:21:10.793692 714d31b5] ERROR -- : Attempt to remove nonexistent client certificate for
I, [2018-01-22T07:21:10.794750 714d31b5] INFO -- : - - [22/Jan/2018:07:21:10 -0400] "DELETE /puppet/ca/ HTTP/1.1" 404 79 2.8318

E, [2018-01-22T07:21:10.855108 714d31b5] ERROR -- : Failed to remove autosign for No such file /etc/puppet/autosign.conf
I, [2018-01-22T07:21:10.855575 714d31b5] INFO -- : - - [22/Jan/2018:07:21:10 -0400] "DELETE /puppet/ca/autosign/ HTTP/1.1" 406 96 0.0012

On further analysis I found that autosign.conf is actually located at /etc/puppetlabs/puppet/autosign.conf. I am not sure why foreman-proxy is looking for it in the wrong path, as it is defined properly in puppet.conf:

./puppetlabs/puppet/puppet.conf:36: autosign = /etc/puppetlabs/puppet/autosign.conf { mode = 0664 }
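For what it's worth, foreman-proxy does not read puppet.conf for this; its PuppetCA module has its own directory setting, and if that still points at the old /etc/puppet path it will look for autosign.conf there. A minimal sketch of what the proxy config might need (file name and keys are from my memory of the 1.16-era smart proxy, so verify them against your own /etc/foreman-proxy/settings.d/):

```yaml
---
# /etc/foreman-proxy/settings.d/puppetca.yml (sketch; check keys on your version)
:enabled: https
# Point the proxy at the Puppet 4 AIO confdir instead of the legacy /etc/puppet,
# so it resolves /etc/puppetlabs/puppet/autosign.conf
:puppetdir: /etc/puppetlabs/puppet
:ssldir: /etc/puppetlabs/puppet/ssl
```

Restart the proxy afterwards (systemctl restart foreman-proxy) and re-check the log.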

[ skhan ]

The Puppet errors are unpleasant, but they are likely not the reason your host fails to provision. Assuming you are using network-based provisioning, the two most common causes are problems with TFTP and the host failing to fetch its provisioning templates. You can review the templates for your host to see whether they render properly: on the details page of your host, look under the ‘Templates’ tab on the left-hand side. You can also get a better idea of what is wrong by watching your host boot up; I would expect some sort of error that could point you in the right direction, appearing just before all the ‘starting timeout scripts’ entries.
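To check both of those from the command line, something like the following should work (hostnames, IP, and PXE loader file are placeholders; the spoof parameter may require authentication depending on your settings):

```shell
# 1) Verify the Smart Proxy actually serves the PXE loader over TFTP
#    (needs the tftp client: yum install tftp)
tftp proxy.example.com -c get pxelinux.0

# 2) Verify Foreman renders the provisioning template the host would fetch,
#    spoofing the host's IP from another machine
curl -sk "https://foreman.example.com/unattended/provision?spoof=192.168.1.50"
```

If the TFTP fetch times out or the template request returns an error page instead of a kickstart, that narrows down which of the two causes you are hitting.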

Hope this helps,

This looks a lot to me like a system that was provisioned without the needed drivers. For example, on VMware, if your VM is configured to use the VMware Paravirtual SCSI controller but you forgot to run yum install vmware-tools-pvscsi-common kmod-vmware-tools-pvscsi. What are you deploying on? Is this bare metal, or are you deploying to a virtualization stack (VMware, KVM, AWS, etc.), and if so, which one?

Looks like a networking issue to me; the root cause is somewhere above the timeout errors.