Hi everyone,
Wanted to write to see if I am missing a setting that needs to be adjusted in
Foreman.
My setup is as follows:
First Run/Kickstart
On first run or a kickstart, since by default Foreman does not yet know about
the node, I have a default node defined in Puppet that only applies
puppet, mcollective, and yum (our base defaults). Once the node is
registered in Foreman, I select the hostgroup it belongs to. The base
hostgroup contains a variable that simply states foreman_controlled =
true. When the base::role class in the default node detects this
setting, it skips including the puppet, mcollective, and yum classes and
instead does nothing. This lets me avoid duplicate class definition
errors from declaring those classes in both the default node and the
hostgroups.
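For reference, the default node / base::role logic looks roughly like this
(a minimal sketch using the class and parameter names mentioned above, not
a drop-in manifest):

# Rough sketch of the pattern from my setup; illustrative only.
class base::role {
  # Foreman hands host/hostgroup parameters to Puppet as top-scope
  # variables; interpolating lets 'true' match whether it arrives as
  # a string or a boolean.
  if "${::foreman_controlled}" != 'true' {
    # Node is not classified in a hostgroup yet: apply base defaults.
    include puppet
    include mcollective
    include yum
  }
  # Otherwise declare nothing here -- the hostgroup classes own
  # puppet, mcollective, and yum, which avoids duplicate definitions.
}

node default {
  include base::role
}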
Second Run/Hostgroup Membership
Once a node has been "classified" in a hostgroup correctly, a subsequent
run performs all of our configuration. So far, the only manual step
after this is adding the node to our GPFS cluster, but I am working on a
module to automate that as well. At a later date, we plan on using the API
to bulk preload the new nodes that are going to be built, so that they are
already classified in hostgroups. This is also going to let us
use Foreman to control our DHCP/DNS settings as well as the kickstarts for
the machines.
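For the bulk preload, I am picturing something along these lines against the
Foreman API, one call per node (the URL, credentials, hostgroup id, and MAC
here are just placeholders, so treat it as a sketch rather than a tested
command):

curl -u admin:changeme -k \
  -H 'Content-Type: application/json' \
  -X POST https://foreman.example.com/api/hosts \
  -d '{"host": {"name": "newnode01.example.com",
                "hostgroup_id": 3,
                "mac": "aa:bb:cc:dd:ee:01",
                "build": true}}'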
My main question: once a node shows up in Foreman, I notice that the
Puppet server and Puppet CA fields are blank. This prevents me from being
able to trigger Puppet runs from Foreman once I move the node into the
correct hostgroup. Is there a setting I am missing that would detect,
when the node registers, that its Puppet server and Puppet CA are the
system defaults? Or is there a way I can simply define defaults for these
values for every node that registers?
Thanks for the help!
Chuck