Provision Libvirt host using Hammer

I am trying to create virtual hosts using Libvirt compute resource through
Hammer. I am running Foreman 1.10 and hammer (0.6.1) with
hammer_cli_foreman (0.6.2).

I came across multiple issues and I'm not sure what the problem is.

Success:
All bare metal machines provision fine with the provided IP address.

Problem:
Hosts continue to receive the provisioning template upon reboot. The Hammer
command used to provision the machine is:

hammer --output json host create --hostgroup=Virtual --name=collab \
--interface=primary=true,managed=true,provision=true,compute_bridge=br0,compute_model=virtio,mac=00:16:3e:01:fd:42 \
--compute-resource=kvmA \
--compute-attributes=start=1,cpus=6,memory=6291456000 \
--volume=format_type=qcow2,capacity=200G --puppet-classes=role::collab \
--parameters=sensortype=collab

I had to specify a MAC address for it to succeed. If I don't provide a MAC,
it fails with return value 70 and the message:

"Could not create the host:
Invalid MAC"
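A workaround sketch (not Foreman's own tooling, just an assumption about how to satisfy the validation) is to generate a random MAC yourself before calling hammer. The 52:54:00 prefix is the conventional QEMU/KVM OUI; the snippet assumes a POSIX environment with od and awk available:

```shell
# Sketch: generate a random MAC in the QEMU/KVM 52:54:00 range so that
# "hammer host create" gets a value that passes Foreman's MAC validation.
# The last three octets come from /dev/urandom.
mac=$(od -An -N3 -tx1 /dev/urandom | awk '{printf "52:54:00:%s:%s:%s", $1, $2, $3}')
echo "$mac"
# pass it as mac=$mac in the --interface option
```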

Here is a snippet from the foreman-tail output:

==> /var/log/foreman/production.log <==
2016-04-19 07:22:59 [app] [I] Started POST "/api/hosts" for 192.168.130.227
at 2016-04-19 07:22:59 +0000
2016-04-19 07:22:59 [app] [I] Processing by Api::V2::HostsController#create
as JSON
2016-04-19 07:22:59 [app] [I] Parameters: {"host"=>{"name"=>"splunkSH",
"compute_resource_id"=>3, "hostgroup_id"=>1, "build"=>true,
"enabled"=>true, "managed"=>true, "compute_attributes"=>{"start"=>"1",
"cpus"=>"6", "memory"=>"8388608000",
"volumes_attributes"=>{"0"=>{"format_type"=>"qcow2", "capacity"=>"500G"}}},
"puppetclass_ids"=>[16], "overwrite"=>true,
"host_parameters_attributes"=>[{"name"=>"sensortype",
"value"=>"[FILTERED]"}], "interfaces_attributes"=>[{"primary"=>"true",
"managed"=>"true", "provision"=>"true", "ip"=>"192.168.130.34",
"compute_attributes"=>{"bridge"=>"br0", "model"=>"virtio"}}]}, "apiv"=>"v2"}
2016-04-19 07:22:59 [app] [I] Authorized user sensoradmin(Admin User)

==> /var/log/foreman-proxy/proxy.log <==
D, [2016-04-19T07:22:59.554258 #9597] DEBUG -- : accept: 192.168.130.254:53261
D, [2016-04-19T07:22:59.557761 #9597] DEBUG -- : Rack::Handler::WEBrick is invoked.
D, [2016-04-19T07:22:59.558660 #9597] DEBUG -- : verifying remote client 192.168.130.254 against trusted_hosts foreman.auto
I, [2016-04-19T07:22:59.559210 #9597] INFO -- : 192.168.130.254 - - [19/Apr/2016 07:22:59] "GET /serverName HTTP/1.1" 200 17 0.0009

D, [2016-04-19T07:22:59.600121 #9597] DEBUG -- : close: 192.168.130.254:53261

==> /var/log/foreman/production.log <==
2016-04-19 07:22:59 [app] [W] Action failed
> Net::Validations::Error: Invalid MAC
> /usr/share/foreman/lib/net/validations.rb:40:in `validate_mac'
> /usr/share/foreman/lib/net/dhcp/record.rb:7:in `initialize'
> /usr/share/foreman/app/models/concerns/orchestration/dhcp.rb:20:in `new'
> /usr/share/foreman/app/models/concerns/orchestration/dhcp.rb:20:in `dhcp_record'
> /usr/share/foreman/app/models/concerns/orchestration/dhcp.rb:158:in `queue_remove_dhcp_conflicts'
> /usr/share/foreman/app/models/concerns/orchestration/dhcp.rb:111:in `queue_dhcp'

Once it is provisioned, I see it informing Foreman via

==> /var/log/httpd/puppet_access_ssl.log <==
192.168.130.101 - - [19/Apr/2016:07:42:36 +0000] "POST
/production/catalog/splunksh.auto HTTP/1.1" 200 158772 "-" "-"

==> /var/log/foreman/production.log <==
2016-04-19 07:42:50 [app] [I] Started POST "/api/reports" for 127.0.0.1 at
2016-04-19 07:42:50 +0000
2016-04-19 07:42:50 [app] [I] Processing by
Api::V2::ReportsController#create as JSON
2016-04-19 07:42:50 [app] [I] Parameters: {"report"=>"[FILTERED]",
"apiv"=>"v2"}
2016-04-19 07:42:50 [app] [I] processing report for splunksh.auto
2016-04-19 07:42:54 [app] [I] Imported report for splunksh.auto in 3.99
seconds
2016-04-19 07:42:55 [app] [I] Rendered api/v2/reports/create.json.rabl
(980.8ms)
2016-04-19 07:42:55 [app] [I] Completed 201 Created in 5025ms (Views:
852.3ms | ActiveRecord: 2542.7ms)

==> /var/log/httpd/foreman-ssl_access_ssl.log <==
127.0.0.1 - - [19/Apr/2016:07:42:50 +0000] "POST /api/reports HTTP/1.1" 201
84571 "-" "-"

==> /var/log/httpd/puppet_access_ssl.log <==
192.168.130.101 - - [19/Apr/2016:07:42:49 +0000] "PUT
/production/report/splunksh.auto HTTP/1.1" 200 11 "-" "-"

==> /var/log/foreman/production.log <==
2016-04-19 07:42:56 [app] [I] Started GET
"/unattended/built?token=4dc3fe6b-54b6-49ce-a2d6-98e61fb55a64" for
192.168.130.101 at 2016-04-19 07:42:56 +0000
2016-04-19 07:42:56 [app] [I] Processing by UnattendedController#built as */*
2016-04-19 07:42:56 [app] [I] Parameters:
{"token"=>"4dc3fe6b-54b6-49ce-a2d6-98e61fb55a64"}
2016-04-19 07:42:56 [app] [I] Found splunksh.auto
2016-04-19 07:42:56 [app] [I] unattended: splunksh.auto is Built!
2016-04-19 07:42:56 [app] [I] Completed 409 Conflict in 517ms
(ActiveRecord: 19.8ms)

==> /var/log/httpd/foreman_access.log <==
192.168.130.101 - - [19/Apr/2016:07:42:56 +0000] "GET
/unattended/built?token=4dc3fe6b-54b6-49ce-a2d6-98e61fb55a64 HTTP/1.0" 409
1 "-" "Wget/1.12 (linux-gnu)"

When the machine reboots, it receives the same PXE provisioning script and
goes into an infinite reboot/install/reboot loop.
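The 409 Conflict on /unattended/built in the log above looks like the telling part: Foreman appears to reject the "built" notification, so the host seemingly stays in build mode and is served the provisioning template again on the next PXE boot. A hedged workaround sketch (the host name is taken from the logs above; this assumes `hammer host update` with its `--build` option is available in your hammer version) would be to clear the build flag manually:

```
# Workaround sketch: take the host out of build mode so the next PXE
# boot gets the local-boot template instead of the installer again.
hammer host update --name splunksh.auto --build false
```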

The hammer_cli_foreman documentation is somewhat out of date with respect to
the current version. Is there a plan to update it soon?

Thanks,

Hiya,

Sorry for the slow reply, I guess this one slipped through the cracks.

A few questions, if you're still struggling:

  1. You're having issues with Hammer, and mention bare metal, but have you
    confirmed Libvirt hosts build without issue from the UI using the same
    classes etc.?
  2. Do you have the option to upgrade to 1.11? Hammer regained its ability
    to use compute profiles in 1.11, and now my libvirt script looks like this:

hammer -u gwmngilfen -p passwd host create \
--name "oactest${rng}" \
--hostgroup-id 16 \
--operatingsystem-id 24 \
--compute-resource-id 7 \
--compute-profile-id 1 \
--image-id 24 \
--medium-id 6 \
--partition-table-id 107 \
--provision-method image \
--compute-attributes="start=1,image_id=/var/lib/libvirt/images/tracksbase.elysium.emeraldreverie.org-disk1"

The last two lines are for the copy-on-write backing disk image I use, but
otherwise it's fairly trivial and should work well.
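For anyone trying the same approach, such a backing image is typically prepared with qemu-img; this is only a sketch, reusing the base image path from the command above with an illustrative overlay file name:

```
# Sketch: create a copy-on-write qcow2 overlay on top of the base image
# referenced by image_id above (the overlay path is illustrative).
qemu-img create -f qcow2 \
  -b /var/lib/libvirt/images/tracksbase.elysium.emeraldreverie.org-disk1 \
  /var/lib/libvirt/images/oactest-disk1.qcow2
```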

Cheers,
Greg