I am upgrading my small install of Foreman (~20 hosts) from 1.7 to 1.8. I
performed the upgrade once and had to roll back the Foreman VM to 1.7 so I
could hopefully capture some useful data about what happened.
Foreman 1.7:
> Host.find_by_name('FQDN').interfaces
[#<Nic::Managed id: 68, mac: "00:25:90:99:44:d6", ip: "<PUBLIC_IP>", type:
"Nic::Managed", name: nil, host_id: 39, subnet_id: nil, domain_id: nil,
attrs: {"mtu"=>"1500", "network"=>"<OMIT>", "netmask"=>"255.255.255.192"},
created_at: "2015-05-14 18:40:43", updated_at: "2015-05-15 15:40:12",
provider: nil, username: nil, password: nil, virtual: false, link: true,
identifier: "eth0", tag: "", attached_to: "", managed: false, mode:
"balance-rr", attached_devices: "", bond_options: "">, #<Nic::Managed id:
69, mac: "00:25:90:99:44:d7", ip: nil, type: "Nic::Managed", name: nil,
host_id: 39, subnet_id: nil, domain_id: nil, attrs: {"mtu"=>"1500"},
created_at: "2015-05-14 18:40:43", updated_at: "2015-05-15 15:40:12",
provider: nil, username: nil, password: nil, virtual: false, link: true,
identifier: "eth1", tag: "", attached_to: "", managed: false, mode:
"balance-rr", attached_devices: "", bond_options: "">, #<Nic::Managed id:
67, mac: "68:05:ca:33:3c:7b", ip: "10.0.20.200", type: "Nic::Managed",
name: "", host_id: 39, subnet_id: 8, domain_id: nil, attrs: {"mtu"=>"1500",
"network"=>"10.0.20.0", "netmask"=>"255.255.255.0"}, created_at:
"2015-05-14 18:27:47", updated_at: "2015-05-15 15:40:12", provider: nil,
username: nil, password: nil, virtual: false, link: true, identifier:
"eth3", tag: "", attached_to: "", managed: true, mode: "balance-rr",
attached_devices: "", bond_options: "">]
The primary_interface value shows correctly as "eth2" in the console:
#<Host::Managed id: 39, name: "<FQDN>", ip: "10.0.30.200", last_compile:
"2015-05-15 15:40:12", last_freshcheck: nil, last_report: "2015-05-15
15:40:01", updated_at: "2015-05-15 15:40:48", source_file_id: nil,
created_at: "2015-05-08 16:42:16", mac: "68:05:ca:33:3c:7a", root_pass:
"<OMIT>", serial: nil, puppet_status: 2147483648, domain_id: 1,
architecture_id: 1, operatingsystem_id: 4, environment_id: 1, subnet_id: 7,
ptable_id: 14, medium_id: 6, build: false, comment: "", disk: "",
installed_at: nil, model_id: 11, hostgroup_id: 12, owner_id: 2, owner_type:
"User", enabled: true, puppet_ca_proxy_id: 1, managed: true, use_image:
nil, image_file: nil, uuid: nil, compute_resource_id: nil, puppet_proxy_id:
1, certname: "<FQDN>", image_id: nil, organization_id: 4, location_id: 1,
type: "Host::Managed", compute_profile_id: nil, otp: nil, realm_id: nil,
provision_method: nil, primary_interface: "eth2", grub_pass: "<OMIT>">
During the upgrade process I had this one host's edit page open. After
completing the upgrade, the Interfaces tab shows two interfaces with
identifier "eth0" (screenshot attached). That prevents the host from saving.
The one in the screenshot that's marked as primary + provision is not eth0;
it's eth2, which seemed to be correct in 1.7. I've also noticed I can't
modify the identifier field, which brings up a whole different issue.
To try to fix this host I moved primary + provision to the first eth0 (the
actual eth0 on the server) and set the hostname + domain and network on that
interface. I then deleted the second eth0 (the wrong eth0) and hit Submit,
which results in "Some of the interfaces are invalid. Please check the table
below.". When that error pops up, the eth0 I deleted is back in the
interface list. The first eth0 says the DNS name is already taken and the
second eth0 says the identifier is already taken. I also tried renaming the
DNS name on the wrong eth0 before deleting it, and that doesn't work either.
If I could change the identifier, that would likely solve my problem. The
real question is why Foreman set two interfaces to eth0 in the first place,
when that appears to violate the Rails validators, or at least produces
validation errors on the interface.
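For what it's worth, the duplicate state is easy to spot from the console by grouping interfaces on (host, identifier). This is a hypothetical sketch using plain structs in place of Foreman's Nic records, just to show the check I have in mind; the host_id 39 matches the dump above:

```ruby
# Hypothetical stand-in for Nic::Managed rows; in a real Foreman console
# you would query Nic::Base (or host.interfaces) instead of building structs.
Nic = Struct.new(:host_id, :identifier)

nics = [
  Nic.new(39, "eth0"),
  Nic.new(39, "eth0"),  # the duplicate the upgrade produced
  Nic.new(39, "eth1"),
]

# Group by (host, identifier) and keep only keys that occur more than once.
dupes = nics.group_by { |n| [n.host_id, n.identifier] }
            .select { |_key, group| group.size > 1 }
            .keys

dupes  # => [[39, "eth0"]]
```

Running something like that across all hosts would tell me whether this one host (the only one with its edit page open during the upgrade) is the only one affected.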
I was able to "fix" the problem in console basically by doing
host = Host.find_by_name('FQDN')
i = host.interfaces[1]
i.identifier = "eth2"
i.save!
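If more hosts turn out to be affected, I imagine the same fix could be driven from each NIC's MAC address rather than by array position, since the MAC is the one stable key in the dumps above. This is only a sketch of that logic on plain structs (the mac-to-identifier mapping is my assumption; in Foreman it would have to come from the host's facts):

```ruby
# Hypothetical stand-in for Nic::Managed rows; identifier is reassigned
# from the MAC address instead of trusting the (corrupted) stored value.
Nic = Struct.new(:mac, :identifier)

# Assumed mapping of MAC address to real interface name, e.g. from facts.
mac_to_identifier = {
  "00:25:90:99:44:d6" => "eth0",
  "68:05:ca:33:3c:7a" => "eth2",
}

nics = [
  Nic.new("00:25:90:99:44:d6", "eth0"),
  Nic.new("68:05:ca:33:3c:7a", "eth0"),  # wrongly duplicated identifier
]

# Reassign each NIC's identifier from its MAC; unknown MACs are left alone.
nics.each { |nic| nic.identifier = mac_to_identifier.fetch(nic.mac, nic.identifier) }

nics.map(&:identifier)  # => ["eth0", "eth2"]
```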
Ideally I wouldn't have to repeat that process over and over. If it would
help to roll this system back to 1.7 and test things, please let me know. I
use this production instance of Foreman to test out features before I
upgrade my 400-host instance of Foreman, where Foreman is a much more
integral part of the infrastructure for an HPC cluster.
Thanks,
- Trey