Smartproxy Infoblox DHCP: Proxy suggests used DNS entries as free IPs

Problem:
We are currently experiencing the problem that foreman-proxy suggests IP addresses as free that have a DNS record associated with them via an Infoblox host object, but no DHCP object.
There is a fairly large subset of systems in our datacenter that are not managed by Foreman (mainly legacy and Windows servers). They usually do not have DHCP enabled on all of their interfaces, but do have DNS entries in our Infoblox system.
I have since found out that the proxy tries to ping addresses before marking them as unused, but on subnets that our Foreman server cannot ping into due to network restrictions, this does not protect us from ending up with duplicate IP usage.
Is there a way to prevent this behaviour and/or make the smart proxy check for host objects on that IP regardless of related DHCP objects?
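To illustrate the difference between the two behaviours: today only DHCP-enabled records block an IP, while what we would want is for any host object on the IP to block it. A minimal sketch of that decision logic (names and structure are hypothetical, not the plugin's actual API):

```ruby
# Hypothetical sketch: decide whether an IP should be offered as free.
# `records` stands in for the host objects Infoblox returns for an IP;
# each record knows whether DHCP is configured on the address.
Record = Struct.new(:ipv4addr, :configure_for_dhcp)

# Current behaviour (simplified): only DHCP-enabled records block the IP.
def free_dhcp_only?(ip, records)
  records.none? { |r| r.ipv4addr == ip && r.configure_for_dhcp }
end

# Desired behaviour: any host record on the IP blocks it.
def free_any_record?(ip, records)
  records.none? { |r| r.ipv4addr == ip }
end

records = [Record.new('10.0.0.5', false)] # DNS-only host object
free_dhcp_only?('10.0.0.5', records)  # => true  (offered as free today)
free_any_record?('10.0.0.5', records) # => false (what we would want)
```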

Expected outcome:
No IPs are being assigned to multiple servers.

Foreman and Proxy versions:
1.20.3

Foreman and Proxy plugin versions:
rubygem-algebrick.noarch 0.7.3-4.el7
rubygem-dynflow.noarch 0.8.34-2.fm1_17.el7
rubygem-faraday.noarch 0.9.1-6.el7
rubygem-faraday_middleware.noarch 0.10.0-2.el7
rubygem-sequel.noarch 4.20.0-6.el7
rubygem-smart_proxy_dhcp_infoblox.noarch 0.0.13-1.fm1_18.el7
rubygem-smart_proxy_dynflow.noarch 0.2.1-1.el7
rubygem-smart_proxy_remote_execution_ssh.noarch 0.2.0-2.el7
tfm-rubygem-angular-rails-templates.noarch 1:1.0.2-4.el7
tfm-rubygem-bastion.noarch 6.1.16-1.fm1_20.el7
tfm-rubygem-deface.noarch 1.3.2-1.el7
tfm-rubygem-diffy.noarch 3.0.1-5.el7
tfm-rubygem-docker-api.noarch 1.28.0-4.el7
tfm-rubygem-foreman-tasks.noarch 0.14.3-1.fm1_20.el7
tfm-rubygem-foreman-tasks-core.noarch 0.2.5-2.fm1_20.el7
tfm-rubygem-foreman_docker.noarch 4.1.0-2.fm1_20.el7
tfm-rubygem-foreman_hooks.noarch 0.3.15-1.fm1_20.el7
tfm-rubygem-foreman_remote_execution.noarch 1.6.7-1.fm1_20.el7
tfm-rubygem-foreman_remote_execution_core.noarch 1.1.4-1.el7
tfm-rubygem-foreman_snapshot_management.noarch 1.5.1-1.fm1_20.el7
tfm-rubygem-foreman_templates.noarch 6.0.3-2.fm1_20.el7
tfm-rubygem-git.noarch 1.2.5-9.el7
tfm-rubygem-hammer_cli_foreman_bootdisk.noarch 0.1.3-7.el7
tfm-rubygem-hammer_cli_foreman_docker.noarch 0.0.4-4.el7
tfm-rubygem-hammer_cli_foreman_tasks.noarch 0.0.13-1.fm1_20.el7
tfm-rubygem-parse-cron.noarch 0.1.4-4.fm1_20.el7
tfm-rubygem-polyglot.noarch 0.3.5-2.el7
tfm-rubygem-rainbow.noarch 2.2.1-3.el7
tfm-rubygem-smart_proxy_dynflow_core.noarch 0.2.1-1.fm1_20.el7
tfm-rubygem-wicked.noarch 1.3.3-1.el7

Other relevant data:
I think we had a patch in place back in 1.15 that prevented that behaviour. I cannot recall for certain, though, and (due to my lack of Ruby knowledge) I am unable to tell whether the patch file I found would solve my issue, or even work with the current plugin version. The coworker who wrote this patch left the company some time ago. For completeness' sake, here is the patch I could find; maybe someone will be able to tell me what it does :wink:

diff -Naur smart_proxy_dhcp_infoblox.orig/common_crud.rb smart_proxy_dhcp_infoblox/common_crud.rb
--- smart_proxy_dhcp_infoblox.orig/common_crud.rb       2017-02-21 13:28:11.362928366 +0100
+++ smart_proxy_dhcp_infoblox/common_crud.rb    2017-02-28 11:00:08.208631398 +0100
@@ -27,7 +27,7 @@
       validate_mac(options[:mac])
       raise(Proxy::DHCP::Error, "Must provide hostname") unless options[:hostname]

-      build_host(options).post
+      build_host(options).put
       # TODO: DELETE ME needed for testing on infoblox ipam express
       #host.configure_for_dns = false
     rescue Infoblox::Error => e
diff -Naur smart_proxy_dhcp_infoblox.orig/host_ipv4_address_crud.rb smart_proxy_dhcp_infoblox/host_ipv4_address_crud.rb
--- smart_proxy_dhcp_infoblox.orig/host_ipv4_address_crud.rb    2017-02-21 13:28:11.363928394 +0100
+++ smart_proxy_dhcp_infoblox/host_ipv4_address_crud.rb 2017-02-28 11:00:31.490276324 +0100
@@ -3,6 +3,7 @@

 module ::Proxy::DHCP::Infoblox
   class HostIpv4AddressCRUD < CommonCRUD
+    include ::Proxy::Log
     def initialize(connection)
       @memoized_host = nil
       @memoized_condition = nil
@@ -45,9 +46,21 @@
     end

     def build_host(options)
-      host = ::Infoblox::Host.new(:connection => @connection)
-      host.name = options[:hostname]
-      host_addr = host.add_ipv4addr(options[:ip]).last
+      logger.debug "in build_host"
+      logger.debug @connection.inspect
+      logger.debug options[:ip]
+      host = ::Infoblox::Host.find(@connection, 'ipv4addr' => options[:ip])
+      if host.empty?
+        logger.debug "in host.empty if"
+        host = ::Infoblox::Host.new(:connection => @connection)
+        host.name = options[:hostname]
+        host_addr = host.add_ipv4addr(options[:ip]).last
+        host.post
+      end
+      host = ::Infoblox::Host.find(@connection, 'ipv4addr' => options[:ip]).first
+      raise "Hostname #{options[:hostname]} does not match infoblox host record #{host.name}" unless host.name == options[:hostname]
+      logger.debug host.ipv4addrs.inspect
+      host_addr = host.ipv4addrs.find { |ip| ip.ipv4addr == options[:ip] }
       host_addr.mac = options[:mac]
       host_addr.configure_for_dhcp = true
       host_addr.nextserver = options[:nextServer]

Well, Foreman treats DHCP and DNS as two separate entities. No check for the existence of a DNS entry is performed, therefore the IP is returned as free.

The Infoblox DHCP module provides both host and fixedaddress modes; we recommend fixedaddress because it works fine with the DNS module. If you want, you can switch to the "host" type, but then you need to disable the Foreman DNS module to prevent conflicts.
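For reference, the record type is selected in the smart proxy's DHCP module settings; a fixedaddress setup would look something like this (illustrative excerpt, only the relevant key is shown):

```yaml
# /etc/foreman-proxy/settings.d/dhcp_infoblox.yml (excerpt)
# 'fixedaddress' creates DHCP fixed-address objects and leaves DNS to
# the separate DNS module; 'host' manages Infoblox host objects instead.
:record_type: 'fixedaddress'
```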

Hi,

thanks for the reply.

I assumed it worked like that but was not sure.
Sorry for the missing information. We are using only the DHCP proxy to set both DHCP and DNS records via host objects (because that is what our Infoblox people want).

In /etc/foreman-proxy/settings.d/dhcp_infoblox.yml we have :record_type: 'host'. I think that is what you were suggesting? If so, this does not solve our problem.

Also, we have currently implemented a workaround by allowing our Foreman server to ping addresses in the affected networks. I do fear, though, that this problem might reoccur in the future, and would like a more permanent, less error-prone solution.

If you use the host record type, then all records managed by Foreman are registered by Foreman as used IPs. If you have some other allocated IPs and Foreman can't ping them, that is by design a conflict, of course.

A clean way could be to extend our conflict detection code to do a reverse (PTR) DNS lookup and treat such an IP as used. But I'm not so keen on this; what do you think @ekohl?
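A sketch of what such a check might look like, with the resolver injected so the heuristic stays testable without network access (this is an illustration, not actual proxy code; Ruby's stdlib Resolv would be the obvious default):

```ruby
require 'resolv'

# Hypothetical conflict check: treat an IP as used if a PTR record exists.
# The resolver is injectable; the default uses stdlib Resolv.getname,
# which raises Resolv::ResolvError when no PTR record is found.
def ip_used_by_dns?(ip, resolver: ->(addr) { Resolv.getname(addr) })
  resolver.call(ip)
  true
rescue Resolv::ResolvError
  false
end

# With stub resolvers, no network is needed:
has_ptr = ->(_ip) { 'legacy-host.example.com' }
no_ptr  = ->(_ip) { raise Resolv::ResolvError, 'no name for address' }

ip_used_by_dns?('10.0.0.5', resolver: has_ptr) # => true
ip_used_by_dns?('10.0.0.6', resolver: no_ptr)  # => false
```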

In various places there is a not-yet-used-ip.example.com reverse entry for all unused IPs, which would show up as a false positive. It's hard to tell how common this is. Another common issue is old reservations from the days of manual management that were never cleaned up.

IMHO it would be best to complete the administration so Foreman can rely on Infoblox rather than trying to design heuristics.

I haven’t used it, but maybe @aruzicka’s foreman_probing could be useful?

From a user’s perspective, I would prefer a false-positive used IP over a false-positive unused IP by quite a lot. Having an IP address assigned to two systems is far more problematic than having some unused ones, IMHO.
I would assume that a setup where not all host records are managed by Foreman is not so uncommon, but of course I do not have any data on that. In our case, we have legacy systems managed by a cobblerd instance with some Infoblox WAPI scripts, as well as Windows servers with manually assigned DNS records, both in the same subnets as the hosts managed by Foreman. In all cases we do have host objects in Infoblox, but only the Foreman-managed systems have DHCP enabled on all interfaces. The other systems have DHCP enabled only for the records that need it.

With ISC DHCP we do look at all leases, but this doesn’t cover statically assigned IPs. That’s why we distinguish between the subnet and its range: you can have a /24 network but only allow assigning from .10 to .100, for example. The idea is that you define free space Foreman is allowed to use. The other mechanism (as you’ve found) is pinging. Not perfect, but that’s the brownfield deployment strategy.
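The range idea above boils down to a simple containment check; this is just an illustration of the concept using Ruby's stdlib IPAddr, not Foreman's actual implementation:

```ruby
require 'ipaddr'

# Illustration: a /24 subnet where Foreman may only hand out .10 - .100.
# Only IPs inside that window are candidates for auto-assignment;
# statically assigned IPs live outside it.
def assignable?(ip, from:, to:)
  addr = IPAddr.new(ip).to_i
  (IPAddr.new(from).to_i..IPAddr.new(to).to_i).cover?(addr)
end

assignable?('192.168.1.42',  from: '192.168.1.10', to: '192.168.1.100') # => true
assignable?('192.168.1.200', from: '192.168.1.10', to: '192.168.1.100') # => false
```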

To me it sounds like this could be a smart_proxy_dhcp_infoblox setting, since it doesn’t make sense for most DHCP providers.

There is an ongoing effort to bring a new “external IPAM” feature into Foreman; search for the MyIPAM Foreman patches. The goal is to define a Smart Proxy API that allows various external IPAM implementations, with MyIPAM as the first one. If you want good IPAM in Foreman, you can take a look and then have someone implement an IPAM provider for your system (I am assuming it’s Infoblox). Then you will be able to associate subnets with your IPAM pools and make sure that only IPs allocated by the IPAM are handed over to Foreman. The API is small and the plugin should be easy to implement; it is also a good time to start now, since the API is not yet merged and we can still customize it to your needs.