Foreman 1.21.2 dns_nsupdate does not invoke nsupdate for remote BIND server


I’m running Foreman, ISC DHCPD and Smart Proxy on one libvirt host and cannot get the Proxy to invoke nsupdate to update a remote BIND server. There are no log entries of nsupdate attempts from the Proxy.

Expected outcome:

nsupdate is invoked by the proxy to update the remote BIND server.

Foreman and Proxy versions:

1.21.2 with proxy functions TFTP, Puppet, Puppet CA, Logs, Dynflow, SSH, DNS, and DHCP

Foreman and Proxy plugin versions:

foreman-tasks 0.14.5
foreman_ansible 2.3.3
foreman_cockpit 2.0.3
foreman_default_hostgroup 5.0.0
foreman_dhcp_browser 0.0.8
foreman_remote_execution 1.7.0
foreman_setup 6.0.0

Other relevant data:

Foreman was installed via Debian Repositories and configured using foreman-installer.

Version: Foreman 1.21.2
Host Environment: KVM/libvirt
Host OS: Debian 9.8

Turning on DEBUG level logging on the proxy, I do not get any further log entries concerning nsupdate invocation, but the initialization entries lead me to believe the configuration should allow nsupdate to be invoked.

As the foreman-proxy user, I can successfully run nsupdate manually with the key configured in dns_nsupdate.yml.
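For reference, the manual test looked roughly like this; the batch is built into a variable here so it is visible, and you would pipe it into `sudo -u foreman-proxy nsupdate -k /etc/bind/foreman.key` to actually run it (the _proxy-test record name is a placeholder, not the real host):

```shell
# Hedged sketch of a manual nsupdate check against the configured
# server; pipe $batch into:
#   sudo -u foreman-proxy nsupdate -k /etc/bind/foreman.key
batch='server ns0.mydomain
update add _proxy-test.mydomain. 300 TXT "manual nsupdate check"
send'
printf '%s\n' "$batch"
```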

grep dns /var/log/foreman-proxy/proxy.log

2019-04-27T07:22:21  [D] 'dns' settings: 'dns_ttl': 86400 (default), 'enabled': https, 'use_provider': dns_nsupdate (default)
2019-04-27T07:22:21  [D] 'dns' ports: 'http': false, 'https': true
2019-04-27T07:22:21  [D] 'tftp' settings: 'enabled': https, 'tftp_connect_timeout': 10 (default), 'tftp_dns_timeout': 10 (default), 'tftp_read_timeout': 60 (default), 'tftproot': /srv/tftp
2019-04-27T07:22:21  [D] Providers ['dns_nsupdate'] are going to be configured for 'dns'
2019-04-27T07:22:21  [D] 'dns_nsupdate' settings: 'dns_key': /etc/bind/foreman.key, 'dns_server': ns0.mydomain, 'dns_ttl': 86400, 'use_provider': dns_nsupdate
2019-04-27T07:22:21  [I] Successfully initialized 'dns_nsupdate'
2019-04-27T07:22:21  [I] Successfully initialized 'dns'

grep dns /etc/foreman-installer/scenarios.d/foreman-answers.yaml

  dns: true
  dns_listen_on: https
  dns_managed: true
  dns_provider: nsupdate
  dns_interface: enp8s0
  dns_server: ns0.mydomain
  dns_ttl: 86400
  dns_tsig_keytab: "/etc/foreman-proxy/dns.keytab"
  dns_tsig_principal: foremanproxy/foreman.mydomain@MYDOMAIN
  dns_forwarders: []
  freeipa_remove_dns: true
  dns_alt_names: []
foreman_proxy::plugin::dns::infoblox: false
foreman_proxy::plugin::dns::powerdns: false

cat /etc/foreman-proxy/settings.d/dns.yml

# DNS management
:enabled: https
:use_provider: dns_nsupdate

cat /etc/foreman-proxy/settings.d/dns_nsupdate.yml

# Configuration file for 'nsupdate' dns provider

:dns_key: /etc/bind/foreman.key
:dns_server: ns0.mydomain

grep log /etc/foreman-proxy/settings.yml

# Uncomment and modify if you want to change the location of the log file or use STDOUT or SYSLOG values
:log_file: /var/log/foreman-proxy/proxy.log
# Uncomment and modify if you want to change the log level
:log_level: DEBUG
# The maximum size of a log file before it's rolled (in MiB)
# The maximum age of a log file before it's rolled (in seconds). Also accepts 'daily', 'weekly', or 'monthly'.
# Number of log files to keep
# Logging pattern for file-based loging
#:file_logging_pattern: '%d %.8X{request} [%.1l] %m'
# Logging pattern for syslog or journal loging
#:system_logging_pattern: '%.8X{request} [%.1l] %m'
:log_buffer: 2000
:log_buffer_errors: 1000

Thanks in advance for any suggestions or tips to investigate this further!



What does the log say? In debug mode there should be something; we log messages like “running nsupdate”, so you should see them.

Also, do you have a correct SOA record? The nsupdate tool uses DNS to find the master server to connect to for the update:

$ host -t soa <domain>
<domain> has SOA record <master> <rname> 2019041013 7200 3600 1209600 3600
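For example, the fields of an SOA answer can be picked apart like this (the MNAME/RNAME values below are placeholders; only the numeric fields come from the answer above):

```shell
# An SOA answer as returned by `host -t soa <domain>`:
#   MNAME RNAME serial refresh retry expire minimum
soa='ns0.mydomain. hostmaster.mydomain. 2019041013 7200 3600 1209600 3600'
set -- $soa
mname=$1   # nsupdate sends updates to this server
serial=$3
echo "MNAME (master): $mname"
echo "serial: $serial"
```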


Thanks for your response.

I searched and saw such DEBUG level entries in other support posts.

The odd thing is that I see no such logs:

grep nsupdate /var/log/foreman-proxy/proxy.log

returns nothing after otherwise successful provisioning.

On top of that, neither of the two DNS servers holding NS records for the domain shows any log entries of the key being used.

One issue I did find is that the SOA record was wrong: the master (MNAME) field was pointing at the secondary server! (Thanks for the tip!)

I’ve corrected this and will try a test provision run again.


If you are sure that proxy.log is really at DEBUG level, then check Foreman (production.log) at DEBUG level as well. You should see whether the DNS update is or is not getting scheduled during orchestration. There are cases where Foreman decides not to update DNS, e.g. the associated subnet does not have a DNS proxy set.

Thanks again.

After the DNS correction propagated, I tried repeating a build.

Firstly, I checked that proxy.log is at DEBUG level config (also above log snippet does contains [D] entries).
Then I verified foreman was also at DEBUG level logging (production.log).

I then verified in the Foreman web UI that the associated subnet has the smart proxy set as its (reverse) DNS proxy (although BIND9 on the smart proxy is not yet configured to serve this zone).

I also verified that the associated domain has the smart proxy set as its (forward) DNS proxy.

There appear to be no production.log DEBUG level entries for nsupdate calls.

I am under the impression that you have solved the issue with the change to your SOA. Can you restate what the issue is now? If you can’t see the proxy launching “nsupdate”, then check whether the proxy actually receives POST requests to /dns, to identify whether it’s Foreman that thinks DNS does not need an update.

Is this a host create, edit or delete action?
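For example, something like the following in proxy.log would show the DNS module being hit; the sample lines below are illustrative of the kind of request to grep for, not verbatim smart-proxy output:

```shell
# Write hypothetical sample lines to a scratch file (the real file to
# check is /var/log/foreman-proxy/proxy.log):
cat > /tmp/proxy-sample.log <<'EOF'
2019-04-27T09:00:00 [I] 127.0.0.1 - - "POST /dns/ HTTP/1.1" 200
2019-04-27T09:00:01 [I] 127.0.0.1 - - "DELETE /dns/host1.mydomain HTTP/1.1" 200
EOF
# Count incoming DNS API requests:
matches=$(grep -cE '(POST|DELETE) /dns' /tmp/proxy-sample.log)
echo "DNS API requests seen: $matches"
```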

Thanks for your reply.

I investigated this further.

Once I removed the manual DNS A record I had created earlier by running nsupdate from the shell on the proxy, I deleted a host and rebuilt it. The DNS records are now correctly created and deleted on host create, delete, and rebuild (specifying a new IP address for the interface).
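For reference, removing the stale record was roughly the following (host1.mydomain is a placeholder for the actual host); pipe the batch into `nsupdate -k /etc/bind/foreman.key` to apply it:

```shell
# Hedged sketch of clearing a stale, manually created A record so the
# proxy's own update is no longer masked by an existing entry.
batch='server ns0.mydomain
update delete host1.mydomain. A
send'
printf '%s\n' "$batch"
```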

I think the original issue was the SOA misconfiguration; the fix was then masked in subsequent rebuilds because a correct DNS entry already existed from the manual invocation.

Thank you for your help!