RuntimeError - Unable to create directory on remote system /var/tmp/foreman-ssh-cmd

Trying to run a job on a remote host. It's simple: just ping -c 9 google.com, run as a given user (svc-dso). Running it from the command line on the Foreman host works:

root@zztypuppet01:tty3@09:11:32:/var/log # ssh svc-dso@hostname "/bin/ping -c 9 google.com"
*******************************************************************
                       Authorized use only
*******************************************************************
PING google.com (172.217.9.14) 56(84) bytes of data.
64 bytes from dfw28s02-in-f14.1e100.net (172.217.9.14): icmp_seq=1 ttl=51 time=10.8 ms
64 bytes from dfw28s02-in-f14.1e100.net (172.217.9.14): icmp_seq=2 ttl=51 time=2.17 ms
64 bytes from dfw28s02-in-f14.1e100.net (172.217.9.14): icmp_seq=3 ttl=51 time=2.08 ms
64 bytes from dfw28s02-in-f14.1e100.net (172.217.9.14): icmp_seq=4 ttl=51 time=2.10 ms
64 bytes from dfw28s02-in-f14.1e100.net (172.217.9.14): icmp_seq=5 ttl=51 time=5.48 ms
64 bytes from dfw28s02-in-f14.1e100.net (172.217.9.14): icmp_seq=6 ttl=51 time=2.21 ms
64 bytes from dfw28s02-in-f14.1e100.net (172.217.9.14): icmp_seq=7 ttl=51 time=2.25 ms
64 bytes from dfw28s02-in-f14.1e100.net (172.217.9.14): icmp_seq=8 ttl=51 time=3.43 ms
64 bytes from dfw28s02-in-f14.1e100.net (172.217.9.14): icmp_seq=9 ttl=51 time=2.32 ms

--- google.com ping statistics ---
9 packets transmitted, 9 received, 0% packet loss, time 8014ms
rtt min/avg/max/mdev = 2.088/3.662/10.886/2.760 ms
root@zztypuppet01:tty3@09:12:04:/var/log #

I expect the output in Foreman to be the nine attempts at pinging google.com.

Foreman is running version 1.16.

The job errors out with the following details:

1: Error initializing command: RuntimeError - Unable to create directory on remote system /var/tmp/foreman-ssh-cmd-a9396edf-6f01-403b-af66-aec8249f128d: exit code: 1
2: usage: scp [-1246BCpqrv] [-c cipher] [-F ssh_config] [-i identity_file]
3:            [-l limit] [-o ssh_option] [-P port] [-S program]
4:            [[user@]host1:]file1 ... [[user@]host2:]file2
5: Exit status: EXCEPTION

Hello,
please check if /var/tmp on the remote machine is writable by the user used to open the ssh connection (seems to be svc-dso in your example).

REX first copies the script there and then runs it. If the user can't write there, it fails like this.
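
One way to test that directly as the remote execution user (the test file name below is just an illustrative choice):

ssh svc-dso@hostname "touch /var/tmp/rex-write-test && rm /var/tmp/rex-write-test && echo writable"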

Thanks! Yes, /var/tmp on the remote machine is indeed writable.

> root@hostname@09:28:23:/var/tmp # cd /var
> root@hostname@09:29:11:/var # ls -al tmp/
> total 24
> drwxrwxrwt.  4 root   root   4096 Mar 27 10:32 .
> drwxr-xr-x. 24 root   root   4096 May 16  2016 ..
> -rw-rw-r--   1 root   root    118 Mar 15 23:54 testfile
> drwx------   2 root   root   4096 Mar 26 22:49 yum-root-dokp8f
> drwx------   2 youngt youngt 4096 Jan 25 11:18 yum-youngt-jXHAHB
> root@hostname@09:29:16:/var #

Here’s how my job is configured in Foreman, as well.

Any SELinux denials? Are private systemd temp directories enabled for the ssh server?
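
A few quick ways to check both, assuming auditd is running on the remote host and, for the PrivateTmp question, that the host uses systemd (it would not apply to RHEL6):

getenforce
ausearch -m avc -ts recent
grep -i privatetmp /usr/lib/systemd/system/sshd.service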

Try to do manually what Foreman tries to do:

scp -v script user@host:/var/tmp/script
ssh user@host "cat /var/tmp/script"

Edit: We might want to add “-v” option to scp for more debug output.

Edit 2: Double-check that scp is installed on the server; it's in the ssh client package, not the server package! My most successful Stack Overflow answer:
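
A quick way to verify on a RHEL-family host (run on both the proxy and the target):

rpm -q openssh-clients
command -v scp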

openssh-clients is installed on both hosts, and running scp at the command line from the source host to the target host works as expected.

This shouldn’t be so hard.

Can you show the output of ls -ld /var/tmp? That would show the permissions on the directory itself rather than its contents.

Also, could you please try setting log_level to DEBUG in /etc/smart_proxy_dynflow_core, restarting the smart_proxy_dynflow_core service, running the job again, and posting the resulting logs here?
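
A minimal sketch of that change, assuming the settings file is /etc/smart_proxy_dynflow_core/settings.yml (the exact path and keys can vary between versions):

# /etc/smart_proxy_dynflow_core/settings.yml
:log_level: DEBUG

Then restart the service and re-run the job so the log captures the failing scp call.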

Another thing that would be good to know is what platform you are on and what version of remote execution you have.

root@hostname@08:18:37:/home/svc-dso/.ssh # ls -ld /var/tmp
drwxrwxrwt. 4 root root 4096 Apr  4 21:52 /var/tmp
root@hostname@08:38:31:/home/svc-dso/.ssh #

Could you remind me of the command to restart that service ("smart_proxy_dynflow_core"), please? I've been up all night and I have no brain this morning. Thank you.

Platform is RHEL6 and I believe I’m on REX 1.3 since I just installed it a day or two ago.

On RHEL6 it should be just service smart_proxy_dynflow_core restart, if I recall correctly.

Yeah, something is bad now:

root@zztypuppet01:tty3@09:10:42:~/.puppetlabs # systemctl status smart_proxy_dynflow_core.service -l
● smart_proxy_dynflow_core.service - Foreman smart proxy dynflow core service
   Loaded: loaded (/usr/lib/systemd/system/smart_proxy_dynflow_core.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2018-04-05 09:08:32 CDT; 2min 13s ago
     Docs: https://github.com/theforeman/smart_proxy_dynflow
  Process: 14800 ExecStart=/usr/bin/smart_proxy_dynflow_core -d -p /var/run/foreman-proxy/smart_proxy_dynflow_core.pid (code=exited, status=1/FAILURE)
 Main PID: 26561 (code=killed, signal=TERM)

Apr 05 09:08:32 zztypuppet01 smart_proxy_dynflow_core[14800]: from /opt/theforeman/tfm/root/usr/share/gems/gems/smart_proxy_dynflow_core-0.1.7/lib/smart_proxy_dynflow_core/launcher.rb:29:in `load_settings!'
Apr 05 09:08:32 zztypuppet01 smart_proxy_dynflow_core[14800]: from /opt/theforeman/tfm/root/usr/share/gems/gems/smart_proxy_dynflow_core-0.1.7/lib/smart_proxy_dynflow_core/launcher.rb:13:in `start'
Apr 05 09:08:32 zztypuppet01 smart_proxy_dynflow_core[14800]: from /opt/theforeman/tfm/root/usr/share/gems/gems/smart_proxy_dynflow_core-0.1.7/lib/smart_proxy_dynflow_core/launcher.rb:9:in `launch!'
Apr 05 09:08:32 zztypuppet01 smart_proxy_dynflow_core[14800]: from /opt/theforeman/tfm/root/usr/share/gems/gems/smart_proxy_dynflow_core-0.1.7/bin/smart_proxy_dynflow_core:32:in `<top (required)>'
Apr 05 09:08:32 zztypuppet01 smart_proxy_dynflow_core[14800]: from /usr/bin/smart_proxy_dynflow_core:23:in `load'
Apr 05 09:08:32 zztypuppet01 smart_proxy_dynflow_core[14800]: from /usr/bin/smart_proxy_dynflow_core:23:in `<main>'
Apr 05 09:08:32 zztypuppet01 systemd[1]: smart_proxy_dynflow_core.service: control process exited, code=exited status=1
Apr 05 09:08:32 zztypuppet01 systemd[1]: Failed to start Foreman smart proxy dynflow core service.
Apr 05 09:08:32 zztypuppet01 systemd[1]: Unit smart_proxy_dynflow_core.service entered failed state.
Apr 05 09:08:32 zztypuppet01 systemd[1]: smart_proxy_dynflow_core.service failed.
root@zztypuppet01:tty3@09:10:45:~/.puppetlabs #
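
The backtrace ends in load_settings!, so my guess (an assumption, not something the log above confirms) is that the settings file I just edited for log_level now has a syntax error. If a system ruby is available, the YAML can be sanity-checked with something like:

ruby -ryaml -e 'puts YAML.load_file("/etc/smart_proxy_dynflow_core/settings.yml").inspect'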