Ohai!
Recently, sp-dynflow 0.8.0 (and later 0.8.1) landed in nightly, and it currently breaks Ansible.
Looking at the log, I see:
```
2022-03-31T12:04:57 64d60934 [I] Started POST /dynflow/tasks/launch
2022-03-31T12:04:57 64d60934 [I] Finished POST /dynflow/tasks/launch with 200 (34.33 ms)
2022-03-31T12:04:57 64d60934 [E] <ArgumentError> unknown keyword: :id
/usr/share/gems/gems/smart_proxy_ansible-3.3.1/lib/smart_proxy_ansible/runner/ansible_runner.rb:13:in `initialize'
/usr/share/gems/gems/smart_proxy_dynflow-0.8.1/lib/smart_proxy_dynflow/action/batch_runner.rb:11:in `new'
/usr/share/gems/gems/smart_proxy_dynflow-0.8.1/lib/smart_proxy_dynflow/action/batch_runner.rb:11:in `initiate_runner'
/usr/share/gems/gems/smart_proxy_dynflow-0.8.1/lib/smart_proxy_dynflow/action/runner.rb:45:in `init_run'
/usr/share/gems/gems/smart_proxy_dynflow-0.8.1/lib/smart_proxy_dynflow/action/runner.rb:12:in `run'
/usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/action.rb:582:in `block (3 levels) in execute_run'
/usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/middleware/stack.rb:27:in `pass'
/usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/middleware.rb:19:in `pass'
/usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/action/progress.rb:31:in `with_progress_calculation'
/usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/action/progress.rb:17:in `run'
/usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/middleware/stack.rb:23:in `call'
/usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/middleware/stack.rb:27:in `pass'
/usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/middleware.rb:19:in `pass'
```
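For anyone not fluent in Ruby keyword arguments, here is a minimal sketch of the failure mode above. The class and argument names are illustrative, not copied from the actual smart_proxy code: the point is only that a caller (like the 0.8.x batch_runner) passing an `:id` keyword to a constructor (like the one in smart_proxy_ansible 3.3.1) that does not yet accept it raises exactly this `ArgumentError`.

```ruby
# Illustrative stand-in for the runner class shipped in the older plugin:
# its initialize does not know about an :id keyword yet.
class AnsibleRunnerSketch
  def initialize(input, suspended_action: nil)
    @input = input
    @suspended_action = suspended_action
  end
end

begin
  # The newer dynflow core passes the extra keyword:
  AnsibleRunnerSketch.new({}, suspended_action: nil, id: "64d60934")
rescue ArgumentError => e
  puts e.message # "unknown keyword: :id" on Ruby >= 2.7
end
```

This also explains why downgrading sp-dynflow helps: the older caller simply does not pass the extra keyword.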
Looking at recent sp-ansible commits, I would guess it's fixed by "Fixes #34585 - Process artifact files on demand" (theforeman/smart_proxy_ansible@d47ed7a on GitHub), but this didn't land in nightly yet. Downgrading sp-dynflow to 0.7.0 fixes the issue.
Breakage in nightly happens; that's what nightly is for. Nevertheless, I found a few oddities while investigating this and would like to discuss them:
- This was known to break things. Can we somehow ensure that packages which need to be updated together are tracked and don't land uncoordinated? Maybe even add `Conflicts` to the package, so they are not accidentally cherry-picked?
- Our CI (well, the plugins and luna pipelines) executes `hammer job-invocation create` for REX and Ansible and hung there for hours (before Jenkins aborted the job). Shouldn't `hammer` error out if the job failed to launch or takes too long to execute?
- The jobs are marked as running/pending in `hammer job-invocation list`, which probably explains the waiting above. Shouldn't Foreman notice the error on the Proxy?
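To make the `Conflicts` idea concrete, here is a sketch of what the packaging side could look like. The package name and the version bound are assumptions derived from this breakage, not taken from the actual spec files:

```
# In the rubygem-smart_proxy_ansible spec file (illustrative):
# refuse to install next to the incompatible dynflow core
Conflicts: rubygem-smart_proxy_dynflow >= 0.8.0
```

The fixed plugin release would then drop the `Conflicts` and instead require the new dynflow version.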
Thanks
Evgeni