RFC: Migrate Smart Proxy Dynflow Core to Dynflow Plugins and API

This design is the idea of @aruzicka; I am mostly the messenger. There are three parts to the discussion: background, the challenges that motivate a change, and the proposal.


Background

The Smart Proxy Dynflow Core (SPDC) is characterized by the following:

  • Provides a dynflow execution environment
  • Can be mounted inside smart proxy as a sidecar plugin
  • Provides a web server and restful API for interacting with Dynflow
  • Configured to use in-memory database

SPDC can be deployed within the smart proxy as a plugin or as a stand-alone service. In RPM-based environments, SPDC is deployed as a stand-alone service listening on port 8008. In Debian-based environments, SPDC is deployed as a plugin inside the smart proxy.
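As a rough illustration of the two deployment modes, a smart proxy settings file might distinguish them with a flag and a core URL. The key names below are assumptions for illustration only, not the actual smart_proxy_dynflow settings schema:

```yaml
# Hypothetical sketch of a smart proxy dynflow settings file;
# key names are illustrative, not authoritative.
---
:enabled: https

# Plugin mode (Debian-style): run the Dynflow core inside the proxy process.
:external_core: false

# Stand-alone mode (RPM-style): point the proxy at a separately
# running SPDC service.
# :external_core: true
# :core_url: https://proxy.example.com:8008
```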

The smart-proxy-dynflow plugin provides an API that proxies calls to SPDC, so features like remote execution only need to talk to a smart proxy while still having a background processing service available to execute jobs closer to the hosts the proxy manages.

SPDC contains a plugin-like model that lets Foreman plugins provide the execution code for Dynflow to run. These plugins are characterized by shipping a “_core”-suffixed rubygem and package. For example, foreman_ansible provides the Foreman plugin for Ansible support, while foreman_ansible_core provides the non-Rails pieces needed by Dynflow on a smart proxy to perform Ansible jobs. The naming of the _core packages is rooted in older versions of remote execution that ran within Foreman. These code bases are often co-located in the same repository, yet they share no code. All _core packages target and use the remote execution framework, except for foreman_tasks_core, which is stand-alone and provides support to the remote execution framework.


At a high level, the components communicate as follows:

  • SPDC connects to an in-memory SQLite database by default
  • Smart Proxy talks to SPDC via HTTP on port 8008 to initiate jobs
  • SPDC talks to Foreman server via HTTPS and sends status reports
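The flow above can be sketched with plain Ruby objects standing in for each component. All class and method names here are hypothetical, purely to illustrate who calls whom; this is not the real smart_proxy_dynflow API:

```ruby
# Stands in for SPDC: accepts a job over "HTTP" and reports status upstream.
class FakeCore
  def initialize(foreman)
    @foreman = foreman
  end

  def launch(job)
    result = "ran #{job}"        # Dynflow would execute the action here
    @foreman.report(job, result) # status goes back to Foreman (over HTTPS)
    result
  end
end

# Stands in for the smart proxy: relays the request to the core (port 8008).
class FakeProxy
  def initialize(core)
    @core = core
  end

  def relay(job)
    @core.launch(job)
  end
end

# Stands in for Foreman: receives status reports.
class FakeForeman
  attr_reader :reports

  def initialize
    @reports = {}
  end

  def report(job, result)
    @reports[job] = result
  end
end

foreman = FakeForeman.new
proxy   = FakeProxy.new(FakeCore.new(foreman))
proxy.relay("job-1")
```

The point of the sketch is only the direction of the calls: the proxy never executes jobs itself, and status always flows back to Foreman from the core.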

Remote Execution Context

As of today, the SPDC design is generic, but in practice only remote-execution-related actions are handled. All ‘_core’ plugins are built to live within the remote execution framework for performing actions on hosts.

  • SPDC initiates and handles SSH connections
  • SPDC initiates and handles Ansible Runner calls
  • SPDC talks to the Salt master


Challenges

For some developers, the SPDC projects have created challenges in understanding and delivery:

  • General understanding of what SPDC is
  • The “_core” naming scheme produces confusion about what each _core package provides
  • The tie-in to smart-proxy naming leads to confusion: is it a service? Is it a plugin to the smart proxy?
  • What is the difference between SPDC and Dynflow on a server?
  • Projects that provide a _core plugin often keep multiple gemspecs and code bases in the same repository, which makes build automation hard
  • Features advertised by the smart proxy and what is actually available in SPDC do not necessarily match


Proposal

  • Create a dynflow_api project in the Dynflow organization
    • Provide an HTTP API to Dynflow
    • Provide a plugin interface
    • Drop all ties to the smart proxy
  • The _core SPDC plugins would become Dynflow plugins, e.g.
    • foreman_ansible_core -> dynflow_foreman_ansible
    • foreman_tasks_core -> dynflow_foreman_tasks
  • dynflow_api would be a stand-alone service running on port 8008
  • dynflow_api would default to an in-memory SQLite database
    • It could be “fat” and have the executor inside it (what we do now)
    • It could be “thin” and have the executor outside (the same deployment as non-proxy dynflow), but then it would require an external database
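To make the plugin-interface bullet concrete, here is a minimal sketch of what a dynflow_api plugin registry could look like. The module and method names are my assumptions for illustration, not part of the actual proposal:

```ruby
# Hypothetical sketch of a dynflow_api plugin interface.
module DynflowApi
  class PluginRegistry
    @plugins = {}

    class << self
      # A plugin (e.g. dynflow_foreman_ansible) registers the actions
      # it makes available to the Dynflow executor.
      def register(name, actions)
        @plugins[name] = actions
      end

      # Look up what a registered plugin provides.
      def actions_for(name)
        @plugins.fetch(name, [])
      end
    end
  end
end

# A renamed plugin registering itself under the proposed naming scheme.
DynflowApi::PluginRegistry.register("dynflow_foreman_ansible", ["ansible_runner"])
```

Under this model, the registry (rather than the smart proxy's feature list) would be the single source of truth for what the service can execute, addressing the feature-mismatch challenge above.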

My understanding is that we’ll run a different dynflow to support Foreman itself, correct? So if you co-locate Foreman and the Foreman Proxy, then you essentially have two separate dynflow deployments on that box.

Exactly. There will be one dynflow instance used by Foreman itself and another one for the smart proxy.

I am closing this RFC, as a newer RFC supersedes it and renders this one no longer needed: