Remote execution design

Greetings!

Over the past couple of days I've been working on putting together a broad design document for improving the remote execution capabilities of Foreman. Currently we support running puppet through mcollective, but nothing beyond that. The goal of this discussion is to figure out a plan for allowing management of machines in Foreman using a tool like mcollective.

Although I mention mcollective throughout the document, that doesn't mean it's the only tool we should consider. It has the benefit of a reasonable PKI scheme, is very extensible, has a solid communication model, and is already widely accepted in the Puppet community.

Here's the document in markdown for comment, but it's also on gist [1], which is much easier to read:

Implementation

Architectural overview

Mcollective is the most generic and flexible solution for running controlled, selective commands and jobs against groups of hosts. It uses a plugin-based architecture which allows it to run virtually any task remotely without giving users the kind of unfettered access to hosts that Func, polysh, or rundeck allow. Users who want to let administrators run commands against large swaths of hosts in a free-form manner, like running yum upgrade through a shell across all app servers, can do so, but more controlled jobs are also possible, like enabling or disabling the puppet agent across many hosts using only the puppet-agent mcollective plugin. Remote execution capabilities will be handled through the proxy since users are likely to run the mcollective master and ActiveMQ on each puppet master.
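
As a concrete illustration of the controlled style, here is a minimal sketch of driving the community puppet agent plugin through mcollective's RPC client. The agent name ("puppetd"), its actions, and the "hostgroup" fact used for filtering are assumptions that depend on which plugins and facts are deployed:

```ruby
#!/usr/bin/env ruby
# Sketch only: disable the puppet agent on hosts in the "app" hostgroup via
# the mcollective RPC client. The "puppetd" agent and its "disable" action
# come from the community puppet plugin; the "hostgroup" fact assumes
# Foreman exports one.
require 'mcollective'
include MCollective::RPC

mc = rpcclient('puppetd')
mc.fact_filter 'hostgroup', 'app' # narrow discovery to the app hostgroup
printrpc mc.disable               # the plugin only exposes puppet actions
printrpcstats                     # summary of responses and failures
mc.disconnect
```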

Command execution and storage

  • Actions will be stored in the database by namespace and arguments (a sketch of the storage model follows this list).
    • The mcollective plugin name will provide the namespace.
    • The arguments will be
  • Remote actions and metadata about the hosts they've been run across should be stored.
    • The metadata (e.g. hostgroup) of hosts the command ran across.
    • All the hosts that matched that metadata at the time the command ran.
  • It should be possible to replay actions that have been run in the past against hosts with different attributes.
    • If an action was taken for all the machines in the app hostgroup, it should be possible to run that same command on all the machines in the db-master hostgroup.
    • Fact-based matching for target systems.
  • All the commands that have been run on a single system, a hostgroup, single/multiple facts, or globally, should be available along with that object.
    • If a command was run based on a fact, that fact's page will have the command listed.
  • Commands that have already been run can be scheduled based on their namespace and arguments (this relies on having a queue that supports pushing future-dated tasks onto it)
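
To make the storage bullets concrete, here is a minimal sketch assuming Rails/ActiveRecord as used by Foreman; every class, column, and association name is illustrative rather than an agreed schema:

```ruby
# Sketch only: one row per action, keyed by namespace (the mcollective
# plugin name) plus serialized arguments and the metadata filter that
# selected the hosts. All names are placeholders.
class RemoteAction < ActiveRecord::Base
  serialize :arguments      # plugin arguments for the namespaced action
  serialize :target_filter  # hostgroup/fact metadata used for targeting

  has_many :remote_action_hosts # hosts that matched the filter at run time
  has_many :hosts, :through => :remote_action_hosts

  # Replay: reuse the namespace and arguments against a different filter,
  # e.g. the db-master hostgroup instead of app.
  def replay(new_filter)
    self.class.create!(:namespace => namespace,
                       :arguments => arguments,
                       :target_filter => new_filter)
  end
end
```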

User interface

  • The user will have ACLs based on the normal permissions and ancestry hierarchy
  • The output of commands will be parsed and presented back to the user
  • Long polling is used to get the status of execution on host(s)

Single host

  • Hosts assigned to a puppet master with remote execution capabilities will have an additional icon in the section with "Build" and "Delete"

Remote management process:

  1. User clicks the remote management button on the host page
  2. User is shown all the plugins (namespaces) they have access to use
  3. User chooses which management namespace (plugin) they will use
  4. The user enters arguments or reloads arguments from history

Multiple hosts from the listing page

  • Hosts can be filtered on the host listing page and then be selected to have remote management tasks run on them
    • Remote execution tasks need to be orchestrated across different puppet masters since the hosts in the post-filter listing may not all talk to the same master (a fan-out sketch follows this list)
    • How do we handle hosts that are on a master that doesn't have mcollective configured? Just ignore them? Or report back to the user that they didn't have the command run?
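
Since the hosts in a filtered selection may be attached to different masters, the fan-out could look something like this sketch; ProxyAPI::RemoteExecution and its execute method are hypothetical names, not existing Foreman API:

```ruby
# Sketch only: group the selected hosts by their puppet master's smart
# proxy and dispatch the task to each proxy separately.
by_proxy = hosts.group_by(&:puppet_proxy)
skipped  = by_proxy.delete(nil) || [] # hosts with no capable proxy at all

by_proxy.each do |proxy, proxy_hosts|
  ProxyAPI::RemoteExecution.new(:url => proxy.url)
                           .execute(task_id, proxy_hosts.map(&:name))
end
# `skipped` feeds the open question above: ignore, or report to the user?
```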

"Remote execution command center"

Standalone page

  • This page will be accessible through the dropdown (like Puppet CAs) that's next to each proxy on the proxy listing page

  • It'll load plugins that are available on that proxy

    • Plugins will rely on locations & organizations for filtering based on ACL
    • Roles can be used to control access to plugins globally
    • This needs more thought
  • (not-MVP) Install plugins from GitHub or a tarball onto a proxy

API & CLI

General API

  • Retrieve the status of all running remote execution jobs (a routes sketch for this API follows the list)
  • Retrieve available namespaces
  • Retrieve all future-dated tasks
    • Potentially based on date range
  • Retrieve a history of tasks
    • Date range
    • Task namespace
    • Filter that caused hosts to be selected
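
As a rough picture of the surface area, the general API could map to routes along these lines; all paths and query parameters are placeholders, not an agreed API:

```ruby
# Sketch only: possible Rails routes for the general API. Controller, path,
# and parameter names are illustrative.
Foreman::Application.routes.draw do
  scope '/api' do
    get 'remote_execution/jobs'       # status of all running jobs
    get 'remote_execution/namespaces' # namespaces available to the user
    get 'remote_execution/scheduled'  # future-dated tasks; ?from=&to= optional
    get 'remote_execution/history'    # ?from=&to=&namespace=&filter= optional
  end
end
```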

Task API

  • Check for task status based on ID (long-polling for the CLI; a client-side sketch follows this list)
    • Returns:
      • Current progress
      • Successfully completed hosts
      • Failed hosts
  • Submit new tasks - this might be best to handle through mcollective directly
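
On the client side, checking a task by ID might look like the following sketch; the endpoint path and the JSON field names are assumptions:

```ruby
# Sketch only: CLI-side wait loop for a task. A true long poll would have
# the server hold the request open until the status changes; plain polling
# is shown for simplicity.
require 'net/http'
require 'json'
require 'uri'

def wait_for_task(base_url, task_id)
  loop do
    body = Net::HTTP.get(URI.parse("#{base_url}/api/remote_execution/tasks/#{task_id}"))
    task = JSON.parse(body)
    puts "progress #{task['progress']}%: " \
         "#{task['succeeded'].size} ok, #{task['failed'].size} failed"
    return task if task['complete']
    sleep 5
  end
end
```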

CLI

  • All API methods should be exposed
  • Long-polling for task completion
  • Auto-completion for available hostgroups and facts
  • Search through past commands and re-run them

Let me know if you've got any questions. I'm looking forward to your feedback!

-Sam

  1. https://gist.github.com/skottler/6a2a04a16470a36d38ee

On Monday 29 of July 2013 22:00:11 Sam Kottler wrote:

> * Remote actions and metadata about the hosts they've been run across should
> be stored. * The metadata (e.g. hostgroup) of hosts the command ran across.

Does this include facts?

> * How do we handle hosts that are on a master that doesn't have mcollective
> configured? Just ignore them? Or report back to the user that they didn't
> have the command run?

I would ignore them and display some kind of notification to the user (not a
red bubble, some kind of friendly yellow message). I can't think of a use case
that must run on all hosts xor none.


Seems pretty good so far!



Marek

Looks nice. Will there be a simple bash-over-ssh plugin from day one?

LZ



Later,

Lukas “lzap” Zapletal
irc: lzap #theforeman

> From: "Lukas Zapletal" <lzap@redhat.com>
> To: foreman-dev@googlegroups.com
> Sent: Tuesday, July 30, 2013 7:58:37 AM
> Subject: Re: [foreman-dev] Remote execution design
>
> Looks nice. Will there be a simple bash-over-ssh plugin from day one?

Mcollective doesn't use SSH for communication, but yes, there will be a way to run commands using a remote shell. Users will just need to install the shell agent plugin.
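
For illustration, invoking such a shell-style agent through the RPC client might look like this; the agent name ("shell") and its action and argument names vary between community plugins, so treat them all as placeholders:

```ruby
#!/usr/bin/env ruby
# Sketch only: free-form command execution via a community shell agent.
require 'mcollective'
include MCollective::RPC

mc = rpcclient('shell')                       # hypothetical shell agent
mc.class_filter 'appserver'                   # limit to appserver hosts
printrpc mc.run(:command => 'yum -y upgrade') # action/argument names vary
printrpcstats
mc.disconnect
```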


> From: "Marek Hulan" <mhulan@redhat.com>
> To: foreman-dev@googlegroups.com
> Sent: Tuesday, July 30, 2013 2:21:52 AM
> Subject: Re: [foreman-dev] Remote execution design
>
> > * Remote actions and metadata about the hosts they've been run across
> > should be stored. * The metadata (e.g. hostgroup) of hosts the command ran
> > across.
>
> Does this include facts?

Yep, it's for any data in Foreman that's used to filter down the list of hosts where a task will be run.

> > * How do we handle hosts that are on a master that doesn't have mcollective
> > configured? Just ignore them? Or report back to the user that they didn't
> > have the command run?
>
> I would ignore them and display some kind of notification to the user (not
> a red bubble, some kind of friendly yellow message). I can't think of a use
> case that must run on all hosts xor none.

I've been thinking a bit more about this, and the user should be given a list of hosts that won't have the action run on them before the task runs. For example, an administrator applying security fixes across a group of machines would want to know if a few of them won't get upgraded because they're not on a master that supports remote execution.
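
In code, the pre-run split could be as simple as the following sketch; the supports_remote_execution? predicate is hypothetical, not existing Foreman API:

```ruby
# Sketch only: partition the selection before confirming the task, so the
# user sees exactly which hosts would be skipped.
runnable, skipped = hosts.partition do |host|
  host.puppet_proxy && host.puppet_proxy.supports_remote_execution?
end
# Show `skipped` to the user before the task is allowed to run.
```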


On Tue, Jul 30, 2013 at 7:21 AM, Marek Hulan wrote:

> I would ignore them and display some kind of notification to the user (not a
> red bubble, some kind of friendly yellow message). I can't think of a use
> case that must run on all hosts xor none.

Not sure how batch actions are implemented in Foreman - one of the standard
approaches is to not allow an action if not all selected objects support it.
This may not be the most reasonable approach, however, if the number of
objects in the selection is high.

Also, should we capture the selection somehow? So that another bunch of
actions can be run on the same hosts (as filtering can return a different
set of hosts each time).

> > #### Task API
> > * Check for task status based on ID (long-polling for the CLI)
>

Task status: is this going to rely on MCollective's, or do we need our own
async task status tracking?

> Seems pretty good so far!

+1

-d


> Mcollective doesn't use SSH for communication, but yes, there will be
> a way to run commands using a remote shell. Users will just need to
> install the shell agent plugin.

I understand; the question, though, is whether you are going to write only
mcollective support, or add one extra backend (some kind of plain SSH).

Building a generic interface is always hard when it's built against only one
implementation. That was my concern, or rather an idea more than a concern.

Later,

Lukas “lzap” Zapletal
irc: lzap #theforeman

> From: "Dmitri Dolguikh" <witlessbird@gmail.com>
> To: foreman-dev@googlegroups.com
> Sent: Tuesday, July 30, 2013 6:01:34 AM
> Subject: Re: [foreman-dev] Remote execution design
>
> Not sure how batch actions are implemented in Foreman - one of the standard
> approaches is to not allow an action if not all selected objects support it.
> This may not be the most reasonable approach, however, if the number of
> objects in the selection is high.

>
> Also, should we capture the selection somehow? So that another bunch of
> actions can be run on the same hosts (as filtering can return a different
> set of hosts each time).

Yes, this was one of the things that would be stored in the history of the command (see the "Command execution and storage" section above).

> Task status: is this going to rely on MCollective's, or do we need our own
> async task status tracking?

This would ideally use a task system. Running commands across thousands of hosts can obviously take a while. Also, tasks that are scheduled for sometime in the future will need a queue; I don't think storing that data in the DB and running tasks on cron or something along those lines will scale or perform very well.
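
As one hedged example of such a queue, Sidekiq supports future-dated jobs natively via perform_at; the worker below and its arguments are illustrative only:

```ruby
# Sketch only: schedule a stored action for later execution on a queue
# that supports delayed jobs. `run_at` (a Time) and `action` (a stored
# remote action record) are placeholders.
require 'sidekiq'

class RemoteExecutionWorker
  include Sidekiq::Worker

  def perform(action_id)
    # Re-resolve the stored filter to a current host list, then dispatch
    # the mcollective request through the relevant proxies.
  end
end

# Push a future-dated task onto the queue instead of relying on cron.
RemoteExecutionWorker.perform_at(run_at, action.id)
```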


> From: "Lukas Zapletal" <lzap@redhat.com>
> To: foreman-dev@googlegroups.com
> Sent: Tuesday, July 30, 2013 10:16:32 AM
> Subject: Re: [foreman-dev] Remote execution design
>
> I understand; the question, though, is whether you are going to write only
> mcollective support, or add one extra backend (some kind of plain SSH).
>
> Building a generic interface is always hard when it's built against only
> one implementation. That was my concern, or rather an idea more than a
> concern.

So originally I was thinking it should be pluggable, but I've since rethought that. The storage will be really complex for multiple backends so I'd prefer agreeing on one system and implementing it. Mcollective is widely accepted and has a lot of useful constructs for people to build upon (i.e. a plugin system) so it seems like the best option. As I said above, though, I'm totally open to having a discussion about which tool we want to use.


> So originally I was thinking it should be pluggable, but I've since
> rethought that. The storage will be really complex for multiple
> backends so I'd prefer agreeing on one system and implementing it.
> Mcollective is widely accepted and has a lot of useful constructs for
> people to build upon (i.e. a plugin system) so it seems like the best
> option. As I said above, though, I'm totally open to having a
> discussion about which tool we want to use.

Ah, that's a misunderstanding on my side. Sure, I get that.

Later,

Lukas “lzap” Zapletal
irc: lzap #theforeman

On Tuesday 30 of July 2013 09:50:27 Sam Kottler wrote:

> I've been thinking a bit more about this, and the user should be given a
> list of hosts that won't have the action run on them before the task runs.
> For example, an administrator applying security fixes across a group of
> machines would want to know if a few of them won't get upgraded because
> they're not on a master that supports remote execution.

Since he can later filter out the hosts that were not upgraded yet, I wouldn't
see this as an issue. However, being sure about which hosts I'll execute a
command on seems like a good idea. Hopefully it won't be too much pain to e.g.
run a command that just adds an SSH key only on servers that support it.

Marek


> From: "Marek Hulan" <mhulan@redhat.com>
> To: foreman-dev@googlegroups.com
> Sent: Tuesday, July 30, 2013 10:53:30 AM
> Subject: Re: [foreman-dev] Remote execution design
>
> Since he can later filter out the hosts that were not upgraded, I wouldn't
> see this as an issue. However, being sure about which hosts I'll execute a
> command on seems like a good idea. Hopefully it won't be too much pain to,
> e.g., run a command that just adds an ssh key only on the servers that
> support it.

There are many cases where you're running an action that is destructive or needs to be highly reliable and you'd want to know before the command gets run that it won't take place on certain machines.




Mcollective already has support for this built in, so Foreman could use it.
For example: upgrade package foo on bar-type servers, but only if those bar
servers are marked out of production and the user has the right privileges,
and only perform the action on 6 nodes at a time.

Jim
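
As a concrete rendering of what Jim describes, mcollective's stock fact
filters and batching flags already express this; the agent, action, and fact
names below are illustrative, and the privilege check would be enforced
server-side by an authorization plugin:

```sh
# Upgrade "foo" only on out-of-production "bar" servers, 6 nodes at a time.
mco rpc package update package=foo \
  --with-fact role=bar \
  --with-fact status=out_of_production \
  --batch 6 --batch-sleep 30
```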


Right, +1 from me.



Marek

