Dropping the "Run puppet" button, with a replacement using the remote execution plugin

Oh, I see, that makes more sense. I didn’t see the other jobs because I don’t
have the plugin installed, but the Run Puppet button is in the interface
regardless. Thanks for the clarification!

… and there are REX jobs to run Salt.

So this is now implemented: a user input can be of value type “search”, so users can specify target hosts in search syntax and use the load_hosts macro (probably only available in the reports scope for now). That would address the need of users who would still use mco/choria because push-based execution is not possible in their infrastructure. Sadly, I no longer have time to finish this effort; if someone is interested in taking over the PR, here it is. I’m closing it for now.

Pending tasks:

  • rebase the dropping PR
  • make the load_hosts macro available in job templates
  • optionally provide a job template with a hosts input that would trigger mco/choria, at least as an example (a rough sketch of such a template follows this list)
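
To illustrate what such an input could look like, here is a minimal sketch; the metadata format mimics Foreman job/report templates, but the input name and the exact keys used here are assumptions, not the final implementation:

<%#
name: Hosts Via Search Input (example)
template_inputs:
- name: hosts            # hypothetical input name
  input_type: user
  value_type: search     # lets the user enter a host search query
  resource_type: Host
  required: true
%>
# load_hosts yields hosts in batches, so flatten before joining
echo "Selected hosts: <%= load_hosts(search: input('hosts')).map { |batch| batch.map(&:name) }.flatten.join(' ') %>"

The user would then fill the hosts input with an ordinary host search query, e.g. os = RedHat, and load_hosts resolves it to the matching hosts.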

I will bring this old topic back to life. I’ve started moving Puppetrun to the REX plugin, which also means removing the puppetrun code from core (https://github.com/theforeman/foreman/pull/7719). The PR is inspired by Marek’s previous closed PR.


@dmatoulek have these two tasks been completed already?

I would be very interested in a job template example for using REX with mco/choria. I’m new to remote_execution and having a hard time understanding how to make the job run mco locally on the smartproxy instead of ssh’ing to each host.

Taking this from the release notes:

Another option is for mcollective / choria users. The best way is to write your own job template that leverages load_hosts macro for choosing hosts and use it as a parameter for mco program, for example: mco rpc puppetd runonce -I <%= load_hosts(search: input('hosts')).join(' ') %> . This template would be executed against a host that has mco and access to the target hosts, instead of being executed at the target hosts directly.

The key here is to define the job template with a hosts input and then use it in the mco command. The template could be as simple as the example above.

The template from the example won’t work as-is, but it’s not hard to make it work. It’s not pretty, but it works:

mco rpc puppetd runonce -I <%= load_hosts(search: input('hosts')).map {|b| b.map { |h| h.name} }.flatten.join(' ') %>

The result is then:

mco rpc puppetd runonce -I a.example.com b.example.com

This job is meant to be run against the host that should trigger mco.
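
For completeness, here is roughly how the whole thing could be packaged as a job template; the metadata block mimics the format of the templates shipped with foreman_remote_execution, and the name, category and description are made up here:

<%#
kind: job_template
name: Puppet Run Once Through mco
job_category: Puppet
provider_type: SSH
template_inputs:
- name: hosts
  description: Search query selecting the hosts mco should trigger
  input_type: user
  required: true
%>
mco rpc puppetd runonce -I <%= load_hosts(search: input('hosts')).map {|b| b.map { |h| h.name} }.flatten.join(' ') %>

You would then execute this job against the mco host and pass a host search query as the hosts input.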

Hope that helps

Ah, ok. In our case, our users only have permission to view and edit the servers in Foreman which they own, and if they make an edit they can then use the ‘Run Puppet’ button on the host page to trigger a Puppet run. This solution would mean that users first have to edit their host, then switch to our mco-trigger-host (which we would then have to give everyone permission to view), run an mco-puppetrun job there, and provide their server name as input. This seems to defeat the purpose of being able to quickly trigger a Puppet run on the host you’re editing.

I guess there would need to be some other type of REX provider to be able to trigger it directly from the host page, like in Feature #24714: Support for non-ssh agent provider - Foreman Remote Execution - Foreman

Thanks for explaining, I think I understand the gap now.

The new “Run Puppet” REX job is configurable and you can change how it works internally. The problem, though, is that the target of such a REX job is always the host you click the button on. That host is also available as the @host variable inside the template, so if there were a way to run the job against a different target host, it could be used to generate the mco command. However, that would still mean your users would need execution permission on that mco host. I don’t see a way around that, so the SSH REX provider won’t help you here.
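
Purely to illustrate that idea (hypothetical, since there is currently no way to retarget the job like this):

<%# hypothetical: would only make sense if the job ran on the mco host while @host still referred to the host whose “Run Puppet” button was clicked %>
mco rpc puppetd runonce -I <%= @host.name %>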

The old puppet run method could be used like this because it didn’t require any view permission on the smart proxy host; in fact, the host running the smart proxy didn’t even have to exist in Foreman. In theory you can limit users so they can only run a specific job on a given host. Would that be an option? Or would granting view_hosts on the mco host reveal too much to these users?

The non-ssh agent provider would most likely be something running on the client machines too. We’d still need to tell the proxy there’s a job for hosts a, b, c, but it would work similarly to the old puppet run, where users don’t need explicit permission on the host running that proxy. I think the design of the pull provider is being put together at the moment; however, I’d guess it will take at least two releases before you could use it.

I’ve heard about the Salt provider, which should already work and could probably be used this way. Perhaps @m-bucher would know more about this.

However, that would still mean your users would need execution permission on that mco host. I don’t see a way around that, so the SSH REX provider won’t help you here.

If you mean execution permissions on the mco host in Foreman, that shouldn’t be a showstopper, but it could be a bit confusing for users as to why they see this unknown server in their server list.

In theory you can limit users so they can only run a specific job on a given host. Would that be an option? Or would granting view_hosts on the mco host reveal too much to these users?

The view_hosts permission shouldn’t be a problem. Can you expand on this way of running the job? I’m not quite following… :slight_smile:

You can create a role with a filter for the “create_template_invocations” permission and uncheck the “unlimited” checkbox. That allows you to specify an additional search for the filter, e.g.

host.name = mco.example.tst and name = "Puppet Run Once Through mco"

Users granted a role with this filter can only execute the given template on the mco host. It would still be a bit confusing why they need to run a job on another host and pass the desired target host as an input to that template, but it should be doable :slight_smile:
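
If you prefer the CLI, something along these lines should set that up; a rough sketch assuming hammer’s role and filter commands, with a made-up role name:

hammer role create --name "mco puppet trigger"
hammer filter create \
  --role "mco puppet trigger" \
  --permissions create_template_invocations \
  --search 'host.name = mco.example.tst and name = "Puppet Run Once Through mco"'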

Thanks! That might be an option, but it probably won’t be used because it’s not very intuitive. I really just want to be able to trigger Puppet runs for a host from its own page, and since we already have the choria infrastructure set up, that would be the ideal way to do it for us. Looks like we’ll have to wait for the non-ssh provider or set up the SSH infrastructure and do it that way instead.

Alternatively, if you’re adventurous, you could put together a choria provider plugin. I’m not familiar with choria at all, but I’d guess that in general the provider plugin would be quite similar to the current salt provider plugin, which is quite small.

I recall that @ananace and @aruzicka spoke about a choria REX provider at cfgmgmtcamp this year because @ananace was interested in such a provider. Perhaps there are more people who are interested in writing such a provider?