Seamless-cockpit and Foreman

Hi all,

Here’s a discussion I was recently having with a coworker over email; I thought it’d be more appropriate to have it here, so we can all chime in.

The goal would be to jump from Foreman to Cockpit on a machine
“seamlessly”, without having to log in again.

There is https://github.com/theforeman/foreman_cockpit already. Would
it make sense to improve this when the ultimate goal is to get Cockpit
integrated with Foreman?

That makes sense I would say. The gif in the README is quite outdated,
but I think in general it’d be tremendously nice to go to a machine and
be able to click on ‘Cockpit’ and automatically get to the UI.

Cool, I’ll look more closely at foreman_cockpit then and trust that we
can somehow reuse that.

Would you be willing to help me set up a devel environment for
Foreman and/or Satellite? That might save me a week or two of my
time… :slight_smile:

It should be a matter of cloning https://github.com/theforeman/forklift
and then running ‘vagrant up centos7-devel’. You definitely want Foreman,
Katello, and Pulp to ensure maximum compatibility.

Alright, thanks!

2 things:

But Foreman can still execute things remotely, right? We need to run
cockpit-bridge on the target host and communicate interactively with its
stdin/stdout. Is that feasible via a capsule? Or more concretely, is
there a way to get a shell on the target host from the master without
having to type in any credentials? If it involves some complicated SSH
tunneling, that’s fine.

  • For SSH authentication, the Remote Execution plugin automatically sets
    up SSH keys on all hosts and proxies, and newly provisioned hosts will
    already have them. I think it would make sense to use them with Cockpit
    if possible. Ansible also reuses them.

Yes, that would be the idea.

I’ll keep on answering below


Yes. We can use the remote execution plugin (https://www.theforeman.org/plugins/foreman_remote_execution/1.3/index.html) to install/run cockpit-bridge. This runs through smart-proxies (capsules) just fine.

Just one thing to keep in mind is that the SSH connection is not interactive, it’s not a prompt, but you should be able to run any scripts.

Btw. we’re just starting to consider introducing a job for installing things like https://github.com/weldr/lorax, perhaps running ansible-playbook via REX. @aruzicka might be interested in this topic.

Can you elaborate please?


(I wrote the first messages that Daniel has quoted here.)

Just one thing to keep in mind is that the SSH connection is not interactive, it’s not a prompt, but you should be able to run any scripts.

That is a problem. Cockpit needs a long-running, interactive session on the target host, i.e., a shell with a prompt. Not getting one would be a show-stopper, unfortunately.

But let’s see, impossible is nothing, this hopefully just means that it takes some extra work in the remote plugin. :slight_smile:
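To make concrete what “long-running, interactive session” means here: Cockpit keeps one child process alive for the whole session and exchanges many messages over its stdin/stdout, which a one-shot “run this script” job cannot do. A minimal sketch, using `cat` as a local stand-in for `ssh user@host cockpit-bridge`:

```python
# Sketch of the interactive channel Cockpit needs: a long-lived child
# process we write to and read from repeatedly for as long as the
# session lasts. `cat` stands in for running cockpit-bridge over SSH.
import subprocess

proc = subprocess.Popen(
    ["cat"],                      # stand-in for: ssh user@host cockpit-bridge
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)

# Exchange several messages over the same channel; a non-interactive
# job execution would have terminated after the first round trip.
for msg in [b"frame-1\n", b"frame-2\n"]:
    proc.stdin.write(msg)
    proc.stdin.flush()
    assert proc.stdout.readline() == msg

proc.stdin.close()
proc.wait()
```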

The idea is to expose a REX job template that gets a service such as lorax-composer up and running, so that it can be used with Katello to build images and upload them back to Katello for use in provisioning, following the work we did in the past on oVirt hypervisor provisioning.

I expect more to come in the following weeks.

But Lorax creates bootable ISO images; it’s a tool used to generate Fedora installation or LiveCD ISO files. I don’t see how this fits Katello/Foreman workflows.

Also, why use SSH/CLI when there is the lorax-composer daemon API? Katello could use that directly.

It fits when you want to build your own images that should be then distributed by Katello and used in Foreman provisioning.

Sorry for the confusion: I had a different usage of REX in mind than the original thought in this post, which probably caused the misunderstanding. This is not about using SSH to talk to the service, but just about getting the service up and running. The integration in the first phase will probably be more at the documentation level, perhaps evolving into some UI integration later.

@aruzicka @iNecas is it currently possible to get an interactive SSH session with REX? Basically, what we would want is to send STDIN to a job execution that lasts as long as the session. Or is this more of a DIY use case?

It’s not possible today, and it seems like something that goes against the REX use case itself, as this is more like keeping a tunnel open. OTOH I can imagine we could expose this functionality, not for REX purposes itself but for the Cockpit integration case, so that we don’t have to keep the SSH part in various places and can leverage the proxy infrastructure.

Help me understand how this would work with REX. My understanding is this:

  • User submits a REX Job via the UI. A Record is put in the DB.
  • A worker on a smart proxy picks up the job from the DB and connects to the target machine.

Is this a correct assumption? If so, the browser connects to the Foreman Server, and the capsule connects to the host.

I assumed that with Cockpit the browser needs to connect to the host, or that we set up some bridging through the smart proxy. Having a connector run on the host is good, but that does not give us the bridge we need via the capsule.

Daniel, can you elaborate more on what kind of integration this is all about? I still don’t understand the whole context. Is this an integration of Cockpit’s remote shell feature (the nifty xterm.js they have for remote access via HTTPS)?

Here is a demo of the first baby steps: https://youtu.be/mGJB64jxsL0

Next I want to dig out the SSH parameters from Foreman that a remote execution would use for the given host, and pretend they will work from the Foreman master. Dealing with smart proxies and segmented networks comes later.


Marius, it looks great. Thank you for sharing with us the progress.

What is your plan in regard to authentication on the cockpit side? Using a UNIX PAM account is probably not the best way; stealing the credentials should not give an attacker full root access. Perhaps a separate PAM account could be used and stored in the Foreman inventory as parameters.

What is your plan in regard to authentication on the cockpit side?

The cockpit web server (cockpit-ws) can run as an unprivileged user or even in a container. It will use SSH to get access to the target host and run cockpit-bridge there.

The credentials to use with SSH will come from the Foreman API: user, password, private key, passphrase for the private key, etc. Foreman will only hand out those credentials to clients that are authorized appropriately.

In the demo, cockpit-ws uses the _session_id cookie, which fortunately also allows access to the API.

My idea right now is that the actual user, password, and private key that cockpit-ws gets from Foreman are the ones that remote execution would use by default. People will need to set up their hosts for remote execution in order to get a seamless Cockpit.
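Roughly, the handoff would be cockpit-ws asking the Foreman API for a host’s SSH credentials, authenticating with the browser’s _session_id cookie. A small sketch; the endpoint path and field names here are invented for illustration, not a real Foreman API:

```python
# Hypothetical sketch: build the API request cockpit-ws could use to
# fetch remote-execution SSH credentials for a host from Foreman,
# reusing the browser's _session_id cookie for authentication.
# The /ssh_credentials endpoint is an assumption, not an existing API.
import urllib.request


def credentials_request(foreman_url, host_name, session_id):
    """Build the (hypothetical) request for a host's SSH credentials."""
    return urllib.request.Request(
        f"{foreman_url}/api/v2/hosts/{host_name}/ssh_credentials",
        headers={"Cookie": f"_session_id={session_id}"},
    )


req = credentials_request("https://foreman.example.com", "host1", "abc123")
print(req.full_url)
```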


My idea right now is that the actual user, password, and private key that cockpit-ws gets from Foreman are the ones that remote execution would use by default. People will need to set up their hosts for remote execution in order to get a seamless Cockpit.

Here is a demo that shows this: https://www.youtube.com/watch?v=IM374KBtV04

I like seeing this. A couple of questions:

  1. Can this work through a smart proxy? It sounds like you said no.
  2. What would be the steps to get it to be per user?

–bk

The code I have right now doesn’t work through a proxy. But we definitely need to involve the smart proxy somehow, for two reasons (afaics): the SSH private key might only be accessible to the smart proxy and not to the Foreman master, and the target host might only be reachable from the smart proxy and not from the Foreman master.

Can you elaborate on what you mean by “per user”?

My idea is that there is one new API entry point (in the foreman_remote_execution plugin) which gives out the ssh credentials. We can add any kind of authorization rules to that API so the credentials are only given out to certain users, for example.

We can also completely separate the Cockpit ssh credentials from the Remote Execution Credentials, but that would have the drawback that people would not be able to reuse their existing Remote Execution setup. They would need to distribute another set of public keys, for example.

The “Foreman Way” is probably to make this all possible and configurable, right?
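To illustrate the authorization idea on that new entry point: the endpoint would hand out SSH credentials only to users who pass some configurable rule. The permission name and data shapes below are invented for illustration; the real implementation would live in the foreman_remote_execution plugin (in Ruby):

```python
# Hypothetical sketch of an authorization rule on the credentials
# endpoint: only users holding a (made-up) "cockpit_ssh_access"
# permission get the host's SSH credentials.
def ssh_credentials(user, host, credential_store):
    """Return SSH credentials for `host`, or refuse the caller."""
    if "cockpit_ssh_access" not in user["permissions"]:
        raise PermissionError("not authorized for SSH credentials")
    return credential_store[host]


store = {"host1": {"user": "root", "key": "PRIVATE KEY"}}
admin = {"permissions": {"cockpit_ssh_access"}}
print(ssh_credentials(admin, "host1", store))
```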


We can also completely separate the Cockpit ssh credentials from the Remote Execution Credentials

I like reusing the REX credentials. Besides working out of the box, it’s a single key pair, which is easier to maintain.

Note that each REX proxy has a private+public key pair; the public keys are available via the API so that Foreman can deploy them during provisioning. But Foreman doesn’t have access to the private key; that’s stored on the smart proxy. So perhaps the “through smart proxy” connection should be figured out first.

Yes, I’m starting to think that, too. I’ve already run into limitations of our cockpit-ssh, and before starting to fix those, it would be good to have the whole picture figured out. A couple of options off the top of my head:

  • the smart proxy can reflect the sshd port of the target host onto the smart proxy host, and cockpit-ws would connect to that.

  • the smart proxy remote execution plugin can get a new “interactive job” API call that would upgrade the GET or POST request to a WebSocket and cockpit-ws would talk to that. (Foreman could reuse that to run an interactive shell on the target host. I think there were some ideas in that direction.)

  • cockpit-ws could be co-located with the smart proxy and the reverse proxy in front of Foreman could redirect to that.
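The first option above is essentially SSH port forwarding through the proxy. A sketch of the command line cockpit-ws (or a helper) might construct; all host names and the local port choice are hypothetical:

```python
# Sketch of option 1: forward the target host's sshd port through the
# smart proxy, so cockpit-ws can connect to localhost:local_port.
# Host names and the port are placeholders, not real infrastructure.
def forward_argv(proxy, target, local_port=2222):
    """Build an ssh argv that reflects target:22 via the proxy host."""
    return [
        "ssh", "-N",                        # no remote command, forward only
        "-L", f"{local_port}:{target}:22",  # local_port -> target:22
        proxy,
    ]


print(forward_argv("proxy.example.com", "host.example.com"))
```

cockpit-ws would then be pointed at `localhost:2222` while the forward is up, which keeps all host reachability and private keys on the smart proxy side.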