Infrastructure roles

Although I haven’t mentioned it explicitly in the RFC, in the POC PRs I store the UUID in the facet for both Foreman and the Smart Proxy. This allows us to break the association if, for example, the proxy’s UUID changes. In Foreman’s case it allows us to distinguish whether a host is this Foreman, another Foreman, or not a Foreman at all.

I’m not sure relying on something outside of our control is a good idea, especially since you can run the Smart Proxy on various non-systemd platforms, such as Windows.

Originally, Foreman’s instance ID was generated on first start and stored in the DB. In this proposal and the POC PRs I tried to make as few changes as possible, which leaves the possibility of the fact and the setting getting out of sync. If we decide that matching using facts is the way to go, then it would make sense to make the setting more static.

Currently it does not. Should it? My thinking was that once a machine reports itself as Foreman/Smart Proxy, it keeps being Foreman/Smart Proxy until it is reprovisioned.

As I mentioned earlier, FQDN is not a great identifier for local networks and example.com-like domains. I’d prefer some other identifier; the machine-id sounds cool. I only wonder whether we can rely on it in the future, e.g. inside a container. If yes, then plus one; if not, a randomly generated UUID sounds more appealing to me.

I know that this was required to be changeable earlier. Therefore the DB was made the source of truth and we let users modify it easily. I’d be OK with this getting out of sync and letting the user manually change the relationship, since moving Foreman between hosts seems quite rare, but I know it can happen, for example during an upgrade of the underlying OS.

Good question; I’d like to echo it. My assumption is that we’d make no changes for a missing fact, but could we clear the association when the fact key exists with a nil value?

:+1: A UUID is also what I think would be the correct implementation. It allows for the use case where there is a Foreman instance, but not the one currently in use (like managing a Foreman instance in a lab under the management of another Foreman).

Good point.

Maybe not automatically, but if it was linked we should provide a way to break the relationship. This allows correcting mistakenly linked hosts, or cleaning up after a migration, for example when a Foreman is cloned for a lab but then starts to live its own life; then it should no longer be linked. Being able to manage this via the UI/API is probably sufficient.

If two hosts check in via HTTPS with the same X.509 client certificate (the same CN), Foreman will only keep a single inventory record, ultimately leading to those hosts overwriting each other’s facts (and thus the UUID) over and over again.

What I am not comfortable with is the ability of any host with a valid certificate to upload a UUID and pretend it’s a Smart Proxy. I am talking about security. I am assuming that any UUID checked in via facts would “upgrade” a host to a Smart Proxy. If I don’t understand your proposal correctly, then fill me in.

My thinking is that if there was information in the cert itself saying “this is a Smart Proxy”, then this could be verified on fact upload. There must be a human involved in the process confirming the association; in my idea this happens before a host can check in for the first time.

Now that I am thinking about it, I see that a human could confirm which host is the correct one in the Foreman UI in a more comfortable way. That would work too, as long as it is mandatory.

I believe I mostly answered it here.

To rephrase: it only matches a host against an existing Smart Proxy and creates a link if the UUIDs match. It does not create a new proxy, it doesn’t grant the host any new privileges or capabilities, and it doesn’t alter the proxy in any other way. If a host gets linked to a Smart Proxy, then users might need additional permissions to interact with that host in Foreman. What is the attack vector here?
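
To illustrate that matching step, here is a minimal sketch in Python; the names are made up for the example and this is not the actual POC code:

```python
# Minimal sketch of the matching described above (illustrative names only).
# It links a host to an already existing proxy when the reported UUID matches
# a stored one; it never creates or modifies a proxy.

def link_host_to_proxy(host, reported_uuid, proxies):
    """Return the proxy the host got linked to, or None."""
    if not reported_uuid:
        return None  # no fact reported: leave any existing association alone
    for proxy in proxies:
        if proxy.uuid == reported_uuid:
            host.smart_proxy = proxy  # only the link is created, nothing else
            return proxy
    return None  # unknown UUID: do not create a new proxy
```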

I can think of the following attack vector:

  1. create a normal host in Foreman (that’s what most users are allowed to do)
  2. install a smart proxy on the host, but don’t really run it
  3. inject the instance_id fact of an existing proxy and thus let Foreman think that userhost1.example.com is proxy1.example.com
  4. wait
  5. the admin updates proxy1.example.com using the proposed “upgrade proxy” playbook
  6. as proxy1.example.com is really userhost1.example.com, THAT gets upgraded (and since it has a proxy installed, that works)
  7. the user now owns all credentials (OAuth keys, certs, and so on) of proxy1.example.com, as the upgrade process made sure those were refreshed during the upgrade
  8. do whatever you want with the permissions proxy1 has

This is a rather long-running attack, but I think we’ve all seen that those are the ones that are most interesting :wink:

PS: I of course did not verify that the upgrade playbooks (if they exist) refresh any credentials or anything, but just because they don’t today doesn’t mean someone won’t add that tomorrow, not knowing that the proxy can be impersonated (it really shouldn’t be).

This is indeed possible. I always assumed that even when using those shortcuts, the user would still go through the remote execution form, where they could spot that they’re trying to update proxy1.example.com on suspicious-host.somewhere.else as a last line of defense.

Maybe, or maybe they’ll do that once, twice, see how smooth it is, and just write an Ansible playbook “schedule a proxy upgrade on all available proxies” for the third time.

Security should never rely on the user catching things (if possible).

In that case it’s safe; I got the wrong impression. I should have read more carefully.

I think what @evgeni is talking about is a generic chicken-and-egg problem. We want some unknown machine to tell us it’s actually known, but we can’t trust unknown machines. This is very similar to the proxy registration feature of the installer: an unknown machine tells Foreman, “hey, I’m here and I’m your new proxy”. There we rely only on knowledge of the OAuth credentials. The OAuth credentials are stored as settings in Foreman, the user passes them to the installer, and we hope only people who install real proxies know them.

I think that should be the basis for how we authorize instance UUID information in facts: we should only trust these facts if the fact author knows the OAuth credentials. What would be the best way to combine OAuth with the suggested facts approach? I looked at Puppet trusted facts, but that does not seem to help, at least not now with our current CA infrastructure. I’m sure there are better options.


If we can rely on the OAuth secret being available, then we could add an additional fact which would act as a signature of the UUID fact, using the OAuth secret as the key. On Foreman’s side we would then verify the signature (since we know both the UUID fact and the secret) and establish the link between the proxy and a host only if the signature is valid.

This would still allow us to use facts as the delivery mechanism, allow us to do it by hand if needed, and most importantly add a certain level of trust to the established links between hosts and proxies.
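
As a rough sketch of how such a signature fact could work (the fact names and the choice of HMAC-SHA256 are my assumptions, not anything that exists in the POC):

```python
import hashlib
import hmac

# Host side (e.g. the installer): sign the instance UUID with the OAuth
# consumer secret and publish the result as an additional fact, for example
# foreman_proxy_instance_signature next to foreman_proxy_instance_uuid.
def sign_uuid(instance_uuid: str, oauth_secret: str) -> str:
    return hmac.new(oauth_secret.encode(), instance_uuid.encode(),
                    hashlib.sha256).hexdigest()

# Foreman side: recompute the signature from the reported UUID fact and the
# secret stored in settings, and only establish the host/proxy link on a match.
def signature_valid(instance_uuid: str, reported_signature: str,
                    oauth_secret: str) -> bool:
    expected = sign_uuid(instance_uuid, oauth_secret)
    return hmac.compare_digest(expected, reported_signature)
```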

The OAuth credentials are only passed into the installer. Technically they are saved to the answers file, but they don’t have to be; the proxy never has these credentials. Are you suggesting extending the registration protocol to also identify the host it runs on, and implementing that in the installer?

Now that we have the ability to generate JWT tokens scoped to a specific action, perhaps we could actually get rid of the OAuth tokens and use a JWT instead?
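
For illustration, a scope-limited token could look roughly like this; this is plain PyJWT with a made-up scope claim, not Foreman’s actual registration JWT implementation:

```python
import datetime
import jwt  # PyJWT

SECRET = "per-installation signing key"  # placeholder

# Issue a short-lived token that is only good for one action.
token = jwt.encode(
    {
        "scope": "link_smart_proxy",  # hypothetical scope name
        "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1),
    },
    SECRET,
    algorithm="HS256",
)

# Verification rejects expired tokens; the consumer also checks the scope.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
assert claims["scope"] == "link_smart_proxy"
```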

Does it matter? The signature would be generated by the installer when it runs and deploys the custom facts. Once that is done, the secret should not be needed anymore.

Not at all. All I’m suggesting is extending my original fact-based proposal with a signature fact in addition to each originally proposed UUID fact.

I like this. It means the initial fact upload performed by the installer has the information signed by the same mechanism. I assume the custom fact lives only for the sake of the installation, and if someone wants to set up continuous updating of the mapping, they need to create such a signature as a custom fact.

Even if it lived longer, I assume the custom fact deployed by the installer is no worse than the answers file. It does not contain the credential, just the signature. If it can only be read by root, that’s good even in that scenario.

Long term, I’m all for replacing OAuth with another type of credential. But that increases the scope of this feature, which we wanted to get into 2.5, because it changes what we already have and rely on in the installer registration feature.

Thinking out loud here: how will we ensure the facts are uploaded? There are plans to make Puppet optional and off by default; how does it work then? On the other hand, it would also mean that the host is not present in Foreman at all, so the relation doesn’t have to be there.

I think there’s no plan to stop using Puppet in the installer, so this should continue working. And the facts mechanism is easily reusable by any other fact source, be it Chef, Ansible, Salt, or subscription-manager, in case one wants to use this without Puppet.
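
As an illustration of what such an out-of-band fact upload could look like (the endpoint and payload follow my understanding of Foreman’s facts upload API, and the fact names and certificate paths are placeholders, so double-check against the API docs):

```python
import requests

# Hypothetical fact push without a Puppet server, e.g. from a cron job
# or the installer itself.
payload = {
    "name": "proxy.example.com",
    "facts": {
        # illustrative fact names, matching the signature idea above
        "foreman_proxy_instance_uuid": "REPLACE-WITH-INSTANCE-UUID",
        "foreman_proxy_instance_signature": "REPLACE-WITH-HMAC-SIGNATURE",
    },
}

resp = requests.post(
    "https://foreman.example.com/api/v2/hosts/facts",
    json=payload,
    cert=("/etc/foreman-proxy/ssl_cert.pem",  # client certificate auth,
          "/etc/foreman-proxy/ssl_key.pem"),  # as a proxy would normally use
    verify="/etc/foreman-proxy/ssl_ca.pem",
)
resp.raise_for_status()
```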

Correct, but the installer does not send facts. It runs Puppet in an agentless way, and we take steps to isolate it from any config so it doesn’t attempt to connect to any server. Today the installer defaults to enabling Puppet on the machine. A Puppet check-in results in the Puppetserver sending facts to Foreman, which ends up creating a database entry. If you disable the Puppetserver, there is no foreman.example.com host in Foreman (unless you use other means).

Roughly the flow is:

  • Install Foreman
  • Install Foreman Proxy
  • When both are done, register the Foreman Proxy (if desired)
  • If registration is enabled, ensure Puppet is started after the Proxy has been registered

This piece of code takes care of the last bit.

Only after that you can see some host entry show up.

You can verify this if you follow the steps documented in Defaulting Puppet to off in the Katello Scenario.

Ah, I was unaware that the Puppetserver needs to be running for this; I thought we’d call the ENC script after the proxy is registered. Today the proxy needs to have the Puppet feature enabled for authorization, but we’d allow the alternative authorization by the facts signature. Is it a hard change to use the script directly with an isolated Puppet agent run? Or could we even use the facts gathered during the installer run?

Sounds like a very good idea. Thanks.

I guess this would also mean that use cases like this are possible then: Run REX Job directly on Smart Proxy

Run a REX job on a Smart Proxy, even if the host (which is the Smart Proxy itself) is not in the same org/location as the currently logged-in user.