The main technical blocker to getting an initial deployment going is certificates. After that I don’t know how many issues I’ll hit. I expect Candlepin will happily chug along on its own, but I expect there are many assumptions around Pulp. Tools like foreman_maintain and the installer will probably also need a lot of love. Patches like https://github.com/Katello/katello/commit/c1f2c0bb5b89128fc3a6c1e8927686e7e5c5f8b3 suggest there is still very strong coupling between the server hostname and the service name.
@TimoGoebel has done a lot of experimentation around this but I don’t know how far he’s actually gotten.
My “plan” is to finish the puppet module patches so we can deploy standalone services. Then the forklift PR should be finished which will give us an easy deployment environment. When that’s complete, it will need testing. Ideally automated with actual clients.
I don’t know all the odd corners that could hit issues, but a proper integration/regression test suite is very useful anyway. It can be used in the regular all-in-one setup, but also in a split setup or a containerized setup.
I know Timo’s environment and it works fine. It consists of multiple Foreman hosts, one Pulp server, one database, one Qpid server, one Candlepin server and a host called Foreman CA that is used to pre-create the certificates. All of this is set up using the Puppet modules, but I’m sure not all modifications have landed upstream (most of them should have, though).
I was not involved in the initial setup. While moving it into production we found small and mostly trivial problems, like Katello expecting a webapp on a specific server, which then had to be redirected to another. A bigger problem is updating such an environment because of the additional complexity.
Yep, we’re currently revisiting this as we want to update our instance. One part will be revisiting the puppet module; there are still some very small changes left.
In addition, we’re writing an ansible playbook to distribute the certificates from the ca host to the other machines. That part has been done manually in the past.
My limited testing using the forklift PR with the suggested changes works, though it currently requires you to manually copy the certs tar generated on the Candlepin node to the Foreman app node; writing automation for that should be fairly easy.
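Until that automation exists, the manual step can be sketched roughly like this. The hostnames and paths below are made up, the inter-host copy is only shown in a comment, and the tar handling is simulated locally:

```shell
# On the real systems you'd copy the tar between nodes, e.g.:
#   scp /root/foreman.example.com-certs.tar foreman.example.com:/root/
# The consuming side then only needs to unpack it. Simulated locally:
mkdir -p /tmp/certs-demo/source /tmp/certs-demo/target

# Stand-in for the certs the Candlepin node's installer run generated
echo "dummy cert" > /tmp/certs-demo/source/foreman.example.com.crt

# "Generate" the certs tar, then extract it where the app node expects it
tar -C /tmp/certs-demo/source -cf /tmp/certs-demo/certs.tar .
tar -C /tmp/certs-demo/target -xf /tmp/certs-demo/certs.tar

ls /tmp/certs-demo/target/foreman.example.com.crt
```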
IMO, with all the open PRs, I think we’re close to being able to add it to the release notes and getting users to try it out. If only there was more time to dedicate towards it.
I’d like to take that approach a step further and modify the certs module to do the certs tar extraction itself. Right now we do it in the proxy content module. The current PR adds tar extraction in the katello module, but that creates overlapping functionality in all-in-one installations where you’d end up with two parameters doing the same thing. --certs-tar-file would be a unified way for all installations. Probably a good Friday project.
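To illustrate what “unified” would mean: every scenario would accept the same flag instead of a per-module one. This is only a sketch; the scenario names and paths are assumptions, and --certs-tar-file is the proposed parameter, not something that exists today:

```shell
# Hypothetical: the same flag in every scenario
foreman-installer --scenario katello \
  --certs-tar-file /root/foreman.example.com-certs.tar

foreman-installer --scenario foreman-proxy-content \
  --certs-tar-file /root/proxy.example.com-certs.tar
```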
An alternative approach I’ve been thinking about is folding the foreman-proxy-content module into the katello module. There is some overlap and I believe we can provide a better user experience. Biggest downside is that we’re moving things again (capsule -> foreman-proxy-content) and in the proxy scenario you end up with lots of useless parameters though we could make the module wrap the katello module as a transition.
Sorry to bump this. I couldn’t keep up with the technicalities of what was thrown up by my initial question. Is there anything close to testing for this? It would be phenomenal to be able to split the full package out - for scaling and/or failover.
A more general blocker is that some acceptance tests are red because of dependencies. The beaker-puppet install helper only works if all dependencies allow the latest version so all modules need to allow stdlib 5.0.0. This means I need to do releases of our modules. Given we’re getting close to the 1.20 branching I want to use it as an excuse for major version bumps which are always more invasive.
The first simple and fully compatible PR was just merged.
My next step is to change the certs module to always allow using a tarball (Refactor #24947: Move tar file parameter to puppet-certs module - Katello - Foreman). That should be a standalone change. We can probably bikeshed over the parameter naming so please have a look there. For actual testing I need to rebuild the RPMs with the latest changes. I don’t remember if I actually got the tests to pass. My testing setup for this is at home and it looks like I didn’t git push it. This is at a point where testing can be done and I’ll write up instructions.
When that’s merged, the next step is to finish https://github.com/theforeman/puppet-certs/pull/210. I just updated it to show the general direction I want it to go to, but I’ll need to do some actual tests to finish it. It can probably even be done in parallel to the above.
With those cert changes I think we can generate certs for any type of server. We’ll still need some “manual” transfer of the tarballs.
This is my current planning and I’d welcome help. Do note that I’m balancing quite a few issues and will need to set priorities. Given the 1.20 branching is coming up I don’t expect everything to make it in time, but we’ll see where it lands.
Note that it uses a build from http://koji.katello.org/koji/taskinfo?taskID=135611. Our current (ansible) koji role in forklift doesn’t set (yum) priorities, so after the next nightly that build will be older than the nightly packages. Be sure the right RPM version is installed or you’re not testing anything.
You have probably already thought about this, but an easy fix could be an ansible playbook to automate the job. That’s basically what we use to move certificates from our katello ca server (yeah, there is a dedicated one) to the other places.
I think it’s fine if this is a manual step for now. If we want to go crazy, we’d even make this a playbook that sets up the hosts via Foreman and then copies the certs on the nodes.
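If we did go down that road, a minimal playbook could look roughly like this. It is only a sketch of the fetch-then-copy pattern: the group names, hostnames, and tar paths are all assumptions, not what anyone’s environment actually uses:

```yaml
# Hypothetical playbook: pull each node's certs tar from the CA host to the
# control node, then push it to the matching service node.
- hosts: katello_ca
  tasks:
    - name: Fetch the generated certs tars to the control node
      fetch:
        src: "/root/{{ item }}-certs.tar"
        dest: "certs/{{ item }}-certs.tar"
        flat: true
      loop: "{{ groups['katello_services'] }}"

- hosts: katello_services
  tasks:
    - name: Copy this node's certs tar into place
      copy:
        src: "certs/{{ inventory_hostname }}-certs.tar"
        dest: /root/certs.tar
```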
So the workflow (from a user’s perspective) would be: set up a standalone Foreman, run an ansible playbook, get new hosts with a split katello deployment. This imposes some new challenges like data migration, but it could be as simple as restoring a backup.