I have already searched for answers and hints, but I may have been searching with the wrong keywords.
Is it possible to manage a client running Fedora Silverblue through Foreman?
I found some old posts from 2019 that mention I need to enable it with foreman-installer and the parameter --ostree, but this is not mentioned in the documentation for Foreman or Katello.
What do I need to do to manage a Fedora Silverblue client through Foreman?
At first glance it looks like, after adding the foreman-installer parameter --foreman-proxy-content-enable-ostree=true, you can not only add ostree repos to Katello but also put lifecycles on top of them, i.e. Content Views and Lifecycle Environments work with them.
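For reference, enabling it boils down to re-running the installer with that flag on the Foreman server:

```bash
# Run on the Foreman/Katello server; foreman-installer can safely be re-run
foreman-installer --foreman-proxy-content-enable-ostree=true
```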
So yes, you can at least clone Fedora's ostree repo, with one catch: you will need to provide a direct endpoint instead of using the mirrorlist (as configured by default). I got this working by going to https://ostree.fedoraproject.org/mirrorlist and taking the mirror it gave me.
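Roughly like this:

```bash
# Ask the mirrorlist for concrete endpoints and pick one of the returned URLs
curl -s https://ostree.fedoraproject.org/mirrorlist
# Use one of those base URLs as the upstream URL of the ostree repository
# you create in Katello, instead of the mirrorlist URL itself.
```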
If I understand this correctly, you need to control the normal YUM repos as well, since they are used when you run rpm-ostree install <package-name> for layered packages; those can be hooked into the system as usual via subscription-manager.
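On the client that could look roughly like this (org and activation key are placeholders, and I'm assuming the client already trusts your Katello CA):

```bash
# Register the client so the normal YUM repos come from Katello via rhsm
subscription-manager register --org="<your-org>" --activationkey="<your-key>"
# Layered packages are then installed on top of the ostree image
rpm-ostree install <package-name>
```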
Expect very long cloning times for the ostree repos though; it is taking quite a while for me right now.
The second part: managing clients
Subscription Manager would theoretically have a function to also manage ostree repos, but as far as I know Katello does not provide this functionality. (I am currently testing whether it could still be used.)
But the approach better known to work right now would be to use configuration management for all of the repo handling and, if needed, for applying updates.
Since I expect your clients to be workstations, they won't be reachable by the Foreman server all the time, so it would make sense to use either Puppet, Salt, or bare pull-mode job invocations (with Yggdrasil); the latter might even get Ansible working on the clients if Ansible is installed there (not sure, never tried).
Then you just make sure the config files and repo files point in the right direction (e.g. /etc/ostree/remotes.d/fedora.conf, which contains the ostree remote, and the rpm-ostreed service config to make it update automatically). One downside here, of course: you won't be able to use authentication for that, so every client needs to be able to read this repo anonymously.
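A minimal sketch of the two managed files, assuming a Katello-published content URL (the path below is a placeholder; take whatever your published ostree repo actually resolves to, and only disable GPG verification if you don't re-sign commits):

```bash
# /etc/ostree/remotes.d/fedora.conf - point the client at the Katello-hosted repo
cat > /etc/ostree/remotes.d/fedora.conf <<'EOF'
[remote "fedora"]
url=https://katello.example.com/pulp/content/<org>/<lce>/<cv>/<repo-path>/
gpg-verify=false
EOF

# /etc/rpm-ostreed.conf - let rpm-ostreed stage updates automatically
cat > /etc/rpm-ostreed.conf <<'EOF'
[Daemon]
AutomaticUpdatePolicy=stage
EOF
rpm-ostree reload
systemctl enable --now rpm-ostreed-automatic.timer
```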
The YUM repos will then be controlled normally via rhsm and its redhat.repo file.
Another point: dnf can't be used, rpm-ostree has to be used instead, so the existing job-invocation templates are not ready for that; you could duplicate and adapt them to this use case if you want to be able to run package operations via Katello.
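For illustration, the rough command mapping you would bake into the duplicated templates (most changes only take effect after a reboot):

```bash
rpm-ostree install <package-name>     # instead of: dnf install <package-name>
rpm-ostree uninstall <package-name>   # instead of: dnf remove <package-name>
rpm-ostree upgrade                    # instead of: dnf upgrade
rpm-ostree status                     # show current and pending deployments
systemctl reboot                      # layered changes apply on the next boot
```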
To be continued when I have a client in the test system.
The other thing to think about: ostree-based Fedora encourages the user to use toolbox, containers, or Flatpaks for most things, and controlling that might be another long rabbit hole.
You can control container remotes for Podman (and Docker) similarly to the ostree remote; it is just different files to manage via configuration management.
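For Podman that would be a drop-in under /etc/containers/registries.conf.d/, for example (hostname and file name are placeholders):

```bash
# /etc/containers/registries.conf.d/katello.conf
cat > /etc/containers/registries.conf.d/katello.conf <<'EOF'
unqualified-search-registries = ["katello.example.com"]

[[registry]]
location = "katello.example.com"
EOF
```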
Getting all of the container repos you use into Katello might be a lot of work though, as you cannot clone the whole of Quay.io or Docker Hub, only individual organizations.
For Flatpak there is now the option to host individual OCI-based Flatpak repos in Katello, but the bulk of Flatpaks is not distributed that way (the Fedora remote is, though), and again you cannot clone the whole of Flathub or the whole Fedora remote.
As far as I can see, the remotes should be controllable similarly to the above, via the /var/lib/flatpak/repo/config file.
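Alternatively to editing that file directly, the flatpak CLI can also be driven from configuration management; a small sketch (the URL is a placeholder):

```bash
# Point the system-wide fedora remote at your own mirror and verify
flatpak remote-modify --url=https://mirror.example.com/flatpak/repo fedora
flatpak remotes -d
```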
In both cases that might take a lot of space, as there is no caching-only option for these.
I'm currently at 26h of sync time and still going; it looks like the Fedora OSTree endpoint houses all OSTree builds since version 27, and that has to be a lot of data. (Looking into getting an include/exclude refs property.)
The sync timed out after two days, so yes, there is a real need for the include filter; that might be available in a future version, but not right now.
Is there a way to tell Foreman the depth for the mirror, like you can with ostree --repo=repo pull --depth 1 --mirror coreos:fedora/x86_64/coreos/stable?
At the moment I've added fedora/x86_64/coreos/stable to the Include Refs, but the sync takes a day and then stops with a 502 error because there is a missing link/ref.
There is one in the underlying application (pulp_ostree), but it’s not exposed to Katello.
But I also don't understand how this would help, because if you are using include: fedora/x86_64/coreos/stable, you have already limited it to a single head. As far as I understand it, the depth option is there if you want to, e.g., sync all heads in fedora/40/x86_64 while excluding the testing and updates head directories, by setting the depth to 1.
Never mind, I just understood what depth does; it's for the git-like history depth.
Well, yes, it would make sense to get that in there as well.
To be honest, I would need to see how much impact that has, because it could well be that Fedora is just that large and there isn't really much history.
Furthermore, doesn't it need the history to update? (As far as I understand it, an update in ostree works like a git diff/apply.)
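One way to check locally how much history a ref actually carries (a rough sketch; the repo path and remote name are just examples, the URL is the one from the default fedora remote):

```bash
# Shallow-mirror a single ref and inspect its commit history
ostree --repo=repo init --mode=archive
ostree --repo=repo remote add --no-gpg-verify fedora https://ostree.fedoraproject.org
ostree --repo=repo pull --depth 1 --mirror fedora:fedora/x86_64/coreos/stable
ostree --repo=repo log fedora/x86_64/coreos/stable
```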
If pulp-ostree accepts an option for depth, it would likely be relatively easy to add to Katello. Easy, but time-consuming enough that we would have to fit it into our priorities.
With that said, if there’s any interest in attempting to add the feature to Katello yourself, we’d be happy to give pointers. It would likely require the following:
1. Adding backend support for updating the depth field on the pulp-ostree remote
2. Adding API support for the new param
3. Adding a text field in the repo create + update UI (optional; API support only is okay)
1-3 are things that are very easily copy/pasteable using examples in the Katello code.
Alternatively you could just set the depth on the remote yourself, but there’s a chance that Katello may overwrite the changes.
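Something along these lines on the Katello server, assuming the standard client cert paths for talking to Pulp (adjust if yours differ):

```bash
# Find the ostree remote in Pulp and patch its depth directly
CERT=/etc/pki/katello/certs/pulp-client.crt
KEY=/etc/pki/katello/private/pulp-client.key
curl -s --cert "$CERT" --key "$KEY" \
  "https://$(hostname -f)/pulp/api/v3/remotes/ostree/ostree/?name__contains=fedora"
# Take the pulp_href from the result, then:
curl -s --cert "$CERT" --key "$KEY" -X PATCH \
  -H "Content-Type: application/json" \
  -d '{"depth": 1}' \
  "https://$(hostname -f)<pulp_href-from-above>"
```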
To be honest, I can take that task on again as well; it's really very simple to implement.
(Well, why not, I'll do it right away; thinking about it takes longer than just doing it.)