Add Linux on Z (s390x) Koji Builder to Foreman Ecosystem

Currently the Foreman ecosystem does not use the Koji builders in the Fedora infrastructure but instead runs a custom build setup internally. I would like to add s390x build capacity to the Foreman ecosystem so that binaries can be released for IBM Z and the s390x ecosystem.

I’ve been researching Koji and it looks like there are a few components which are standard to most CI/CD systems.

Koji Components

On the server (koji-hub/koji-web):
  - httpd
  - mod_ssl
  - postgresql-server
  - mod_wsgi

On the builder (koji-builder):
  - mock
  - setarch (for some archs you’ll require a patched version)
  - rpm-build
  - createrepo
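
As a rough sketch of pulling those pieces in on Fedora 30 (assuming the stock Fedora package names; koji-hub, koji-web and koji-builder are subpackages of koji, and setarch already ships as part of util-linux):

# on the hub/web host
sudo dnf install koji-hub koji-web httpd mod_ssl python3-mod_wsgi postgresql-server

# on the s390x builder (setarch comes with util-linux, which is already installed)
sudo dnf install koji-builder mock rpm-build createrepo_c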

Builder Components

So on the builder we need the build tools mock, setarch, rpm-build, and createrepo, which are standard and should install cleanly on top of Fedora 30. Then we need to connect it to the Koji Hub found in Katello Hosts | koji. If SSL is enabled we will also need to securely move the SSL certificates to our builders so the build component can talk to the Hub/WebUI.

Builder Configuration Required

Information regarding the SSL configuration required can be found in Server How To — Koji 1.17.0 documentation.

  1. Mount Koji Build Directory to Builder via Read-Only NFS
  2. Securely transfer SSL certs to builder (if required)
  3. Configure /etc/kojid/kojid.conf per Server How To — Koji 1.17.0 documentation
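
A hedged sketch of steps 2 and 3, assuming the builder certificate was generated on the hub as kojibuilder1.example.com.pem and that the hub lives at koji.example.com (the hostnames and file names are placeholders following the Koji docs' examples):

# copy the builder certificate and the CA certificate over to the builder
scp kojibuilder1.example.com.pem kojibuilder1.example.com:/etc/kojid/kojid.pem
scp koji_ca_cert.crt kojibuilder1.example.com:/etc/kojid/serverca.crt

# and then the relevant bits of /etc/kojid/kojid.conf on the builder
[kojid]
; URL of the hub XML-RPC interface
server=https://koji.example.com/kojihub
; where the hub's /mnt/koji is mounted (see the NFS step below)
topdir=/mnt/koji
; client certificate and CA used to authenticate against the hub
cert=/etc/kojid/kojid.pem
serverca=/etc/kojid/serverca.crt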

Hub/WebUI Configuration Required

The first thing it looks like we need to do is initialize the Koji Hub to add a new host pointing to the s390x builder:

kojiadmin@localhost$ koji add-host kojibuilder1.example.com s390x

On the builder end we need to do a few more things. In essence we are just polling the server for new builds, but in order for the builder to know the state of the “master” it has to have read access to the root koji build directory. This will most likely require an update to the NFS exports list.

The root of the koji build directory (i.e., /mnt/koji) must be mounted on the builder, and a read-only NFS mount is the easiest way to handle this. The corresponding setting in /etc/kojid/kojid.conf is:

; The directory root where work data can be found from the koji hub
topdir=/mnt/koji
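
As a sketch of the NFS plumbing itself (koji.example.com stands in for the hub, and the export options would need a proper review):

# on the hub, export the koji volume read-only to the builder (/etc/exports)
/mnt/koji kojibuilder1.example.com(ro)

# on the builder, mount it at the path kojid expects
sudo mount -t nfs -o ro koji.example.com:/mnt/koji /mnt/koji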

Final Steps

Once both of these components are configured we can add the host to the desired build channel, e.g. the createrepo channel:

kojiadmin@localhost$ koji add-host-to-channel kojibuilder1.example.com createrepo
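
To double-check the registration, something like the following should list the new host in that channel (hedged; the exact flags may differ by Koji version):

kojiadmin@localhost$ koji list-hosts --channel createrepo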

I am going to work on the Koji node from the builder end. @ekohl are you the guy who can set up the Koji Hub/WebUI interfaces or would that fall under @pcreech 's expertise? FYI @packaging

Here are my notes from installing the old koji: KojiSetup - Foreman

We used to have external koji builders, but NFS was causing issues and a lot of data was transferred through it. Therefore we decided to install Koji on a single node with beefier resources so all jobs can execute there. Adding a new node is not relevant for our “new” setup: KojiBuilderSetup - Foreman

I am not sure if exporting our 1TB repositories over NFS is something we should be doing; this can be pretty costly on EC2 due to data transfer prices. But I am not on the @packaging team, so this is solely my understanding, which may be wrong.

We discussed the idea of using Fedora COPR for building Foreman, but there were some concerns about availability and, if I remember correctly, package signing. COPR is still not an official Fedora service. There were also some ideas about moving to the CentOS build infrastructure, but that would have drawbacks too, e.g. we release every 3 months and we need a flexible way to create new tags, compose scripts and other artifacts (Fedora and CentOS move at a much slower pace and things can take longer). We simply could not afford to wait for an infra ticket to be fulfilled, so we deployed our own instance, but things may have changed in 2019 and more things are automated now. Anyway, neither Fedora nor CentOS provides s390x build hosts either; this was just to give you more info on why we run our own Koji.

It’s indeed exciting to see Foreman running on a mainframe; however, this can be quite a challenge. Probably some kind of NFS cache could solve the problem. I remember one of the reasons why we dropped NFS was poor performance on EC2; a cache (FS-Cache or a similar high-level solution, not a dumb and small block cache of course) should probably help a lot over longer distances too.

We actually do have a Fedora builder, but it’s also running in EC2 so it’s internal traffic there (which I believe is free).

Most recently it was the performance. We started to build our Rails SCL there but while working on it there was often a queue of > 1 hour. When your work depends on the result of another build, this really kills your productivity.


What’s that for by the way? Probably the underlying OS was the culprit of some build issue?

We needed DNF to build Fedora 28 (and newer). The only way was a Fedora builder. We may be able to use this to build CentOS 8 as well; otherwise we need to add a C8 builder, but we’ll find out when C8 is actually released.

Yeah, @ekohl and I were chatting on IRC a bit about that. We are exploring the possibility of a local cache to reduce the network load, possibly using a few different technologies.

First Option

FS-Cache is a client-side cache that might slow the client down a little and have some implications for the client end, but it is supposed to relieve the bandwidth requirements.

The use of FS-Cache, therefore, is a compromise between various factors. If FS-Cache is being used to cache NFS traffic, for instance, it may slow the client down a little, but massively reduce the network and server loading by satisfying read requests locally without consuming network bandwidth.
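
A rough sketch of what enabling FS-Cache for the NFS mount could look like on the builder (cachefilesd and the fsc mount option are the standard Fedora/RHEL pieces; the hub hostname is a placeholder):

# install and start the cache daemon that backs FS-Cache
sudo dnf install cachefilesd
sudo systemctl enable --now cachefilesd

# mount the koji volume read-only with fsc so reads get cached on local disk
sudo mount -t nfs -o ro,fsc koji.example.com:/mnt/koji /mnt/koji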

Second Option

The second option, which we have some concerns about, is rsync on top of an NFS mount or something in between. We still need to flesh out the implications, but if we can sync only the “deltas” then it would also reduce the network load.
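
As a sketch, a periodic delta sync from the hub to a local copy on the builder might look something like this (hostname and paths are placeholders, and the consistency/deletion semantics still need thought):

# pull only changed files from the hub's /mnt/koji into a local mirror
rsync -a --delete koji.example.com:/mnt/koji/ /mnt/koji/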


@lzap FYI, the target OS for our s390x builder is as of now also Fedora 30, which will put us in a position to stay in step with the x86 environments when the packaging team decides to upgrade.

Just reading through KojiBuilderSetup - Foreman linked by @lzap. It looks like adding FS-Cache was in the pipeline for someone at some point:

TODO: Use FS-Cache/NFS cache to speed up NFS access: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/fscachenfs.html

You would still have the work directory on fast (local) storage; NFS is only used for mounting external and foreman repositories to pull RPM packages from. I don’t see any big performance hit there at all, and compared to rsync this will work transparently.

I’m happy to see discussion of the technical issues from a “how can we build it” standpoint. For me, there are a few questions I want to ask and a few points around our Koji infrastructure that need to be addressed before taking on new builds.

Given we build a lot of RPMs, our CI/CD tends to be close to maxed out at times, and new architectures add build, process, and testing overhead to the infrastructure and the maintainers:

  1. What is the motivation for adding IBM Z and s390x server installation support? Would smart proxy support be enough? Would client bits be enough?
  2. Are there situations where running on an x86_64 machine is not feasible for a set of users?

On the Koji side, our architecture needs some re-work to support increased build capacity. This is true for EL8 and would be doubly true for any new architectures. Further, we would need to include all of the external repositories for these architectures, which means increasing our storage. All that is to say we would need to:

  1. Do work to re-architect and support more builders for more throughput
  2. Add additional storage
  3. Find more money to support the additional cost

I’m back from my leave, and everything ehelms has said is accurate. We are up against a few architectural limits in our koji and some analysis needs to be done on the best path forward.

Adding a “builder” is non-trivial, and there is some data transfer that needs to take place. I got around some NFS issues and performance problems by using HTTP-based external repos instead of file-based external repos, but there will potentially be significant data transfer costs if this isn’t planned for.
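
For reference, a hedged sketch of what an HTTP-based external repo looks like on the Koji side (the tag name and mirror URL are only placeholders; for s390x the Fedora content actually lives under fedora-secondary on the mirrors):

# point a build tag at an HTTP mirror instead of a file:// path on shared storage
kojiadmin@localhost$ koji add-external-repo -t f30-build external-fedora30 https://dl.fedoraproject.org/pub/fedora/linux/releases/30/Everything/$arch/os/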

I wanted to send along an update that another user reminded me of. Aside from build infrastructure, there is complexity in the need for at least Puppet packaged for s390x, which we don’t handle ourselves. We have been using the Puppetlabs-provided puppet-agent packages directly, for which there are no s390x builds [1]. We have tried to build puppet-agent ourselves for non-supported architectures; it’s one of the least fun activities we’ve ever done (and note we “tried” but were not successful).

I think this would mean either someone figuring this out, or us looking into directly packaging the Puppet gem for use by the installer rather than relying on puppet-agent.

[1] http://yum.puppetlabs.com/puppet/el/7/

Can we close this in favor of the new master thread?