After talking to several Foreman and Satellite folks at Red Hat Summit this week, I still have some unanswered questions regarding different architectures.
Does Foreman fully support architectures other than x86, namely ARM and PPC?
By full support I mean:
FDI for ARM and PPC
I have been told that ARM was not supported before because of issues with its PXE booting process, but UEFI support has since been added to Foreman, so this should not be a technical drawback anymore, IMHO.
But it is unclear to me whether this has actually been implemented and whether things like the FDI and the discovery plug-in are ready for it. PPC is even easier, as it just needs a PXE config file, but that file has to be created in, say, a “ppc64” dir rather than pxelinux.cfg so that IBM’s petitboot can find it. From the Foreman side this means it needs to be able to at least differentiate between the architectures, which I’m pretty sure it can. The real question is how much of that is fully implemented in 1.14 at this point.
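For illustration, the difference amounts to where the per-host config file lives in the TFTP root. A sketch (the `ppc64` directory name is taken from the paragraph above; the exact file names petitboot scans for are an assumption, not verified):

```
/var/lib/tftpboot/
├── pxelinux.cfg/                      # x86 BIOS clients (PXELinux)
│   └── 01-aa-bb-cc-dd-ee-ff
├── grub2/                             # UEFI clients (Grub2)
│   └── grub.cfg-01-aa-bb-cc-dd-ee-ff
└── ppc64/                             # hypothetically, where petitboot would look
    └── 01-aa-bb-cc-dd-ee-ff
```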
Could someone clarify this for me? @lzap - most people I spoke with at Summit were pointing at you.
I can’t speak on the architecture side of things, but I’ll just add a versioning note - 1.14 is long out of upstream support now. We support N and N-1, so if this isn’t available today in 1.14, I wouldn’t expect to see it added (and my gut feeling is that it isn’t, or you’d already be doing it).
Obviously if changes are needed, then we’ll want to look at getting it added, but it would be for 1.17 or 1.18 at this point.
Greg, yes, I understand that we’re behind, and at this point I just wanted to see what it would take to get ARM and PPC supported, if they are not already.
As for the cadence at which you guys release Foreman - it has always been an issue for me. In our large environment, it is simply impossible to keep up with your release schedule; otherwise it would be a full-time job just to manage all the Foreman clusters that we have. Plus, in production, I simply can’t go too close to the development branch, or even the latest release, just because of possible bugs. There’s some work for me to do to catch up on this and make sure proper test automation is in place (not just CI/CD for Foreman itself, but a complete lifecycle of all workflows that use Foreman and Smart Proxy for orchestration tasks), of course, but again, that is not a trivial thing to do. For example, the switch to Puppet 5 happened in 1.16, which means all of the Puppet modules we have must be made compatible and tested. That delays things.
I hear you - big enterprises move slowly, yet more nimble users don’t want to wait a long time for new releases, and there isn’t the manpower to support many simultaneous releases. It’s simply not possible to please everyone on this. All I can do is point at the community survey data and say that most people are happy with the schedule - but I realise that doesn’t make it any easier on you. I just wanted to be sure we were all on the same page, so thanks for confirming.
We still support using older Puppet master versions on the proxy, but the installer needs a newer Puppet version. Our ENC and report processor still have all the code present from the 0.25 or 2.x days, so those might still work.
I kinda missed the notification during the busy last week. Thanks, Daniel, for pinging me on IRC. Alright, this is gonna be long. Before I start, I hereby confirm these are solely my opinions and not an official statement from Red Hat; please do reach out to Red Hat representatives for up-to-date and official info.
Let me cover the PXE provisioning workflow first. Foreman has been tested on x86 booting via BIOS and UEFI; that was our target for the 1.15ish version. To support UEFI, we added Grub2, because the PXELinux project does not yet have a stable version with UEFI support and we don’t even ship it in RHEL. Grub2, however, is fully supported and shipped in RHEL7.
Grub2 enabled provisioning of the PPC64 architecture. While we haven’t tested it ourselves, there were reports of successful PXE builds; several smaller bugs were reported and hopefully ironed out. Due to the clunky PXE file naming conventions which I came up with, switching between LE and BE is a bit tricky; we are tracking an issue to change the naming conventions to follow the Grub2 netboot directory, though.
Grub2 also enables booting ARM servers, but the issue is that we don’t have any PXE environment to test with. We do have sponsors providing some ARM resources, but these are cloud instances - to test PXE provisioning we need full access to a lab with a dedicated (V)LAN. It looks like ARM servers with BMC cards are quite expensive.
I’d be more than happy to provide help and do upstream changes even in “blind” mode, where I don’t have any access to ARM or PPC64 servers. That’s how we got PPC64 working, actually. There are of course hardware resources available in Red Hat, but these servers are part of existing clusters; getting them reconfigured for PXE testing takes a lot of time, and we can’t keep them forever. Also, the community could not participate in coding or testing. Therefore, there are two ways of improving non-Intel architectures in Foreman:
Talk to Red Hat representatives and/or file Satellite RFE
Donate two servers of different architectures with a separate LAN to the community, and make sure the Jenkins team won’t grab them for tests (that’s what usually happens)
We only do Intel today. Let me explain: the FDI is based on RHEL for obvious reasons (downstream support), so to be able to introduce a new architecture, RHEL must exist for that architecture. Not only that, we must be able to use the LiveCD tooling from Fedora and create a bootable live-CD ISO file. I am not sure how the LiveCD tooling (livecd-tools) works on non-Intel arches, to be honest (if it works at all).
Once we are able to build and boot such an ISO, it should not be that hard to introduce a new architecture. Alternatively, the FDI can be built on top of a different distribution or technology; for upstream it does not matter, and there are plenty of discovery users maintaining their own FDI builds, or even different live CDs based on Debian or other distros.
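For anyone unfamiliar with the tooling, a livecd-tools build of an FDI-style image looks roughly like this (a sketch: the kickstart and path names are illustrative, and the build has to run on, or be able to emulate, the target architecture):

```shell
# Flatten the modular kickstart files into a single file (ksflatten is part of pykickstart)
ksflatten -c fdi-image.ks -o fdi-flat.ks

# Build a bootable live ISO from the flattened kickstart
livecd-creator --config=fdi-flat.ks --fslabel=fdi --cache=/var/cache/live
```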
The plugin is basically noarch, there should not be any problem with that.
We do have reports of using Grub2 to boot PPC64; I haven’t tried this myself. There is no support for Petitboot, but we can add it if you tell us what is missing (perhaps just a filename option - PXE loader). The rest is on you. For more details on PPC64/Grub2: https://www.theforeman.org/2016/09/new-post.html
Yeah, well, for Satellite the key feature was to add UEFI support for bare metal, which landed in 6.3 recently. For further plans you need to talk via official channels, but as you can see upstream, there is no active work on ARM, PPC or zVM. For more info on official provisioning support in Satellite, reach out to:
It’s worth mentioning that the only requirement of an FDI build is that it makes the right API calls to register in Foreman, and can receive the appropriate incoming API call to reboot at the right time. You can build it on pretty much anything - as @lzap says, we use a well known base to get a ton of hardware/driver support for free.
Yes, I am one of those users myself. I’ve been running a so-called “miniOS” for many years now (started way before the FDI became available), which is based on Casper (Ubuntu’s netbooted LiveCD + squashfs). However, if support for other architectures was already part of the FDI, or at least planned for the near future, I’d rather go with the FDI and contribute back to the community if possible.
Wasn’t the FDI based on CentOS 7 rather than RHEL? The git repo still says it is, so I just wanted to clarify this for myself.
Also, when you say “LiveCD”, do you mean that such an image would have full-blown features like the ability to install packages after netbooting, for example? One of the shortcomings, IMHO, of previous FDIs (CentOS7-based; I have not tried Fedora-based ones) is that I have to put all of the packages into my FDI, since after it boots on my bare metal it is read-only. This limits what I can do with the FDI - rebooting systems just to load a new version of the FDI is impractical in large environments. So a proper LiveCD image, like our miniOS, is what we’re sticking with so far for just that reason. With the RemoteExec capabilities that Foreman has, the sky becomes the limit of what one can do if Foreman’s discovered hosts become first-class citizens and are actually recognized as a proper machine state. We discussed this some time ago; I just can’t find that thread right now.
Oh by RHEL I mean CentOS or RHEL or Fedora. We sometimes use this term as “Red Hat compatible”.
In this context, “livecd” means the technology behind the open-source utility livecd-tools, which allows booting Linux from a memory buffer - everything is transient. You can write to “disk” and install packages, but everything is gone after reboot. Fedora uses the same mechanism to provide the “test and install” experience; we don’t have that. We also drop lots of parts of the OS, including the YUM/RPM database, to minimize the image as much as possible.
Integrating the FDI with ReX is indeed our priority; it should not be difficult and it is at the top of my TODO list. As you say, the sky is the limit: we can add all required RPM packages to the FDI on request (e.g. firmware updaters, whatever is needed and can be added legally). The rest can be downloaded from HTTP/TFTP via the extension mechanisms, or users can write their own shell scripts to install the required software.
My ultimate idea is fully automated bare-metal image-based provisioning - a ReX job will spawn a script which will download an image onto the discovered nodes via HTTPS, or push it the other way around via udpcast.
Yes, in my previous exercises in building my own FDI, I changed a lot of things (added/removed packages, etc.), including keeping the YUM/RPM DB and structure, but that was not enough for the FDI to become truly read-writable - even when I booted with the RW kernel flag, I still couldn’t do yum/rpm installs.
Sorry, this is still a bit unclear to me - add to the FDI at image build time, or at actual runtime on a target BM?
Yes, I saw you had a couple of ways to add things - user-defined facter scripts, etc., via zip files. One thing that was missing, in my opinion, is a “git clone” capability - it would be awesome to specify a git path, and optionally a branch name, for such extensions.
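As a sketch of what such a git-based extension hook could look like (the function name, its arguments, and the idea of mirroring the zip workflow are all hypothetical - this is not an existing Foreman feature):

```shell
#!/bin/sh
# Hypothetical FDI extension fetcher: clone user extensions from a git
# repository instead of unpacking a zip file. Repo URL, branch and
# destination are parameters; none of this exists in Foreman today.
fetch_git_extension() {
    repo=$1
    branch=${2:-master}
    dest=${3:-/opt/extension}

    git clone --depth 1 --branch "$branch" "$repo" "$dest" || return 1

    # Run any executable hooks the repo provides under bin/, mirroring
    # how zip-based FDI extensions execute their scripts.
    for hook in "$dest"/bin/*; do
        [ -x "$hook" ] && "$hook"
    done
    return 0
}
```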
I hate to ask this question, but any ETA or targeted Foreman version for this?
Awesome! We think alike a lot on this front
BTW, I wish there were the same kind of mechanism for VM images on Foreman-managed compute resources, like KVM hypervisors, so that instead of having to pre-download such images to the HV’s disk, I could just specify a URL and Foreman would download the image for me if it is not there already. We spoke with @Marek_Hulan on this topic a little bit during RH Summit.
I wish we could have more dialogs on Foreman strategy and its future features and direction. I could share my thoughts and examples on many different topics as part of the complete BM lifecycle in a large environment. I’m trying to work with our RH rep on setting up something like that with the Satellite folks, but ideally I’d love to have that conversation with the people actually working on it, like you. If you’re interested, please let me know if there are any other ways to accomplish that.
Yes, adding some packages you would like to have in the official build. As long as it is in the CentOS 7 official repositories, it is easy.
Great idea, just go ahead and file a PR:
Adding the git package should be pretty easy. It is literally one line in that kickstart part.
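For reference, the change discussed here would be a single package name in the kickstart’s %packages list (the 20-packages.ks file name comes up later in this thread; the surrounding entries are illustrative):

```
%packages
# ...existing FDI packages...
git
%end
```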
As much as I would love to see that coming, this is not targeted yet.
We are having a discussion right here, right now. Even if we had a chance to discuss things over coffee or beer, what’s not on our mailing list does not exist, so eventually it needs to be written down here. Managing KVM hypervisors is possible; that sounds like an Ansible task to me. But I’d recommend taking a look at oVirt, which does exactly the same job - it adds a decent amount of complexity, though.
Working with the community is actually the best way of pushing things forward. Do not hesitate to put your ideas here; we need more feedback. The community site is the best place - you can quickly get feedback from other users, so future changes are more relevant to others.
Reviving this conversation, as I finally got my hands on ARM systems and was able to verify that the PXE/Grub2 part works rather well on ARM, just a bit slower (20-minute install time vs. 4 minutes for x86).
However, as mentioned by @lzap above, I can’t build an FDI, mainly because there’s no support for aarch64 in the setarch utility, which is often used with livecd-tools to build images for other architectures (“setarch livecd-creator…”).
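To make the failure concrete, the pattern in question looks like this (a sketch; the i686-on-x86_64 case is what setarch is commonly used for, and the aarch64 invocation is exactly what does not work):

```shell
# Works: build a 32-bit image on an x86_64 build host
setarch i686 livecd-creator --config=fdi-flat.ks --fslabel=fdi

# Fails: setarch has no aarch64 personality, so cross-building this way is out
setarch aarch64 livecd-creator --config=fdi-flat.ks --fslabel=fdi
```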
Solving this is complicated, as livecd-tools is not published in RHEL; it’s only used in buildroots to build images. We are shipping it in the Satellite 6 repos, but not for the aarch64 platform, so not much help there. Using Fedora is the best workaround for now, I think.
Thank you so much for testing this; this is super exciting for me. If you have a chance to record a video of it and share it with us, that would be totally awesome. I am looking forward to the day I can PXE provision my very first ARM64 server with Foreman; I haven’t had a chance yet.
I have to set up a Fedora mirror, etc., as we have not used it before and our internal systems don’t have direct access to the internet, but it is in progress, so hopefully pretty soon I’ll have some results to report back here.
Not sure about the video, though - so far there’s nothing really different from the regular FDI build procedure, maybe just a separate repo file that points to aarch64 as the platform.
OK, I’ve installed F28 on my ARM system. After updating 00-repos-centos7.ks to use my local repos, and 20-packages.ks to remove and/or update a few packages to reflect the arch change (see below for a little note on newt), livecd-creator fails for me at the end of the process with this error:
Error creating Live CD : Bootloader configuration is arch-specific, but not implemented for this arch!
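For context, the repo change mentioned above amounts to pointing the kickstart repo lines at an aarch64 tree, roughly like this (hostnames and paths are placeholders for my local mirror):

```
# 00-repos-centos7.ks - point at a local aarch64 mirror (URLs are placeholders)
repo --name=base --baseurl=http://mirror.example.com/centos/7/os/aarch64/
repo --name=updates --baseurl=http://mirror.example.com/centos/7/updates/aarch64/
```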