Support for different architectures in Foreman 1.14 and plug-ins

Solving this is complicated, as livecd-tools is not published in RHEL; it’s only used in buildroots to build images. We do ship it in the Satellite 6 repos, but not for the aarch64 platform, so that’s not much help here. Using Fedora is the best workaround for now, I think.
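
In case it’s useful to anyone following along, on Fedora the tooling is just a package install away:

sudo dnf install livecd-tools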

Thank you so much for testing this, this is super exciting for me. If you have a chance to record a video of that and share it with us, that would be totally awesome. I am looking forward to the day I PXE provision my very first ARM64 server with Foreman; I haven’t had a chance yet.

I have to set up a Fedora mirror and such, as we have not used Fedora before and our internal systems don’t have direct access to the internet, but it is in progress, so hopefully I’ll have some results to report back here pretty soon.

Not sure about the video though; so far there’s nothing really that different from the FDI build procedure, maybe just a separate repo file that points to aarch64 as the platform.
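
For anyone curious, that arch switch is essentially a one-line change in the repo kickstart; a sketch, with an illustrative mirror URL:

repo --name=fedora --baseurl=https://dl.fedoraproject.org/pub/fedora/linux/releases/28/Everything/aarch64/os/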

BTW, you can ignore the kickstart timing issue. It’s not an ARM problem; my Foreman/TFTP server is not local to the ARM systems network-wise, and that’s where the delay turned out to come from.

OK, I’ve installed f28 on my ARM system. After updating 00-repos-centos7.ks to use my local repos and 20-packages.ks to remove and/or update a few packages to reflect the arch change (see below for a little note on newt), the livecd build fails for me at the end of the process with this error:

Error creating Live CD : Bootloader configuration is arch-specific, but not implemented for this arch!

The full output is available here - https://gist.github.com/korekhov/737dad658e95c15bb331409f450c4c01
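
For reference, the failing step is essentially a stock livecd-creator run, something along these lines (kickstart and label names are illustrative):

livecd-creator --config=fdi-image.ks --fslabel=fdi --cache=/var/cache/live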

If anyone has any pointers on how to proceed - please let me know.

Meanwhile, I’ll try to build f28-based FDI.

As for the newt install: unlike any other package that failed to install for me, this one gives this error:

package rubygem-newt-0.9.6-2.el7.x86_64 does not have a compatible architecture

I’m not overly concerned with this at the moment because we don’t really use menu-driven discovery, but other folks might be, so JFYI.

BTW, while figuring out the f28 kickstart for aarch64, I found this really interesting possibility for people without access to physical aarch64 systems - https://fedoraproject.org/wiki/Architectures/AArch64/F28/Installation#Install_with_QEMU

I haven’t tried that myself yet, as I do have real systems to play with, but since you mentioned that you don’t, this may be a way?
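
For completeness, the wiki’s approach boils down to a direct kernel boot of the aarch64 installer under QEMU’s virt machine; a rough sketch (file names and the repo URL are illustrative):

qemu-system-aarch64 -M virt -cpu cortex-a57 -smp 2 -m 4096 \
  -kernel vmlinuz -initrd initrd.img \
  -append "console=ttyAMA0 inst.repo=https://dl.fedoraproject.org/pub/fedora/linux/releases/28/Server/aarch64/os/" \
  -drive if=virtio,file=f28-aarch64.qcow2,format=qcow2 \
  -nographic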

I’ve tried IBM POWER once and it was slow as hell. But thanks for the tip.

As for the live CD, I think there is some chance this will never work. Livecd-creator was built for Intel machines; I guess Fedora ships all the ARM bits as raw images today, so they don’t use it for installation:

https://arm.fedoraproject.org/

But Fedora Atomic does provide an ISO file, though it might not be a live CD:

https://ftp.icm.edu.pl/pub/Linux/dist/fedora-alt/atomic/stable/Fedora-28-updates-20181007.0/AtomicHost/aarch64/iso/

But I have an idea for Discovery NG: use just a kernel/init with Ignition, which would then download additional packages (including all the discovery services) from a yum repo. This would, I hope, make the discovery image even smaller (just an initramdisk, no ISO embedded), make updates easier since users could provide more up-to-date Fedora/RHEL/CentOS packages, and give better flexibility overall for installing your own software and scripts. But this is a long way off.
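
Just to make the idea a bit more concrete, a minimal sketch of what such an Ignition config could look like (the spec version, unit name and package name are purely illustrative - none of this exists yet):

{
  "ignition": { "version": "2.2.0" },
  "systemd": {
    "units": [{
      "name": "discovery-bootstrap.service",
      "enabled": true,
      "contents": "[Unit]\nDescription=Install and start discovery services from a yum repo\n[Service]\nExecStart=/usr/bin/dnf -y install foreman-discovery\n[Install]\nWantedBy=multi-user.target"
    }]
  }
}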

Building a Fedora 28 aarch64 FDI on an aarch64 system running the same Fedora 28 did not produce any new results and still fails. Since this looks like a bug to me, I’ve submitted one - https://bugzilla.redhat.com/show_bug.cgi?id=1641868

Meanwhile, I’ll try the dracut method you described in [1] before I fall back to [2], which involves an actual squashfs build.

[1] - Booting Discovery over HTTP
[2] - https://help.ubuntu.com/community/LiveCDCustomizationFromScratch

Thanks!

Crazy idea: how about ditching the Fedora live CD and building your own initramdisk instead? All the discovery bits are in the GitHub repo; you only need Ruby and a few dependencies from the OS.
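
A very rough sketch of that, assuming the discovery bits were unpacked to /opt/foreman-discovery (both paths are hypothetical, and a real image would also need Ruby’s stdlib and gems pulled in):

dracut --no-hostonly \
  --install /usr/bin/ruby \
  --include /opt/foreman-discovery /opt/foreman-discovery \
  /tmp/fdi-custom.img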

Another approach would be to containerize the discovery services and then use CoreOS (they provide experimental ARM/ARM64 builds) to boot them (an Ignition script would download the container and start it). Using Atomic is also an option, but it’s much bigger than CoreOS so far (almost 1 GB).

Not really crazy, but if I build something, I’d rather go with the livecd/squashfs approach than just a ramdisk, as the latter does not usually provide the flexibility of installing more packages on the fly after boot.

As for CoreOS and/or Atomic: as I understand from a different thread, containerization of Foreman and SmartProxy is just starting at this point, is it not? However, the container route seems the most flexible to me - one can deploy anything on top of it at any point after boot, and there is not much of an OS image to manage. Looks good, but a bit away from now, IMHO. Although I’ll take a look at their live CD options.


Sure, if you want to keep trying the livecd path, then make sure you also try lorax/livemedia-creator. It’s livecd-creator’s successor and the only supported way of building modern Fedora live CDs today: http://weldr.io/lorax/livemedia-creator.html
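
Per the docs linked above, the invocation looks roughly like this (kickstart name illustrative):

livemedia-creator --make-iso --no-virt --ks=fdi-image.ks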

We still stick with livecd-creator because the new one is not yet available in CentOS 7.

Hi, Lukas and all!

I’ve built an Ubuntu 18.04-based initrd + livecd squashfs (Casper) for arm64, installed foreman-proxy 1.19 from the Foreman mirror, and smart_proxy_discovery_image 1.0.9 from rubygems.org, since it is not part of the Foreman bionic repo.

Because Ubuntu 18.04 ships facter 3, I had to modify https://github.com/theforeman/foreman-discovery-image/blob/master/root/usr/bin/discovery-register slightly (no more facter/util/ip, for example), but that’s not really hard nor important. I now have ARM64 systems registering into Foreman as discovered hosts, but only if the network interfaces are named in the old standard format - eth0, eth1, etc.
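
For anyone attempting the same port: in facter 3 the old Facter::Util::IP helpers are gone, and the per-interface data lives in the structured networking fact instead, e.g.:

facter --json networking
# networking.interfaces.<name> carries the ip/mac/netmask of each interface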

If the interfaces are named like the ones below, no registration happens, and you can see the error below as well:

Some interesting facts about this system:
bmc_ipaddress: 10.191.33.105
bmc_macaddress: d4:5d:df:1d:ef:ef
boardmanufacturer: Hyve
boardproductname: HS3017
hardwareisa: aarch64
hardwaremodel: aarch64
ipaddress: 10.191.1.216
ipaddress6: fe80::d65d:dfff:fe1d:eff0
ipaddress6_enP2p1s0f1: fe80::d65d:dfff:fe1d:eff0
ipaddress6_lo: ::1
ipaddress_enP2p1s0f1: 10.191.1.216
ipaddress_enP2p1s0f2: 169.254.1.2
ipaddress_lo: 127.0.0.1
ipmi_1_ipaddress: 10.191.33.105
ipmi_1_ipaddress_source: DHCP Address
ipmi_1_macaddress: d4:5d:df:1d:ef:ef
ipmi_ipaddress: 10.191.33.105
ipmi_ipaddress_source: DHCP Address
ipmi_macaddress: d4:5d:df:1d:ef:ef
macaddress: d4:5d:df:1d:ef:f0
macaddress_enP2p1s0f1: d4:5d:df:1d:ef:f0
macaddress_enP2p1s0f2: d4:5d:df:1d:ef:f1
manufacturer: Hyve
processor_manufacturer: Cavium Inc.
productname: Hydra-HS3017
Fact cache invalid, reloading to foreman
Discovered by URL: https://foreman.domain.com
Registering host at (https://foreman.domain.com)
Response from server 422: {"message":"ERF42-8069 [Foreman::Exception]: Unable to detect primary interface using MAC 'd4:5d:df:1d:ef:f0' specified by discovery_fact 'discovery_bootif'"}
Fact cache invalid, reloading to foreman

As soon as I add "biosdevname=0 net.ifnames=0" to the pxeconfig to disable the new interface names, things start to work. Is this expected?
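
For the record, that’s just two extra kernel arguments on the discovery entry; a hypothetical grub2 pxeconfig snippet (the "..." stands for whatever arguments are already there):

menuentry 'Foreman Discovery Image' {
  linux boot/fdi-vmlinuz ... biosdevname=0 net.ifnames=0
  initrd boot/fdi-initrd.img
}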

Thanks!

Actually, more important issues have come up. During my attempts at provisioning my newly-discovered ARM64 system, Foreman just skips both creating a DHCP host reservation record and creating a grub2 pxeconfig file over the TFTP API. I really can’t pinpoint anything in the log files even with full debugging enabled - there are simply no errors. In the Foreman UI, I see this:

But I expected something like this:

I’m also going to check whether this has anything to do with grub2/UEFI, as that is what ARM uses; the second/good example is from an x64 pxelinux BIOS-booted system.

But if anyone has any thoughts/ideas/suggestions on what else could be wrong, please let me know.
Thanks!

Hmm, x86_64 with grub2 UEFI still works normally, so the issue above is somehow architecture-related.

Is aarch64 treated differently from x86? I just can’t think of a reason why DHCP and TFTP are simply skipped…

Hey, can you send me the facter --json output? Feel free to anonymize but don’t touch important data like MAC/IP/ifnames etc.

Facter 3 is not yet supported by discovery; we have an initial patch in core already, but I haven’t tried it with discovery yet. It should work with the new naming scheme as well as with Dell naming, of course.

Do you have a PXE loader present? Is there an interface marked as provision/managed?

The JSON is attached - I had to rename it to .log to get around your upload file extension restrictions. :)

aarch64-facts.json.log (17.8 KB)

From the UI, things look good to me - there’s a bootif, which is marked as “main” and “provisioning”.

I do set the PXE loader to “Grub2 UEFI” during provisioning. The same PXE loader works just fine for x86.

It registered fine. Do you want me to send the JSON without the netbiosname setting? ;)

I guess we got some messages crossed here - yes, I have no problem registering the system with biosdevname disabled. The registration only fails if biosdevname is enabled, but that’s a lesser problem now.

The bigger problem for me right now is that I can’t provision the aarch64 system - Foreman just skips creating the TFTP file and DHCP record for it, proceeds to running my hooks, and simply reboots the box without even setting it to PXE mode.

Screenshots of that are above. I’ve also added the same for x86, where, with the same grub2/UEFI bootloader, both TFTP and DHCP go through just fine and the system gets provisioned. Not sure why this is happening for aarch64 - I see no errors anywhere in the logs (full debug enabled) nor in the UI.

I wonder if you tried to fake-provision the system you registered with my payload? If so, does the TFTP file get created for you?

Thanks!

I’m not sure what I’ve done differently, but I tried again today and see that both the DHCP and TFTP tasks are now completing. A PXE reboot of my ARM system is still not happening, though. When I manually force the system to PXE boot, it complains about the bootloader, and what I found in the smart-proxy logs is this:

D, [2018-11-29T12:00:23.253263 ] DEBUG -- : Starting task: /usr/bin/wget --timeout=10 --tries=3 --no-check-certificate -nv -c "http://installsvc.domain.com/images/centos/7/images/pxeboot/vmlinuz" -O "/tftpboot/boot/CentOS-7-aarch64-vmlinuz"
I, [2018-11-29T12:00:23.263640 ] INFO -- : 10.20.109.31 - - [29/Nov/2018:12:00:23 -0700] "POST /tftp/fetch_boot_file HTTP/1.1" 200 - 0.0168

D, [2018-11-29T12:00:23.292633 ] DEBUG -- : Starting task: /usr/bin/wget --timeout=10 --tries=3 --no-check-certificate -nv -c "http://installsvc.domain.com/images/centos/7/images/pxeboot/initrd.img" -O "/tftpboot/boot/CentOS-7-aarch64-initrd.img"
I, [2018-11-29T12:00:23.293160 ] INFO -- : 10.20.109.32 - - [29/Nov/2018:12:00:23 -0700] "POST /tftp/fetch_boot_file HTTP/1.1" 200 - 0.0064

The problem here is that the source files are for x86_64, not aarch64, so I’m wondering how Foreman is even supposed to figure out different boot files for different architectures?

For example, the CentOS 7 media is defined by default as http://mirror.centos.org/centos/$version/os/$arch, but that only works for x86, as other architectures live under a different path - http://mirror.centos.org/altarch/7/os/

So, if I understand things correctly, I need to jam all arches into my local HTTP service (installsvc in my earlier examples) under the same directory tree to make things work properly. Please correct me here if needed.
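
In other words, a local mirror layout along these lines (paths illustrative) would let a single $arch-based URL work for both:

# synced from mirror.centos.org/centos/7/os/x86_64/
/var/www/html/centos/7/os/x86_64/
# synced from mirror.centos.org/altarch/7/os/aarch64/
/var/www/html/centos/7/os/aarch64/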

To keep going, I’ve just copied the proper aarch64 boot files into my boot dir for now.
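
Concretely, something like this (altarch pxeboot path shown for reference):

wget http://mirror.centos.org/altarch/7/os/aarch64/images/pxeboot/vmlinuz -O /tftpboot/boot/CentOS-7-aarch64-vmlinuz
wget http://mirror.centos.org/altarch/7/os/aarch64/images/pxeboot/initrd.img -O /tftpboot/boot/CentOS-7-aarch64-initrd.img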

Thanks!

Hey, so Foreman defaults to installation media which have the $arch variable in the URL; it gets replaced automatically. Most Linux distributions have per-architecture directories on their mirrors, so this works flawlessly. If you have your own local mirror, you need to reflect that and set it up appropriately. When using Katello, you need to pick the correct architecture when enabling repos.

Oh, that altarch layout is new in version 7, I guess. Hmmm. Then the only option is to create two installation media, one for Intel and one for ARM, and associate them correctly with your architectures. That would be a good enough workaround; feel free to file an issue in our tracker.

Well, Ubuntu alt arches are also located in a different place - http://ports.ubuntu.com/dists/bionic/main/, for example.

So it does seem like having different media for different architectures is the only way.
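
I.e., two media entries along these lines, each associated with the matching architecture:

CentOS 7 x86_64:  http://mirror.centos.org/centos/$version/os/$arch
CentOS 7 aarch64: http://mirror.centos.org/altarch/$version/os/$arch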