"Bulk generate applicability" stuck at Pending after Alma 8 > 9 Leapp Upgrade

Problem:
After upgrading my Foreman server from Alma 8 to Alma 9 using Leapp on my Prod box, I seem to have a few instances of “Bulk generate applicability” tasks sitting around at Pending for a subset of about 10-15 of the approx 120 hosts on the box. There’s no obvious error in the logs; they just seem to start, sit at pending and hang indefinitely.

Some are generic “Bulk generate applicability” tasks, some are specific to an individual box, e.g. “Bulk generate applicability for host dcautlprdnet01.REDACTED.local”.

There were a couple of minor issues to iron out in-situ during the upgrade, but nothing all that noteworthy, and most of those had already been caught on the PreProd Foreman box. All packages on the box are now el9, and everything else seems to be functioning normally.
There’s no obvious pattern to the hosts involved; they use a variety of different lifecycle environments.

Can anybody help me figure out where to go next with troubleshooting please?

Expected outcome:
Bulk generate applicability tasks run normally.

Foreman and Proxy versions:
Foreman 3.11.5-1

Foreman and Proxy plugin versions:
Katello 4.13.1-1
Pulpcore 3.49.22-1

Distribution and version:
Alma 9.5

Other relevant data:

[root@dca-foreman-al foreman]# grep bf453852-214f-4350-9548-3a5a67337859 production.log
2024-12-19T20:29:46 [I|bac|bb024630] Task {label: Actions::Katello::Applicability::Hosts::BulkGenerate, id: bf453852-214f-4350-9548-3a5a67337859, execution_plan_id: 3c29671f-41b0-427d-985f-dc13be9540b1} state changed: planning
2024-12-19T20:29:46 [I|bac|bb024630] Task {label: Actions::Katello::Applicability::Hosts::BulkGenerate, id: bf453852-214f-4350-9548-3a5a67337859, execution_plan_id: 3c29671f-41b0-427d-985f-dc13be9540b1} state changed: planned
2024-12-19T20:29:46 [I|bac|bb024630] Task {label: Actions::Katello::Applicability::Hosts::BulkGenerate, id: bf453852-214f-4350-9548-3a5a67337859, execution_plan_id: 3c29671f-41b0-427d-985f-dc13be9540b1} state changed: running
2024-12-20T09:11:45 [I|app|ee2c74b3] Started GET "/foreman_tasks/tasks/bf453852-214f-4350-9548-3a5a67337859" for 192.168.249.200 at 2024-12-20 09:11:45 +0000
2024-12-20T09:11:45 [I|app|ee2c74b3]   Parameters: {"id"=>"bf453852-214f-4350-9548-3a5a67337859"}
2024-12-20T09:12:36 [I|app|4eac8189] Started GET "/foreman_tasks/tasks/bf453852-214f-4350-9548-3a5a67337859" for 192.168.249.200 at 2024-12-20 09:12:36 +0000
2024-12-20T09:12:36 [I|app|4eac8189]   Parameters: {"id"=>"bf453852-214f-4350-9548-3a5a67337859"}
2024-12-20T09:12:37 [I|app|7e44088a] Started GET "/foreman_tasks/api/tasks/bf453852-214f-4350-9548-3a5a67337859/details?include_permissions" for 192.168.249.200 at 2024-12-20 09:12:37 +0000
2024-12-20T09:12:37 [I|app|7e44088a]   Parameters: {"include_permissions"=>nil, "id"=>"bf453852-214f-4350-9548-3a5a67337859"}
2024-12-20T09:12:42 [I|app|ff280cf6] Started GET "/foreman_tasks/api/tasks/bf453852-214f-4350-9548-3a5a67337859/details?include_permissions" for 192.168.249.200 at 2024-12-20 09:12:42 +0000
2024-12-20T09:12:42 [I|app|ff280cf6]   Parameters: {"include_permissions"=>nil, "id"=>"bf453852-214f-4350-9548-3a5a67337859"}
2024-12-20T09:12:47 [I|app|70546c35] Started GET "/foreman_tasks/api/tasks/bf453852-214f-4350-9548-3a5a67337859/details?include_permissions" for 192.168.249.200 at 2024-12-20 09:12:47 +0000
2024-12-20T09:12:47 [I|app|70546c35]   Parameters: {"include_permissions"=>nil, "id"=>"bf453852-214f-4350-9548-3a5a67337859"}
2024-12-20T09:12:53 [I|app|0a0d91bb] Started GET "/foreman_tasks/api/tasks/bf453852-214f-4350-9548-3a5a67337859/details?include_permissions" for 192.168.249.200 at 2024-12-20 09:12:53 +0000
2024-12-20T09:12:53 [I|app|0a0d91bb]   Parameters: {"include_permissions"=>nil, "id"=>"bf453852-214f-4350-9548-3a5a67337859"}

Hi @alexz,

So the result is “Pending”, but what state are these tasks in?
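
If it helps, something along these lines from a foreman-rake console session should show state and result side by side (a minimal sketch, assuming the standard ForemanTasks::Task model and the task label from your log excerpt):

# List the stuck tasks with their state alongside the pending result.
ForemanTasks::Task.where(
  label: 'Actions::Katello::Applicability::Hosts::BulkGenerate',
  result: 'pending'
).each { |t| puts "#{t.id}  #{t.state}  #{t.result}  started #{t.started_at}" }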

Assuming that those tasks are old and have been hanging around a while, they should be safe to clean up:

foreman-rake foreman_tasks:cleanup TASK_SEARCH='result = pending and action ~ "Bulk generate"' VERBOSE=true NOOP=true
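
(NOOP=true should make that a dry run that only reports what matches; once you’re happy with the list it prints, dropping NOOP=true should perform the actual cleanup.)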

If the tasks are failing for specific hosts all the time, then I would recommend trying the following:

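# Run this from a foreman-rake console session.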
# First, pick a host that is failing to have its applicability regenerated.
# You can find the host inside the Dynflow console for the pending task.
host = ::Host.find_by(name: '_host_name_')
host.content_facet.calculate_and_import_applicability

See if that triggers okay.

If the tasks are pending, it could also mean that the ::Katello::HOST_TASKS_QUEUE is misbehaving somehow – might need some help from @aruzicka if that is the case.
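
For what it’s worth, on a typical Katello install that queue is served by a dedicated dynflow-sidekiq worker instance (configured under /etc/foreman/dynflow/), so after a Leapp upgrade it would also be worth confirming that all the dynflow-sidekiq@* services came back up cleanly, e.g. with systemctl status 'dynflow-sidekiq@*'.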

So, where did we end up with this?

Hi @iballou @aruzicka

Sorry for the delay in coming back to you on this. As well as being on annual leave over Christmas and New Year, I went on a bit of a tangent where it seemed like the subset of hosts I was seeing had a common factor: they had some additional repos configured aside from the standard list accessed through the Foreman server.

Having tidied these up, however, more have appeared, and these have no unusual repos configured, so it looks like that was a red herring. These tasks continue to stack up and trigger pending-task e-mails.

I’ve run the command specified against the hosts that were listed as hung, and cleared down all the currently stalled “Bulk generate applicability” tasks.

I guess the next step is just to wait a few days and see if more tasks hang, or is there anything else I can do to investigate further proactively for now?

[root@dca-foreman-al ~]# foreman-rake console
Loading production environment (Rails 6.1.7.9)
irb(main):001:0> host = ::Host.find_by(name: 'dcbztststhap01.REDACTED')
=>
#<Host::Managed:0x00007f66fd3bff18
...
irb(main):002:0> host.content_facet.calculate_and_import_applicability
=> true

Thanks.


I would say let’s watch it and see if more pile up. Wishful thinking says it was a one-time mystery.