Foreman does not sync: pulp-admin shows 0 tasks, but curl to the Pulp API shows a lot of waiting tasks

problem: Pulp Sync stuck at 4%
4: Actions::Pulp3::Repository::Sync (waiting for Pulp to start the task) [ 2263.81s / 9.67s ]

expected: Sync completed

Foreman and Proxy versions: foreman-2.3.3

Foreman and Proxy plugin versions: 2.3.3

Hello to all,

I received a call from a colleague about Foreman not syncing RPMs from the repos.

I have checked, but with no success.

Steps I have taken:

/dev/mapper/vg_root-lv_pgsql 10G 33M 10G 1% /var/lib/pgsql
/dev/mapper/vg_root-lv_cachepulp 10G 33M 10G 1% /var/cache/pulp
/dev/mapper/vg_root-lv_pulp 315G 302G 14G 96% /var/lib/pulp
/dev/mapper/vg_root-lv_mongo 10G 242M 9.8G 3% /var/lib/mongodb

The problem is that disk usage keeps growing for no good reason, while the sync has been stuck since June.

[root@foreman ~]# mongo pulp_database --eval 'db.task_status.find({"state": {"$in": ["waiting","running"]}}).count()'
MongoDB shell version v3.4.9
connecting to: mongodb://127.0.0.1:27017/pulp_database
MongoDB server version: 3.4.9
0
[root@fore-1 ~]#

curl https://hostname/pulp/api/v3/tasks/?state=waiting --cert /etc/pki/katello/certs/pulp-client.crt --key /etc/pki/katello/private/pulp-client.key | jq

It shows a lot of waiting tasks still to be completed.

pulp-admin shows no waiting tasks.
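To quantify the difference, the waiting tasks can also simply be counted (a quick sketch; it assumes the standard paginated Pulp 3 task-list response with a top-level "count" field):

# count waiting Pulp tasks via the API (the "count" field is assumed from the usual paginated response)
curl -s 'https://hostname/pulp/api/v3/tasks/?state=waiting' --cert /etc/pki/katello/certs/pulp-client.crt --key /etc/pki/katello/private/pulp-client.key | jq '.count'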

Tried commands like:

foreman-rake foreman_tasks:cleanup TASK_SEARCH='result ==pending' STATES='running' VERBOSE=true
foreman-rake foreman_tasks:cleanup TASK_SEARCH='result ==error' STATES='stopped' VERBOSE=true
and so on, to clean up orphaned tasks, with no success. It only deleted the tasks, so I could start a hammer repository update without problems:
hammer repository list --organization-label='my_organization_name' | awk '{print $1}' | egrep -v '||ID' | less | perl -ne 'print;chomp;system("hammer repository update --download-policy immediate --id $_")'
It was successful until I started a hammer sync again.

[root@foreman ~]# sudo su - postgres -c "psql -d foreman -c \"select label,count(label),state,result from foreman_tasks_tasks where state <> 'stopped' group by label,state,result ORDER BY label;\""
label | count | state | result
------------------------------------+-------+-----------+---------
Actions::BulkAction | 1 | running | pending
Actions::Katello::Repository::Sync | 1 | running | pending
Actions::Katello::SyncPlan::Run | 9 | scheduled | pending
CreateExpiredManifestNotifications | 1 | scheduled | pending
CreatePulpDiskSpaceNotifications | 1 | scheduled | pending
CreateRssNotifications | 1 | scheduled | pending
SendExpireSoonNotifications | 1 | scheduled | pending
StoredValuesCleanupJob | 1 | scheduled | pending
(8 rows)
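To see exactly which tasks those are (a sketch; it assumes the stock foreman_tasks_tasks columns id, label, state, result and started_at):

# list the individual non-stopped tasks, oldest first (column names assumed from the stock schema)
sudo su - postgres -c "psql -d foreman -c \"select id, label, state, result, started_at from foreman_tasks_tasks where state <> 'stopped' order by started_at;\""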

[root@foreman ~]# journalctl -u dynflow-sidekiq@*

Aug 11 16:25:02 foreman systemd[1]: Stopped Foreman jobs daemon - worker on sidekiq.
Aug 11 16:25:07 foreman systemd[1]: Stopping Foreman jobs daemon - orchestrator on sidekiq…
Aug 11 16:25:08 foreman systemd[1]: Stopped Foreman jobs daemon - orchestrator on sidekiq.
Aug 11 16:25:23 foreman systemd[1]: Starting Foreman jobs daemon - orchestrator on sidekiq…
Aug 11 16:25:27 foreman dynflow-sidekiq@orchestrator[53482]: 2021-08-11T15:25:27.049Z 53482 TID-bpv6e INFO: GitLab reliable fetch activated!
Aug 11 16:25:27 foreman dynflow-sidekiq@orchestrator[53482]: 2021-08-11T15:25:27.086Z 53482 TID-mwmim INFO: Booting Sidekiq 5.2.7 with redis options {:id=>"Side
Aug 11 16:25:30 foreman dynflow-sidekiq@orchestrator[53482]: /usr/share/foreman/lib/foreman.rb:8: warning: already initialized constant Foreman::UUID_REGEXP
Aug 11 16:25:30 foreman dynflow-sidekiq@orchestrator[53482]: /usr/share/foreman/lib/foreman.rb:8: warning: previous definition of UUID_REGEXP was here
Aug 11 16:26:18 foreman systemd[1]: Started Foreman jobs daemon - orchestrator on sidekiq.
Aug 11 16:26:54 foreman systemd[1]: Starting Foreman jobs daemon - worker on sidekiq…
Aug 11 16:26:55 foreman dynflow-sidekiq@worker[54564]: 2021-08-11T15:26:55.970Z 54564 TID-7sukw INFO: GitLab reliable fetch activated!
Aug 11 16:26:56 foreman dynflow-sidekiq@worker[54564]: 2021-08-11T15:26:55.997Z 54564 TID-guh50 INFO: Booting Sidekiq 5.2.7 with redis options {:id=>"Sidekiq-se
Aug 11 16:26:58 foreman dynflow-sidekiq@worker[54564]: /usr/share/foreman/lib/foreman.rb:8: warning: already initialized constant Foreman::UUID_REGEXP
Aug 11 16:26:58 foreman dynflow-sidekiq@worker[54564]: /usr/share/foreman/lib/foreman.rb:8: warning: previous definition of UUID_REGEXP was here
Aug 11 16:27:15 foreman systemd[1]: Started Foreman jobs daemon - worker on sidekiq.
Aug 11 16:27:15 foreman systemd[1]: Starting Foreman jobs daemon - worker-hosts-queue on sidekiq…
Aug 11 16:27:16 foreman dynflow-sidekiq@worker-hosts-queue[54633]: 2021-08-11T15:27:16.956Z 54633 TID-9qe7d INFO: GitLab reliable fetch activated!
Aug 11 16:27:16 foreman dynflow-sidekiq@worker-hosts-queue[54633]: 2021-08-11T15:27:16.956Z 54633 TID-l0ijl INFO: Booting Sidekiq 5.2.7 with redis options {:id=
Aug 11 16:27:19 foreman dynflow-sidekiq@worker-hosts-queue[54633]: /usr/share/foreman/lib/foreman.rb:8: warning: already initialized constant Foreman::UUID_REGE
Aug 11 16:27:19 foreman dynflow-sidekiq@worker-hosts-queue[54633]: /usr/share/foreman/lib/foreman.rb:8: warning: previous definition of UUID_REGEXP was here
Aug 11 16:27:36 foreman systemd[1]: Started Foreman jobs daemon - worker-hosts-queue on sidekiq.
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: E, [2021-08-11T21:42:52.868047 #53482] ERROR -- /parallel-executor-core: Skipping step in skipped i
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/execution_plan/steps/run_ste
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/execution_plan.rb:330:in ea
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/execution_plan.rb:330:in sk
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/execution_plan.rb:217:in bl
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/execution_plan.rb:217:in ea
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/execution_plan.rb:217:in pr
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/director.rb:251:in rescue!'
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/director.rb:221:in try_to_r
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/director.rb:214:in unless_d
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/director.rb:177:in `work_fin
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/executors/abstract/core.rb:4
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/executors/sidekiq/core.rb:71
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: [ concurrent-ruby ]
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/executors/abstract/core.rb:1
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: [ concurrent-ruby ]
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/executors/sidekiq/orchestrat
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/executors/sidekiq/serializat
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: [ sidekiq ]
Aug 11 21:42:52 foreman dynflow-sidekiq@orchestrator[53482]: [ concurrent-ruby ]

Maybe I have to clean up some space. Any better ideas for freeing space in the pulp directory after removing some content views?

BR

I can’t answer your actual question (how to fix that), but:
Pulp3 is not using MongoDB anymore, so you won’t see anything in there. And the pulp-admin CLI is also for Pulp2 only. You’d need the python3-pulp-cli package, but I am not sure it’s available for your Foreman/Katello version.

For the actual issue, let’s see what @katello thinks

Thank you for the info @evgeni. I installed pulp-cli and I see there are differences between pulp task list and

curl https://foreman/pulp/api/v3/tasks/?state=waiting --cert /etc/pki/katello/certs/pulp-client.crt --key /etc/pki/katello/private/pulp-client.key | jq

pulp task list shows only the newer ones, while curl shows both new and old stuck tasks…
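For comparison, the waiting tasks can also be listed from the CLI (a sketch; it assumes this pulp-cli version supports the --state and --limit filters):

# list waiting tasks via pulp-cli (--state/--limit assumed to be available in this version)
pulp task list --state waiting --limit 20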

BR

OK, your hint about the pulp CLI solved the issue. Yes, it is a lot of steps, but it seems to help:

Please note: this is unverified!
These steps should be verified by the @katello team.

pip install pulp-cli

for i in pulp_celerybeat pulp_resource_manager pulp_workers; do service $i stop; done
curl https://foreman/pulp/api/v3/tasks/?state=waiting --cert /etc/pki/katello/certs/pulp-client.crt --key /etc/pki/katello/private/pulp-client.key | jq | grep /tasks/ | awk '{print $2}' | tr -d ',' | tr -d '"' > pulpcancel.txt
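The same list can be built with jq alone instead of grep/awk/tr (a sketch; it assumes the usual paginated Pulp 3 response with results[].pulp_href; note that the API paginates, so you may need to raise the limit or repeat the call):

# extract the hrefs of waiting tasks directly with jq (pulp_href field and ?limit= parameter assumed)
curl -s 'https://foreman/pulp/api/v3/tasks/?state=waiting&limit=1000' --cert /etc/pki/katello/certs/pulp-client.crt --key /etc/pki/katello/private/pulp-client.key | jq -r '.results[].pulp_href' > pulpcancel.txt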

for taskid in $(cat pulpcancel.txt); do pulp task cancel --href $taskid; echo ; done

for i in pulp_celerybeat pulp_resource_manager pulp_workers; do service $i start; done

Check again whether there are any waiting tasks left in Pulp:
curl https://foreman/pulp/api/v3/tasks/?state=waiting --cert /etc/pki/katello/certs/pulp-client.crt --key /etc/pki/katello/private/pulp-client.key | jq | grep /tasks/ | awk '{print $2}' | tr -d ',' | tr -d '"'

Suggestion to remove all pending, waiting, paused, etc. tasks:

foreman-rake foreman_tasks:cleanup TASK_SEARCH='result ==pending' STATES='running' VERBOSE=true

Example (dry run first):
foreman-rake foreman_tasks:cleanup TASK_SEARCH='result ==pending' STATES='running' VERBOSE=true NOOP=true
If you are sure, remove NOOP=true from the command.

hammer repository list --organization-label='my_company_name' | awk '{print $1}' | egrep -v '||ID' | less | perl -ne 'print;chomp;system("hammer repository update --download-policy immediate --id $_")'
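A tidier variant of that repository-update loop, avoiding the awk/egrep parsing (a sketch; it assumes this hammer version supports the global --csv flag and the --fields option):

# switch every repository in the organization to the immediate download policy (--csv/--fields assumed available)
hammer --csv repository list --organization-label='my_company_name' --fields id \
  | tail -n +2 \
  | while read -r id; do hammer repository update --download-policy immediate --id "$id"; done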

Try to sync, but wait for some time first, because it needs to check the contents again.
The percentage will change from 4% to 37%, getting "stuck" at another task.

@admins, if this could cause any danger for other users, please remove my post…

I forgot to mention that you also need to restart the Katello services:

foreman-maintain service restart
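Afterwards, a quick check that everything came back up:

# verify all services are running after the restart
foreman-maintain service status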