Foremanctl Version 1.1.0

Problem: pulp-worker CrashLoopBackOff

Expected outcome: The containers run without constantly restarting.

Foreman and Proxy versions: Foremanctl Version 1.1.0 with Foreman 3.17

Foreman and Proxy plugin versions:

Distribution and version: Rocky Linux 9.7

Other relevant data: Rocky Linux 9.7 runs on a Proxmox
Hello everyone,

In addition to my RPM-based Foreman/Katello, I am running foremanctl version 1.1.0 to check out the container-based solution.
I used the official repo “@theforeman/foremanctl rhel-9-x86_64” to install foremanctl.
Everything worked fine and is running without any problems, although I wondered why there is no “foreman-proxy” container, but that’s another topic. :wink:
And at this point, many thanks for developing foremanctl, which I find makes the container-based solution much easier. :slight_smile:
But I’m seeing an effect that I can’t explain.
After I restart the VM and the containers, the pulp-worker containers go into a CrashLoopBackOff and I don’t know why.
In the logs, I can only see that the workers already exist in the database, but I have no idea why the containers keep restarting. Maybe I’m overlooking something.
The only thing that helps is a fresh installation, but then the problem comes back, and so on.
Does anyone have any idea what the problem is and how I can fix it?

Best regards,
Dirk

```
Feb 22 19:34:17 foremancontainer.linux.schnell.er candlepin[2131]: 22-Feb-2026 18:34:17.491 WARNING [main] com.google.inject.internal.ProxyFactory. Method [public void org.candlepin.model.EntitlementCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@58d18c09]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
Feb 22 19:34:17 foremancontainer.linux.schnell.er pulp-worker-3[2637]: pulp [None]: pulpcore.tasking.entrypoint:INFO: Starting distributed type worker
Feb 22 19:34:17 foremancontainer.linux.schnell.er pulp-worker-5[2699]: pulp [None]: pulpcore.tasking.entrypoint:INFO: Starting distributed type worker
Feb 22 19:34:17 foremancontainer.linux.schnell.er candlepin[2131]: 22-Feb-2026 18:34:17.608 WARNING [main] com.google.inject.internal.ProxyFactory. Method [public void org.candlepin.model.EntitlementCertificateCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@58d18c09]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
Feb 22 19:34:17 foremancontainer.linux.schnell.er candlepin[2131]: 22-Feb-2026 18:34:17.620 WARNING [main] com.google.inject.internal.ProxyFactory. Method [public org.candlepin.model.Persisted org.candlepin.model.OwnerCurator.create(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@58d18c09]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
Feb 22 19:34:17 foremancontainer.linux.schnell.er pulp-worker-3[2637]: pulp [None]: pulpcore.tasking.worker:ERROR: A worker with name 1@pulp-worker-3.foremancontainer.linux.schnell.er already exists in the database.
Feb 22 19:34:17 foremancontainer.linux.schnell.er candlepin[2131]: 22-Feb-2026 18:34:17.740 WARNING [main] com.google.inject.internal.ProxyFactory. Method [public org.candlepin.model.Persisted org.candlepin.model.ProductCurator.create(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@58d18c09]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
Feb 22 19:34:17 foremancontainer.linux.schnell.er candlepin[2131]: 22-Feb-2026 18:34:17.741 WARNING [main] com.google.inject.internal.ProxyFactory. Method [public void org.candlepin.model.ProductCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@58d18c09]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
Feb 22 19:34:17 foremancontainer.linux.schnell.er candlepin[2131]: 22-Feb-2026 18:34:17.746 WARNING [main] com.google.inject.internal.ProxyFactory. Method [public org.candlepin.model.Persisted org.candlepin.model.ProductCurator.merge(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@58d18c09]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
Feb 22 19:34:17 foremancontainer.linux.schnell.er pulp-worker-5[2699]: pulp [None]: pulpcore.tasking.worker:ERROR: A worker with name 1@pulp-worker-5.foremancontainer.linux.schnell.er already exists in the database.
Feb 22 19:34:17 foremancontainer.linux.schnell.er candlepin[2131]: 22-Feb-2026 18:34:17.814 WARNING [main] com.google.inject.internal.ProxyFactory. Method [public org.candlepin.model.Persisted org.candlepin.model.ConsumerCurator.create(org.candlepin.model.Persisted,boolean)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@58d18c09]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
Feb 22 19:34:17 foremancontainer.linux.schnell.er candlepin[2131]: 22-Feb-2026 18:34:17.815 WARNING [main] com.google.inject.internal.ProxyFactory. Method [public void org.candlepin.model.ConsumerCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@58d18c09]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
Feb 22 19:34:17 foremancontainer.linux.schnell.er candlepin[2131]: 22-Feb-2026 18:34:17.909 WARNING [main] com.google.inject.internal.ProxyFactory. Method [public void org.candlepin.model.PoolCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@58d18c09]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
Feb 22 19:34:17 foremancontainer.linux.schnell.er pulp-worker-2[2718]: pulp [None]: pulpcore.tasking.entrypoint:INFO: Starting distributed type worker
Feb 22 19:34:18 foremancontainer.linux.schnell.er pulp-worker-2[2718]: pulp [None]: pulpcore.tasking.worker:ERROR: A worker with name 1@pulp-worker-2.foremancontainer.linux.schnell.er already exists in the database.
Feb 22 19:34:18 foremancontainer.linux.schnell.er candlepin[2131]: 22-Feb-2026 18:34:18.154 WARNING [main] com.google.inject.internal.ProxyFactory. Method [public org.candlepin.model.Persisted org.candlepin.model.RulesCurator.create(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@58d18c09]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
Feb 22 19:34:18 foremancontainer.linux.schnell.er candlepin[2131]: 22-Feb-2026 18:34:18.155 WARNING [main] com.google.inject.internal.ProxyFactory. Method [public void org.candlepin.model.RulesCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@58d18c09]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
Feb 22 19:34:18 foremancontainer.linux.schnell.er pulp-worker-1[2671]: pulp [None]: pulpcore.tasking.entrypoint:INFO: Starting distributed type worker
Feb 22 19:34:18 foremancontainer.linux.schnell.er pulp-worker-1[2671]: pulp [None]: pulpcore.tasking.worker:ERROR: A worker with name 1@pulp-worker-1.foremancontainer.linux.schnell.er already exists in the database.
Feb 22 19:34:18 foremancontainer.linux.schnell.er pulp-worker-8[2724]: pulp [None]: pulpcore.tasking.entrypoint:INFO: Starting distributed type worker
Feb 22 19:34:18 foremancontainer.linux.schnell.er podman[2821]: 2026-02-22 19:34:18.75366536 +0100 CET m=+0.381092036 container died 77c30dd3b8364ab52af81cfff769ad3f34699c604bff91619d469cb8aac9a11e (image=quay.io/foreman/pulp:foreman-3.17, name=pulp-worker-7, io.buildah.version=1.42.2, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, PODMAN_SYSTEMD_UNIT=pulp-worker@7.service)
Feb 22 19:34:18 foremancontainer.linux.schnell.er pulp-worker-8[2724]: pulp [None]: pulpcore.tasking.worker:ERROR: A worker with name 1@pulp-worker-8.foremancontainer.linux.schnell.er already exists in the database.
Feb 22 19:34:18 foremancontainer.linux.schnell.er systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77c30dd3b8364ab52af81cfff769ad3f34699c604bff91619d469cb8aac9a11e-userdata-shm.mount: Deactivated successfully.
Feb 22 19:34:18 foremancontainer.linux.schnell.er systemd[1]: var-lib-containers-storage-overlay-b76164e6eb0ef63dba977ef0cf1bef9dc5b6e3cf1dc893155c8690cc5c6c3d4b-merged.mount: Deactivated successfully.
```
```
pulpcore.tasking.worker:ERROR: A worker with name 1@pulp-worker-2.foremancontainer.linux.schnell.er already exists in the database.
```

This is your issue: there is a stale database entry for the worker (probably left over from before the reboot), so the worker now refuses to start.
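For anyone stuck in this state, one possible manual workaround is to clear the stale worker rows and restart the workers. This is only a sketch: the container name (`postgresql`), database name (`pulpcore`), and table name (`core_worker`, pulpcore’s default Django table for worker records) are assumptions here, so verify them against your deployment before running anything:

```shell
#!/bin/sh
# Hypothetical cleanup sketch -- all names below are assumptions, verify first:
DB_CONTAINER=postgresql   # assumed name of the foremanctl PostgreSQL container
PULP_DB=pulpcore          # assumed name of the Pulp database

# Inspect first; only delete if the listed workers are clearly stale leftovers:
LIST_SQL="SELECT name, last_heartbeat FROM core_worker;"
CLEAN_SQL="DELETE FROM core_worker;"

# On the affected host you would then run, e.g.:
#   podman exec $DB_CONTAINER psql -d $PULP_DB -c "$LIST_SQL"
#   podman exec $DB_CONTAINER psql -d $PULP_DB -c "$CLEAN_SQL"
#   systemctl restart 'pulp-worker@*.service'
echo "podman exec $DB_CONTAINER psql -d $PULP_DB -c \"$LIST_SQL\""
```

Deleting the rows should be safe only while the workers themselves are down; a freshly started worker recreates its own record.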

In theory (!) this should not happen, but the fact that you’re running into it clearly shows that it does :confused:

Some time ago we implemented “ensure apps are started after databases, and stopped before” (theforeman/foremanctl PR #291 on GitHub), which is supposed to order the services so that during shutdown the pulp workers have enough time to clean up the DB before the DB itself is shut down. This clearly didn’t work for you.
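The kind of ordering that PR aims for can be illustrated with a systemd drop-in sketch (the unit names here are illustrative assumptions, not necessarily what foremanctl actually ships):

```ini
# Illustrative drop-in, e.g. /etc/systemd/system/pulp-worker@.service.d/ordering.conf
[Unit]
# Start the worker only after PostgreSQL is up. Because systemd stops units in
# reverse startup order, the worker is then stopped (and can delete its record
# from the DB) before PostgreSQL itself goes down.
After=postgresql.service
Requires=postgresql.service
```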

Could you provide the logs (from journalctl) of the pulpcore workers and PostgreSQL for the time between when you typed `reboot` and when the machine was off?

I have the same issue.

Foremanctl 1.2.0 running with foreman:3.18.

I don’t know when it happened, but I found that after upgrading to 3.18, an old 3.16 worker was still being started. I don’t know whether that was the problem or not. Now we only get:
```
pulp-worker-1[27925]: pulp [None]: pulpcore.tasking.entrypoint:INFO: Starting distributed type worker
pulp-worker-1[27925]: pulp [None]: pulpcore.tasking.worker:ERROR: A worker with name 1@host.containers.internal already exists in the database.
podman[27983]: 2026-03-02 14:50:55.162688538 +0100 CET m=+0.015362707 container died a3e7c37752390b2c025b2b0a0f0aaf20b9117343ee8a2948b38a8e4f2f4540a8 (image=quay.io/foreman/pulp:foreman-3.18, name=pulp-worker-1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, PODMAN_SYSTEMD_UNIT=pulp-worker@1.service, io.buildah.version=1.42.2, org.label-schema.build-date=20251118)
podman[27983]: 2026-03-02 14:50:55.242478319 +0100 CET m=+0.095152492 container remove a3e7c37752390b2c025b2b0a0f0aaf20b9117343ee8a2948b38a8e4f2f4540a8 (image=quay.io/foreman/pulp:foreman-3.18, name=pulp-worker-1, PODMAN_SYSTEMD_UNIT=pulp-worker@1.service, io.buildah.version=1.42.2, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
systemd[1]: pulp-worker@1.service: Main process exited, code=exited, status=1/FAILURE
```

Hello evgeni,

Please excuse my late reply, sorry :frowning:

So, I don’t have any logs for you, but I just recreated the problem here and ‘successfully’ ran into it again.
I had version 1.1.0 here, updated to version 1.2.0, and then performed the following two steps:

```
foremanctl pull-images
foremanctl deploy
```

and then the problem described above occurs.
The containers crash immediately and nothing works anymore.
As it stands, it appears to have nothing to do with restarting the VM; it seems to be the update process.

Hope that helps a little. :slight_smile:

Kind regards,
Dirk

Thanks, I’ll try to reproduce that.

Just making sure, you never stopped the services manually (e.g. via systemctl stop foreman.target)?

No, I didn’t do that.

The containers were all running and I didn’t stop any systemd targets.

Can you show me the output of `podman images`, so it’s easier to see which container versions you started with and moved to?

Sure, here are the images:

```
[root@foremancontainer ~]# podman images
REPOSITORY                        TAG           IMAGE ID      CREATED        SIZE
quay.io/foreman/foreman           3.18          487e23d3ece3  6 days ago     978 MB
quay.io/sclorg/postgresql-13-c9s  latest        5bb3d51b9145  4 weeks ago    379 MB
quay.io/foreman/foreman           3.17          edaba8af96c1  7 weeks ago    1.04 GB
quay.io/foreman/pulp              foreman-3.18  7c6575297a81  7 weeks ago    650 MB
quay.io/foreman/pulp              foreman-3.17  7c6575297a81  7 weeks ago    650 MB
quay.io/foreman/candlepin         foreman-3.18  0de14aef6275  7 weeks ago    719 MB
quay.io/foreman/candlepin         foreman-3.17  0de14aef6275  7 weeks ago    719 MB
quay.io/sclorg/redis-6-c9s        latest        c7ac11061bbe  14 months ago  280 MB
[root@foremancontainer ~]#
```