Importing manifest taking forever

Problem:
Importing the manifest is taking forever; it looks stuck at 34% and is not moving any further.

Expected outcome:
Manifest imported into Foreman and subscriptions are available.
Foreman and Proxy versions:
Foreman and Proxy are 2.4.1

Foreman and Proxy plugin versions:
Foreman 2.4.1, Katello 4.0.1.2
Distribution and version:
RHEL7.9 (Maipo)
Other relevant data:


Hello there,

Any idea, how to fix it?
I am not able to import/refresh the manifest; it has been stuck at 34% and not moving any further for days.

Please help.

Best regards,
Balaji Sankaran

Hey,

Is this happening recently, after an upgrade, or did it just never work?

Do you see anything in your logs @Balaji_Arun_Kumar_Sa ?
Can you share anything from there that might help us find a solution?

Hello Mcorr,

Below are entries from the log messages:

Sep 13 11:00:57 hostname pulpcore-api: pulp [None]: django_guid:INFO: Header Correlation-ID was not found in the incoming request. Generated new GUID: 0474e410dcbb481c9d1ca59895419339
Sep 13 11:00:57 hostname pulpcore-api: - - [13/Sep/2021:16:00:57 +0000] "GET /pulp/api/v3/tasks/04343eaf-e544-41c7-b70f-737c36f12646/ HTTP/1.1" 200 630 "-" "OpenAPI-Generator/3.9.0/ruby"
Sep 13 11:00:58 hostname pulpcore-api: pulp [None]: django_guid:INFO: Header Correlation-ID was not found in the incoming request. Generated new GUID: 9aca8239af9449f2a66efe31a5d7d38a
Sep 13 11:00:58 hostname pulpcore-api: - - [13/Sep/2021:16:00:58 +0000] "GET /pulp/api/v3/tasks/f788f5be-42d9-40a1-bb6f-fe90aaab6e1b/ HTTP/1.1" 200 506 "-" "OpenAPI-Generator/3.9.0/ruby"
Sep 13 11:00:58 hostname pulpcore-api: pulp [None]: django_guid:INFO: Header Correlation-ID was not found in the incoming request. Generated new GUID: 8641cddc0cfe41c6af1b3162bf27c1f6
Sep 13 11:00:58 hostname pulpcore-api: - - [13/Sep/2021:16:00:58 +0000] "GET /pulp/api/v3/tasks/9dd7f0eb-da42-4fae-9fc6-4ce4c583bff3/ HTTP/1.1" 200 630 "-" "OpenAPI-Generator/3.9.0/ruby"

Today I tried cancelling the manifest import, as it had been running for days. I had to hit cancel about 4 times; every time I hit cancel, the progress moved forward, and eventually it completed.
In the manifest history, I see the manifest import was reported successful on 9/10/2021, but I am not sure it really imported the manifest, as some of the repository syncs are failing; I have attached a screenshot of the same.
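For anyone who lands on this thread later: the progress bar in the web UI mirrors the Pulp task records that the `GET /pulp/api/v3/tasks/...` requests in the log above are polling, so a stuck import can also be confirmed from the task list itself. Here is a minimal Python sketch (my own illustration, not something from this thread) that filters a pulpcore 3 task listing for long-running tasks; it assumes only the standard `state` and `started_at` fields of the `/pulp/api/v3/tasks/` results, and the sample task hrefs below are made up.

```python
from datetime import datetime, timedelta, timezone

def stuck_tasks(tasks, max_age_hours=24):
    """Return tasks still in the 'running' state older than max_age_hours.

    `tasks` is a list of dicts shaped like pulpcore's /pulp/api/v3/tasks/
    results; only the 'state' and 'started_at' fields are used here.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    return [
        t for t in tasks
        if t.get("state") == "running"
        and datetime.fromisoformat(t["started_at"]) < cutoff
    ]

# Example with sample data (hypothetical task hrefs):
sample = [
    {"pulp_href": "/pulp/api/v3/tasks/aaaa0000/", "state": "running",
     "started_at": "2021-09-10T12:00:00+00:00"},
    {"pulp_href": "/pulp/api/v3/tasks/bbbb1111/", "state": "completed",
     "started_at": "2021-09-13T15:00:00+00:00"},
]
for t in stuck_tasks(sample):
    print(t["pulp_href"])
```

A task that shows as "running" for days, like the one in this thread, would be flagged by a filter like this, which can help distinguish a genuinely stuck task from one that is just slow.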

Hello Mcorr,

Forgot to mention: this started happening recently, and I also tried restoring from a backup; same issue.

Best regards,
Balaji Sankaran

Thanks for bringing this to our attention. I will see if someone from the Katello team has insight into what might be happening here.

@Balaji_Arun_Kumar_Sa you may want to upgrade to Katello 4.1, as we think this might be a problem with the pulp tasking system.


Thank you, jjeffers!
I will upgrade and update you.

Best regards,
Balaji Sankaran

Great, thanks!

Hello jjeffers,

I am getting the following error when I run yum update; it looks like the package qpid-proton-c-0.34.0-1.el7.x86_64 is no longer available from EPEL.

Error: Package: tfm-rubygem-qpid_proton-0.34.0-3.el7.x86_64 (katello)
Requires: qpid-proton-c = 0.34.0
Removing: qpid-proton-c-0.34.0-1.el7.x86_64 (@epel)
qpid-proton-c = 0.34.0-1.el7
Updated By: qpid-proton-c-0.35.0-1.el7.x86_64 (epel)
qpid-proton-c = 0.35.0-1.el7
Error: Package: python2-qpid-proton-0.34.0-1.el7.x86_64 (@epel)
Requires: qpid-proton-c(x86-64) = 0.34.0-1.el7
Removing: qpid-proton-c-0.34.0-1.el7.x86_64 (@epel)
qpid-proton-c(x86-64) = 0.34.0-1.el7
Updated By: qpid-proton-c-0.35.0-1.el7.x86_64 (epel)
qpid-proton-c(x86-64) = 0.35.0-1.el7


yum can be configured to try to resolve such errors by temporarily enabling
disabled repos and searching for missing dependencies.
To enable this functionality please set 'notify_only=0' in /etc/yum/pluginconf.d/search-disabled-repos.conf


--> Running transaction check
---> Package kernel.x86_64 0:3.10.0-1160.25.1.el7 will be erased
---> Package qpid-proton-c.x86_64 0:0.34.0-1.el7 will be updated
--> Processing Dependency: qpid-proton-c = 0.34.0 for package: tfm-rubygem-qpid_proton-0.34.0-3.el7.x86_64
--> Processing Dependency: qpid-proton-c(x86-64) = 0.34.0-1.el7 for package: python2-qpid-proton-0.34.0-1.el7.x86_64
---> Package tfm-rubygem-qpid_proton.x86_64 0:0.34.0-3.el7 will be an update
--> Processing Dependency: qpid-proton-c = 0.34.0 for package: tfm-rubygem-qpid_proton-0.34.0-3.el7.x86_64
--> Finished Dependency Resolution
Error: Package: tfm-rubygem-qpid_proton-0.34.0-3.el7.x86_64 (katello)
Requires: qpid-proton-c = 0.34.0
Removing: qpid-proton-c-0.34.0-1.el7.x86_64 (@epel)
qpid-proton-c = 0.34.0-1.el7
Updated By: qpid-proton-c-0.35.0-1.el7.x86_64 (epel)
qpid-proton-c = 0.35.0-1.el7
Error: Package: python2-qpid-proton-0.34.0-1.el7.x86_64 (@epel)
Requires: qpid-proton-c(x86-64) = 0.34.0-1.el7
Removing: qpid-proton-c-0.34.0-1.el7.x86_64 (@epel)
qpid-proton-c(x86-64) = 0.34.0-1.el7
Updated By: qpid-proton-c-0.35.0-1.el7.x86_64 (epel)
qpid-proton-c(x86-64) = 0.35.0-1.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
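For reference, one common workaround for this kind of EPEL/Katello version conflict (my own suggestion, not something confirmed in this thread) is to exclude the newer qpid-proton-c build from EPEL so yum keeps the 0.34.0 version that the Katello packages require. A sketch of the relevant lines in /etc/yum.repos.d/epel.repo:

```ini
; /etc/yum.repos.d/epel.repo -- sketch, only the relevant section shown
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
enabled=1
; Prevent EPEL from updating to qpid-proton-c 0.35.x, which conflicts
; with tfm-rubygem-qpid_proton's hard requirement on 0.34.0:
exclude=qpid-proton-c*
```

With the exclude in place, a subsequent `yum update` should skip the 0.35.0 build; the alternative is to install the pinned 0.34.0 packages manually, which is what was eventually done in this thread.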

Following is the yum repolist output:

repo id repo name status
centos-sclo-rh/x86_64 CentOS-7 - SCLo rh 7,650
*epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 13,665
foreman/x86_64 Foreman 2.5 712
foreman-plugins/x86_64 Foreman plugins 2.5 424
katello/x86_64 Katello 4.1 130
katello-candlepin/x86_64 Candlepin: an open source entitlement management system. 6
pulpcore/x86_64 pulpcore: Fetch, Upload, Organize, and Distribute Software Packages. 187
puppet6/x86_64 Puppet 6 Repository el 7 - x86_64 318
rhel-7-server-extras-rpms/x86_64 Red Hat Enterprise Linux 7 Server - Extras (RPMs) 1,406
rhel-7-server-optional-rpms/7Server/x86_64 Red Hat Enterprise Linux 7 Server - Optional (RPMs) 23,069
rhel-7-server-rpms/7Server/x86_64 Red Hat Enterprise Linux 7 Server (RPMs) 32,191

Best regards,
Balaji Sankaran

Check out the answer here:

Hello jjeffers,

I downloaded the packages manually and was able to run yum update successfully, but when I run foreman-installer I get the following errors:

[root@hostname]# foreman-installer
2021-09-15 18:03:16 [NOTICE] [root] Loading installer configuration. This will take some time.
2021-09-15 18:03:22 [NOTICE] [root] Running installer with log based terminal output at level NOTICE.
2021-09-15 18:03:22 [NOTICE] [root] Use -l to set the terminal output log level to ERROR, WARN, NOTICE, INFO, or DEBUG. See --full-help for definitions.
2021-09-15 18:03:29 [NOTICE] [configure] Starting system configuration.
2021-09-15 18:03:45 [NOTICE] [configure] 250 configuration steps out of 2045 steps complete.
2021-09-15 18:04:00 [NOTICE] [configure] 500 configuration steps out of 2045 steps complete.
2021-09-15 18:04:01 [NOTICE] [configure] 750 configuration steps out of 2049 steps complete.
2021-09-15 18:04:03 [NOTICE] [configure] 1000 configuration steps out of 2052 steps complete.
2021-09-15 18:04:04 [NOTICE] [configure] 1250 configuration steps out of 2056 steps complete.
2021-09-15 18:04:55 [NOTICE] [configure] 1500 configuration steps out of 2057 steps complete.
2021-09-15 18:06:27 [NOTICE] [configure] 1750 configuration steps out of 2057 steps complete.
2021-09-15 18:07:01 [NOTICE] [configure] 2000 configuration steps out of 2057 steps complete.
2021-09-15 18:07:05 [ERROR ] [configure] /Stage[main]/Foreman_proxy::Register/Foreman_smartproxy[hostname]: Could not evaluate: Error making GET request to Foreman at https://hostname/api/v2/smart_proxies: Response: 422 Unprocessable Entity
2021-09-15 18:07:06 [ERROR ] [configure] /Stage[main]/Foreman_proxy::Register/Foreman_smartproxy[hostname]: Failed to call refresh: Error making GET request to Foreman at https://hostname/api/v2/smart_proxies: Response: 422 Unprocessable Entity
2021-09-15 18:07:06 [ERROR ] [configure] /Stage[main]/Foreman_proxy::Register/Foreman_smartproxy[hostname]: Error making GET request to Foreman at https://hostname/api/v2/smart_proxies: Response: 422 Unprocessable Entity
2021-09-15 18:07:09 [NOTICE] [configure] System configuration has finished.

There were errors detected during install.
Please address the errors and re-run the installer to ensure the system is properly configured.
Failing to do so is likely to result in broken functionality.

The full log is at /var/log/foreman-installer/katello.log

This may be related to Bug #32299: Installation of Katello 4 RC3 fails when --foreman-proxy-ssl-port is not set to default 9090 - SELinux - Foreman

Thank you so much jjeffers!
It was not working with SELinux set to permissive; I set SELinux to enforcing and was able to upgrade successfully.

Best regards,
Balaji Sankaran


Great!

Hello jjeffers,

I think I spoke too soon. After the upgrade, when I log in to the Foreman web GUI, I get an error. Attached is the error; please help.

Best regards,
Balaji Sankaran

Check that you have enough RAM available, and restart Candlepin with

foreman-maintain service restart --only tomcat

Hello jeremylenz,

I have 32 GB of RAM. I tried restarting Candlepin with the command you recommended, but I am getting the same error.

Best regards,
Balaji Sankaran

hmm… what is the output of foreman-maintain service status?

All services are running OK, and the following is what I see from the status command:

Sep 21 15:50:27 hostname systemd[1]: Starting Foreman Proxy...
Sep 21 15:50:29 hostname smart-proxy[1378]: /usr/share/foreman-proxy is not writable.
Sep 21 15:50:29 hostname smart-proxy[1378]: Bundler will use `/tmp/bundler20210921-1378-10jmmex1378' as your home directory temporarily.
Sep 21 15:50:29 hostname smart-proxy[1378]: Your Gemfile lists the gem rsec (< 1) more than once.
Sep 21 15:50:29 hostname smart-proxy[1378]: You should probably keep only one of them.
Sep 21 15:50:29 hostname smart-proxy[1378]: Remove any duplicate entries and specify the gem only once.
Sep 21 15:50:29 hostname smart-proxy[1378]: While it's not a problem now, it could cause errors if you change the version of one of them later.
Sep 21 15:50:30 hostname systemd[1]: Started Foreman Proxy.
Sep 21 16:05:21 hostname smart-proxy[1378]: 10.220.146.89 - - [21/Sep/2021:16:05:21 CDT] "GET /features HTTP/1.1" 200 73
Sep 21 16:05:21 hostname smart-proxy[1378]: - -> /features
All services are running [OK]