Issues syncing Red Hat yum repositories

I am experiencing issues syncing any of the core Red Hat repositories; I keep getting 502 errors. As part of troubleshooting I have rebuilt the Foreman server on both the current version and the previous version, and the issue continues to happen.

I am running the latest versions of Katello (3.18.1) and Foreman (2.3.2). The failure seems to happen at the same point each time.

The server is running Red Hat Enterprise Linux 7.8 and is fully up to date.

The "index content" step seems to be what is failing; all other steps succeed, and no packages end up being cached.

I have also posted on the Red Hat discussion forum, as I suspected this issue might be with their repositories. However, the Red Hat community recommended raising a support case with Red Hat, and I am on self-support so I cannot do that. Additionally, Foreman is not supported by Red Hat.
Below is the Dynflow task output and the error.

4: Actions::Pulp3::Repository::Sync (success) [ 562.30s / 19.57s ]
7: Actions::Pulp3::Repository::SaveVersion (success) [ 0.20s / 0.20s ]
10: Actions::Pulp3::Repository::CreatePublication (success) [ 1442.79s / 56.42s ]
12: Actions::Pulp3::Repository::SavePublication (success) [ 0.15s / 0.15s ]
16: Actions::Pulp3::Repository::RefreshDistribution (success) [ 2.93s / 1.21s ]
18: Actions::Pulp3::Repository::SaveDistributionReferences (success) [ 0.42s / 0.42s ]
19: Actions::Pulp3::Orchestration::Repository::Sync (success) [ 0.08s / 0.08s ]
21: Actions::Katello::Repository::IndexContent (skipped) [ 32.95s / 31.62s ]

Queue: default

Started at: 2021-02-02 00:56:10 UTC

Ended at: 2021-02-02 00:56:42 UTC

Real time: 32.95s

Execution time (excluding suspended state): 31.62s

Input:


id: 1
contents_changed: true
current_request_id:
current_timezone: UTC
current_user_id: 4
current_organization_id:
current_location_id:

Output:

--- {}

Error:

PulpRpmClient::ApiError

Error message: the server returns an error HTTP status code: 502 Response headers: {"date"=>"Tue, 02 Feb 2021 00:56:11 GMT", "server"=>"Apache", "content-length"=>"445", "connection"=>"close", "content-type"=>"text/html; charset=iso-8859-1"} Response body: 502 Proxy Error

Proxy Error

The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /pulp/api/v3/content/rpm/packages/.

Reason: Error reading from remote server


  • "/opt/theforeman/tfm/root/usr/share/gems/gems/pulp_rpm_client-3.7.0/lib/pulp_rpm_client/api_client.rb:81:in `call_api'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/pulp_rpm_client-3.7.0/lib/pulp_rpm_client/api/content_packages_api.rb:236:in `list_with_http_info'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/pulp_rpm_client-3.7.0/lib/pulp_rpm_client/api/content_packages_api.rb:130:in `list'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.1/app/services/katello/pulp3/pulp_content_unit.rb:93:in `content_unit_list'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.1/app/services/katello/pulp3/pulp_content_unit.rb:106:in `fetch_content_list'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.1/app/services/katello/pulp3/pulp_content_unit.rb:75:in `block (2 levels) in pulp_units_batch_for_repo'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.1/app/services/katello/pulp3/pulp_content_unit.rb:69:in `loop'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.1/app/services/katello/pulp3/pulp_content_unit.rb:69:in `block in pulp_units_batch_for_repo'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.1/app/models/katello/concerns/pulp_database_unit.rb:120:in `each'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.1/app/models/katello/concerns/pulp_database_unit.rb:120:in `each'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.1/app/models/katello/concerns/pulp_database_unit.rb:120:in `import_for_repository'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.1/app/models/katello/repository.rb:902:in `block (2 levels) in index_content'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.1/app/lib/katello/logging.rb:6:in `time'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.1/app/models/katello/repository.rb:901:in `block in index_content'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.1/app/models/katello/repository.rb:900:in `each'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.1/app/models/katello/repository.rb:900:in `index_content'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.1/app/lib/actions/katello/repository/index_content.rb:17:in `run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/action.rb:571:in `block (3 levels) in execute_run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:27:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware.rb:19:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.1/app/lib/actions/middleware/execute_if_contents_changed.rb:5:in `run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:23:in `call'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:27:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware.rb:19:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-3.0.3/app/lib/actions/middleware/rails_executor_wrap.rb:14:in `block in run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/activesupport-6.0.3.4/lib/active_support/execution_wrapper.rb:88:in `wrap'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-3.0.3/app/lib/actions/middleware/rails_executor_wrap.rb:13:in `run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:23:in `call'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:27:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware.rb:19:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/action/progress.rb:31:in `with_progress_calculation'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/action/progress.rb:17:in `run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:23:in `call'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:27:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware.rb:19:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-3.0.3/app/lib/actions/middleware/keep_current_request_id.rb:15:in `block in run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-3.0.3/app/lib/actions/middleware/keep_current_request_id.rb:49:in `restore_current_request_id'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-3.0.3/app/lib/actions/middleware/keep_current_request_id.rb:15:in `run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:23:in `call'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:27:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware.rb:19:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-3.0.3/app/lib/actions/middleware/keep_current_timezone.rb:15:in `block in run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-3.0.3/app/lib/actions/middleware/keep_current_timezone.rb:44:in `restore_curent_timezone'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-3.0.3/app/lib/actions/middleware/keep_current_timezone.rb:15:in `run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:23:in `call'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:27:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware.rb:19:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-3.0.3/app/lib/actions/middleware/keep_current_user.rb:15:in `block in run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-3.0.3/app/lib/actions/middleware/keep_current_user.rb:44:in `restore_curent_user'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-3.0.3/app/lib/actions/middleware/keep_current_user.rb:15:in `run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:23:in `call'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:27:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware.rb:19:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-3.0.3/app/lib/actions/middleware/keep_current_taxonomies.rb:15:in `block in run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-3.0.3/app/lib/actions/middleware/keep_current_taxonomies.rb:45:in `restore_current_taxonomies'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-3.0.3/app/lib/actions/middleware/keep_current_taxonomies.rb:15:in `run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:23:in `call'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:27:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware.rb:19:in `pass'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware.rb:32:in `run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/stack.rb:23:in `call'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/middleware/world.rb:31:in `execute'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/action.rb:570:in `block (2 levels) in execute_run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/action.rb:569:in `catch'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/action.rb:569:in `block in execute_run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/action.rb:472:in `block in with_error_handling'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/action.rb:472:in `catch'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/action.rb:472:in `with_error_handling'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/action.rb:564:in `execute_run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/action.rb:285:in `execute'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:18:in `block (2 levels) in execute'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/execution_plan/steps/abstract.rb:167:in `with_meta_calculation'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:17:in `block in execute'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:32:in `open_action'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:16:in `execute'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/director.rb:68:in `execute'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/executors/sidekiq/worker_jobs.rb:11:in `block (2 levels) in perform'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/executors.rb:18:in `run_user_code'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/executors/sidekiq/worker_jobs.rb:9:in `block in perform'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/executors/sidekiq/worker_jobs.rb:25:in `with_telemetry'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/executors/sidekiq/worker_jobs.rb:8:in `perform'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.7/lib/dynflow/executors/sidekiq/serialization.rb:27:in `perform'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:192:in `execute_job'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:165:in `block (2 levels) in process'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:128:in `block in invoke'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:133:in `invoke'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:164:in `block in process'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:137:in `block (6 levels) in dispatch'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:109:in `local'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:136:in `block (5 levels) in dispatch'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq.rb:37:in `block in <module:Sidekiq>'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:132:in `block (4 levels) in dispatch'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:250:in `stats'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:127:in `block (3 levels) in dispatch'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/job_logger.rb:8:in `call'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:126:in `block (2 levels) in dispatch'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:74:in `global'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:125:in `block in dispatch'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:48:in `with_context'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:42:in `with_job_hash_context'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:124:in `dispatch'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:163:in `process'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:83:in `process_one'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:71:in `run'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:16:in `watchdog'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:25:in `block in safe_thread'"
  • "/opt/theforeman/tfm/root/usr/share/gems/gems/logging-2.3.0/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'"

25: Actions::Katello::Repository::FetchPxeFiles (success) [ 0.12s / 0.12s ]
31: Actions::Pulp3::Repository::CreatePublication (success) [ 1464.32s / 71.24s ]
33: Actions::Pulp3::Repository::SavePublication (success) [ 0.14s / 0.14s ]
37: Actions::Pulp3::Repository::RefreshDistribution (success) [ 3.70s / 1.41s ]
39: Actions::Pulp3::Repository::SaveDistributionReferences (success) [ 0.35s / 0.35s ]
41: Actions::Katello::Repository::ErrataMail (success) [ 0.21s / 0.21s ]
44: Actions::Katello::Applicability::Repository::Regenerate (success) [ 0.20s / 0.20s ]
45: Actions::Katello::Repository::Sync (success) [ 1.10s / 1.10s ]
48: Actions::Katello::Repository::ImportApplicability (success) [ 0.09s / 0.09s ]

Error message: the server returns an error
HTTP status code: 502
Response headers: {"date"=>"Tue, 02 Feb 2021 00:56:11 GMT", "server"=>"Apache", "content-length"=>"445", "connection"=>"close", "content-type"=>"text/html; charset=iso-8859-1"}
Response body:

502 Proxy Error

Proxy Error

The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /pulp/api/v3/content/rpm/packages/.

Reason: Error reading from remote server

For the record, whoever from @katello looks at this:
there are foreman debugs uploaded on debugs.theforeman.org, and looking at them I see pulp workers dying with

 The worker named 7041@foreman.example.com is missing. Canceling the tasks in its queue.

which I think is the reason for the errors, but I don't see why they are dying.

Hi @bcoleman,

Do you see any errors in /var/log/messages?

What Red Hat repo are you syncing?

Can you sync non-Red Hat repos successfully?

Here’s something to try:

  1. Find your failing repository's ID from the URL in the UI. It'll be numerical.
  2. Open the foreman console: foreman-rake console
  3. Run ::Katello::Repository.find(ID GOES HERE).index_content

The proxy error makes me think that the pulpcore-api service didn't return in time (or else threw an error) at index time.

You might try editing /etc/httpd/conf.d/05-foreman-ssl.conf and changing one line in this section (note that yours will have a different hostname):

<Location "/pulp/api/v3">
  RequestHeader unset REMOTE_USER
  RequestHeader set REMOTE_USER "%{SSL_CLIENT_S_DN_CN}s" env=SSL_CLIENT_S_DN_CN
  ProxyPass unix:///run/pulpcore-api.sock|http://centos7-katello-3-18.windhelm.example.com/pulp/api/v3
  ProxyPassReverse unix:///run/pulpcore-api.sock|http://centos7-katello-3-18.windhelm.example.com/pulp/api/v3
</Location>

adding 'timeout=600' to the ProxyPass line:

<Location "/pulp/api/v3">
  RequestHeader unset REMOTE_USER
  RequestHeader set REMOTE_USER "%{SSL_CLIENT_S_DN_CN}s" env=SSL_CLIENT_S_DN_CN
  ProxyPass unix:///run/pulpcore-api.sock|http://centos7-katello-3-18.windhelm.example.com/pulp/api/v3 timeout=600
  ProxyPassReverse unix:///run/pulpcore-api.sock|http://centos7-katello-3-18.windhelm.example.com/pulp/api/v3
</Location>

then restart httpd:

systemctl restart httpd

and try again.
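If you'd rather script that edit than do it by hand, a sed one-liner can append the timeout to the ProxyPass line. This is only a sketch: it works on a throwaway copy of the config here (with foreman.example.com as a stand-in hostname), and on a real server you would point it at /etc/httpd/conf.d/05-foreman-ssl.conf after backing that file up.

```shell
# Work on a demo copy of the vhost snippet; point CONF at the real
# /etc/httpd/conf.d/05-foreman-ssl.conf on an actual server.
CONF=/tmp/05-foreman-ssl.conf.demo
cat > "$CONF" <<'EOF'
<Location "/pulp/api/v3">
  RequestHeader unset REMOTE_USER
  ProxyPass unix:///run/pulpcore-api.sock|http://foreman.example.com/pulp/api/v3
  ProxyPassReverse unix:///run/pulpcore-api.sock|http://foreman.example.com/pulp/api/v3
</Location>
EOF
# Append timeout=600 to the ProxyPass line only (the trailing space in the
# pattern skips ProxyPassReverse), and only if no timeout is already set.
sed -i '/ProxyPass /{/timeout=/!s/$/ timeout=600/}' "$CONF"
grep 'timeout=600' "$CONF"
```

On the live file you would then restart httpd as described above.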

If that helps, it could indicate slow disks or not enough memory. The fact that a worker is also going missing could point to the same root cause.


@Justin_Sherrill
I have just made the above-mentioned change to the configuration, adding the 600-second timeout, and I am now attempting a resync. I will post back the results.

@iballou
There seem to be many.
There are a whole lot of these errors:
Feb 3 23:06:23 foreman pulpcore-worker-8: pulp: pulpcore.tasking.services.worker_watcher:ERROR: The worker named 6804@foreman.btecau.com is missing. Canceling the tasks in its queue.
Feb 3 23:06:23 foreman pulpcore-worker-8: pulp: pulpcore.tasking.services.worker_watcher:ERROR: Worker '7201@foreman.btecau.com' has gone missing, removing from list of workers
Feb 3 23:06:23 foreman pulpcore-worker-8: pulp: pulpcore.tasking.services.worker_watcher:ERROR: The worker named 7201@foreman.btecau.com is missing. Canceling the tasks in its queue.
Feb 3 23:06:23 foreman pulpcore-worker-2: pulp: pulpcore.tasking.services.worker_watcher:ERROR: Worker '7281@foreman.btecau.com' has gone missing, removing from list of workers
Feb 3 23:06:23 foreman pulpcore-worker-2: pulp: pulpcore.tasking.services.worker_watcher:ERROR: The worker named 7281@foreman.btecau.com is missing. Canceling the tasks in its queue.
Feb 3 23:06:23 foreman pulpcore-worker-5: pulp: pulpcore.tasking.services.worker_watcher:ERROR: Worker '7201@foreman.btecau.com' has gone missing, removing from list of workers

There are also these, although I believe they were caused by my rebooting the server:

Feb 4 09:29:39 foreman pulp: celery.worker.consumer.consumer:ERROR: (1437-39808) consumer: Cannot connect to qpid://localhost:5671//: [Errno 111] Connection refused.
Feb 4 09:29:39 foreman pulp: celery.worker.consumer.consumer:ERROR: (1422-55008) consumer: Cannot connect to qpid://localhost:5671//: [Errno 111] Connection refused.
Feb 4 09:29:39 foreman pulp: celery.worker.consumer.consumer:ERROR: (1437-39808) Trying again in 2.00 seconds…
Feb 4 09:29:39 foreman pulp: celery.worker.consumer.consumer:ERROR: (1422-55008) Trying again in 2.00 seconds…

And there are these two errors:
Feb 4 09:36:29 foreman pulpcore-worker-2: pulp: pulpcore.tasking.services.worker_watcher:ERROR: Worker 'resource-manager' has gone missing, removing from list of workers
Feb 4 09:36:29 foreman pulpcore-worker-2: pulp: pulpcore.tasking.services.worker_watcher:ERROR: The worker named resource-manager is missing. Canceling the tasks in its queue.

I seem to be able to sync the Extras, Extended Support, and Satellite Tools repos. It seems to only be affecting the 7Server and 6Server repos and their associated repos.

I have tried the above, however the sync still fails at the indexing stage. I noticed the firewall was disabled on the server, so I have enabled it and added the required ports. I am retrying the sync again.

Edit: this had no effect. I deleted the messages file so it would be recreated, then rebooted, and I noticed the errors persist, same as above.

I have uploaded the latest debug to rsync://rsync.theforeman.org/debug-incoming

Thanks @bcoleman, let's try some manual querying and see if that helps narrow down the problem. Can you run:

time curl  -vv "https://`hostname`/pulp/api/v3/content/rpm/packages/?arch__ne=src&fields=pulp_href%2Cname%2Cversion%2Crelease%2Carch%2Cepoch%2Csummary%2Cis_modular%2Crpm_sourcerpm%2Clocation_href%2CpkgId&limit=2000&offset=0&repository_version=%2Fpulp%2Fapi%2Fv3%2Frepositories%2Frpm%2Frpm%2Fd1c23ffb-5ff7-4b9f-b750-d128dcf0418f%2Fversions%2F1%2F" --cert /etc/pki/katello/certs/pulp-client.crt  --key /etc/pki/katello/private/pulp-client.key 

as the root user on the Foreman/Katello server and provide the output (if it's not SUPER long)? That should simulate the same request that is failing.

It would also be interesting to see:

time curl  -vv "https://`hostname`/pulp/api/v3/content/rpm/packages/?arch__ne=src&fields=pulp_href%2Cname%2Cversion%2Crelease%2Carch%2Cepoch%2Csummary%2Cis_modular%2Crpm_sourcerpm%2Clocation_href%2CpkgId&limit=500&offset=0&repository_version=%2Fpulp%2Fapi%2Fv3%2Frepositories%2Frpm%2Frpm%2Fd1c23ffb-5ff7-4b9f-b750-d128dcf0418f%2Fversions%2F1%2F" --cert /etc/pki/katello/certs/pulp-client.crt  --key /etc/pki/katello/private/pulp-client.key 

This is a similar command but requests less data.
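The two commands above differ only in the `limit=` parameter, so probing for the largest page size a deployment can serve can be automated with a loop over decreasing limits. This is only a sketch: `probe` is a stub standing in for the real curl request (it pretends anything above 800 hits the 502), so the script runs anywhere; on a real server you would replace the stub body with the actual curl call.

```shell
# Halve the page size until a request "succeeds".
# probe() stands in for:
#   curl -sf "https://$(hostname)/pulp/api/v3/content/rpm/packages/?limit=$1" \
#     --cert /etc/pki/katello/certs/pulp-client.crt \
#     --key /etc/pki/katello/private/pulp-client.key > /dev/null
probe() {
  [ "$1" -le 800 ]  # stub: pretend limits above 800 fail with the proxy error
}

limit=2000
while [ "$limit" -ge 100 ]; do
  if probe "$limit"; then
    echo "largest working limit: $limit"
    break
  fi
  limit=$((limit / 2))
done
# → largest working limit: 500
```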

@Justin_Sherrill
The first command returns:
[root@foreman ~]# time curl -vv "https://foreman.btecau.com/pulp/api/v3/content/rpm/packages/?arch__ne=src&fields=pulp_href%2Cname%2Cversion%2Crelease%2Carch%2Cepoch%2Csummary%2Cis_modular%2Crpm_sourcerpm%2Clocation_href%2CpkgId&limit=2000&offset=0&repository_version=%2Fpulp%2Fapi%2Fv3%2Frepositories%2Frpm%2Frpm%2Fd1c23ffb-5ff7-4b9f-b750-d128dcf0418f%2Fversions%2F1%2F" --cert /etc/pki/katello/certs/pulp-client.crt --key /etc/pki/katello/private/pulp-client.key

  • About to connect() to foreman.btecau.com port 443 (#0)
  • Trying 192.168.0.85…
  • Connected to foreman.btecau.com (192.168.0.85) port 443 (#0)
  • Initializing NSS with certpath: sql:/etc/pki/nssdb
  • CAfile: /etc/pki/tls/certs/ca-bundle.crt
    CApath: none
  • NSS: client certificate from file
  • subject: CN=admin,OU=NODES,O=PULP,ST=North Carolina,C=US
  • start date: Jan 28 07:42:55 2021 GMT
  • expire date: Jan 18 07:42:55 2038 GMT
  • common name: admin
  • issuer: CN=foreman.btecau.com,OU=SomeOrgUnit,O=Katello,L=Raleigh,ST=North Carolina,C=US
  • SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  • Server certificate:
  • subject: CN=foreman.btecau.com,OU=SomeOrgUnit,O=Katello,ST=North Carolina,C=US
  • start date: Jan 28 07:42:44 2021 GMT
  • expire date: Jan 18 07:42:46 2038 GMT
  • common name: foreman.btecau.com
  • issuer: CN=foreman.btecau.com,OU=SomeOrgUnit,O=Katello,L=Raleigh,ST=North Carolina,C=US

GET /pulp/api/v3/content/rpm/packages/?arch__ne=src&fields=pulp_href%2Cname%2Cversion%2Crelease%2Carch%2Cepoch%2Csummary%2Cis_modular%2Crpm_sourcerpm%2Clocation_href%2CpkgId&limit=2000&offset=0&repository_version=%2Fpulp%2Fapi%2Fv3%2Frepositories%2Frpm%2Frpm%2Fd1c23ffb-5ff7-4b9f-b750-d128dcf0418f%2Fversions%2F1%2F HTTP/1.1
User-Agent: curl/7.29.0
Host: foreman.btecau.com
Accept: */*

< HTTP/1.1 400 Bad Request
< Date: Thu, 04 Feb 2021 15:12:45 GMT
< Server: gunicorn/20.0.4
< Content-Type: application/json
< Vary: Accept,Cookie
< Allow: GET, POST, HEAD, OPTIONS
< X-Frame-Options: SAMEORIGIN
< Content-Length: 123
< Via: 1.1 foreman.btecau.com
< Connection: close
<

  • Closing connection 0
    ["URI /pulp/api/v3/repositories/rpm/rpm/d1c23ffb-5ff7-4b9f-b750-d128dcf0418f/versions/1/ not found for repositoryversion."]
    real 0m1.412s
    user 0m0.218s
    sys 0m0.121s

The second command:

  • About to connect() to foreman.btecau.com port 443 (#0)
  • Trying 192.168.0.85…
  • Connected to foreman.btecau.com (192.168.0.85) port 443 (#0)
  • Initializing NSS with certpath: sql:/etc/pki/nssdb
  • CAfile: /etc/pki/tls/certs/ca-bundle.crt
    CApath: none
  • NSS: client certificate from file
  • subject: CN=admin,OU=NODES,O=PULP,ST=North Carolina,C=US
  • start date: Jan 28 07:42:55 2021 GMT
  • expire date: Jan 18 07:42:55 2038 GMT
  • common name: admin
  • issuer: CN=foreman.btecau.com,OU=SomeOrgUnit,O=Katello,L=Raleigh,ST=North Carolina,C=US
  • SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  • Server certificate:
  • subject: CN=foreman.btecau.com,OU=SomeOrgUnit,O=Katello,ST=North Carolina,C=US
  • start date: Jan 28 07:42:44 2021 GMT
  • expire date: Jan 18 07:42:46 2038 GMT
  • common name: foreman.btecau.com
  • issuer: CN=foreman.btecau.com,OU=SomeOrgUnit,O=Katello,L=Raleigh,ST=North Carolina,C=US

GET /pulp/api/v3/content/rpm/packages/?arch__ne=src&fields=pulp_href%2Cname%2Cversion%2Crelease%2Carch%2Cepoch%2Csummary%2Cis_modular%2Crpm_sourcerpm%2Clocation_href%2CpkgId&limit=500&offset=0&repository_version=%2Fpulp%2Fapi%2Fv3%2Frepositories%2Frpm%2Frpm%2Fd1c23ffb-5ff7-4b9f-b750-d128dcf0418f%2Fversions%2F1%2F HTTP/1.1
User-Agent: curl/7.29.0
Host: foreman.btecau.com
Accept: */*

< HTTP/1.1 400 Bad Request
< Date: Thu, 04 Feb 2021 15:15:07 GMT
< Server: gunicorn/20.0.4
< Content-Type: application/json
< Vary: Accept,Cookie
< Allow: GET, POST, HEAD, OPTIONS
< X-Frame-Options: SAMEORIGIN
< Content-Length: 123
< Via: 1.1 foreman.btecau.com
< Connection: close
<

  • Closing connection 0
    ["URI /pulp/api/v3/repositories/rpm/rpm/d1c23ffb-5ff7-4b9f-b750-d128dcf0418f/versions/1/ not found for repositoryversion."]
    real 0m0.344s
    user 0m0.157s
    sys 0m0.112s

Strange, let's try a simpler query:

 time  curl  -vv "https://`hostname`/pulp/api/v3/content/rpm/packages/?limit=2000" --cert /etc/pki/katello/certs/pulp-client.crt  --key /etc/pki/katello/private/pulp-client.key

and then

time  curl  -vv "https://`hostname`/pulp/api/v3/content/rpm/packages/?limit=500" --cert /etc/pki/katello/certs/pulp-client.crt  --key /etc/pki/katello/private/pulp-client.key

The first query came back:

[root@foreman /]# time curl -vv "https://foreman.btecau.com/pulp/api/v3/content/rpm/packages/?limit=2000" --cert /etc/pki/katello/certs/pulp-client.crt --key /etc/pki/katello/private/pulp-client.key

  • About to connect() to foreman.btecau.com port 443 (#0)
  • Trying 192.168.0.85…
  • Connected to foreman.btecau.com (192.168.0.85) port 443 (#0)
  • Initializing NSS with certpath: sql:/etc/pki/nssdb
  • CAfile: /etc/pki/tls/certs/ca-bundle.crt
    CApath: none
  • NSS: client certificate from file
  • subject: CN=admin,OU=NODES,O=PULP,ST=North Carolina,C=US
  • start date: Jan 28 07:42:55 2021 GMT
  • expire date: Jan 18 07:42:55 2038 GMT
  • common name: admin
  • issuer: CN=foreman.btecau.com,OU=SomeOrgUnit,O=Katello,L=Raleigh,ST=North Carolina,C=US
  • SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  • Server certificate:
  • subject: CN=foreman.btecau.com,OU=SomeOrgUnit,O=Katello,ST=North Carolina,C=US
  • start date: Jan 28 07:42:44 2021 GMT
  • expire date: Jan 18 07:42:46 2038 GMT
  • common name: foreman.btecau.com
  • issuer: CN=foreman.btecau.com,OU=SomeOrgUnit,O=Katello,L=Raleigh,ST=North Carolina,C=US

GET /pulp/api/v3/content/rpm/packages/?limit=2000 HTTP/1.1
User-Agent: curl/7.29.0
Host: foreman.btecau.com
Accept: */*

< HTTP/1.1 502 Proxy Error
< Date: Thu, 04 Feb 2021 15:27:20 GMT
< Server: Apache
< Content-Length: 445
< Content-Type: text/html; charset=iso-8859-1
<

502 Proxy Error

Proxy Error

The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /pulp/api/v3/content/rpm/packages/.

Reason: Error reading from remote server

* Connection #0 to host foreman.btecau.com left intact

real 0m31.003s
user 0m0.159s
sys 0m0.091s

The second command filled the window with text; I have only captured some of it, as below.
The last part was as follows:

itialize hpet timer before irq is registered (Pratyush Anand) [1299001]\n- [x86] Add support for missing Kabylake Sunrise Point PCH (David Arcari) [1379401]\n- [x86] pci: vmd: Request userspace control of PCIe hotplug indicators (Myron Stowe) [1380181]\n- [pci] pciehp: Allow exclusive userspace control of indicators (Myron Stowe) [1380181]\n- [acpi] acpica: Fix for a Store->ArgX when ArgX contains a reference to a field (Lenny Szubowicz) [1330897]\n- [misc] cxl: Flush PSL cache before resetting the adapter (Steve Best) [1383478]\n- [scsi] ibmvfc: Fix I/O hang when port is not mapped (Steve Best) [1378001]\n- [netdrv] xen-netfront: avoid packet loss when ethernet header crosses page boundary (Vitaly Kuznetsov) [1348581]\n- [powerpc] ppc64: Fix incorrect return value from __copy_tofrom_user (Steve Best) [1387244]\n- [powerpc] pseries: use pci_host_bridge.release_fn() to kfree(phb) (Steve Best) [1385635]\n- [powerpc] pseries: Fix stack corruption in htpe code (Steve Best) [1384099]\n- [powerpc] eeh: Fix stale cached primary bus (Steve Best) [1383281]\n- [infiniband] ib/ipoib: move back IB LL address into the hard header (Jonathan Toppins) [1378656]"],[“Rafael Aquini aquini@redhat.com [3.10.0-517.el7]”,1477915201,"- [fs] fanotify: fix list corruption in fanotify_get_response() (Miklos Szeredi) [1362421]\n-* transfer closed with 146931790 bytes remaining to read

  • Closing connection 0
    curl: (18) transfer closed with 146931790 bytes remaining to read
    [fs] fsnotify: add a way to stop queueing events on group shutdown (Miklos Szeredi) [1362421]\n- [fs] dlm: Remove lock_sock to avoid scheduling while atomic (Robert S Peterson) [1377391]\n- [fs] sunrpc: move NO_CRKEY_TIMEOUT to the auth->au_flags (Dave Wysochanski) [1384666]\n- [fs] rbd: don’t retry watch reregistration if header object is gone (Ilya Dryomov) [1378186]\n- [fs] rbd: don’t wait for the lock forever if blacklisted (Ilya Dryomov) [1378186]\n- [fs] rbd: lock_on_read map option (Ilya Dryomov) [1378186]\n- [fs] ovl: during copy up, switch to mounter’s creds e
    real 0m32.360s
    user 0m3.875s
    sys 0m1.597s
    [root@foreman /]#

Great! That tells me that the page size is too large for your particular hardware/deployment/disk I/O. It's possible some PostgreSQL tuning may help, as it seems like you have plenty of memory. Are you running on hard disks or SSDs?

For now we can lower the page size and retry.

In your /etc/foreman/plugins/katello.yaml, look for a pulp section:

:pulp:
  :url: https://my.hostname.com/pulp/api/v2/
  :ca_cert_file: /etc/pki/katello/certs/katello-server-ca.crt

simply add a new line ":bulk_load_size: 500", so it looks like:

:pulp:
  :url: https://my.hostname.com/pulp/api/v2/
  :ca_cert_file: /etc/pki/katello/certs/katello-server-ca.crt
  :bulk_load_size: 500

Then restart services:

  foreman-maintain service restart

and try re-syncing.
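To script the same edit, something like the following works. It is only a sketch: it operates on a throwaway copy using the example hostname from above, and on a real server you would edit /etc/foreman/plugins/katello.yaml in place (keeping a backup first).

```shell
# Demo copy of the pulp section; point YAML at /etc/foreman/plugins/katello.yaml
# on an actual server.
YAML=/tmp/katello.yaml.demo
cat > "$YAML" <<'EOF'
:pulp:
  :url: https://my.hostname.com/pulp/api/v2/
  :ca_cert_file: /etc/pki/katello/certs/katello-server-ca.crt
EOF
# Add :bulk_load_size: 500 right after the :ca_cert_file: line,
# keeping the two-space indentation of the :pulp: section.
sed -i 's/^\(  :ca_cert_file:.*\)$/\1\n  :bulk_load_size: 500/' "$YAML"
cat "$YAML"
```

On the live file you would follow this with `foreman-maintain service restart`, as above.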


OK, applying that change now, and I will try a resync.

The hardware this is running on at the moment is a Hyper-V VM on a SystemX 3850 X5. It is allocated 28 GB of RAM and 10 vCPUs, and the disk allocation is a RAID 5 across seven 7200 rpm SATA disks connected to a SAS controller card.
It will, however, be moved into a Nutanix environment running entirely on SSDs once it is working and tested.

The sync is still running and is up to the index content part. It has not crashed immediately with the server error, so that's a good sign. Going by how long Satellite took to do this sync, I might let this run and pick it up in the morning.

Awesome, let me know how it goes. One thing to keep in mind is that that setting in katello.yaml will get wiped out if you re-run the installer; I opened Bug #31808: move pulp bulk_load_size SETTING to a Setting - Katello - Foreman to move this setting into the database for easier control (no need to restart services). I imagine if/when you get this onto faster disks you'll see an improvement too.

I have been having the same issue, but only with one repository.

Red Hat Enterprise Linux 7 Server - Extended Update Support RPMs x86_64 7.6

All other Red Hat and non-RH repositories have synced without any issues.

I will try the same bulk_load_size: 500 setting to see if it helps.

Thanks,
Cassuis

@Justin_Sherrill
I am happy to report that this has fixed the syncing issue with the 7Server repo.

It has also worked for me. @Justin_Sherrill thanks so much for the assist and @bcoleman for asking the question.

BTW, I'm running a configuration that I haven't used in my prior environments, where MongoDB and PostgreSQL run on their own server so that the Katello smart proxy doesn't have to spend resources on those services.

Team,
It looks like this issue is still present as of 3.6.1, impacting only certain RHEL 7 repos.

The fix provided by @Justin_Sherrill works as intended to resync the content that was failing.

What is the fix for the 502 proxy error issue?