Oracle Linux 8 AppStream sync fails

We are running an air-gapped environment: an Oracle ULN mirror pulls the repositories down to a local server, and the Foreman/Katello server is configured to sync from the local server's URLs. All other repositories work as expected (OEL8 BaseOS, Addons, UEK, OEL7, CentOS 7, EPEL, etc.), but OEL8 AppStream will not sync. I have tried syncing AppStream to the local server using both the Oracle ULN mirror and reposync; both end with the same error.

Expected outcome:
The OEL8 Appstream repository should sync within Katello.

Foreman and Proxy versions:
Foreman 2.5.2
Katello 4.1.1

Foreman and Proxy plugin versions:
Pulp 4.14.1
Candlepin 4.0.4

Distribution and version:
Oracle Linux 8.4 with RHEL kernel 4.18.0-305.10.2

Other relevant data:

       "  File \"/usr/lib/python3.6/site-packages/pulpcore/tasking/\", line 266, in _perform_task\n" +
       "    result = func(*args, **kwargs)\n" +
       "  File \"/usr/lib/python3.6/site-packages/pulp_rpm/app/tasks/\", line 422, in synchronize\n" +
       "    version = dv.create()\n" +
       "  File \"/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/\", line 151, in create\n" +
       "    loop.run_until_complete(pipeline)\n" +
       "  File \"/usr/lib64/python3.6/asyncio/\", line 484, in run_until_complete\n" +
       "    return future.result()\n" +
       "  File \"/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/\", line 225, in create_pipeline\n" +
       "    await asyncio.gather(*futures)\n" +
       "  File \"/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/\", line 43, in __call__\n" +
       "    await\n" +
       "  File \"/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/\", line 181, in run\n" +
       "    pb.done += task.result()  # download_count\n" +
       "  File \"/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/\", line 207, in _handle_content_unit\n" +
       "    await asyncio.gather(*downloaders_for_content)\n" +
       "  File \"/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/\", line 89, in download\n" +
       "    download_result = await\n" +
       "  File \"/usr/lib/python3.6/site-packages/pulpcore/download/\", line 258, in run\n" +
       "    return await download_wrapper()\n" +
       "  File \"/usr/lib/python3.6/site-packages/backoff/\", line 133, in retry\n" +
       "    ret = await target(*args, **kwargs)\n" +
       "  File \"/usr/lib/python3.6/site-packages/pulpcore/download/\", line 256, in download_wrapper\n" +
       "    return await self._run(extra_data=extra_data)\n" +
       "  File \"/usr/lib/python3.6/site-packages/pulp_rpm/app/\", line 93, in _run\n" +
       "    to_return = await self._handle_response(response)\n" +
       "  File \"/usr/lib/python3.6/site-packages/pulpcore/download/\", line 210, in _handle_response\n" +
       "    chunk = await  # 1 megabyte\n" +
       "  File \"/usr/lib64/python3.6/site-packages/aiohttp/\", line 380, in read\n" +
       "    await self._wait(\"read\")\n" +
       "  File \"/usr/lib64/python3.6/site-packages/aiohttp/\", line 306, in _wait\n" +
       "    await waiter\n" +
       "  File \"/usr/lib64/python3.6/site-packages/aiohttp/\", line 656, in __exit__\n" +
       "    raise asyncio.TimeoutError from None\n",
     [{"message"=>"Downloading Metadata Files",
      {"message"=>"Downloading Artifacts",
      {"message"=>"Associating Content",
      {"message"=>"Parsed Modulemd",
      {"message"=>"Parsed Modulemd-defaults",
      {"message"=>"Parsed Packages",
 "poll_attempts"=>{"total"=>78, "failed"=>1}}
Katello::Errors::Pulp3Error: Pulp task error
/usr/share/gems/gems/katello-4.1.1/app/lib/actions/pulp3/abstract_async_task.rb:102:in `block in check_for_errors'
/usr/share/gems/gems/katello-4.1.1/app/lib/actions/pulp3/abstract_async_task.rb:100:in `each'
/usr/share/gems/gems/katello-4.1.1/app/lib/actions/pulp3/abstract_async_task.rb:100:in `check_for_errors'
/usr/share/gems/gems/katello-4.1.1/app/lib/actions/pulp3/abstract_async_task.rb:133:in `poll_external_task'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/action/polling.rb:100:in `poll_external_task_with_rescue'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/action/polling.rb:22:in `run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/action/cancellable.rb:14:in `run'
/usr/share/gems/gems/katello-4.1.1/app/lib/actions/pulp3/abstract_async_task.rb:10:in `run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/action.rb:571:in `block (3 levels) in execute_run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:27:in `pass'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware.rb:19:in `pass'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware.rb:32:in `run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:23:in `call'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:27:in `pass'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware.rb:19:in `pass'
/usr/share/gems/gems/katello-4.1.1/app/lib/actions/middleware/remote_action.rb:16:in `block in run'
/usr/share/gems/gems/katello-4.1.1/app/lib/actions/middleware/remote_action.rb:40:in `block in as_remote_user'
/usr/share/gems/gems/katello-4.1.1/app/models/katello/concerns/user_extensions.rb:21:in `cp_config'
/usr/share/gems/gems/katello-4.1.1/app/lib/actions/middleware/remote_action.rb:27:in `as_cp_user'
/usr/share/gems/gems/katello-4.1.1/app/lib/actions/middleware/remote_action.rb:39:in `as_remote_user'
/usr/share/gems/gems/katello-4.1.1/app/lib/actions/middleware/remote_action.rb:16:in `run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:23:in `call'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:27:in `pass'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware.rb:19:in `pass'
/usr/share/gems/gems/foreman-tasks-4.1.2/app/lib/actions/middleware/rails_executor_wrap.rb:14:in `block in run'
/usr/share/gems/gems/activesupport- `wrap'
/usr/share/gems/gems/foreman-tasks-4.1.2/app/lib/actions/middleware/rails_executor_wrap.rb:13:in `run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:23:in `call'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:27:in `pass'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware.rb:19:in `pass'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/action/progress.rb:31:in `with_progress_calculation'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/action/progress.rb:17:in `run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:23:in `call'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:27:in `pass'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware.rb:19:in `pass'
/usr/share/gems/gems/foreman-tasks-4.1.2/app/lib/actions/middleware/load_setting_values.rb:20:in `run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:23:in `call'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:27:in `pass'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware.rb:19:in `pass'
/usr/share/gems/gems/foreman-tasks-4.1.2/app/lib/actions/middleware/keep_current_request_id.rb:15:in `block in run'
/usr/share/gems/gems/foreman-tasks-4.1.2/app/lib/actions/middleware/keep_current_request_id.rb:52:in `restore_current_request_id'
/usr/share/gems/gems/foreman-tasks-4.1.2/app/lib/actions/middleware/keep_current_request_id.rb:15:in `run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:23:in `call'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:27:in `pass'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware.rb:19:in `pass'
/usr/share/gems/gems/foreman-tasks-4.1.2/app/lib/actions/middleware/keep_current_timezone.rb:15:in `block in run'
/usr/share/gems/gems/foreman-tasks-4.1.2/app/lib/actions/middleware/keep_current_timezone.rb:44:in `restore_curent_timezone'
/usr/share/gems/gems/foreman-tasks-4.1.2/app/lib/actions/middleware/keep_current_timezone.rb:15:in `run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:23:in `call'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:27:in `pass'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware.rb:19:in `pass'
/usr/share/gems/gems/foreman-tasks-4.1.2/app/lib/actions/middleware/keep_current_taxonomies.rb:15:in `block in run'
/usr/share/gems/gems/foreman-tasks-4.1.2/app/lib/actions/middleware/keep_current_taxonomies.rb:45:in `restore_current_taxonomies'
/usr/share/gems/gems/foreman-tasks-4.1.2/app/lib/actions/middleware/keep_current_taxonomies.rb:15:in `run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:23:in `call'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:27:in `pass'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware.rb:19:in `pass'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware.rb:32:in `run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:23:in `call'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:27:in `pass'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware.rb:19:in `pass'
/usr/share/gems/gems/foreman-tasks-4.1.2/app/lib/actions/middleware/keep_current_user.rb:15:in `block in run'
/usr/share/gems/gems/foreman-tasks-4.1.2/app/lib/actions/middleware/keep_current_user.rb:54:in `restore_curent_user'
/usr/share/gems/gems/foreman-tasks-4.1.2/app/lib/actions/middleware/keep_current_user.rb:15:in `run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/stack.rb:23:in `call'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/middleware/world.rb:31:in `execute'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/action.rb:570:in `block (2 levels) in execute_run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/action.rb:569:in `catch'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/action.rb:569:in `block in execute_run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/action.rb:472:in `block in with_error_handling'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/action.rb:472:in `catch'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/action.rb:472:in `with_error_handling'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/action.rb:564:in `execute_run'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/action.rb:285:in `execute'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:18:in `block (2 levels) in execute'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/execution_plan/steps/abstract.rb:167:in `with_meta_calculation'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:17:in `block in execute'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:32:in `open_action'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:16:in `execute'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/director.rb:93:in `execute'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/executors/sidekiq/worker_jobs.rb:11:in `block (2 levels) in perform'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/executors.rb:18:in `run_user_code'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/executors/sidekiq/worker_jobs.rb:9:in `block in perform'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/executors/sidekiq/worker_jobs.rb:25:in `with_telemetry'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/executors/sidekiq/worker_jobs.rb:8:in `perform'
/usr/share/gems/gems/dynflow-1.4.8/lib/dynflow/executors/sidekiq/serialization.rb:27:in `perform'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:192:in `execute_job'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:165:in `block (2 levels) in process'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:128:in `block in invoke'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:133:in `invoke'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:164:in `block in process'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:137:in `block (6 levels) in dispatch'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:109:in `local'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:136:in `block (5 levels) in dispatch'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq.rb:37:in `block in <module:Sidekiq>'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:132:in `block (4 levels) in dispatch'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:250:in `stats'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:127:in `block (3 levels) in dispatch'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/job_logger.rb:8:in `call'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:126:in `block (2 levels) in dispatch'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:74:in `global'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:125:in `block in dispatch'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:48:in `with_context'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:42:in `with_job_hash_context'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:124:in `dispatch'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:163:in `process'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:83:in `process_one'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:71:in `run'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:16:in `watchdog'
/usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:25:in `block in safe_thread'
/usr/share/gems/gems/logging-2.3.0/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'

This error typically indicates a network error. Resyncing a few times might help. Can you confirm which version of Pulp you are on?

rpm -q python3-pulpcore

If you’re on 3.14, that should already have retry support; if not, you may need to update the katello-repos package and then update your Pulp packages.
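For background, Pulp's downloader wraps each transfer in retry logic from the `backoff` library (visible in the traceback above). A stripped-down sketch of that behaviour using only the standard library; the function names here are made up for illustration, and the real code also decides which errors are retryable:

```python
import asyncio

async def retry_download(coro_factory, tries=3, base_delay=0.01):
    """Retry a download up to `tries` times with exponential backoff.

    Simplified sketch of the retry support Pulp 3.14 gains via the
    `backoff` library. `coro_factory` must build a fresh coroutine
    per attempt, since a coroutine can only be awaited once.
    """
    for attempt in range(1, tries + 1):
        try:
            return await coro_factory()
        except (asyncio.TimeoutError, ConnectionError):
            if attempt == tries:
                raise          # out of retries: surface the error
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))

attempts = 0  # counts how often the fake download was tried

async def flaky_download():
    """Times out twice, then succeeds, mimicking a throttled mirror."""
    global attempts
    attempts += 1
    if attempts < 3:
        raise asyncio.TimeoutError()
    return b"rpm-bytes"

result = asyncio.run(retry_download(flaky_download))
print(attempts, result)   # 3 b'rpm-bytes'
```

The point of the version check above is that older pulpcore releases simply propagated the first timeout instead of retrying like this.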

Thank you for your reply.

I did some testing by installing Katello on a virtual machine with direct internet access (on Hetzner cloud) to rule out an issue with our network, but received the same TimeoutError there as well.

python3-pulpcore-3.14.1-1.el8 is the Pulp package we have installed.

Syncing the OL8 AppStream repo fails for me too, with the exact same message, and increasing the sync_connect_timeout parameter to 600 seconds didn’t help either.

It would help if there were SOME indication of which action specifically times out. Quite frankly, end users need meaningful error messages to be able to investigate the problem, whether it is an infrastructure or an application issue. Python/Ruby stack traces that do not include any specifics (such as which URL is being retrieved) are not good enough.

We set the download concurrency for the OL8 AppStream repo to 5 using the Hammer CLI, and it seems to have fixed the issue. It has successfully synced overnight twice since, so hopefully it works for you too.

hammer repository list

Take note of your AppStream repo ID, then replace X with it:

hammer repository update --id=X --download-concurrency=5
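For context, the download-concurrency setting caps how many artifacts Pulp fetches in parallel; fewer simultaneous connections seems to trip the upstream throttling less often. A minimal sketch of the mechanism with an asyncio semaphore (illustrative names only, not Pulp's actual downloader code):

```python
import asyncio

MAX_CONCURRENCY = 5   # what --download-concurrency=5 asks Pulp for
current = 0           # downloads in flight right now
peak = 0              # highest concurrency observed during the run

async def download(sem, name):
    """Fake artifact download that respects the concurrency cap."""
    global current, peak
    async with sem:                    # waits once 5 slots are taken
        current += 1
        peak = max(peak, current)
        await asyncio.sleep(0.001)     # stand-in for the HTTP transfer
        current -= 1
        return name

async def sync_repo():
    sem = asyncio.Semaphore(MAX_CONCURRENCY)
    names = ["pkg-%03d.rpm" % i for i in range(50)]
    return await asyncio.gather(*(download(sem, n) for n in names))

results = asyncio.run(sync_repo())
print(len(results), peak)   # 50 5
```

All 50 downloads still complete; the semaphore only bounds how many run at once, which is why lowering the value trades sync speed for fewer stalled connections.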

Ah, so maybe it’s some throttling on Oracle’s side, then? I will try tomorrow, thanks for the hint.

We’re getting this for OL8 AppStream too; I’m going to try the download-concurrency workaround @dmann has suggested.

That hasn’t 100% fixed it for us; we’re still getting some sync failures on both the OL8 ‘baseos’ and ‘appstream’ repositories.

So, after spending a number of hours yesterday chatting with the knowledgeable people over in the Pulp users chat channel, we’ve learnt more about this issue and come up with a combination of changes that effect a workaround.

The timeout issue is caused by a number of things:

  1. Download throttling. This one can be really obvious: at times I saw very high download speeds for a while, which would later drop to much slower speeds.
  2. The default download timeout in Katello (“Sync Connection Timeout” in Foreman settings, under “Content”) of 300s, i.e. 5 minutes. When downloads are being heavily throttled this can easily be hit.
  3. The OL8 AppStream repo has quite a few large SRPMs, varying in size from 1.3GiB to 2.5GiB. When “mirror on sync” is enabled, SRPMs get downloaded even if you’ve enabled “Ignore SRPMs”, because Pulp is providing a complete mirror of the repo.
  4. This one is arguable: Pulp is meant to retry a download a number of times (3 by default), but doesn’t do so when a download hits the timeout period.
  5. If one of these large syncs gets part way through before failing with the timeout, it seems everything downloaded into the tmp directory is thrown away rather than kept, so the next sync attempt has to download everything again.
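The asyncio.TimeoutError at the bottom of the traceback above is aiohttp giving up when no data arrives within the read timeout, which is why throttled transfers kill the sync. A toy sketch of that failure mode (illustrative only; aiohttp's internal timer machinery differs in detail):

```python
import asyncio

async def read_chunk(delay):
    """Stand-in for reading a 1 MiB chunk of a throttled HTTP response."""
    await asyncio.sleep(delay)
    return b"chunk"

async def fetch(delay, timeout):
    try:
        # aiohttp raises asyncio.TimeoutError when a read stalls past
        # its read timeout; wait_for models the same behaviour here.
        return await asyncio.wait_for(read_chunk(delay), timeout)
    except asyncio.TimeoutError:
        return None   # in Pulp this is where the whole sync task fails

fast = asyncio.run(fetch(delay=0.01, timeout=0.5))  # responsive server
slow = asyncio.run(fetch(delay=0.5, timeout=0.01))  # throttled transfer
print(fast, slow)   # b'chunk' None
```

This is also why raising the timeout helps only up to a point: a transfer throttled hard enough will still exceed any fixed limit.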

So, the combination of changes that make up the workaround:

  1. Raise the timeout. I set it to 3000 seconds, but without the other changes even that (50 minutes!) was still getting hit.
  2. Change concurrent downloads from the default of 10 to 5. As with the timeout, this was not enough on its own.
  3. Change the repository to turn off “Mirror on sync” and turn on “Ignore SRPMs”.
  4. Point 3 then hit another Pulp issue for these repositories, which caused the sync to fail for a different reason, albeit much sooner in the sync…
  5. In /etc/pulp/ set “RPM_ITERATIVE_PARSING = False”. This switches RPM metadata parsing back to an older routine, the one that led to very high RAM usage in pulpcore-worker instances during syncs of repositories with large metadata sets, like OL8/RHEL repositories.
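The parsing change in the last step is a plain Python assignment in Pulp's settings file (the full path below is an assumption based on the usual default location; adjust it to your installation):

```python
# Pulp settings file, typically /etc/pulp/settings.py (path assumed).
# Fall back to the older, non-iterative RPM metadata parser. This works
# around the secondary sync failure mentioned above, at the cost of much
# higher RAM usage in pulpcore-worker processes while parsing large
# metadata sets such as the OL8/RHEL AppStream repodata.
RPM_ITERATIVE_PARSING = False
```

Restart the Pulp services afterwards so the workers pick up the change.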

All of the above leads to syncs that work but don’t download SRPMs, so you have to accept that restriction. Also, RAM usage is high during sync runs of these repositories.


Hmm, that is a workaround with an awful lot of restrictions and gotchas. We might as well just update using the official Oracle repositories for the time being, then…

I only found one relevant Pulp issue so far: #9233, which I think is important, since resuming where the previous failed job left off would at least allow progress to be made over time instead of an endless loop of failed attempts.

I encountered a number of issues downloading/syncing the Oracle repositories, but came up with a workable (if wasteful) workaround: I have a second server that uses reposync/createrepo to pull down all the RPMs from Oracle and then serves them with a standard Apache httpd web server. Foreman then connects to this server and synchronizes everything normally, without the throttling or timeout issues. Not ideal, but it does avoid my repositories dropping to 0 packages after a failed sync with “Mirror on Sync” checked.

We already had the same setup as yours in place, but we were still facing the same issue. The only workaround was changing the Download Concurrency; we also tuned the Apache config to allow larger timeouts etc., but that didn’t help.

I’m not entirely sure the Download Concurrency was the only change that enabled it to work, as we also increased the “Sync Connection Timeout” that @John_Beranek mentioned… so it seems to be a combination of both settings that fixed our issue.

I was struggling to sync CentOS 8 BaseOS/AppStream/EPEL and Puppet 6.0 on a fresh Foreman/Katello for quite a while… Steps 1-3 did the trick. For me the major difference was the first step, since I went from 3 to 1 and then tried different timeout settings (3600 was enough), though just changing the timeout settings without steps 2 and 3 did not help.


What, exactly, is this “Download Concurrency” parameter you are all talking about? Is it the foreman_proxy_content_batch_size setting? That one was set to 100 by default in our installation.

By disabling mirror_on_sync, increasing the sync_connect_timeout setting to 3600, and reducing the foreman_proxy_content_batch_size to 5, I now get a completely new error message:

Task 55409e4f-c0d6-4245-aa2f-fa7964d722be failed (Package id from primary metadata (831cbe1e389d947e8015934b72c0dbb5edd8e866), does not match package id from filelists, other metadata (6681ff57427e630e80c63761b02d00455efbe23b))

I found a Pulp 3 issue (#8944) that roughly matches the error message and has been marked “closed - currentrelease”, but I am on python3-pulpcore-3.14.5 and still hit this error.

A manually triggered “complete resync” did not help, either. Any ideas?

Kind Regards


No, I think that’s different. Restore it to the original value. Connect to your Foreman master server over SSH and run these commands:

hammer repository list --organization="your_organization"

From the output get the id of the required repository and then run the next command:

hammer repository update --id="your_repository_id" --download-concurrency=5 --organization="your_organization"

This issue is why I had to set “RPM_ITERATIVE_PARSING = False” in Pulp’s settings.

Hey @fbachmann and @John_Beranek , regarding the error “Package id from primary metadata
(…), does not match package id from filelists, other metadata (…))”

I figured out this issue yesterday, so it should be fixed in the next release of pulp_rpm. Until then, as a workaround, disabling “skip SRPMs” (I think that is what the Katello option is called) should resolve the issue. If you don’t have the repo configured to skip SRPMs in the first place, or if turning it off doesn’t help, let me know.

This would also mean, @John_Beranek, that you should be able to flip RPM_ITERATIVE_PARSING back on.


Regretfully, I’ve disabled SRPM downloads for my AppStream repo, and it’s still failing.

Publish Via HTTP:        yes
Published At:            <elided>
Relative Path:           <elided>
Download Policy:         immediate
Ignorable Content Units: srpm

I have 19,752 RPMs downloaded, which matches what I have in my legacy Spacewalk system, but I can’t get beyond this. It fails repeatedly with a Pulp task error, and I haven’t figured out what to pull out of the error log.

@jkalchik Sorry, I was not clear enough. Skipping SRPMs is what triggers this error. If you do not skip the SRPMs (i.e. you do not ignore anything), that should avoid it (I believe).

But I’ve just released the new pulp_rpm, so you could also just wait until Katello builds those packages in a day or two.

Okay. I’ve never checked the box to skip SRPMs before, and this repository has been a pain in my side (if not a little lower south). I see some updates tonight, but nothing for Pulp. I’ll check the repos again in the morning.