Katello::Errors::Pulp3Error: column deb_aptremote.ignore_missing_package_indices does not exist

Problem:
When running foreman-maintain content prepare I am getting this error:

column deb_aptremote.ignore_missing_package_indices does not exist
LINE 1: …mote"."sync_installer", "deb_aptremote"."gpgkey", "deb_aptre…

Expected outcome:

Foreman and Proxy versions:

  • foreman-2.3.5-1.el7.noarch
  • katello-3.18.4-1.el7.noarch

Foreman and Proxy plugin versions:

Distribution and version:

  • CentOS Linux release 7.9.2009 (Core)

Other relevant data:

Hi @senetm

It’s possible this is a Pulp bug. You could try manually running the Pulp database migration:

sudo -u pulp PULP_SETTINGS='/etc/pulp/settings.py' DJANGO_SETTINGS_MODULE='pulpcore.app.settings' python3-django-admin migrate --no-input
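To verify the result, Django's showmigrations command can be run with the same environment; a minimal sketch, assuming the same settings paths as above:

sudo -u pulp PULP_SETTINGS='/etc/pulp/settings.py' DJANGO_SETTINGS_MODULE='pulpcore.app.settings' python3-django-admin showmigrations deb

Applied migrations are marked with [X]; deb.0013_aptremote_ignore_missing_package_indices should be among them once the migrate call has run.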

Hi,
on the first run I get:

System check identified some issues:

WARNINGS:
?: (guardian.W001) Guardian authentication backend is not hooked. You can add this in settings as eg: AUTHENTICATION_BACKENDS = ('django.contrib.auth.backends.ModelBackend', 'guardian.backends.ObjectPermissionBackend').
Operations to perform:
Apply all migrations: admin, auth, certguard, container, contenttypes, core, deb, file, guardian, pulp_2to3_migration, rpm, sessions
Running migrations:
Applying deb.0013_aptremote_ignore_missing_package_indices… OK

Then I ran it again and got:

System check identified some issues:

WARNINGS:
?: (guardian.W001) Guardian authentication backend is not hooked. You can add this in settings as eg: AUTHENTICATION_BACKENDS = ('django.contrib.auth.backends.ModelBackend', 'guardian.backends.ObjectPermissionBackend').
Operations to perform:
Apply all migrations: admin, auth, certguard, container, contenttypes, core, deb, file, guardian, pulp_2to3_migration, rpm, sessions
Running migrations:
No migrations to apply.
Your models have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.

This sounds like the latest pulp_deb migrations were not properly applied. I believe a foreman-installer run will always ensure that all migrations are applied; alternatively, the command from @jeremylenz presumably achieves the same in a more targeted way.

Background: The missing migration may well have been caused by the recent release of Katello 3.18.4, which includes a significantly newer version of python3-pulp-deb than Katello 3.18.3 did. Since this was a Z-release, a simple yum update would have updated the python3-pulp-deb package without running the necessary migrations.


Thanks to both of you, @quba42 and @jeremylenz.
Now I am getting the next error from foreman-maintain content prepare. :frowning:

1 subtask(s) failed for task group /pulp/api/v3/task-groups/12860468-8e41-4a73-96f8-1069dceaaeec/.

Unfortunately these errors and back-traces aren’t really providing much information beyond: “A pulp task failed”. Your best bet is to check the system logs (journalctl) to find the error message and back trace from the actual Pulp task that failed. There isn’t much I can do without it. :frowning:
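To pull the relevant traceback out of the journal, something along these lines should work (a sketch; the pulpcore-worker@ unit name assumes the standard Katello service layout):

journalctl -u 'pulpcore-worker@*' --since "1 hour ago" | grep -B 5 -A 30 Traceback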

Can these journalctl logs help?

It sounds like the migration is attempting to create multiple distributions with the same base path:

Jul 21 13:51:02 foreman.com01.comcorp.lan pulpcore-worker-3[1137]: django.db.utils.IntegrityError: duplicate key value violates unique constraint "core_basedistribution_base_path_key"
Jul 21 13:51:02 foreman.com01.comcorp.lan pulpcore-worker-3[1137]: DETAIL:  Key (base_path)=(com/Library/custom/Ubuntu_20_04/Subscription_Manager_for_Ubuntu) already exists.

I don't immediately have any clear idea why that might be. It could be a follow-on error from the earlier failure caused by the missing DB migration: perhaps the base path was already used during that first migration attempt, but because of the error the corresponding step was not marked as completed, so it is now trying to run the same step again and failing because the object already exists. In that case, it might be possible to fix things by removing the distribution with the offending base path from the Pulp 3 DB. However, I can't be sure that's it, and there is a lot of potential for making things worse if you start manually removing stuff from the DB… If this is an important system and there is a backup from before the first failed migration attempt, I would consider reverting to that, upgrading, ensuring the Pulp migrations are properly applied from the get-go, and then trying again.
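If you do investigate that route, it is safest to look before touching anything. A read-only query like the following shows what currently occupies the base path; this is a sketch, assuming the default Katello Pulp 3 database name pulpcore and the core_basedistribution table named in the error:

sudo -u postgres psql pulpcore -c "SELECT pulp_id, name, base_path FROM core_basedistribution WHERE base_path = 'com/Library/custom/Ubuntu_20_04/Subscription_Manager_for_Ubuntu';"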


I just learned there is also the option of running foreman-maintain content migration-reset to drop all 2to3 migration progress, so as to start over from scratch (with a clean DB).


Thanks.
I have tried it a few more times and am now getting this error:

At first glance, this looks like a previously unknown edge-case bug.
I will look into it.

I have created an issue in the pulp_deb tracker to make sure this is not forgotten: Issue #9165: KeyError: 'amd64' when creating publications during 2to3 migration - Debian Support - Pulp

I have also started digging through the code. The error thrown suggests that a ReleaseArchitecture content unit for the amd64 architecture is missing from a repo version being published by the migration. I don't yet have any idea why this might be.
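For anyone who wants to inspect this on their own system: the ReleaseArchitecture units Pulp knows about can in principle be listed via the Pulp 3 API. The endpoint path and the Katello client-certificate locations below are assumptions based on a standard install, so treat this as a sketch only:

curl --cert /etc/pki/katello/certs/pulp-client.crt --key /etc/pki/katello/private/pulp-client.key "https://$(hostname -f)/pulp/api/v3/content/deb/release_architectures/?architecture=amd64"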

Hi… I have deleted all deb repositories and the RHEL 8.4 repo (there were some corrupt RPMs). After that, the content prepare and the switchover went through without problems.

After the successful switchover I am getting this on host updates (yum or dnf):

Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) 553 B/s | 69 B 00:00
Errors during downloading metadata for repository 'rhel-8-for-x86_64-baseos-rpms':

And when I try the Advanced Sync → Validate Content Sync action for the RHEL 8.3 repos I am getting:

[Errno 24] Too many open files: '/var/lib/pulp/media/artifact/93/728095128161579fd13ef8bb0687c6a92f4f59e9b044ac056dc52fb44bf054'
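A common mitigation for "Too many open files" errors from Pulp workers, though not a confirmed fix for this particular case, is raising the file-descriptor limit via a systemd drop-in; the unit name below assumes the standard pulpcore-worker@ template:

mkdir -p /etc/systemd/system/pulpcore-worker@.service.d
printf '[Service]\nLimitNOFILE=65536\n' > /etc/systemd/system/pulpcore-worker@.service.d/limits.conf
systemctl daemon-reload
systemctl restart 'pulpcore-worker@*'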

I have heard some discussion in the community about issues that sound like this, but I am not a pulp_rpm expert. I recommend you open a new Forum thread for this new discussion, so others will see it.

Thanks. I have opened it: