Pulp2 to Pulp3 issues [ContentMigration]

Problem: Cannot complete the update - various errors despite trying the common solutions

Expected outcome: To be able to migrate from pulp2 to pulp3

Foreman and Proxy versions: foreman-2.2.1 to foreman-2.3.5
katello-3.17.1 to katello-3.18.5
Foreman and Proxy plugin versions:

Distribution and version: RHEL7.9

Other relevant data:

1) Grab the foreman release package and make sure I am pointed at a 3.x version past the Pulp-issue versions, i.e. 3.18.5
2) yum update
3) foreman-rake katello:upgrade_check - "no tasks", so on to the next step
4) foreman-installer
5) chmod -R g+rwX /var/lib/pulp/content
find /var/lib/pulp/content -type d -perm -g-s -exec sudo chmod g+s {} \;
chgrp -R pulp /var/lib/pulp/content
6) foreman-maintain content migration-stats estimates 7h 33 minutes
7) foreman-maintain content prepare (the command sequence so far is recapped below, after this list)
Come back and check, and the issues begin; initially this was the errata error.
8) Try to run with the whitelist: *option is not available in my version*
9) Use hammer to try to remove a package that has been marked as unmigratable
10) The package can still be queried
11) Rerun foreman-maintain content prepare - again with the same issues
12) Tried skip_corrupted=True; foreman-maintain content prepare - same thing
13) foreman-rake katello:delete_orphaned_content
14) Tried approve_corrupted_content
15) Tried foreman-rake katello:correct_repositories COMMIT=true --trace
16) Tried a rollback
17) Had to recreate the postgres var lock dir and am back to a working instance, so I will retry, but any specifics on the errata issue or the corrupt packages would be appreciated.
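
For reference, the command sequence behind steps 1-7 above boils down to roughly the following (my paths, on RHEL 7.9 with Katello 3.18.5; the release-package URLs are omitted):

{code}
# step 1: point the foreman/katello release packages at 3.18.x (exact URLs omitted)
yum update -y                                        # step 2
foreman-rake katello:upgrade_check                   # step 3: reported no blocking tasks
foreman-installer                                    # step 4
chmod -R g+rwX /var/lib/pulp/content                 # step 5: content permissions/ownership
find /var/lib/pulp/content -type d -perm -g-s -exec chmod g+s {} \;
chgrp -R pulp /var/lib/pulp/content
foreman-maintain content migration-stats             # step 6: estimated 7h 33m
foreman-maintain content prepare                     # step 7: this is where it fails
{code}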

Did I understand correctly that you are trying to run the 2to3 migration and switchover on Katello 3.17?

Recommendation: Do not run content migrations or switch to Pulp 3 until you are fully upgraded to Katello 3.18.latest! Katello 3.18 is still capable of running with Pulp 2 and includes a lot of bugfixes for the 2to3 migration. Once you are upgraded to 3.18, then you can run the migration and switchover before upgrading to Katello 4.0.
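
A quick sanity check before kicking off the migration, assuming the usual EL7 package names for Katello 3.x (adjust if yours differ):

{code}
rpm -q foreman tfm-rubygem-katello        # confirm the installed Foreman/Katello versions
foreman-maintain upgrade list-versions    # upgrade targets foreman-maintain currently offers
{code}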

Hello, thanks. The rpms were upgraded from katello-3.17.1 to 3.18.5 per the documentation. Everything went fine until the foreman-maintain content prepare command; that is where I keep getting stuck, due to the various issues noted in this ticket.

You mentioned an "errata error" - did you mean that there are still non-migrated errata?
I remember there was a Red Hat Bugzilla entry on this which links to a Red Hat knowledge base article with a solution that worked for us (that does, of course, require access to the Red Hat knowledge base).

Perhaps it would be helpful if you could repost exactly what your foreman-maintain content prepare is currently failing on?
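
If foreman-maintain hides part of the error, it may also help to run the underlying rake task directly with a full trace; content prepare executes this task under the hood:

{code}
# same migration that "foreman-maintain content prepare" runs, with the full Ruby backtrace
foreman-rake katello:pulp3_migration --trace
{code}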

ok

foreman-maintain content prepare
Running Prepare content for Pulp 3

Prepare content for Pulp 3:
Checking for valid Katello configuration.
Starting task.
2022-03-09 22:47:08 -0800: Distribution creation 2053/4039Migration failed, You will want to investigate: https://katello.bogus.domain.net/foreman_tasks/tasks/da4dcaf5-e984-44fa-abda-a1e5d09294eb
rake aborted!
ForemanTasks::TaskError: Task da4dcaf5-e984-44fa-abda-a1e5d09294eb: Katello::Errors::Pulp3Error: 241 subtask(s) failed for task group /pulp/api/v3/task-groups/8e507aee-9154-4093-99c9-9e6b443dbc0f/.
/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.18.5/lib/katello/tasks/pulp3_migration.rake:41:in `block (2 levels) in <top (required)>'
/opt/rh/rh-ruby25/root/usr/share/gems/gems/rake-12.3.3/exe/rake:27:in `<top (required)>'
Tasks: TOP => katello:pulp3_migration
(See full trace by running task with --trace)
[FAIL]
Failed executing foreman-rake katello:pulp3_migration, exit status 1

Scenario [Prepare content for Pulp 3] failed.

The following steps ended up in failing state:

[content-prepare]

Resolve the failed steps and rerun
the command. In case the failures are false positives,
use --whitelist="content-prepare"

Now, again, there is no --whitelist="content-prepare" option available; it seems that is just a placeholder for a future release.
Looking at the task in the Katello UI, if I click the Raw tab and scroll down I see tons of actions, i.e.:

{
"pulp_tasks": [
{
  "pulp_href": "/pulp/api/v3/tasks/744733ca-ab6b-43bf-be05-3cb92b241fd0/",
  "pulp_created": "2022-03-09T16:35:12.730+00:00",
  "state": "completed",
  "name": "pulp_2to3_migration.app.tasks.migrate.migrate_from_pulp2",
  "started_at": "2022-03-09T16:35:12.902+00:00",
  "finished_at": "2022-03-10T06:04:38.082+00:00",
  "worker": "/pulp/api/v3/workers/a1b5af6b-f84a-438f-a7df-c11d34e2efe3/",
  "child_tasks": [
    "/pulp/api/v3/tasks/68daf644-f5cb-441a-9b1f-d70ba858388a/",
    "/pulp/api/v3/tasks/e2399538-eb78-4aad-90aa-df5e36682915/",
    "/pulp/api/v3/tasks/ce891495-c8ed-4bf1-b5f4-570af5324689/",
    "/pulp/api/v3/tasks/4ed60f89-5e7e-4b91-8050-81540e330f4b/",
    –truncated for brevity–
    –end truncation–
    "/pulp/api/v3/tasks/b93644e4-587f-4463-bb27-b480d383ee15/",
    "/pulp/api/v3/tasks/db642743-6e83-4665-8569-0fb7e405c4f0/"
  ],
  "task_group": "/pulp/api/v3/task-groups/8e507aee-9154-4093-99c9-9e6b443dbc0f/",
  "progress_reports": [
    {
      "message": "Processing Pulp 2 repositories, importers, distributors",
      "code": "processing.repositories",
      "state": "completed",
      "total": 4039,
      "done": 4039
    },
    {
      "message": "Pre-migrating Pulp 2 deb_component content (general info)",
      "code": "premigrating.content.general",
      "state": "completed",
      "total": 0,
      "done": 0
    },
    {
      "message": "Pre-migrating Pulp 2 deb_component content (detail info)",
      "code": "premigrating.content.detail",
      "state": "completed",
      "total": 0,
      "done": 0
    },
    {
      "message": "Pre-migrating Pulp 2 deb_component content (detail info)",
      "code": "premigrating.content.detail",
      "state": "completed",
      "total": 0,
      "done": 0
    },
    {
      "message": "Pre-migrating Pulp 2 rpm content (general info)",
      "code": "premigrating.content.general",
      "state": "completed",
      "total": 422103,
      "done": 422103
    },
    {
      "message": "Pre-migrating Pulp 2 rpm content (detail info)",
      "code": "premigrating.content.detail",
      "state": "completed",
      "total": 422103,
      "done": 422103
    },
    {
      "message": "Pre-migrating Pulp 2 erratum content (general info)",
      "code": "premigrating.content.general",
      "state": "completed",
      "total": 305556,
      "done": 305556
    },
    {
      "message": "Pre-migrating Pulp 2 srpm content (general info)",
      "code": "premigrating.content.general",
      "state": "completed",
      "total": 136,
      "done": 136
    },
    {
      "message": "Pre-migrating Pulp 2 srpm content (detail info)",
      "code": "premigrating.content.detail",
      "state": "completed",
      "total": 136,
      "done": 136
    },
    {
      "message": "Pre-migrating Pulp 2 erratum content (detail info)",
      "code": "premigrating.content.detail",
      "state": "completed",
      "total": 305556,
      "done": 305556
    },
    {
      "message": "Pre-migrating Pulp 2 deb content (general info)",
      "code": "premigrating.content.general",

    {
      "message": "Migrating package_group content to Pulp 3",
      "code": "migrating.rpm.content",
      "state": "completed",
      "total": 35737,
      "done": 35737
    },
    {
      "message": "Migrating distribution content to Pulp 3",
      "code": "migrating.rpm.content",
      "state": "completed",
      "total": 2,
      "done": 2
    }
  ],
  "created_resources": [
    "/pulp/api/v3/task-groups/8e507aee-9154-4093-99c9-9e6b443dbc0f/"
  ],
  "reserved_resources_record": [
    "pulp_2to3_migration"
  ]
}

],
"task_groups": [
{
  "pulp_href": "/pulp/api/v3/task-groups/8e507aee-9154-4093-99c9-9e6b443dbc0f/",
  "description": "Migration Sub-tasks",
  "all_tasks_dispatched": true,
  "waiting": 0,
  "skipped": 0,
  "running": 0,
  "completed": 187,
  "canceled": 0,
  "failed": 241,
  "group_progress_reports": [
    {
      "message": "Distribution creation",
      "code": "create.distribution",
      "total": 4039,
      "done": 2054
    },
    {
      "message": "Repo version creation",
      "code": "create.repo_version",
      "total": 920,
      "done": 920
    }
  ]
}
],
"poll_attempts": {
  "total": 3183,
  "failed": 1
}
}

I have been going in circles with some of this stuff, but it's fine to go step by step, so any advice would be appreciated.

Additionally, the other thing I notice is

/tmp/unmigratable_content-20220307-13596-cburx6/Rpm listed as the possible cause. Initially this was just 1 rpm and I tried to get past it with hammer; that did not work, so I did a migration-reset, and it appears the issue has gotten worse. I have tried skipping that content, but either that does not work for this version or I am not using the correct commands.
i.e. I tried

skip_corrupted=True; foreman-maintain content prepare

Again, any advice would be appreciated.
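
One small thing worth checking about that invocation: with the semicolon, skip_corrupted=True only sets a variable in the calling shell and never reaches the command's environment. Whether the 3.18 tooling reads such a variable at all is a separate question, but if it does, it would have to be passed on the same command line, e.g.:

{code}
# shell semantics only: prefixing the assignment (no semicolon) exports it into the command's
# environment; whether foreman-maintain/katello honours this variable in 3.18 is unverified
skip_corrupted=True foreman-maintain content prepare
{code}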

OK, so basically another question: if I restore from my prod box to a dev box, say '/var/lib/pulp', and then retry, would this perhaps work? Or are there other dirs that need to be restored as well? I ask because I am not sure this server's yum repos were in a completely working state to start with.
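
On the restore question: /var/lib/pulp on its own is probably not enough, since the catalogue of that content lives in the databases (PostgreSQL for Foreman/Candlepin and, while Pulp 2 is still in play, MongoDB for Pulp). The usual way to clone a box like this is foreman-maintain backup/restore, which captures the databases together with /var/lib/pulp. A rough sketch follows; the backup directory name will differ, and the restore generally expects the same hostname and certs on the target box:

{code}
# on the prod box: offline backup stops services and copies the DBs plus /var/lib/pulp
foreman-maintain backup offline /var/backup
# on the dev box (same Foreman/Katello versions installed): restore from that directory
foreman-maintain restore /var/backup/katello-backup-<timestamp>
{code}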

@Justin_Sherrill any ideas?

Going to broaden the ping here: @katello

The user needs help retrieving the right Pulp task that would actually tell us what the underlying error is here, but I am not sure how best to do that. Anyone who can weigh in would be appreciated!
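
One way to get at the underlying error is to ask the Pulp 3 API directly for the failed tasks; a failed task carries an error field with the description and traceback. A sketch, assuming the default Katello client-cert locations (adjust the paths if yours differ):

{code}
# list recently failed Pulp 3 tasks
curl -s --cert /etc/pki/katello/certs/pulp-client.crt \
     --key /etc/pki/katello/private/pulp-client.key \
     "https://$(hostname -f)/pulp/api/v3/tasks/?state=failed&limit=5" | python -m json.tool

# then fetch one of the returned task hrefs to see its "error" description/traceback
curl -s --cert /etc/pki/katello/certs/pulp-client.crt \
     --key /etc/pki/katello/private/pulp-client.key \
     "https://$(hostname -f)/pulp/api/v3/tasks/<task-uuid>/" | python -m json.tool
{code}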

OK, so now I have completely reinstalled the entire operating system and am attempting to redo this from scratch. I now get an error:

{code}

2022-03-28 16:37:43 [ERROR ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/returns: change from 'notrun' to ['0'] failed: '/usr/sbin/foreman-rake db:migrate' returned 1 instead of one of [0]
2022-03-28 16:37:52 [NOTICE] [configure] 1500 configuration steps out of 1704 steps complete.
{code}

The db is blank; this is a new installation. It failed the first time because there was no run dir for postgres - I did not know I needed to make that, and it's not in the docs. How do I clear any leftover tasks from the command line?
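
On clearing leftover tasks from the command line: the foreman_tasks cleanup rake task is the usual route. A sketch is below; the label filter is only an example, and it is worth running foreman-rake -T first to confirm the task and its options exist on your version:

{code}
# remove stopped/paused tasks matching a search expression
foreman-rake foreman_tasks:cleanup TASK_SEARCH='label ~ *pulp3*' STATES=stopped,paused
{code}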

Rekicked and started the node again, manually created the postgres and foreman users, and will retry a fresh install. Any idea on exactly which version might go through with no errors? There is no data on this system at the moment; I want a straight Pulp 3 installation.

This ticket is essentially dead. I did a complete reinstall of the OS, grabbed the latest version, and stepped through it once to see what assumptions it makes for setup. What I found was that the postgres user needs to be set up (the run dir too, i.e. /var/run/postgresql), as well as the foreman user, and also that the cert, if cloning a prod system, needs some method of dnsmasq fakery already in place, since the newest version seems to do a reverse DNS lookup on the cert. After that I did a complete reinstall again, as it had created a bunch of files with a different hostname that I did not want to edit. Now I have a running instance; I just need to look at a few things and start reimporting yum servers for repos.