Error populating transaction, anaconda is retrying

Hi,

I’m having a problem with the installation of CentOS 7 on my Foreman/Katello setup.
Installing CentOS 8 servers works … so it’s not totally broken.

When anaconda gives up after 10 retries of “Error populating transaction, anaconda is retrying …”, it says that it cannot download sg3_utils, blabla … No more mirrors to try. But manually wget-ing this rpm from tty2, using the “url” specified in the kickstart file, is successful.
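(The manual check is simply this, run from the anaconda shell on tty2; <ks-url> and <version> are placeholders for the kickstart “url” value and the package version:)

    # run from tty2 during the failed install; <ks-url> is the "url" value from the kickstart file
    wget <ks-url>/Packages/s/sg3_utils-<version>.el7.x86_64.rpm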

I’m starting to suspect the initrd & vmlinuz files in /var/lib/tftpboot/boot, so I tried to replace them with older versions. Unfortunately, during the build stage, these files are replaced with the suspicious ones again.

What does the numbering scheme on the boot files signify? For example:
centos7_base-25-vmlinuz and
centos8_base-61-vmlinuz
And how can I try other versions of them without having them rolled over during the build?

I’m on Foreman 2.1.2 & Katello 3.16.0.1 on CentOS 7.8.2003 (3.10.0-1127.19.1.el7).

I’ve had CentOS 7 deployment working on another system, but I can’t remember this particular problem.
The only change in the templates was the addition of some extra repos; they have now been removed.

//Br Christian


And how can I try other versions of them without having them rolled over during the build?

Since your host didn’t provision successfully, I assume it is still in build mode (you can see that on the host details page). In that case, simply replace the mentioned files manually and reboot the host.
It should use the previously deployed pxeconfig, and there will be no orchestration that would redeploy the files.
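A minimal sketch of what that could look like, assuming the file names from your /var/lib/tftpboot/boot listing (the initrd name is assumed to follow the same pattern as the vmlinuz, and the source paths are placeholders):

    # back up the files Foreman deployed
    cd /var/lib/tftpboot/boot
    cp centos7_base-25-vmlinuz centos7_base-25-vmlinuz.bak
    cp centos7_base-25-initrd.img centos7_base-25-initrd.img.bak   # name pattern assumed
    # overwrite them with the versions you want to test, keeping the same file names
    cp /path/to/other/vmlinuz    centos7_base-25-vmlinuz
    cp /path/to/other/initrd.img centos7_base-25-initrd.img
    # then reboot the host; while it stays in build mode the existing pxeconfig is reused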

Anyway, I don’t think the kernel and initramfs are to blame.
You might try to get more detailed info about the installation if you have access to the host console:
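For example, something like this from the installer environment (assuming a standard EL7 anaconda layout; the tty numbers can vary):

    # switch to the anaconda shell (Ctrl+Alt+F2) and tail the installer logs
    tail -f /tmp/packaging.log   # yum/rpm transaction and package download errors
    tail -f /tmp/anaconda.log    # general installer progress
    tail -f /tmp/program.log     # output of external commands anaconda runs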

Re: the numbering scheme, I’m not sure, but I believe it’s just a random suffix to prevent multiple orchestrations from touching the same file (just my speculation).


Hi again and thanks for your reply!

Yes, you were right … replacing the kernel & initrd didn’t help.
It only made things worse … so I’m abandoning that path.

Back to the log-files:
I cannot see anything “really” on the client side.
But looking in the messages file on the server, I notice that when anaconda requests an rpm it gets an “HTTP 206” in reply, whereas when I do it manually I get a 200.
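For what it’s worth, a 206 (Partial Content) is the normal response to a ranged GET, so the difference can be reproduced from the command line like this (<repo-path> and <version> are placeholders for the kickstart repo path and the package version):

    URL="http://server1745.dd.net/pulp/repos/<repo-path>/Packages/s/sg3_utils-<version>.el7.x86_64.rpm"
    curl -s -o /dev/null -w '%{http_code}\n' "$URL"                           # plain GET -> 200
    curl -s -o /dev/null -w '%{http_code}\n' -H 'Range: bytes=0-1023' "$URL"  # ranged GET, as yum/urlgrabber may send -> 206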

This happens during the download of the first rpm, and after that it just loops the same thing in 10 retries.
I suppose anaconda knows about subscription keys and environments even this early in the process.

Any idea of what area I should be targeting for problem determination?

//Br Christian

These numbers are content view database IDs from Katello. Every time content is published, it gets a new ID and the kernel/initramfs is re-downloaded.

Are these responses from Pulp? Maybe someone from @katello knows more.


Ah! So that’s when the numbers on the initrd & vmlinuz change. Good to know.

Yes, the “HTTP 206” is from Pulp, and that’s about all I can find in the way of errors related to this:
Oct 1 11:57:04 server1745 pulpcore-content: 127.0.0.1 [01/Oct/2020:09:57:04 +0000] "GET /pulp/content/<myorg>/non-Production/CentOS7/custom/CentOS7/CentOS7_Base/Packages/f/fipscheck-lib-1.4.1-6.el7.x86_64.rpm HTTP/1.1" 206 5096 "-" "urlgrabber/3.10 yum/3.4.3"

I suppose that in the context where anaconda-yum executes, it is restricted by subscription keys, environments and so on, right? And when I just wget from the command line, I bypass all that control.

So I’m guessing that something must be amiss with my content setup for CentOS 7.
Is there a log file where one could expect to find errors like this? There should be a “Denied!” somewhere, right?

Or is it just the tiresome tear-down/rebuild routine that has to be performed?

//Br Christian

I don’t know how this works; Pulp also acts like an HTTP proxy in some cases. Perhaps @Justin_Sherrill can share more details about those HTTP 206 responses.

After too many hours of problem determination, I gave up and started tearing the whole CentOS 7 setup down with the intention of rebuilding it. Unfortunately, there are two versions of the content view that cannot be removed; the delete task gets stuck at 46%.

I have cancelled the delete task, removed more CentOS 7 things, and run it again.
Maybe that’s why I get the error below … I don’t know … Or maybe this is just another sign that my content view was somehow corrupt.

In production.log, I get:

404 Not Found: {"http_request_method": "DELETE", "exception": null, "error_message": "Missing resource(s): repository=1-CentOS7-v2020_267-puppet-152909a0-3715-444f-9357-57cbe9a94f27", "_href": "/pulp/api/v2/repositories/1-CentOS7-v2020_267-puppet-152909a0-3715-444f-9357-57cbe9a94f27/", "http_status": 404, "error": {"code": "PLP0009", "data": {"resources": {"repository": "1-CentOS7-v2020_267-puppet-152909a0-3715-444f-9357-57cbe9a94f27"}}, "description": "Missing resource(s): repository=1-CentOS7-v2020_267-puppet-152909a0-3715-444f-9357-57cbe9a94f27", "sub_errors": []}, "traceback": null, "resources": {"repository": "1-CentOS7-v2020_267-puppet-152909a0-3715-444f-9357-57cbe9a94f27"}}

And in foreman-ssl_access_ssl.log:

NNN.165.11.217 - admin [05/Oct/2020:13:13:02 +0200] "DELETE /pulp/api/v3/repositories/rpm/rpm/6932d509-80ca-4890-8500-b40d80704ec6/versions/29/ HTTP/1.1" 202 67 "-" "OpenAPI-Generator/3.5.0/ruby"
NNN.165.11.217 - admin [05/Oct/2020:13:13:02 +0200] "DELETE /pulp/api/v3/repositories/rpm/rpm/6932d509-80ca-4890-8500-b40d80704ec6/versions/1/ HTTP/1.1" 202 67 "-" "OpenAPI-Generator/3.5.0/ruby"
NNN.165.11.217 - admin [05/Oct/2020:13:13:02 +0200] "DELETE /pulp/api/v3/repositories/rpm/rpm/9b23ed00-561e-470e-b0fc-c9529d25fe33/versions/1/ HTTP/1.1" 404 23 "-" "OpenAPI-Generator/3.5.0/ruby"
NNN.165.11.217 - admin [05/Oct/2020:13:13:02 +0200] "GET /pulp/api/v3/tasks/045a38ac-1951-4b45-bfa9-ed21d8652c4c/ HTTP/1.1" 200 297 "-" "OpenAPI-Generator/3.4.1/ruby"
NNN.165.11.217 - admin [05/Oct/2020:13:13:02 +0200] "DELETE /pulp/api/v2/repositories/1-CentOS7-v2020_267-puppet-152909a0-3715-444f-9357-57cbe9a94f27/ HTTP/1.1" 202 172 "-" "rest-client/2.0.2 (linux-gnu x86_64) ruby/2.5.5p157"

One can see a difference in the last five calls: the last one goes to /pulp/api/v2.
Does that signify anything? That some things are accessed using v2 and other things using v3?

//Br Christian

Ahh! After about 10 slightly different attempts (hammer / GUI / admin user / my LDAP admin user) to remove ‘allthingscentos7’ and a couple of reboots … it finally went away. Let’s rebuild and see if it works.
Thanks for now.

//Br Christian

I just assumed that it worked …
It seems that everything connected to my epel7 repository has, at some point in time, gone sour.
I’m not able to remove the two latest content views containing this repo, and I’m not able to remove the repository itself. Running ‘foreman-installer’ (which I did while trying to raise timeouts) gives:
[DEBUG 2020-10-06T11:13:32 main] =============================================
[DEBUG 2020-10-06T11:13:32 main] Upgrade Step 1/3: katello:correct_repositories. This may take a long while.
[DEBUG 2020-10-06T11:13:32 main] Processing Repository 1/49: Epel7 (4)
[DEBUG 2020-10-06T11:13:32 main] Repository 4 Missing
[DEBUG 2020-10-06T11:13:32 main] Recreating 4
[DEBUG 2020-10-06T11:13:32 main] Failed upgrade task: katello:correct_repositories, see logs for more information.

Is there a way to forcefully remove a product / content-view / repository?
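(For context, the hammer route I’ve been attempting looks roughly like this; the organization, content view, environment, version and product values are placeholders, and this is the normal removal order, not a forced one:)

    hammer content-view version list --organization "<org>" --content-view "<content-view>"
    hammer content-view remove-from-environment --organization "<org>" --name "<content-view>" --lifecycle-environment "<env>"
    hammer content-view version delete --organization "<org>" --content-view "<content-view>" --version "<version>"
    hammer repository delete --organization "<org>" --product "<product>" --name "Epel7"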

[DEBUG 2020-10-06T11:13:32 main] Failed upgrade task: katello:correct_repositories, see logs for more information.

Can you check the installer log to see why it failed?
It should be located somewhere in /var/log/foreman-installer.
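Something like this should surface the relevant lines (the file name is the default one from a Katello install):

    grep -iE 'error|fail' /var/log/foreman-installer/katello.log | tail -n 50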


Not sure what I’m looking for …
The excerpt above is from /var/log/foreman-installer/katello.log and that is the only updated file in this directory.

Here are some more lines from the same file:
<snip>
        [DEBUG 2020-10-06T11:13:32 main] Upgrade Step 1/3: katello:correct_repositories. This may take a long while.
        [DEBUG 2020-10-06T11:13:32 main] Processing Repository 1/49: Epel7 (4)
        [DEBUG 2020-10-06T11:13:32 main] Repository 4 Missing
        [DEBUG 2020-10-06T11:13:32 main] Recreating 4
        [DEBUG 2020-10-06T11:13:32 main] Failed upgrade task: katello:correct_repositories, see logs for more information.
        [DEBUG 2020-10-06T11:13:32 main] =============================================
        [DEBUG 2020-10-06T11:13:32 main] Upgrade Step 2/3: katello:correct_puppet_environments. This may take a long while.
        [DEBUG 2020-10-06T11:13:32 main] Processing Puppet Environment 1/2: 1-CentOS8-v2020_80-puppet-18713aff-4dd0-45c2-a51f-ad0b613a38ab (3)
        [DEBUG 2020-10-06T11:13:32 main] Processing Puppet Environment 2/2: 1-CentOS8-v2020_251-puppet-ea6e6427-073c-4fcd-99a5-b58c447cc192 (4)
        [DEBUG 2020-10-06T11:13:32 main] =============================================
        [DEBUG 2020-10-06T11:13:32 main] Upgrade Step 3/3: katello:clean_backend_objects. This may take a long while.
        [DEBUG 2020-10-06T11:13:32 main] 0 orphaned consumer id(s) found in candlepin.
        [DEBUG 2020-10-06T11:13:32 main] Candlepin orphaned consumers: []
        [DEBUG 2020-10-06T11:13:32 main] 0 orphaned consumer id(s) found in pulp.
        [DEBUG 2020-10-06T11:13:32 main] Pulp orphaned consumers: []
        [DEBUG 2020-10-06T11:13:32 main] foreman-rake upgrade:run finished successfully!
        [DEBUG 2020-10-06T11:13:32 main] Hook /usr/share/foreman-installer/hooks/post/30-upgrade.rb returned nil
        [DEBUG 2020-10-06T11:13:32 main] Hook /usr/share/foreman-installer/hooks/post/99-post_install_message.rb returned nil
        [DEBUG 2020-10-06T11:13:32 main] cdn_ssl_version already migrated, skipping
        [DEBUG 2020-10-06T11:13:32 main] Hook /usr/share/foreman-installer/katello/hooks/post/31-cdn_setting.rb returned [#<Logging::Logger:0x0000000002641458 @name="main", @parent=#<Logging::RootLogger:0x0000000001cee810 @name="root", @appenders=[], @additive=false, @caller_tracing=false, @level=0>, @appenders=[#<Logging::Appenders::RollingFile:0x0000000002637408 @roller=#<Logging::Appenders::RollingFile::Roller:0x00000000026373e0 @fn="/var/log/foreman-installer/katello{{.%d}}.log", @roll_by=:number, @filename="/var/log/foreman-installer/katello.log", @roll=false, @keep=nil, @copy_file="/var/log/foreman-installer/katello.log._copy_", @glob="/var/log/foreman-installer/katello.*.log", @number_rgxp=/\/var\/log\/foreman-installer\/katello.(\d+).log/, @format="/var/log/foreman-installer/katello.%d.log">, @size=nil, @age_fn="/var/log/foreman-installer/katello.log.age", @age_fn_mtime=nil, @age=nil, @encoding=#<Encoding:UTF-8>, @mode="a+:UTF-8", @io=#<File:/var/log/foreman-installer/katello.log>, @close_method=:close, @buffer=[], @immediate=[], @auto_flushing=1, @async=false, @async_flusher=nil, @flush_period=nil, @name="configure", @closed=false, @filters=[], @mutex=#<ReentrantMutex:0x0000000002636d28 @locker=nil>, @layout=#<Logging::Layouts::Pattern:0x0000000001d031e8 @obj_format=:string, @backtrace=true, @utc_offset=nil, @cause_depth=8, @created_at=2020-10-06 11:10:38 +0200, @date_pattern="%Y-%m-%dT%H:%M:%S", @date_method=nil, @pattern="[%5l %d %c] %m\n", @color_scheme=nil>, @level=0, @write_size=500>], @additive=true, @level=0, @caller_tracing=false>, #<Logging::Logger:0x000000000262fcd0 @name="fatal", @parent=#<Logging::RootLogger:0x0000000001cee810 @name="root", @appenders=[], @additive=false, @caller_tracing=false, @level=0>, @appenders=[#<Logging::Appenders::Stderr:0x000000000262c1c0 @io=#<IO:<STDERR>>, @close_method=:close, @buffer=[], @immediate=[], @auto_flushing=1, @async=false, @async_flusher=nil, @flush_period=nil, @name="stderr", @closed=false, @filters=[], @mutex=#<ReentrantMutex:0x0000000002623e08 @locker=nil>, @layout=#<Logging::Layouts::Pattern:0x00000000019e1b40 @obj_format=:string, @backtrace=true, @utc_offset=nil, @cause_depth=8, @created_at=2020-10-06 11:10:38 +0200, @date_pattern="%Y-%m-%dT%H:%M:%S", @date_method=nil, @pattern="[%5l %d %c] %m\n", @color_scheme=#<Logging::ColorScheme:0x00000000019e3bc0 @scheme={"date"=>"\e[34m", "logger"=>"\e[36m", "line"=>"\e[33m", "file"=>"\e[33m", "method"=>"\e[33m", "info"=>"\e[32m", "warn"=>"\e[33m", "error"=>"\e[31m", "fatal"=>"\e[37m\e[41m"}, @lines=false, @levels=true>, @name_map_0=["DEBUG", "\e[32m INFO\e[0m", "\e[33m WARN\e[0m", "\e[31mERROR\e[0m", "\e[37m\e[41mFATAL\e[0m"]>, @level=0, @encoding=nil, @write_size=500>], @additive=true, @level=4, @caller_tracing=false>]
        [DEBUG 2020-10-06T11:13:32 main] Hook /usr/share/foreman-installer/katello/hooks/post/99-version_locking.rb returned nil
        [ INFO 2020-10-06T11:13:32 main] All hooks in group post finished
        [DEBUG 2020-10-06T11:13:32 main] Exit with status code: 2 (signal was 2)
        [ERROR 2020-10-06T11:13:32 main] Errors encountered during run:
        [ERROR 2020-10-06T11:13:32 main] foreman-maintain packages is-locked --assumeyes failed! Check the output for error!
        [DEBUG 2020-10-06T11:13:32 main] Cleaning /tmp/kafo_installation20201006-6939-imnz1i
        [DEBUG 2020-10-06T11:13:32 main] Cleaning /tmp/kafo_installation20201006-6939-judmyb
        [DEBUG 2020-10-06T11:13:32 main] Cleaning /tmp/default_values.yaml
        [ INFO 2020-10-06T11:13:32 main] Installer finished in 168.181707015 seconds


I'm pretty sure that this is the output for this particular run:

    [root@server1745 adminchristianj]# foreman-installer
    Preparing installation Done
    Executing: foreman-rake upgrade:run
    =============================================
    Upgrade Step 1/3: katello:correct_repositories. This may take a long while.
    Processing Repository 1/49: Epel7 (4)
    Repository 4 Missing
    Recreating 4
    Failed upgrade task: katello:correct_repositories, see logs for more information.
    =============================================
    Upgrade Step 2/3: katello:correct_puppet_environments. This may take a long while.
    Processing Puppet Environment 1/2: 1-CentOS8-v2020_80-puppet-18713aff-4dd0-45c2-a51f-ad0b613a38ab (3)
    Processing Puppet Environment 2/2: 1-CentOS8-v2020_251-puppet-ea6e6427-073c-4fcd-99a5-b58c447cc192 (4)
    =============================================
    Upgrade Step 3/3: katello:clean_backend_objects. This may take a long while.
    0 orphaned consumer id(s) found in candlepin.
    Candlepin orphaned consumers: []
    0 orphaned consumer id(s) found in pulp.
    Pulp orphaned consumers: []
    foreman-rake upgrade:run finished successfully!
      Success!
      * Foreman is running at https://server1745.dd.net
      * To install an additional Foreman proxy on separate machine continue by running:

          foreman-proxy-certs-generate --foreman-proxy-fqdn "$FOREMAN_PROXY" --certs-tar "/root/$FOREMAN_PROXY-certs.tar"
      * Foreman Proxy is running at https://server1745.dd.net:9090
      The full log is at /var/log/foreman-installer/katello.log

And from production.log around the same time:

2020-10-06T11:13:22 [I|app|] Rails cache backend: File
2020-10-06T11:13:25 [W|app|] Creating scope :completer_scope. Overwriting existing method Organization.completer_scope.
2020-10-06T11:13:26 [W|app|] Scoped order is ignored, it's forced to be batch order.
2020-10-06T11:13:26 [W|app|] Creating scope :completer_scope. Overwriting existing method Location.completer_scope.
2020-10-06T11:13:31 [I|kat|] GET: https://server1745.dd.net/pulp/api/v2/repositories/6793bfd7-17ff-4c40-bb46-566bd080bd51/?: {"content_type"=>"application/json", "accept"=>"application/json"}
404 Not Found: {"http_request_method": "GET", "exception": null, "error_message": "Missing resource(s): repository=6793bfd7-17ff-4c40-bb46-566bd080bd51", "_href": "/pulp/api/v2/repositories/6793bfd7-17ff-4c40-bb46-566bd080bd51/?", "http_status": 404, "error": {"code": "PLP0009", "data": {"resources": {"repository": "6793bfd7-17ff-4c40-bb46-566bd080bd51"}}, "description": "Missing resource(s): repository=6793bfd7-17ff-4c40-bb46-566bd080bd51", "sub_errors": []}, "traceback": null, "resources": {"repository": "6793bfd7-17ff-4c40-bb46-566bd080bd51"}}
2020-10-06T11:13:32 [W|app|] Failed upgrade task: katello:correct_repositories
2020-10-06T11:16:35 [I|app|2180e652] Started GET "/katello/api/repositories?page=1&per_page=1000" for 127.0.0.1 at 2020-10-06 11:16:35 +0200

I gave up and reverted to a point in time before any configuration.
Let’s start over.

Thanks for now.

Yo, were you able to get it to work after starting over again?

I have this same issue: I synced an entire ISO directory and used the synced content as the operating system for provisioning. I get an error similar to what is described here (https://www.golinuxcloud.com/error-populating-transaction-retrying-rhel-7/).

It complained that certain packages were missing during the anaconda installation, when in fact I could find those “missing” packages among the synced content. I have a feeling the repodata got corrupted when Katello tried to sync the ISO from the remote repository.
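A rough way to sanity-check the repodata the server hands out would be something like the following (the base URL and the hash are placeholders for the kickstart repo URL used in the provisioning template and the checksum it lists):

    BASE="http://<foreman-server>/pulp/repos/<org>/<env>/<content-view>/custom/<product>/<repo>"
    # repomd.xml lists the checksum and location of primary.xml.gz
    curl -s "$BASE/repodata/repomd.xml" | grep -A4 '<data type="primary">'
    # fetch the referenced file and compare its checksum with the one listed above
    curl -s "$BASE/repodata/<hash>-primary.xml.gz" | sha256sum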

Does anyone know how I can resolve it?

Thanks

@lzap We were having the same issues mentioned here, from a fresh CentOS 7 install:

  • Linux foreman-devel 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • foreman-installer-2.1.4-1.el7.noarch
  • katello-3.16.1-1.el7.noarch

(We tried with 2.2rc1 and Katello 3.16 and the issue still happened; we could not try the latest 2.2 or 2.3 with 3.17 because of this bug: Bug #31217: installation failed with "Evaluation Error: Error while evaluating a Resource Statement, Class[Dhcp]: has no parameter named 'conf_dir_mode' - Installer - Foreman)

As mentioned by @Bugenhagen, this only happens with CentOS 7, not 8.
While looking at the logs from the Foreman/Katello instance, we noticed that even though the first packages requested got a 206, the following ones got a 200, which does not make much sense (at least to me). The following is part of the httpd logs from a clean boot of a client:

192.168.200.26 - - [13/Nov/2020:13:34:30 +0100] "GET /unattended/provision?token=4bcd8534-cccd-4387-9f8e-ea1868e34330 HTTP/1.1" 200 4735 "-" "curl/7.29.0"
192.168.200.26 - - [13/Nov/2020:13:34:32 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base//.treeinfo HTTP/1.1" 200 875 "-" "curl/7.29.0"
192.168.200.26 - - [13/Nov/2020:13:34:32 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base//LiveOS/squashfs.img HTTP/1.1" 200 521617408 "-" "curl/7.29.0"
192.168.200.26 - - [13/Nov/2020:13:34:33 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base//images/updates.img HTTP/1.1" 404 14 "-" "curl/7.29.0"
192.168.200.26 - - [13/Nov/2020:13:34:33 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base//images/product.img HTTP/1.1" 404 14 "-" "curl/7.29.0"
192.168.200.26 - - [13/Nov/2020:13:34:56 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base//.treeinfo HTTP/1.1" 200 875 "-" "urlgrabber/3.10"
192.168.200.26 - - [13/Nov/2020:13:34:56 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/repodata/repomd.xml HTTP/1.1" 200 3875 "-" "CentOS (anaconda)/7 yum/3.4.3"
192.168.200.26 - - [13/Nov/2020:13:34:57 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base//.treeinfo HTTP/1.1" 200 875 "-" "urlgrabber/3.10"
192.168.200.26 - - [13/Nov/2020:13:34:57 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/repodata/7ea6872641840c61c4f249ab656d8e4d647d223797329d3dffe947e4db777464-primary.xml.gz HTTP/1.1" 200 4272773 "-" "CentOS (anaconda)/7 yum/3.4.3"
192.168.200.26 - - [13/Nov/2020:13:34:57 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/repodata/9e77e5b0b24bbffe125b1a7bbd1fe2a14f3ddaff9ac6c2cf939a228b61bac1ff-comps.xml HTTP/1.1" 200 763349 "-" "CentOS (anaconda)/7 yum/3.4.3"
192.168.200.26 - - [13/Nov/2020:13:34:59 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base//repodata/repomd.xml HTTP/1.1" 200 3875 "-" "urlgrabber/3.10"
192.168.200.26 - - [13/Nov/2020:13:35:19 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/Packages/f/fipscheck-lib-1.4.1-6.el7.x86_64.rpm HTTP/1.1" 206 4832 "-" "urlgrabber/3.10 yum/3.4.3"
192.168.200.26 - - [13/Nov/2020:13:35:19 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/Packages/s/sg3_utils-1.37-19.el7.x86_64.rpm HTTP/1.1" 206 42956 "-" "urlgrabber/3.10 yum/3.4.3"
192.168.200.26 - - [13/Nov/2020:13:35:19 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/Packages/p/plymouth-core-libs-0.8.9-0.34.20140113.el7.centos.x86_64.rpm HTTP/1.1" 200 52847 "-" "urlgrabber/3.10 yum/3.4.3"
192.168.200.26 - - [13/Nov/2020:13:35:20 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/Packages/p/plymouth-core-libs-0.8.9-0.34.20140113.el7.centos.x86_64.rpm HTTP/1.1" 200 51399 "-" "urlgrabber/3.10 yum/3.4.3"
192.168.200.26 - - [13/Nov/2020:13:35:21 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/Packages/p/plymouth-core-libs-0.8.9-0.34.20140113.el7.centos.x86_64.rpm HTTP/1.1" 200 71671 "-" "urlgrabber/3.10 yum/3.4.3"
192.168.200.26 - - [13/Nov/2020:13:35:23 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/Packages/p/plymouth-core-libs-0.8.9-0.34.20140113.el7.centos.x86_64.rpm HTTP/1.1" 200 62983 "-" "urlgrabber/3.10 yum/3.4.3"
192.168.200.26 - - [13/Nov/2020:13:35:27 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/Packages/p/plymouth-core-libs-0.8.9-0.34.20140113.el7.centos.x86_64.rpm HTTP/1.1" 200 55743 "-" "urlgrabber/3.10 yum/3.4.3"
192.168.200.26 - - [13/Nov/2020:13:35:35 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/Packages/p/plymouth-core-libs-0.8.9-0.34.20140113.el7.centos.x86_64.rpm HTTP/1.1" 200 60087 "-" "urlgrabber/3.10 yum/3.4.3"
192.168.200.26 - - [13/Nov/2020:13:35:51 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/Packages/p/plymouth-core-libs-0.8.9-0.34.20140113.el7.centos.x86_64.rpm HTTP/1.1" 200 46103 "-" "urlgrabber/3.10 yum/3.4.3"
192.168.200.26 - - [13/Nov/2020:13:36:23 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/Packages/p/plymouth-core-libs-0.8.9-0.34.20140113.el7.centos.x86_64.rpm HTTP/1.1" 200 51399 "-" "urlgrabber/3.10 yum/3.4.3"
192.168.200.26 - - [13/Nov/2020:13:37:27 +0100] "GET /pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/Packages/p/plymouth-core-libs-0.8.9-0.34.20140113.el7.centos.x86_64.rpm HTTP/1.1" 200 58639 "-" "urlgrabber/3.10 yum/3.4.3"

Here fipscheck-lib and sg3_utils get the 206, but the other packages seem reachable. Still, if I curl any package from the anaconda tmux session, all of them can be downloaded.

I have checked the initrd.img and vmlinuz images synced in our repo with the ones in the original repo and the checksums match.

If we replace the repo URL in the template with the upstream one, the installation works.
Provisioning template with synced repos (this does not work):

install
url --url http://foreman-devel.scicore.unibas.ch/pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/

Provisioning template with external repos (this works):

install
url --url http://linuxsoft.cern.ch/centos/7.9.2009/os/x86_64/

Then, during a last test, I noticed that the package reported as not found was no longer sg3_utils but another one. So during an install I did a curl against the URL that gave the 200 (plymouth-core-libs) and, funny enough, on the next anaconda retry yet another package showed up. So I tried doing a full curl of each rpm in the local repo, and after that the installation works with the local repos.

# The following was a quick solution to just "touch" all rpms: loop over the per-letter Packages/ directories, scrape the file names from the listing, and download each rpm (output discarded) so Pulp fetches them.
for i in {a..z}; do for j in $(curl -s http://foreman-devel.scicore.unibas.ch/pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/Packages/$i/ | grep href | awk '{print $2}' | cut -f2 -d'>' | cut -f1 -d'<'); do curl -s http://foreman-devel.scicore.unibas.ch/pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/Packages/$i/$j > /dev/null; done; done
# Same for the "3" directory (packages whose names start with a digit).
for i in $(curl -s http://foreman-devel.scicore.unibas.ch/pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/Packages/3/ | grep href | awk '{print $2}' | cut -f2 -d'>' | cut -f1 -d'<'); do curl -s http://foreman-devel.scicore.unibas.ch/pulp/repos/sciCORE/Library/custom/CentOS_7/centos7_latest_Base/Packages/3/$i > /dev/null; done

If needed, I can provide the Ansible code and foreman-installer options used to deploy our instance, if that would be useful for troubleshooting this issue.

Let me know if I can provide more information.

Cheers


I forgot to add that this issue happens with the download policy set to On Demand, and even with Immediate after forcing a repo sync. Only going over all rpms with curl makes the installation work.

Final update (apologies for the spam). If I do a fresh provisioning and configure the download policy to immediate in the theforeman.foreman.repository Ansible module, then provisioning works.
So this issue seems to happen only if the initial config is done with on_demand. In my tests I had switched to immediate, but only after having done the initial setup with on_demand and then forcing a sync, which somehow seems to yield a different result than doing the initial setup with immediate.
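For anyone hitting the same thing on an already-created repository, the equivalent change can also be made with hammer and followed by a re-sync (the repo id is a placeholder; as far as I can tell this mirrors what the Ansible module does):

    hammer repository update --id <repo-id> --download-policy immediate
    hammer repository synchronize --id <repo-id>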

Have a nice weekend.


No, @pineapplebun. A fresh start didn’t help me. My scripted setup always uses the “on-demand” policy on the repositories. Dang … I’ve spent so many hours searching my configuration for errors. And when it behaved exactly the same in Katello 3.17-RC2, we gave up attempting to provision CentOS 7.

It’s good to see that a simple, yet space-consuming, workaround exists. @i-mtz, is it sufficient to curl from ‘Library’ even if your server belongs to another lifecycle environment? Or maybe your servers live in ‘Library’?

So, we know that it works with curl. I’ve used the repositories to upgrade existing servers using yum. Is it then only anaconda-yum that fails?

The provisioned VM was configured using the Production lifecycle environment, so yes, it works when doing curl against Library.

If yum works when pointed at the repo with on_demand as the download policy, maybe the issue is with anaconda-yum?
In our case the CentOS 7 and 8 repos are configured the same (minus URLs and the new naming scheme in 8), and the provisioning of a CentOS 8 machine worked without issues. So unless Katello is doing something version-specific with the repos, the remaining variable is really the code that accesses the repos from the machine being provisioned.
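For the yum side of that comparison, something like this on an already-installed EL7 box should exercise the same on_demand repo (the repo URL is a placeholder for the published repository URL):

    # create a throwaway repo definition pointing at the published repo URL and test an install
    printf '[cv-test]\nname=cv-test\nbaseurl=<repo-url>\nenabled=1\ngpgcheck=0\n' > /etc/yum.repos.d/cv-test.repo
    yum --disablerepo='*' --enablerepo=cv-test install sg3_utils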

I really don’t know what is going on within @Pulp; maybe someone from the backend team could give some advice.

Hi @i-mtz @pineapplebun @Bugenhagen @rplevka,

I appear to be having the same kind of trouble with RHEL repositories initially synced using the On Demand policy.

This is using:
Foreman 2.3.3
Katello 3.18.2.1
pulp-server-2.21.5
python3-pulpcore-3.7.5

I managed to provision by creating installation media for RHEL 7 using the ISO. I’m assuming this is a Pulp thing. I’m going to try to blow away my RHEL kickstart trees and perform the initial sync using the
immediate policy. We do have a very flaky proxy here, which could have caused us some problems.

Did everyone else resolve this issue?