katello-nightly-rpm-pipeline 953 failed

Katello nightly pipeline failed:

https://ci.theforeman.org/job/katello-nightly-rpm-pipeline/953/

foreman-pipeline-katello-nightly-centos8-install (failed) (remote job)
foreman-pipeline-katello-nightly-centos8-stream-install (failed) (remote job)
foreman-pipeline-katello-nightly-centos8-upgrade (passed) (remote job)
foreman-pipeline-katello-nightly-centos7-install (passed) (remote job)
foreman-pipeline-katello-nightly-centos7-upgrade (passed) (remote job)

Current failures I am seeing with fresh EL8 installs:

    not ok 34 publish and promote composite content view
    # (in test file fb-katello-content.bats, line 260)
    #   `hammer content-view publish --organization="${ORGANIZATION}" \' failed with status 70
    # Task e7934742-6dc6-4842-b34e-884151a7e874 running: 0.0/1, 0%, elapsed: 00:00:00
    # Task e7934742-6dc6-4842-b34e-884151a7e874 running: 0.21/1, 21%, 0.1/s, elapsed: 00:00:02, ETA: 00:00:08
    # Task e7934742-6dc6-4842-b34e-884151a7e874 running: 0.42/1, 42%, 0.1/s, elapsed: 00:00:04, ETA: 00:00:06
    # Task e7934742-6dc6-4842-b34e-884151a7e874 running: 0.82/1, 82%, 0.1/s, elapsed: 00:00:06, ETA: 00:00:01
    # Task e7934742-6dc6-4842-b34e-884151a7e874 error: 0.86/1, 86%, 0.1/s, elapsed: 00:00:08, ETA: 00:00:01
    # Task e7934742-6dc6-4842-b34e-884151a7e874 error: 0.86/1, 86%, 0.1/s, elapsed: 00:00:08, ETA: 00:00:01
    # Error: update or delete on table "core_reservedresource" violates foreign key constraint "core_taskreservedres_resource_id_ee0b7c62_fk_core_rese" on table "core_taskreservedresource"
    # DETAIL:  Key (pulp_id)=(69a94c39-7f0f-4f68-9816-e4254c4fc83f) is still referenced from table "core_taskreservedresource".
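The error means a `core_taskreservedresource` row still points at the reserved resource Pulp is trying to delete. As a minimal sketch of the same failure class (Pulp actually runs on PostgreSQL; the table and column names below are simplified from the constraint name in the error, not Pulp's real schema):

```shell
# Illustration only: a referenced row blocks the DELETE, just like the
# pipeline error above. Simplified schema, sqlite3 instead of PostgreSQL.
sqlite3 :memory: <<'SQL'
PRAGMA foreign_keys = ON;
CREATE TABLE core_reservedresource (pulp_id TEXT PRIMARY KEY);
CREATE TABLE core_taskreservedresource (
    resource_id TEXT REFERENCES core_reservedresource (pulp_id)
);
INSERT INTO core_reservedresource VALUES ('69a94c39');
INSERT INTO core_taskreservedresource VALUES ('69a94c39');
-- Fails while the task row still references the resource:
DELETE FROM core_reservedresource WHERE pulp_id = '69a94c39';
SQL
```

Presumably the race is two workers interleaving cleanup: one deletes the reservation before the other has dropped its task's reference to it.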

@Justin_Sherrill any ideas?

we saw this recently: Issue #8603: possible tasking race condition: update or delete on table "core_reservedresource" violates foreign key constraint "core_taskreservedres_resource_id_ee0b7c62_fk_core_rese" on table "core_taskreservedresource" - Pulp on pipeline #943

It looks like we had a successful run after that, so this one is likely transient (but I'm not sure how often it happens). I can raise the priority with the pulp team.

I’ve seen it twice in the pipeline and once locally in the past 48 hours, every time on EL8 and only on fresh installs. Perhaps we see it more often on EL8 because EL8 tends to be faster all around, giving the race condition more chances to trigger? Please do raise the priority, as this will get annoying :slight_smile:

Additionally, we are seeing failures with our bats tests not finding the foreman-maintain command. Doing some debugging, I see:

#   `which foreman-maintain' failed
# which: no foreman-maintain in (/usr/libexec:/sbin:/bin:/usr/sbin:/usr/bin)

not ok 4 check service status
# (in test file fb-test-katello.bats, line 41)
#   `foreman-maintain service status' failed with status 127
# /tmp/bats.62678.src: line 41: foreman-maintain: command not found
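Status 127 is the shell's "command not found" exit code, which is consistent with `which` seeing no foreman-maintain anywhere in the PATH bats reports (`/usr/libexec:/sbin:/bin:/usr/sbin:/usr/bin`). A quick sketch of the distinction:

```shell
# 127 = "command not found"; an ordinary failure exits with the command's
# own status (1 for `false`) -- this matches the bats output above.
bash -c 'no-such-command' 2>/dev/null || echo "missing command -> exit $?"
bash -c 'false'                       || echo "ordinary failure -> exit $?"
```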

However, I can confirm the package exists at the time the test is run, and the command exists when I ssh onto the machine and run it myself:

[root@pipe-katello-server-nightly-centos8-stream vagrant]# which foreman-maintain
/bin/foreman-maintain
[root@pipe-katello-server-nightly-centos8-stream vagrant]# foreman-maintain --help
Usage:
    foreman-maintain [OPTIONS] SUBCOMMAND [ARG] ...

Parameters:
    SUBCOMMAND                    subcommand
    [ARG] ...                     subcommand arguments

Subcommands:
    health                        Health related commands
    upgrade                       Upgrade related commands
    service                       Control applicable services
    backup                        Backup server
    restore                       Restore a backup
    packages                      Lock/Unlock package protection, install, update
    advanced                      Advanced tools for server maintenance
    content                       Content related commands
    maintenance-mode              Control maintenance-mode for application

Options:
    -h, --help                    print help

This seems to happen on CentOS 8 Stream locally, and on both 8 Stream and 8 in our CI system.

It turns out that with the change to not enable the katello-agent infrastructure by default, we now ensure that qpid-tools is not present. Since the katello RPM depends on qpid-tools, yum removed qpid-tools, which pulled out katello along with it, and thus rubygem-foreman_maintain. This PR should fix this:
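For anyone wanting to confirm this kind of removal cascade on an affected box, something along these lines should work (package names are from the paragraph above; transaction IDs will differ per machine):

```shell
# Which installed packages require qpid-tools? (katello, if the dep is there)
rpm -q --whatrequires qpid-tools

# Find the yum/dnf transaction that touched qpid-tools...
dnf history list qpid-tools

# ...then inspect it; katello and rubygem-foreman_maintain should show up as
# Erased alongside qpid-tools:
# dnf history info <id>   # <id> = the transaction number from the list above
```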
