Has Anyone Successfully Installed Katello 3.15 & Up?

Problem: The Katello 3.15 upgrade was a serious detriment to my system. Puppet hosts were suddenly revoked; then I came to find the content hosts were also wiped. I didn’t panic; I have backups. Upgrading did not work, so I decided to try a fresh install from 3.16 and up (yes, even 3.18-RC1). That said, no dice.

My last attempt:

yum -y localinstall https://yum.theforeman.org/releases/2.2/el7/x86_64/foreman-release.rpm;
yum -y localinstall https://fedorapeople.org/groups/katello/releases/yum/3.17/katello/el7/x86_64/katello-repos-latest.rpm;
yum -y localinstall https://yum.puppet.com/puppet6-release-el-7.noarch.rpm;
yum -y install epel-release centos-release-scl-rh;
yum -y update; # no reboot needed, in my case...
yum -y install gofer; # the installer leaves this to the user, for some reason
yum -y install katello tfm-rubygem-hammer_cli_katello;
# --no-enable-foreman-plugin-monitoring flag added, as recently recommended on this forum...
foreman-installer \
    --scenario katello \
    --certs-server-cert "/etc/pki/tls/certs/<signed-cert>.crt" \
    --certs-server-key "/etc/pki/tls/certs/<signed-cert-key>.key" \
    --certs-server-ca-cert "/etc/pki/tls/certs/<signed-cert-ca>.crt" \
    --no-enable-foreman-plugin-monitoring \
    --verbose -l DEBUG;

Expected outcome: A successful install

Foreman and Proxy versions: 2.2.1

Foreman and Proxy plugin versions: Not sure what this is referring to

Distribution and version: CentOS Linux release 7.9.2009 (Linux 3.10.0-1160.6.1.el7.x86_64)

Other relevant data:
/var/log/foreman-installer/katello.log (2.5 MB)
/var/tmp/foreman-debug-6xVMz.tar.xz (9.1 MB)

What I know so far:

  • rh-redis5-redis has a permissions issue. This is mitigated by creating/chowning /var/log/redis before the install (see the sketch after this list).
  • The foreman user’s $HOME folder, /usr/share/foreman, is owned by root.
  • There seems to be an abundance of SCL/Ruby/Gem issues. Various dependency failures are noted.
  • The official docs are misleading. A lot of people are going to find their root partition filled by the MongoDB/PostgreSQL change that comes with the SCL/RH switch.
  • foreman-rake is broken. (apipie:cache, apipie:cache:index, db:migrate, db:seed, etc.)
  • foreman-maintain is broken. (is_locked/execute/execute! issue)
  • foreman-debug is broken. (Cannot grab the bundle_list without some modification.)
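
For reference, the mitigation from the first bullet is just this (a sketch; the redis user/group only exist once the rh-redis5-redis package is present, so the chown may need to happen after the yum install but before running foreman-installer):

mkdir -p /var/log/redis
chown redis:redis /var/log/redis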

Of course, this is just what I’ve found in my experience. That said, this is a fresh install; I’m literally following the official docs to a T (I mean, they’re simple enough). So, besides any ideas on what my issue could be, what I’d really like to know is: who, if anyone, has had a successful run with any installation or upgrade since the crawl to Pulp 3 began? And on what distribution, release, etc.?

The installer log shows that the installation passed. I’m a bit unclear on exactly what the problem was.

This is odd. By default /var/opt/rh/rh-redis5/log/redis should be used. Perhaps we misconfigure this in the installer.

This is correct and I don’t see what the problem is.

Can you share them? I suspect that you may be enabling a different Ruby or changed your system Ruby in some way.

Can you share how it’s broken?

Again, I don’t know what you mean.

It would be useful if you shared exactly what’s going on. This again hints that your system Ruby may be broken and foreman-debug would report that.

Looking at /var/log/messages in your debug the thing that mostly jumps out is that Pulp 3 can’t authenticate to PostgreSQL. That is a problem.

Another thing that jumped out was that hammer ping couldn’t connect to https://$FQDN:443. Perhaps there’s a firewall in between? If the service isn’t running, that might also explain it. In the debug I don’t see logs from /var/log/foreman so I can’t really say anything about that.

Weird, as this is not what foreman-installer showed me. I checked it out and, at first, it appeared as though the --verbose -l DEBUG flags only apply to stdout and not to katello.log. Upon further inspection, though, it’s now clear that foreman-installer omits this line from the log by design, which I think unintentionally provides a false narrative:

EXPECTATION

Foreman 2.2 Manual > 3.2.1 Installation > Running the installer:
After it completes, the installer will print some details about where to find Foreman and the Smart Proxy. Output should be similar to this:

  * Foreman is running at https://theforeman.example.com
      Initial credentials are admin / 3ekw5xtyXCoXxS29
  * Foreman Proxy is running at https://theforeman.example.com:8443
  * The full log is at /var/log/foreman-installer/foreman-installer.log

NARRATIVE

/var/log/foreman-installer/katello.log (2.5 MB):

[DEBUG 2020-12-03T20:15:31 main]  /File[/opt/puppetlabs/puppet/cache/locales]: Adding autorequire relationship with File[/opt/puppetlabs/puppet/cache]
[DEBUG 2020-12-03T20:15:31 main]  Finishing transaction 53036520
[DEBUG 2020-12-03T20:15:31 main]  Received report to process from satellite.gtkcentral.net
[ INFO 2020-12-03T20:15:33 main] Puppet has finished, bye!
[ INFO 2020-12-03T20:15:33 main] Executing hooks in group post
[DEBUG 2020-12-03T20:15:33 main] Hook /usr/share/foreman-installer/hooks/post/30-upgrade.rb returned nil
[DEBUG 2020-12-03T20:15:33 main] Hook /usr/share/foreman-installer/hooks/post/99-post_install_message.rb returned nil
[DEBUG 2020-12-03T20:15:33 main] cdn_ssl_version already migrated, skipping
[DEBUG 2020-12-03T20:15:33 main] Hook /usr/share/foreman-installer/katello/hooks/post/31-cdn_setting.rb returned [#<Logging::Logger:0x000000 ... @caller_tracing=false>]
[DEBUG 2020-12-03T20:15:33 main] Hook /usr/share/foreman-installer/katello/hooks/post/99-version_locking.rb returned nil
[ INFO 2020-12-03T20:15:33 main] All hooks in group post finished
[DEBUG 2020-12-03T20:15:33 main] Exit with status code: 6 (signal was 6)
[DEBUG 2020-12-03T20:15:33 main] Cleaning /tmp/kafo_installation20201203-5806-1yc6h7s
[DEBUG 2020-12-03T20:15:33 main] Cleaning /tmp/kafo_installation20201203-5806-1cjg6v0
[DEBUG 2020-12-03T20:15:33 main] Cleaning /tmp/default_values.yaml
[ INFO 2020-12-03T20:15:33 main] Installer finished in 329.333699211 seconds

REALITY

STDOUT (truncated for clarity):

[ INFO 2020-12-03T20:15:33 verbose] Puppet has finished, bye!
[ INFO 2020-12-03T20:15:33 verbose] Executing hooks in group post
[DEBUG 2020-12-03T20:15:33 verbose] Hook /usr/share/foreman-installer/hooks/post/30-upgrade.rb returned nil
  Something went wrong! Check the log for ERROR-level output
  The full log is at /var/log/foreman-installer/katello.log
[DEBUG 2020-12-03T20:15:33 verbose] Hook /usr/share/foreman-installer/hooks/post/99-post_install_message.rb returned nil
[DEBUG 2020-12-03T20:15:33 verbose] cdn_ssl_version already migrated, skipping
...
[ INFO 2020-12-03T20:15:33 verbose] Installer finished in 329.333699211 seconds

That’s pretty weird. I’ve never really had to check that stuff, so I was unaware (and a bit befuddled) that the “something went wrong” message would be omitted from it.

Either way, in my experience, the installer will always say it “finished in ###.##### seconds,” whether the install was successful or not. I’ve never been given the impression that it is indicative of success, just completion. It’s like a race: one can finish a race, but a race’s completion is not indicative of one’s success in said race.
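
Lesson learned, for anyone else reading: check the exit code and grep for ERROR-level lines instead of trusting the “finished” message. A minimal sketch:

foreman-installer --scenario katello --verbose -l DEBUG
echo "exit code: $?"   # non-zero (the log above shows status code 6) means the run did not complete cleanly
grep 'ERROR' /var/log/foreman-installer/katello.log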



I think so. /usr/lib/systemd/system/rh-redis5-redis.service states the conf file as /etc/opt/rh/rh-redis5/redis.conf. That conf file is installed with the following path properties (numbered for clarity):

  20  pidfile /var/opt/rh/rh-redis5/run/redis_6379.pid
  64  unixsocket /var/run/redis/redis.sock
  96  logfile /var/log/redis/redis.log
 180  dir /var/opt/rh/rh-redis5/lib/redis
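
To double-check which paths the running service actually picks up (rather than just what the package laid down), something like this should do:

systemctl cat rh-redis5-redis                                                  # shows ExecStart and the conf file it loads
grep -nE '^(pidfile|unixsocket|logfile|dir)' /etc/opt/rh/rh-redis5/redis.conf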


I wasn’t sure about this one, to be honest. At one point, my troubles led me to an issue similar to this one. The OP received no further response, so I made a point to bring it up in case it was accidentally lost in time.

In addition, it is normally assumed that the user in question owns its own $HOME folder; so, you could say it just subconsciously raised a red flag for me, as someone on the outside looking in.



I never have a system Ruby installed prior to the fresh install, nor do I use Ruby version managers. Though in the distant past of my Foreman journey (journeyman? :stuck_out_tongue:) I dabbled with rvm and learned early on to just avoid them altogether. Whatever version of Ruby gets installed as a result of Foreman/Katello’s dependencies is what my Ruby environment ends up being.

That said, all versions of Ruby are detailed in the archive I uploaded; and from katello.log:

 16363  [ WARN 2020-12-04T11:13:29 main]  /Stage[main]/Foreman::Database/Foreman_config_entry[db_pending_seed]: Skipping because of failed dependencies
 16366  [ WARN 2020-12-04T11:13:29 main]  /Stage[main]/Foreman::Database/Foreman::Rake[db:seed]/Exec[foreman-rake-db:seed]: Skipping because of failed dependencies
 16372  [ WARN 2020-12-04T11:13:29 main]  /Service[httpd]: Skipping because of failed dependencies
 16377  [ WARN 2020-12-04T11:13:29 main]  /Stage[main]/Katello::Application/Foreman_config_entry[pulp_client_cert]: Skipping because of failed dependencies
 16379  [ WARN 2020-12-04T11:13:29 main]  /Stage[main]/Katello::Application/Foreman_config_entry[pulp_client_key]: Skipping because of failed dependencies
 16381  [ WARN 2020-12-04T11:13:29 main]  /Stage[main]/Foreman/Foreman::Rake[apipie:cache:index]/Exec[foreman-rake-apipie:cache:index]: Skipping because of failed dependencies
 16384  [ WARN 2020-12-04T11:13:29 main]  /Service[dynflow-sidekiq@worker-hosts-queue]: Skipping because of failed dependencies
 16432  [ WARN 2020-12-04T11:13:36 main]  /Service[foreman]: Skipping because of failed dependencies
 16437  [ WARN 2020-12-04T11:13:36 main]  /Stage[main]/Foreman::Service/Foreman::Dynflow::Worker[orchestrator]/File[/etc/foreman/dynflow/orchestrator.yml]: Skipping because of failed dependencies
 16439  [ WARN 2020-12-04T11:13:36 main]  /Service[dynflow-sidekiq@orchestrator]: Skipping because of failed dependencies
 16442  [ WARN 2020-12-04T11:13:36 main]  /Stage[main]/Foreman::Service/Foreman::Dynflow::Worker[worker]/File[/etc/foreman/dynflow/worker.yml]: Skipping because of failed dependencies
 16444  [ WARN 2020-12-04T11:13:36 main]  /Service[dynflow-sidekiq@worker]: Skipping because of failed dependencies
 16457  [ WARN 2020-12-04T11:13:37 main]  /Stage[main]/Foreman_proxy::Register/Foreman_smartproxy[satellite.gtkcentral.net]: Skipping because of failed dependencies
 16461  [ WARN 2020-12-04T11:13:37 main]  /Service[puppet]: Skipping because of failed dependencies
 16464  [ WARN 2020-12-04T11:13:37 main]  /Service[puppet-run.timer]: Skipping because of failed dependencies
 16466  [ WARN 2020-12-04T11:13:37 main]  /Stage[main]/Puppet::Agent::Service::Systemd/File[/etc/systemd/system/puppet-run.timer]: Skipping because of failed dependencies
 16468  [ WARN 2020-12-04T11:13:37 main]  /Stage[main]/Puppet::Agent::Service::Systemd/File[/etc/systemd/system/puppet-run.service]: Skipping because of failed dependencies
 16470  [ WARN 2020-12-04T11:13:37 main]  /Stage[main]/Puppet::Agent::Service::Systemd/Exec[systemctl-daemon-reload-puppet]: Skipping because of failed dependencies
 16473  [ WARN 2020-12-04T11:13:37 main]  /Stage[main]/Puppet::Agent::Service::Cron/Cron[puppet]: Skipping because of failed dependencies

I wasn’t sure whether these pertain to Ruby dependencies or yum dependencies, though I would guess the former, given the appropriate yum repos were installed as guided by the official docs.

But this is why I’m here: I only know what I know. For all I know, this is normal behaviour, and “dependency” in this case could mean “no --enable-****** option was set for this stage”; I don’t really know. I set the install to --trace to get a clearer picture, but some portions remain fairly scarce in detail either way.
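
If those skips are just Puppet cascading from an earlier failed resource (which is my working assumption), then the real culprit should be the first hard error further up in the log; something along these lines pulls those out:

grep -nE '\[ *ERROR|Error:|err:' /var/log/foreman-installer/katello.log | head -n 20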



So yeah, this is a result of my attempts to get to the root of:

[DEBUG 2020-12-03T20:14:18 main]  Exec[foreman-rake-db:migrate](provider=posix): Executing check '/usr/sbin/foreman-rake db:abort_if_pending_migrations'
[DEBUG 2020-12-03T20:14:18 main]  Executing with uid=foreman: '/usr/sbin/foreman-rake db:abort_if_pending_migrations'
[DEBUG 2020-12-03T20:14:19 main]  /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless: rake aborted!
[DEBUG 2020-12-03T20:14:19 main]  /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless: LoadError: cannot load such file -- apipie/middleware/checksum_in_headers
[DEBUG 2020-12-03T20:14:19 main]  /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless: /usr/share/foreman/config/application.rb:5:in `<top (required)>'
[DEBUG 2020-12-03T20:14:19 main]  /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless: /usr/share/foreman/Rakefile:1:in `<top (required)>'
[DEBUG 2020-12-03T20:14:19 main]  /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless: /opt/rh/rh-ruby25/root/usr/share/gems/gems/rake-12.3.0/exe/rake:27:in `<top (required)>'
[DEBUG 2020-12-03T20:14:19 main]  /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/unless: (See full trace by running task with --trace)
[DEBUG 2020-12-03T20:14:19 main]  Exec[foreman-rake-db:migrate](provider=posix): Executing '/usr/sbin/foreman-rake db:migrate'
[DEBUG 2020-12-03T20:14:19 main]  Executing with uid=foreman: '/usr/sbin/foreman-rake db:migrate'
[ WARN 2020-12-03T20:14:19 main]  /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/returns: rake aborted!
[ WARN 2020-12-03T20:14:19 main]  /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/returns: LoadError: cannot load such file -- apipie/middleware/checksum_in_headers
[ WARN 2020-12-03T20:14:19 main]  /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/returns: /usr/share/foreman/config/application.rb:5:in `<top (required)>'
[ WARN 2020-12-03T20:14:19 main]  /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/returns: /usr/share/foreman/Rakefile:1:in `<top (required)>'
[ WARN 2020-12-03T20:14:19 main]  /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/returns: /opt/rh/rh-ruby25/root/usr/share/gems/gems/rake-12.3.0/exe/rake:27:in `<top (required)>'
[ WARN 2020-12-03T20:14:19 main]  /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/returns: (See full trace by running task with --trace)
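
(Incidentally, one quick sanity check at this point is whether the gem that should provide that file is visible to the SCL Ruby at all; a sketch, assuming the missing file ships with the apipie-rails gem, packaged as tfm-rubygem-apipie-rails:)

rpm -q tfm-rubygem-apipie-rails
scl enable tfm rh-ruby25 'gem contents apipie-rails | grep checksum_in_headers'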

So I found the manifests that were calling, db:abort_if_pending_migrations, and db:migrate, respectively, during this phase:

  1. I created a quick script, /usr/sbin/foreman-env:
    #!/bin/bash
    
    declare STMP=`/usr/bin/date +'%y%m%d%H%M%S%N'`;
    { # ...
        declare ECO='/usr/bin/echo';
        declare WCH='/usr/bin/which';
        ${ECO} -e "USR: `$($WCH id)`";
        ${ECO} -e "PWD: `$($WCH pwd)`";
        ${ECO} -e "VER: Ruby | `$($WCH ruby) -v`";
        ${ECO} -e "VER: Gem  | `$($WCH gem) -v`";
        ${ECO} -e "\nENV: Gem"; `$WCH gem` env;
        ${ECO} -e "\nENV: Linux"; `$WCH env` | /usr/bin/sort;
        ${ECO} -e "\nENV: Procs"; `$WCH ps` -f --forest;
    } >/tmp/foreman-inst.${STMP}.env
    
  2. I edited line 25 of database.pp to:
    foreman::rake { 'db:migrate':
      unless => '/usr/sbin/foreman-env ; /usr/sbin/foreman-rake db:abort_if_pending_migrations',
    }
    
  3. I edited line 24 of rake.pp to:
      exec { "foreman-rake-${title}":
        command     => "/usr/sbin/foreman-env ; /usr/sbin/foreman-rake ${title}",
        user        => $user,
        environment => sort(join_keys_to_values(merge({'HOME' => $app_root}, $environment), '=')),
    
  4. Then I ran the installer (keep in mind, this was a previous attempt, not the one my uploaded archive relates to).
  5. What I found was:
    USR: uid=985(foreman) gid=982(foreman) groups=982(foreman)
    PWD: /root
    VER: Ruby | ruby 2.5.5p157 (2019-03-15 revision 67260) [x86_64-linux]
    VER: Gem  | 2.7.6.2
    
    ENV: Gem
    RubyGems Environment:
      - RUBYGEMS VERSION: 2.7.6.2
      - RUBY VERSION: 2.5.5 (2019-03-15 patchlevel 157) [x86_64-linux]
      - INSTALLATION DIRECTORY: /opt/rh/rh-ruby25/root/usr/share/gems
      - USER INSTALLATION DIRECTORY: /usr/share/foreman/.gem/ruby
      - RUBY EXECUTABLE: /opt/rh/rh-ruby25/root/usr/bin/ruby
      - EXECUTABLE DIRECTORY: /opt/rh/rh-ruby25/root/usr/bin
      - SPEC CACHE DIRECTORY: /usr/share/foreman/.gem/specs
      - SYSTEM CONFIGURATION DIRECTORY: /etc/opt/rh/rh-ruby25
      - RUBYGEMS PLATFORMS:
        - ruby
        - x86_64-linux
      - GEM PATHS:
         - /opt/rh/rh-ruby25/root/usr/share/gems
         - /usr/share/foreman/.gem/ruby
         - /opt/rh/rh-ruby25/root/usr/local/share/gems
      - GEM CONFIGURATION:
         - :update_sources => true
         - :verbose => true
         - :backtrace => false
         - :bulk_threshold => 1000
         - "gem" => "--user-install --bindir /usr/share/foreman/bin"
      - REMOTE SOURCES:
         - https://rubygems.org/
      - SHELL PATH:
         - /opt/rh/rh-ruby25/root/usr/local/bin
         - /opt/rh/rh-ruby25/root/usr/bin
         - /opt/theforeman/tfm/root/usr/bin
         - /usr/local/sbin
         - /usr/local/bin
         - /usr/sbin
         - /usr/bin
         - /opt/puppetlabs/bin
         - /usr/share/foreman/bin
         - /root/bin
         - /sbin
    
    ENV: Linux
    APT_LISTBUGS_FRONTEND=none
    APT_LISTCHANGES_FRONTEND=none
    CONFIGURE_ARGS=with-pg-config=/opt/rh/rh-postgresql12/root/usr/bin/pg_config
    CPATH=/opt/theforeman/tfm/root/usr/include
    DEBIAN_FRONTEND=noninteractive
    ...
    FOREMAN_PROXY=satellite.gtkcentral.net
    FQDN=satellite.gtkcentral.net
    ...
    HOME=/usr/share/foreman
    HOSTNAME=satellite.gtkcentral.net
    ...
    LD_LIBRARY_PATH=/opt/rh/rh-ruby25/root/usr/local/lib64:/opt/rh/rh-ruby25/root/usr/lib64
    ...
    LIBRARY_PATH=/opt/theforeman/tfm/root/usr/lib64dd
    ...
    MANPATH=/opt/rh/rh-ruby25/root/usr/local/share/man:/opt/rh/rh-ruby25/root/usr/share/man:/opt/theforeman/tfm/root/usr/share/man::/opt/puppetlabs/puppet/share/man
    PATH=/opt/rh/rh-ruby25/root/usr/local/bin:/opt/rh/rh-ruby25/root/usr/bin:/opt/theforeman/tfm/root/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/puppetlabs/bin:/usr/share/foreman/bin:/root/bin:/sbin
    PKG_CONFIG_PATH=/opt/rh/rh-ruby25/root/usr/local/lib64/pkgconfig:/opt/rh/rh-ruby25/root/usr/lib64/pkgconfig:/opt/theforeman/tfm/root/usr/lib64/pkgconfig
    ...
    PWD=/root
    SATELLITE_DIR=/etc/satellite
    SHELL=/bin/bash
    SHLVL=4
    ...
    SSLD=/etc/ssl/certs
    ...
    XDG_DATA_DIRS=/opt/rh/rh-ruby25/root/usr/local/share:/opt/rh/rh-ruby25/root/usr/share:/usr/local/share:/usr/share
    XDG_RUNTIME_DIR=/run/user/0
    XDG_SESSION_ID=254
    X_SCLS=rh-ruby25 tfm
    
    ENV: Procs
    UID        PID  PPID  C STIME TTY          TIME CMD
    foreman  16919 15719  0 11:13 ?        00:00:00 sh -c /usr/sbin/foreman-env ; /usr/sbin/foreman-rake db:abort_if_pending_migrations
    foreman  16922 16919  0 11:13 ?        00:00:00  \_ /bin/bash /usr/sbin/foreman-env
    foreman  16959 16922  0 11:13 ?        00:00:00      \_ /usr/bin/ps -f --forest
    
    
  6. Which honestly looked good to me. Looks like this phase of the install has everything in place to successfully run those rakes. So I decided to try it myself:
    [root@satellite ~]# su -s /bin/bash foreman
    bash-4.2$ scl enable rh-ruby25 'scl enable tfm bash'
    bash-4.2$ /usr/sbin/foreman-rake db:abort_if_pending_migrations --trace --backtrace
    ** Invoke db:abort_if_pending_migrations (first_time)
    ** Invoke db:load_config (first_time)
    ** Invoke environment (first_time)
    ** Execute environment
    ** Execute db:load_config
    ** Execute db:abort_if_pending_migrations
    ** Invoke dynflow:abort_if_pending_migrations (first_time)
    ** Invoke environment
    ** Execute dynflow:abort_if_pending_migrations
    bash-4.2$ /usr/sbin/foreman-rake db:migrate --trace --backtrace
    ** Invoke db:migrate (first_time)
    ** Invoke db:load_config (first_time)
    ** Invoke environment (first_time)
    ** Execute environment
    ** Execute db:load_config
    ** Invoke plugin:refresh_migrations (first_time)
    ** Invoke environment
    ** Execute plugin:refresh_migrations
    ** Execute db:migrate
    ** Invoke db:_dump (first_time)
    ** Execute db:_dump
    ** Invoke dynflow:migrate (first_time)
    ** Invoke environment
    ** Execute dynflow:migrate
    

And it worked! Though not in the context that matters for this post, that is, foreman-installer’s ability to run it successfully; hence, it is broken in that sense.

It’s like a rear-view mirror (foreman-rake) that has become detached from its vehicle (foreman-installer). Sure, the mirror still works if I hold it in front of myself, but it’s still broken in the context of giving me vantage while I drive. And yeah, I know foreman-rake is more than a rear-view mirror, but for analogy’s sake.

Well, I don’t believe this is intended:

/var/log/foreman-installer/katello.log (2.5 MB)

[DEBUG 2020-12-03T20:10:07 main] Executing: foreman-maintain packages is-locked --assumeyes
[DEBUG 2020-12-03T20:10:07 main] /usr/share/rubygems/rubygems/dependency.rb:296:in `to_specs': Could not find 'foreman_maintain' (>= 0) among 166 total gem(s) (Gem::LoadError)
[DEBUG 2020-12-03T20:10:07 main]        from /usr/share/rubygems/rubygems/dependency.rb:307:in `to_spec'
[DEBUG 2020-12-03T20:10:07 main]        from /usr/share/rubygems/rubygems/core_ext/kernel_gem.rb:47:in `gem'
[DEBUG 2020-12-03T20:10:07 main]        from /usr/bin/foreman-maintain:22:in `<main>'
[ERROR 2020-12-03T20:10:07 main] foreman-maintain packages is-locked --assumeyes failed! Check the output for error!
[DEBUG 2020-12-03T20:10:07 main] Hook /usr/share/foreman-installer/katello/hooks/pre_commit/09-version_locking.rb returned nil

This is confirmed by Red Hat Bug 1881150 and its Foreman counterpart, Bug 31135.

Again, I don’t know how everything works; but when that’s the first error you see, and it appears to relate to how packages will be installed from there on, it tends to raise some red flags for someone on the outside looking in.
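
For completeness, this is how I’d tell whether it’s the wrapper or a genuinely missing gem (a sketch; I’m assuming the gem ships in the rubygem-foreman_maintain package and targets the system Ruby):

rpm -q rubygem-foreman_maintain        # is the package even installed?
head -n1 /usr/bin/foreman-maintain     # which interpreter the wrapper runs under
gem list foreman_maintain              # is the gem visible to that (system) Ruby?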



I did; I shared my foreman-debug archive. Just take a look at the bundle_list file included in said archive:

COMMAND> bundle --local --gemfile=/usr/share/foreman/Gemfile

Don't run Bundler as root. Bundler can ask for sudo if it is needed, and installing your bundle as root will break this application for all non-root users on this machine.

[!] There was an error parsing `Gemfile`: No such file or directory @ rb_sysopen - /usr/share/foreman/Gemfile. Bundler cannot continue.

From what I know, that command should be bundle --local --gemfile=/usr/share/foreman/Gemfile.in. Not sure if it was just Gemfile in earlier versions or not, but I’m guessing that needs to be corrected.
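
For reference, pointing Bundler at Gemfile.in via the BUNDLE_GEMFILE environment variable is one way to reproduce what that report was presumably trying to capture; a rough sketch, run as the foreman user inside the same su/scl combination as earlier:

export BUNDLE_GEMFILE=/usr/share/foreman/Gemfile.in
bundle list   # or `bundle check`, just to see whether the gemset resolves at all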

And there’s this (from the foreman-maintain_service_status file in the archive I posted):

COMMAND> foreman-maintain service status

/usr/share/rubygems/rubygems/dependency.rb:296:in `to_specs': Could not find 'foreman_maintain' (>= 0) among 180 total gem(s) (Gem::LoadError)
        from /usr/share/rubygems/rubygems/dependency.rb:307:in `to_spec'
        from /usr/share/rubygems/rubygems/core_ext/kernel_gem.rb:47:in `gem'
        from /usr/bin/foreman-maintain:22:in `<main>'


Definitely. So, any suggestions as to why that may be? Bear in mind, this is a fresh install; all applicable repos are enabled; the scenario is stock. I purposely ran this fresh and stock to mitigate the near-impossible task of troubleshooting a rando’s custom setup.
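
In the meantime, this is what I plan to look at for the Pulp-to-PostgreSQL authentication failure; a sketch, assuming Pulp 3 reads its database settings from /etc/pulp/settings.py and that the SCL PostgreSQL writes its logs under the data directory’s log/ subfolder (as the postmaster output further down suggests):

grep -n -A6 'DATABASES' /etc/pulp/settings.py                        # the credentials Pulp 3 is actually using
tail -n 50 /var/opt/rh/rh-postgresql12/lib/pgsql/data/log/*.log      # server-side authentication failures, if any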

Well, normal ping has no issue resolving this. As taken from:

1.9. Verifying DNS resolution:

Verify the full forward and reverse DNS resolution using a fully-qualified domain name to prevent issues while installing Foreman.

Procedure

  1. Ensure that the host name and local host resolve correctly:
    # ping -c1 localhost
    # ping -c1 `hostname -f` # my_system.domain.com
    

The result:

[root@satellite ~]# ping -c1 localhost
PING satellite (127.0.0.1) 56(84) bytes of data.
64 bytes from satellite (127.0.0.1): icmp_seq=1 ttl=64 time=0.051 ms

--- satellite ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
[root@satellite ~]# ping -c1 `hostname -f` # my_system.domain.com
PING satellite.gtkcentral.net (172.20.80.222) 56(84) bytes of data.
64 bytes from satellite.gtkcentral.net (172.20.80.222): icmp_seq=1 ttl=64 time=0.044 ms

--- satellite.gtkcentral.net ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms

Though, as you can see below (and in the archive I posted), hammer ping is having issues because 443 is not open:

[root@satellite ~]# ss -tanl
State      Recv-Q Send-Q  Local Address:Port                 Peer Address:Port
LISTEN     0      1           127.0.0.1:8005                            *:*
LISTEN     0      10          127.0.0.1:5671                            *:*
LISTEN     0      10          127.0.0.1:5672                            *:*
LISTEN     0      128         127.0.0.1:27017                           *:*
LISTEN     0      128         127.0.0.1:6379                            *:*
LISTEN     0      50                  *:8140                            *:*
LISTEN     0      128         127.0.0.1:61613                           *:*
LISTEN     0      50                  *:5646                            *:*
LISTEN     0      50          127.0.0.1:8751                            *:*
LISTEN     0      50                  *:5647                            *:*
LISTEN     0      128                 *:22                              *:*
LISTEN     0      128         127.0.0.1:5432                            *:*
LISTEN     0      100         127.0.0.1:25                              *:*
LISTEN     0      100         127.0.0.1:8443                            *:*
LISTEN     0      128                 *:9090                            *:*
LISTEN     0      32               [::]:21                           [::]:*
LISTEN     0      128              [::]:22                           [::]:*
LISTEN     0      128              [::]:3128                         [::]:*
LISTEN     0      128              [::]:9090                         [::]:*
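
Since nothing is bound to 80 or 443, httpd most likely never came up (it was one of the services skipped above because of failed dependencies); a quick way to confirm:

systemctl --no-pager status httpd
ss -tlnp | grep -E ':(80|443) '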

As a result, I am unable to access the application in my browser. Of course, the reason for this is probably the following:

[root@satellite ~]# foreman-maintain service status
Running Status Services
================================================================================
Get status of applicable services:

Displaying the following service(s):
rh-mongodb34-mongod, rh-redis5-redis, postgresql, pulpcore-api, pulpcore-content, pulpcore-resource-manager, qdrouterd, qpidd, rh-redis5-redis, squid, pulp_celerybeat, pulp_resource_manager, pulp_streamer, pulp_workers, pulpcore-worker@*, tomcat, dynflow-sidekiq@orchestrator, goferd, httpd, puppetserver, dynflow-sidekiq@worker, dynflow-sidekiq@worker-hosts-queue, foreman-proxy
| displaying rh-mongodb34-mongod
● rh-mongodb34-mongod.service - High-performance, schema-free document-oriented database
   Loaded: loaded (/usr/lib/systemd/system/rh-mongodb34-mongod.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:10:52 MST; 16h ago
 Main PID: 7384 (mongod)
    Tasks: 77
   CGroup: /system.slice/rh-mongodb34-mongod.service
           └─7384 /opt/rh/rh-mongodb34/root/usr/bin/mongod -f /etc/opt/rh/rh-mongodb34/mongod.conf run

Dec 04 02:13:46 satellite.gtkcentral.net mongod.27017[7384]: [conn107] build index on: pulp_database.consumer_history properties: { v: 2, key: { originator: -1 }, name: "originator_-1", ns: "pulp_database.consumer_history", background: true }
Dec 04 02:13:46 satellite.gtkcentral.net mongod.27017[7384]: [conn107] build index done.  scanned 0 total records. 0 secs
Dec 04 02:13:46 satellite.gtkcentral.net mongod.27017[7384]: [conn107] build index on: pulp_database.consumer_history properties: { v: 2, key: { type: -1 }, name: "type_-1", ns: "pulp_database.consumer_history", background: true }
Dec 04 02:13:46 satellite.gtkcentral.net mongod.27017[7384]: [conn107] build index done.  scanned 0 total records. 0 secs
Dec 04 02:13:46 satellite.gtkcentral.net mongod.27017[7384]: [conn107] build index on: pulp_database.repo_sync_results properties: { v: 2, unique: true, key: { id: -1 }, name: "id_-1", ns: "pulp_database.repo_sync_results", background: true }
Dec 04 02:13:46 satellite.gtkcentral.net mongod.27017[7384]: [conn107] build index done.  scanned 0 total records. 0 secs
Dec 04 02:13:46 satellite.gtkcentral.net mongod.27017[7384]: [conn107] build index on: pulp_database.repo_group_publish_results properties: { v: 2, unique: true, key: { id: -1 }, name: "id_-1", ns: "pulp_database.repo_group_publish_results", background: true }
Dec 04 02:13:46 satellite.gtkcentral.net mongod.27017[7384]: [conn107] build index done.  scanned 0 total records. 0 secs
Dec 04 10:48:52 satellite.gtkcentral.net mongod.27017[7384]: [conn130] received client metadata from 127.0.0.1:34848 conn130: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.9" }, os: { type: "Linux", name: "CentOS Linux release 7.9.2009 (Core)", architecture: "x86_64", version: "Kernel 3.10.0-1160.6.1.el7.x86_64" } }
Dec 04 12:30:49 satellite.gtkcentral.net mongod.27017[7384]: [conn152] received client metadata from 127.0.0.1:35228 conn152: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.9" }, os: { type: "Linux", name: "CentOS Linux release 7.9.2009 (Core)", architecture: "x86_64", version: "Kernel 3.10.0-1160.6.1.el7.x86_64" } }
| displaying rh-redis5-redis
● rh-redis5-redis.service - Redis persistent key-value database
   Loaded: loaded (/usr/lib/systemd/system/rh-redis5-redis.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/rh-redis5-redis.service.d
           └─limit.conf
   Active: active (running) since Thu 2020-12-03 20:12:38 MST; 16h ago
 Main PID: 11757 (redis-server)
    Tasks: 4
   CGroup: /system.slice/rh-redis5-redis.service
           └─11757 /opt/rh/rh-redis5/root/usr/bin/redis-server 127.0.0.1:6379

Dec 03 20:12:38 satellite.gtkcentral.net systemd[1]: Starting Redis persistent key-value database...
Dec 03 20:12:38 satellite.gtkcentral.net systemd[1]: Started Redis persistent key-value database.
/ displaying postgresql
● postgresql.service - PostgreSQL database server
   Loaded: loaded (/etc/systemd/system/postgresql.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:13:27 MST; 16h ago
 Main PID: 13044 (postmaster)
    Tasks: 27
   CGroup: /system.slice/postgresql.service
           ├─13044 postmaster -D /var/opt/rh/rh-postgresql12/lib/pgsql/data
           ├─13048 postgres: logger
           ├─13050 postgres: checkpointer
           ├─13051 postgres: background writer
           ├─13052 postgres: walwriter
           ├─13053 postgres: autovacuum launcher
           ├─13054 postgres: stats collector
           ├─13055 postgres: logical replication launcher
           ├─15918 postgres: pulp pulpcore 127.0.0.1(35756) idle
           ├─15923 postgres: pulp pulpcore 127.0.0.1(35760) idle
           ├─15943 postgres: pulp pulpcore 127.0.0.1(35764) idle
           ├─15966 postgres: pulp pulpcore 127.0.0.1(35770) idle
           ├─16037 postgres: pulp pulpcore 127.0.0.1(35774) idle
           ├─16074 postgres: pulp pulpcore 127.0.0.1(35778) idle
           ├─16075 postgres: pulp pulpcore 127.0.0.1(35782) idle
           ├─16076 postgres: pulp pulpcore 127.0.0.1(35786) idle
           ├─16087 postgres: pulp pulpcore 127.0.0.1(35788) idle
           ├─16088 postgres: pulp pulpcore 127.0.0.1(35790) idle
           ├─16089 postgres: pulp pulpcore 127.0.0.1(35794) idle
           ├─17445 postgres: candlepin candlepin 127.0.0.1(38020) idle
           ├─17446 postgres: candlepin candlepin 127.0.0.1(38021) idle
           ├─17447 postgres: candlepin candlepin 127.0.0.1(38024) idle
           ├─21830 postgres: candlepin candlepin 127.0.0.1(38132) idle
           ├─21831 postgres: candlepin candlepin 127.0.0.1(38134) idle
           ├─21832 postgres: candlepin candlepin 127.0.0.1(38136) idle
           ├─21833 postgres: candlepin candlepin 127.0.0.1(38138) idle
           └─21834 postgres: candlepin candlepin 127.0.0.1(38140) idle

Dec 03 20:13:27 satellite.gtkcentral.net systemd[1]: Starting PostgreSQL database server...
Dec 03 20:13:27 satellite.gtkcentral.net sh[13044]: 2020-12-03 20:13:27 MST LOG:  starting PostgreSQL 12.1 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit
Dec 03 20:13:27 satellite.gtkcentral.net sh[13044]: 2020-12-03 20:13:27 MST LOG:  listening on IPv4 address "127.0.0.1", port 5432
Dec 03 20:13:27 satellite.gtkcentral.net sh[13044]: 2020-12-03 20:13:27 MST LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
Dec 03 20:13:27 satellite.gtkcentral.net sh[13044]: 2020-12-03 20:13:27 MST LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
Dec 03 20:13:27 satellite.gtkcentral.net sh[13044]: 2020-12-03 20:13:27 MST LOG:  redirecting log output to logging collector process
Dec 03 20:13:27 satellite.gtkcentral.net sh[13044]: 2020-12-03 20:13:27 MST HINT:  Future log output will appear in directory "log".
Dec 03 20:13:27 satellite.gtkcentral.net systemd[1]: Started PostgreSQL database server.
Dec 03 20:13:27 satellite.gtkcentral.net systemd[1]: Reloading PostgreSQL database server.
Dec 03 20:13:27 satellite.gtkcentral.net systemd[1]: Reloaded PostgreSQL database server.
/ displaying pulpcore-api
● pulpcore-api.service - Pulp WSGI Server
   Loaded: loaded (/etc/systemd/system/pulpcore-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:15:03 MST; 16h ago
 Main PID: 15922 (gunicorn)
    Tasks: 2
   CGroup: /system.slice/pulpcore-api.service
           ├─15922 /usr/bin/python3 /usr/bin/gunicorn pulpcore.app.wsgi:application --bind 127.0.0.1:24817 --access-logfile -
           └─15945 /usr/bin/python3 /usr/bin/gunicorn pulpcore.app.wsgi:application --bind 127.0.0.1:24817 --access-logfile -

Dec 03 20:15:03 satellite.gtkcentral.net systemd[1]: Started Pulp WSGI Server.
Dec 03 20:15:04 satellite.gtkcentral.net pulpcore-api[15922]: [2020-12-03 20:15:04 -0700] [15922] [INFO] Starting gunicorn 20.0.4
Dec 03 20:15:04 satellite.gtkcentral.net pulpcore-api[15922]: [2020-12-03 20:15:04 -0700] [15922] [INFO] Listening at: unix:/run/pulpcore-api.sock (15922)
Dec 03 20:15:04 satellite.gtkcentral.net pulpcore-api[15922]: [2020-12-03 20:15:04 -0700] [15922] [INFO] Using worker: sync
Dec 03 20:15:04 satellite.gtkcentral.net pulpcore-api[15922]: [2020-12-03 20:15:04 -0700] [15945] [INFO] Booting worker with pid: 15945
/ displaying pulpcore-content
● pulpcore-content.service - Pulp Content App
   Loaded: loaded (/etc/systemd/system/pulpcore-content.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:15:04 MST; 16h ago
 Main PID: 15948 (gunicorn)
    Tasks: 3
   CGroup: /system.slice/pulpcore-content.service
           ├─15948 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --bind 127.0.0.1:24816 --worker-class aiohttp.GunicornWebWorker -w 2 --access-logfile -
           ├─15974 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --bind 127.0.0.1:24816 --worker-class aiohttp.GunicornWebWorker -w 2 --access-logfile -
           └─16005 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --bind 127.0.0.1:24816 --worker-class aiohttp.GunicornWebWorker -w 2 --access-logfile -

Dec 03 20:15:04 satellite.gtkcentral.net systemd[1]: Started Pulp Content App.
Dec 03 20:15:05 satellite.gtkcentral.net pulpcore-content[15948]: [2020-12-03 20:15:05 -0700] [15948] [INFO] Starting gunicorn 20.0.4
Dec 03 20:15:05 satellite.gtkcentral.net pulpcore-content[15948]: [2020-12-03 20:15:05 -0700] [15948] [INFO] Listening at: unix:/run/pulpcore-content.sock (15948)
Dec 03 20:15:05 satellite.gtkcentral.net pulpcore-content[15948]: [2020-12-03 20:15:05 -0700] [15948] [INFO] Using worker: aiohttp.GunicornWebWorker
Dec 03 20:15:05 satellite.gtkcentral.net pulpcore-content[15948]: [2020-12-03 20:15:05 -0700] [15974] [INFO] Booting worker with pid: 15974
Dec 03 20:15:05 satellite.gtkcentral.net pulpcore-content[15948]: [2020-12-03 20:15:05 -0700] [16005] [INFO] Booting worker with pid: 16005
/ displaying pulpcore-resource-manager
● pulpcore-resource-manager.service - Pulp Resource Manager
   Loaded: loaded (/etc/systemd/system/pulpcore-resource-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:15:05 MST; 16h ago
 Main PID: 15970 (rq)
    Tasks: 1
   CGroup: /system.slice/pulpcore-resource-manager.service
           └─15970 /usr/bin/python3 /usr/bin/rq worker -w pulpcore.tasking.worker.PulpWorker -n resource-manager --pid=/var/run/pulpcore-resource-manager/resource-manager.pid -c pulpcore.rqconfig --disable-job-desc-logging

Dec 04 09:21:09 satellite.gtkcentral.net pulpcore-resource-manager[15970]: pulp: rq.worker:INFO: Cleaning registries for queue: resource-manager
Dec 04 09:41:18 satellite.gtkcentral.net pulpcore-resource-manager[15970]: pulp: rq.worker:INFO: Cleaning registries for queue: resource-manager
Dec 04 10:01:27 satellite.gtkcentral.net pulpcore-resource-manager[15970]: pulp: rq.worker:INFO: Cleaning registries for queue: resource-manager
Dec 04 10:21:36 satellite.gtkcentral.net pulpcore-resource-manager[15970]: pulp: rq.worker:INFO: Cleaning registries for queue: resource-manager
Dec 04 10:41:45 satellite.gtkcentral.net pulpcore-resource-manager[15970]: pulp: rq.worker:INFO: Cleaning registries for queue: resource-manager
Dec 04 11:01:54 satellite.gtkcentral.net pulpcore-resource-manager[15970]: pulp: rq.worker:INFO: Cleaning registries for queue: resource-manager
Dec 04 11:22:03 satellite.gtkcentral.net pulpcore-resource-manager[15970]: pulp: rq.worker:INFO: Cleaning registries for queue: resource-manager
Dec 04 11:42:12 satellite.gtkcentral.net pulpcore-resource-manager[15970]: pulp: rq.worker:INFO: Cleaning registries for queue: resource-manager
Dec 04 12:02:21 satellite.gtkcentral.net pulpcore-resource-manager[15970]: pulp: rq.worker:INFO: Cleaning registries for queue: resource-manager
Dec 04 12:22:30 satellite.gtkcentral.net pulpcore-resource-manager[15970]: pulp: rq.worker:INFO: Cleaning registries for queue: resource-manager
/ displaying qdrouterd
● qdrouterd.service - Qpid Dispatch router daemon
   Loaded: loaded (/usr/lib/systemd/system/qdrouterd.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/qdrouterd.service.d
           └─90-limits.conf
   Active: active (running) since Thu 2020-12-03 20:13:47 MST; 16h ago
 Main PID: 14307 (qdrouterd)
    Tasks: 13
   CGroup: /system.slice/qdrouterd.service
           └─14307 /usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf

Dec 04 12:22:27 satellite.gtkcentral.net qdrouterd[14307]: 2020-12-04 12:22:27.069928 -0700 SERVER (info) [C544] Accepted connection to :5647 from 172.20.80.14:53974
Dec 04 12:22:27 satellite.gtkcentral.net qdrouterd[14307]: 2020-12-04 12:22:27.083835 -0700 SERVER (info) [C544] Connection from 172.20.80.14:53974 (to :5647) failed: amqp:connection:framing-error SSL Failure: error:140940E5:SSL routines:ssl3_read_bytes:ssl handshake failure
Dec 04 12:24:14 satellite.gtkcentral.net qdrouterd[14307]: 2020-12-04 12:24:14.162227 -0700 SERVER (info) [C545] Accepted connection to :5647 from 172.20.80.14:53976
Dec 04 12:24:14 satellite.gtkcentral.net qdrouterd[14307]: 2020-12-04 12:24:14.176098 -0700 SERVER (info) [C545] Connection from 172.20.80.14:53976 (to :5647) failed: amqp:connection:framing-error SSL Failure: error:140940E5:SSL routines:ssl3_read_bytes:ssl handshake failure
Dec 04 12:26:01 satellite.gtkcentral.net qdrouterd[14307]: 2020-12-04 12:26:01.272420 -0700 SERVER (info) [C546] Accepted connection to :5647 from 172.20.80.14:53978
Dec 04 12:26:01 satellite.gtkcentral.net qdrouterd[14307]: 2020-12-04 12:26:01.286666 -0700 SERVER (info) [C546] Connection from 172.20.80.14:53978 (to :5647) failed: amqp:connection:framing-error SSL Failure: error:140940E5:SSL routines:ssl3_read_bytes:ssl handshake failure
Dec 04 12:27:48 satellite.gtkcentral.net qdrouterd[14307]: 2020-12-04 12:27:48.376557 -0700 SERVER (info) [C547] Accepted connection to :5647 from 172.20.80.14:53980
Dec 04 12:27:48 satellite.gtkcentral.net qdrouterd[14307]: 2020-12-04 12:27:48.391426 -0700 SERVER (info) [C547] Connection from 172.20.80.14:53980 (to :5647) failed: amqp:connection:framing-error SSL Failure: error:140940E5:SSL routines:ssl3_read_bytes:ssl handshake failure
Dec 04 12:29:35 satellite.gtkcentral.net qdrouterd[14307]: 2020-12-04 12:29:35.486437 -0700 SERVER (info) [C548] Accepted connection to :5647 from 172.20.80.14:53982
Dec 04 12:29:35 satellite.gtkcentral.net qdrouterd[14307]: 2020-12-04 12:29:35.500294 -0700 SERVER (info) [C548] Connection from 172.20.80.14:53982 (to :5647) failed: amqp:connection:framing-error SSL Failure: error:140940E5:SSL routines:ssl3_read_bytes:ssl handshake failure
/ displaying qpidd
● qpidd.service - An AMQP message broker daemon.
   Loaded: loaded (/usr/lib/systemd/system/qpidd.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/qpidd.service.d
           └─90-limits.conf, wait-for-port.conf
   Active: active (running) since Thu 2020-12-03 20:13:29 MST; 16h ago
     Docs: man:qpidd(1)
           http://qpid.apache.org/
 Main PID: 13320 (qpidd)
    Tasks: 14
   CGroup: /system.slice/qpidd.service
           └─13320 /usr/sbin/qpidd --config /etc/qpid/qpidd.conf

Dec 03 20:13:28 satellite.gtkcentral.net systemd[1]: Starting An AMQP message broker daemon....
Dec 03 20:13:29 satellite.gtkcentral.net qpidd[13320]: 2020-12-03 20:13:29 [System] error Error reading socket: Encountered end of file [-5938]
Dec 03 20:13:29 satellite.gtkcentral.net qpidd[13320]: 2020-12-03 20:13:29 [System] error Error reading socket: Encountered end of file [-5938]
Dec 03 20:13:29 satellite.gtkcentral.net systemd[1]: Started An AMQP message broker daemon..
/ displaying rh-redis5-redis
● rh-redis5-redis.service - Redis persistent key-value database
   Loaded: loaded (/usr/lib/systemd/system/rh-redis5-redis.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/rh-redis5-redis.service.d
           └─limit.conf
   Active: active (running) since Thu 2020-12-03 20:12:38 MST; 16h ago
 Main PID: 11757 (redis-server)
    Tasks: 4
   CGroup: /system.slice/rh-redis5-redis.service
           └─11757 /opt/rh/rh-redis5/root/usr/bin/redis-server 127.0.0.1:6379

Dec 03 20:12:38 satellite.gtkcentral.net systemd[1]: Starting Redis persistent key-value database...
Dec 03 20:12:38 satellite.gtkcentral.net systemd[1]: Started Redis persistent key-value database.
/ displaying squid
● squid.service - Squid caching proxy
   Loaded: loaded (/usr/lib/systemd/system/squid.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:12:43 MST; 16h ago
 Main PID: 12195 (squid)
    Tasks: 3
   CGroup: /system.slice/squid.service
           ├─12195 /usr/sbin/squid -f /etc/squid/squid.conf
           ├─12197 (squid-1) -f /etc/squid/squid.conf
           └─12204 (logfile-daemon) /var/log/squid/access.log

Dec 03 20:12:43 satellite.gtkcentral.net systemd[1]: Starting Squid caching proxy...
Dec 03 20:12:43 satellite.gtkcentral.net cache_swap.sh[12186]: init_cache_dir /var/spool/squid...
Dec 03 20:12:43 satellite.gtkcentral.net squid[12195]: Squid Parent: will start 1 kids
Dec 03 20:12:43 satellite.gtkcentral.net squid[12195]: Squid Parent: (squid-1) process 12197 started
Dec 03 20:12:43 satellite.gtkcentral.net systemd[1]: Started Squid caching proxy.
/ displaying pulp_celerybeat
● pulp_celerybeat.service - Pulp's Celerybeat
   Loaded: loaded (/usr/lib/systemd/system/pulp_celerybeat.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:13:45 MST; 16h ago
 Main PID: 13721 (celery)
    Tasks: 5
   CGroup: /system.slice/pulp_celerybeat.service
           └─13721 /usr/bin/python /usr/bin/celery beat --app=pulp.server.async.celery_instance.celery --scheduler=pulp.server.async.scheduler.Scheduler

Dec 04 10:53:46 satellite.gtkcentral.net pulp[13721]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Dec 04 11:03:46 satellite.gtkcentral.net pulp[13721]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Dec 04 11:13:46 satellite.gtkcentral.net pulp[13721]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Dec 04 11:23:46 satellite.gtkcentral.net pulp[13721]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Dec 04 11:33:46 satellite.gtkcentral.net pulp[13721]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Dec 04 11:43:46 satellite.gtkcentral.net pulp[13721]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Dec 04 11:53:46 satellite.gtkcentral.net pulp[13721]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Dec 04 12:03:46 satellite.gtkcentral.net pulp[13721]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Dec 04 12:13:46 satellite.gtkcentral.net pulp[13721]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
Dec 04 12:23:46 satellite.gtkcentral.net pulp[13721]: celery.beat:INFO: Scheduler: Sending due task download_deferred_content (pulp.server.controllers.repository.queue_download_deferred)
/ displaying pulp_resource_manager
● pulp_resource_manager.service - Pulp Resource Manager
   Loaded: loaded (/usr/lib/systemd/system/pulp_resource_manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:13:45 MST; 16h ago
 Main PID: 13894 (celery)
    Tasks: 7
   CGroup: /system.slice/pulp_resource_manager.service
           ├─13894 /usr/bin/python /usr/bin/celery worker -A pulp.server.async.app -n resource_manager@%h -Q resource_manager -c 1 --events --umask 18 --pidfile=/var/run/pulp/resource_manager.pid
           └─14458 /usr/bin/python /usr/bin/celery worker -A pulp.server.async.app -n resource_manager@%h -Q resource_manager -c 1 --events --umask 18 --pidfile=/var/run/pulp/resource_manager.pid

Dec 03 20:13:47 satellite.gtkcentral.net celery[13894]: -------------- [queues]
Dec 03 20:13:47 satellite.gtkcentral.net celery[13894]: .> resource_manager exchange=resource_manager(direct) key=resource_manager
Dec 03 20:13:47 satellite.gtkcentral.net celery[13894]: .> resource_manager@satellite.gtkcentral.net.dq2 exchange=C.dq2(direct) key=resource_manager@satellite.gtkcentral.net
Dec 03 20:13:47 satellite.gtkcentral.net pulp[14458]: pulp.server.db.connection:INFO: Attempting to connect to localhost:27017
Dec 03 20:13:47 satellite.gtkcentral.net pulp[14458]: pulp.server.db.connection:INFO: Attempting to connect to localhost:27017
Dec 03 20:13:47 satellite.gtkcentral.net pulp[13894]: kombu.transport.qpid:INFO: Connected to qpid with SASL mechanism ANONYMOUS
Dec 03 20:13:47 satellite.gtkcentral.net pulp[13894]: celery.worker.consumer.connection:INFO: Connected to qpid://localhost:5671//
Dec 03 20:13:48 satellite.gtkcentral.net pulp[13894]: kombu.transport.qpid:INFO: Connected to qpid with SASL mechanism ANONYMOUS
Dec 03 20:13:48 satellite.gtkcentral.net pulp[13894]: celery.apps.worker:INFO: resource_manager@satellite.gtkcentral.net ready.
Dec 03 20:13:48 satellite.gtkcentral.net pulp[14458]: pulp.server.db.connection:INFO: Write concern for Mongo connection: {}
/ displaying pulp_streamer
● pulp_streamer.service - The Pulp lazy content loading streamer
   Loaded: loaded (/usr/lib/systemd/system/pulp_streamer.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:13:46 MST; 16h ago
 Main PID: 13945 (pulp_streamer)
    Tasks: 3
   CGroup: /system.slice/pulp_streamer.service
           └─13945 /usr/bin/python /usr/bin/pulp_streamer --nodaemon --syslog --prefix=pulp_streamer --pidfile= --python /usr/share/pulp/wsgi/streamer.tac

Dec 03 20:13:47 satellite.gtkcentral.net pulp_streamer[13945]: pulp.plugins.loader.manager:INFO: Loaded plugin deb_importer for types: deb,deb_component,deb_release
Dec 03 20:13:47 satellite.gtkcentral.net pulp_streamer[13945]: pulp.plugins.loader.manager:INFO: Loaded plugin puppet_whole_repo_profiler for types: puppet_module
Dec 03 20:13:47 satellite.gtkcentral.net pulp_streamer[13945]: pulp.plugins.loader.manager:INFO: Loaded plugin yum_profiler for types: rpm,erratum,modulemd
Dec 03 20:13:47 satellite.gtkcentral.net pulp_streamer[13945]: pulp.plugins.loader.manager:INFO: Loaded plugin yum for types: rpm
Dec 03 20:13:47 satellite.gtkcentral.net pulp_streamer[13945]: pulp.plugins.loader.manager:INFO: Loaded plugin rhui for types: rpm
Dec 03 20:13:47 satellite.gtkcentral.net pulp_streamer[13945]: [-] Log opened.
Dec 03 20:13:47 satellite.gtkcentral.net pulp_streamer[13945]: [-] twistd 12.2.0 (/usr/bin/python 2.7.5) starting up.
Dec 03 20:13:47 satellite.gtkcentral.net pulp_streamer[13945]: [-] reactor class: twisted.internet.epollreactor.EPollReactor.
Dec 03 20:13:47 satellite.gtkcentral.net pulp_streamer[13945]: [-] Site starting on 8751
Dec 03 20:13:47 satellite.gtkcentral.net pulp_streamer[13945]: [-] Starting factory <twisted.web.server.Site instance at 0x7f88bfc85cf8>
/ displaying pulp_workers
● pulp_workers.service - Pulp Celery Workers
   Loaded: loaded (/usr/lib/systemd/system/pulp_workers.service; enabled; vendor preset: disabled)
   Active: active (exited) since Thu 2020-12-03 20:13:45 MST; 16h ago
 Main PID: 13770 (code=exited, status=0/SUCCESS)
    Tasks: 0
   CGroup: /system.slice/pulp_workers.service

Dec 03 20:13:45 satellite.gtkcentral.net systemd[1]: Starting Pulp Celery Workers...
Dec 03 20:13:45 satellite.gtkcentral.net systemd[1]: Started Pulp Celery Workers.
/ displaying pulpcore-worker@*
● pulpcore-worker@5.service - Pulp RQ Worker
   Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:15:01 MST; 16h ago
 Main PID: 15837 (rq)
   CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker@5.service
           └─15837 /usr/bin/python3 /usr/bin/rq worker -w pulpcore.tasking.worker.PulpWorker --pid=/var/run/pulpcore-worker-5/reserved-resource-worker-5.pid -c pulpcore.rqconfig --disable-job-desc-logging

Dec 04 09:21:06 satellite.gtkcentral.net pulpcore-worker-5[15837]: pulp: rq.worker:INFO: Cleaning registries for queue: 15837@satellite.gtkcentral.net
Dec 04 09:41:15 satellite.gtkcentral.net pulpcore-worker-5[15837]: pulp: rq.worker:INFO: Cleaning registries for queue: 15837@satellite.gtkcentral.net
Dec 04 10:01:24 satellite.gtkcentral.net pulpcore-worker-5[15837]: pulp: rq.worker:INFO: Cleaning registries for queue: 15837@satellite.gtkcentral.net
Dec 04 10:21:33 satellite.gtkcentral.net pulpcore-worker-5[15837]: pulp: rq.worker:INFO: Cleaning registries for queue: 15837@satellite.gtkcentral.net
Dec 04 10:41:42 satellite.gtkcentral.net pulpcore-worker-5[15837]: pulp: rq.worker:INFO: Cleaning registries for queue: 15837@satellite.gtkcentral.net
Dec 04 11:01:51 satellite.gtkcentral.net pulpcore-worker-5[15837]: pulp: rq.worker:INFO: Cleaning registries for queue: 15837@satellite.gtkcentral.net
Dec 04 11:22:00 satellite.gtkcentral.net pulpcore-worker-5[15837]: pulp: rq.worker:INFO: Cleaning registries for queue: 15837@satellite.gtkcentral.net
Dec 04 11:42:09 satellite.gtkcentral.net pulpcore-worker-5[15837]: pulp: rq.worker:INFO: Cleaning registries for queue: 15837@satellite.gtkcentral.net
Dec 04 12:02:18 satellite.gtkcentral.net pulpcore-worker-5[15837]: pulp: rq.worker:INFO: Cleaning registries for queue: 15837@satellite.gtkcentral.net
Dec 04 12:22:27 satellite.gtkcentral.net pulpcore-worker-5[15837]: pulp: rq.worker:INFO: Cleaning registries for queue: 15837@satellite.gtkcentral.net

● pulpcore-worker@2.service - Pulp RQ Worker
   Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:15:00 MST; 16h ago
 Main PID: 15775 (rq)
   CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker@2.service
           └─15775 /usr/bin/python3 /usr/bin/rq worker -w pulpcore.tasking.worker.PulpWorker --pid=/var/run/pulpcore-worker-2/reserved-resource-worker-2.pid -c pulpcore.rqconfig --disable-job-desc-logging

Dec 04 09:21:04 satellite.gtkcentral.net pulpcore-worker-2[15775]: pulp: rq.worker:INFO: Cleaning registries for queue: 15775@satellite.gtkcentral.net
Dec 04 09:41:13 satellite.gtkcentral.net pulpcore-worker-2[15775]: pulp: rq.worker:INFO: Cleaning registries for queue: 15775@satellite.gtkcentral.net
Dec 04 10:01:22 satellite.gtkcentral.net pulpcore-worker-2[15775]: pulp: rq.worker:INFO: Cleaning registries for queue: 15775@satellite.gtkcentral.net
Dec 04 10:21:31 satellite.gtkcentral.net pulpcore-worker-2[15775]: pulp: rq.worker:INFO: Cleaning registries for queue: 15775@satellite.gtkcentral.net
Dec 04 10:41:40 satellite.gtkcentral.net pulpcore-worker-2[15775]: pulp: rq.worker:INFO: Cleaning registries for queue: 15775@satellite.gtkcentral.net
Dec 04 11:01:49 satellite.gtkcentral.net pulpcore-worker-2[15775]: pulp: rq.worker:INFO: Cleaning registries for queue: 15775@satellite.gtkcentral.net
Dec 04 11:21:58 satellite.gtkcentral.net pulpcore-worker-2[15775]: pulp: rq.worker:INFO: Cleaning registries for queue: 15775@satellite.gtkcentral.net
Dec 04 11:42:07 satellite.gtkcentral.net pulpcore-worker-2[15775]: pulp: rq.worker:INFO: Cleaning registries for queue: 15775@satellite.gtkcentral.net
Dec 04 12:02:16 satellite.gtkcentral.net pulpcore-worker-2[15775]: pulp: rq.worker:INFO: Cleaning registries for queue: 15775@satellite.gtkcentral.net
Dec 04 12:22:25 satellite.gtkcentral.net pulpcore-worker-2[15775]: pulp: rq.worker:INFO: Cleaning registries for queue: 15775@satellite.gtkcentral.net

● pulpcore-worker@3.service - Pulp RQ Worker
   Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:15:00 MST; 16h ago
 Main PID: 15796 (rq)
   CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker@3.service
           └─15796 /usr/bin/python3 /usr/bin/rq worker -w pulpcore.tasking.worker.PulpWorker --pid=/var/run/pulpcore-worker-3/reserved-resource-worker-3.pid -c pulpcore.rqconfig --disable-job-desc-logging

Dec 04 09:21:04 satellite.gtkcentral.net pulpcore-worker-3[15796]: pulp: rq.worker:INFO: Cleaning registries for queue: 15796@satellite.gtkcentral.net
Dec 04 09:41:13 satellite.gtkcentral.net pulpcore-worker-3[15796]: pulp: rq.worker:INFO: Cleaning registries for queue: 15796@satellite.gtkcentral.net
Dec 04 10:01:22 satellite.gtkcentral.net pulpcore-worker-3[15796]: pulp: rq.worker:INFO: Cleaning registries for queue: 15796@satellite.gtkcentral.net
Dec 04 10:21:31 satellite.gtkcentral.net pulpcore-worker-3[15796]: pulp: rq.worker:INFO: Cleaning registries for queue: 15796@satellite.gtkcentral.net
Dec 04 10:41:40 satellite.gtkcentral.net pulpcore-worker-3[15796]: pulp: rq.worker:INFO: Cleaning registries for queue: 15796@satellite.gtkcentral.net
Dec 04 11:01:49 satellite.gtkcentral.net pulpcore-worker-3[15796]: pulp: rq.worker:INFO: Cleaning registries for queue: 15796@satellite.gtkcentral.net
Dec 04 11:21:58 satellite.gtkcentral.net pulpcore-worker-3[15796]: pulp: rq.worker:INFO: Cleaning registries for queue: 15796@satellite.gtkcentral.net
Dec 04 11:42:07 satellite.gtkcentral.net pulpcore-worker-3[15796]: pulp: rq.worker:INFO: Cleaning registries for queue: 15796@satellite.gtkcentral.net
Dec 04 12:02:16 satellite.gtkcentral.net pulpcore-worker-3[15796]: pulp: rq.worker:INFO: Cleaning registries for queue: 15796@satellite.gtkcentral.net
Dec 04 12:22:25 satellite.gtkcentral.net pulpcore-worker-3[15796]: pulp: rq.worker:INFO: Cleaning registries for queue: 15796@satellite.gtkcentral.net

● pulpcore-worker@8.service - Pulp RQ Worker
   Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:15:02 MST; 16h ago
 Main PID: 15899 (rq)
   CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker@8.service
           └─15899 /usr/bin/python3 /usr/bin/rq worker -w pulpcore.tasking.worker.PulpWorker --pid=/var/run/pulpcore-worker-8/reserved-resource-worker-8.pid -c pulpcore.rqconfig --disable-job-desc-logging

Dec 04 09:21:08 satellite.gtkcentral.net pulpcore-worker-8[15899]: pulp: rq.worker:INFO: Cleaning registries for queue: 15899@satellite.gtkcentral.net
Dec 04 09:41:16 satellite.gtkcentral.net pulpcore-worker-8[15899]: pulp: rq.worker:INFO: Cleaning registries for queue: 15899@satellite.gtkcentral.net
Dec 04 10:01:25 satellite.gtkcentral.net pulpcore-worker-8[15899]: pulp: rq.worker:INFO: Cleaning registries for queue: 15899@satellite.gtkcentral.net
Dec 04 10:21:34 satellite.gtkcentral.net pulpcore-worker-8[15899]: pulp: rq.worker:INFO: Cleaning registries for queue: 15899@satellite.gtkcentral.net
Dec 04 10:41:43 satellite.gtkcentral.net pulpcore-worker-8[15899]: pulp: rq.worker:INFO: Cleaning registries for queue: 15899@satellite.gtkcentral.net
Dec 04 11:01:52 satellite.gtkcentral.net pulpcore-worker-8[15899]: pulp: rq.worker:INFO: Cleaning registries for queue: 15899@satellite.gtkcentral.net
Dec 04 11:22:01 satellite.gtkcentral.net pulpcore-worker-8[15899]: pulp: rq.worker:INFO: Cleaning registries for queue: 15899@satellite.gtkcentral.net
Dec 04 11:42:10 satellite.gtkcentral.net pulpcore-worker-8[15899]: pulp: rq.worker:INFO: Cleaning registries for queue: 15899@satellite.gtkcentral.net
Dec 04 12:02:19 satellite.gtkcentral.net pulpcore-worker-8[15899]: pulp: rq.worker:INFO: Cleaning registries for queue: 15899@satellite.gtkcentral.net
Dec 04 12:22:29 satellite.gtkcentral.net pulpcore-worker-8[15899]: pulp: rq.worker:INFO: Cleaning registries for queue: 15899@satellite.gtkcentral.net

● pulpcore-worker@6.service - Pulp RQ Worker
   Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:15:01 MST; 16h ago
 Main PID: 15857 (rq)
   CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker@6.service
           └─15857 /usr/bin/python3 /usr/bin/rq worker -w pulpcore.tasking.worker.PulpWorker --pid=/var/run/pulpcore-worker-6/reserved-resource-worker-6.pid -c pulpcore.rqconfig --disable-job-desc-logging

Dec 04 09:21:06 satellite.gtkcentral.net pulpcore-worker-6[15857]: pulp: rq.worker:INFO: Cleaning registries for queue: 15857@satellite.gtkcentral.net
Dec 04 09:41:15 satellite.gtkcentral.net pulpcore-worker-6[15857]: pulp: rq.worker:INFO: Cleaning registries for queue: 15857@satellite.gtkcentral.net
Dec 04 10:01:24 satellite.gtkcentral.net pulpcore-worker-6[15857]: pulp: rq.worker:INFO: Cleaning registries for queue: 15857@satellite.gtkcentral.net
Dec 04 10:21:33 satellite.gtkcentral.net pulpcore-worker-6[15857]: pulp: rq.worker:INFO: Cleaning registries for queue: 15857@satellite.gtkcentral.net
Dec 04 10:41:42 satellite.gtkcentral.net pulpcore-worker-6[15857]: pulp: rq.worker:INFO: Cleaning registries for queue: 15857@satellite.gtkcentral.net
Dec 04 11:01:51 satellite.gtkcentral.net pulpcore-worker-6[15857]: pulp: rq.worker:INFO: Cleaning registries for queue: 15857@satellite.gtkcentral.net
Dec 04 11:22:00 satellite.gtkcentral.net pulpcore-worker-6[15857]: pulp: rq.worker:INFO: Cleaning registries for queue: 15857@satellite.gtkcentral.net
Dec 04 11:42:09 satellite.gtkcentral.net pulpcore-worker-6[15857]: pulp: rq.worker:INFO: Cleaning registries for queue: 15857@satellite.gtkcentral.net
Dec 04 12:02:18 satellite.gtkcentral.net pulpcore-worker-6[15857]: pulp: rq.worker:INFO: Cleaning registries for queue: 15857@satellite.gtkcentral.net
Dec 04 12:22:27 satellite.gtkcentral.net pulpcore-worker-6[15857]: pulp: rq.worker:INFO: Cleaning registries for queue: 15857@satellite.gtkcentral.net

● pulpcore-worker@1.service - Pulp RQ Worker
   Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:15:00 MST; 16h ago
 Main PID: 15757 (rq)
   CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker@1.service
           └─15757 /usr/bin/python3 /usr/bin/rq worker -w pulpcore.tasking.worker.PulpWorker --pid=/var/run/pulpcore-worker-1/reserved-resource-worker-1.pid -c pulpcore.rqconfig --disable-job-desc-logging

Dec 04 09:21:04 satellite.gtkcentral.net pulpcore-worker-1[15757]: pulp: rq.worker:INFO: Cleaning registries for queue: 15757@satellite.gtkcentral.net
Dec 04 09:41:13 satellite.gtkcentral.net pulpcore-worker-1[15757]: pulp: rq.worker:INFO: Cleaning registries for queue: 15757@satellite.gtkcentral.net
Dec 04 10:01:22 satellite.gtkcentral.net pulpcore-worker-1[15757]: pulp: rq.worker:INFO: Cleaning registries for queue: 15757@satellite.gtkcentral.net
Dec 04 10:21:31 satellite.gtkcentral.net pulpcore-worker-1[15757]: pulp: rq.worker:INFO: Cleaning registries for queue: 15757@satellite.gtkcentral.net
Dec 04 10:41:40 satellite.gtkcentral.net pulpcore-worker-1[15757]: pulp: rq.worker:INFO: Cleaning registries for queue: 15757@satellite.gtkcentral.net
Dec 04 11:01:49 satellite.gtkcentral.net pulpcore-worker-1[15757]: pulp: rq.worker:INFO: Cleaning registries for queue: 15757@satellite.gtkcentral.net
Dec 04 11:21:58 satellite.gtkcentral.net pulpcore-worker-1[15757]: pulp: rq.worker:INFO: Cleaning registries for queue: 15757@satellite.gtkcentral.net
Dec 04 11:42:07 satellite.gtkcentral.net pulpcore-worker-1[15757]: pulp: rq.worker:INFO: Cleaning registries for queue: 15757@satellite.gtkcentral.net
Dec 04 12:02:16 satellite.gtkcentral.net pulpcore-worker-1[15757]: pulp: rq.worker:INFO: Cleaning registries for queue: 15757@satellite.gtkcentral.net
Dec 04 12:22:25 satellite.gtkcentral.net pulpcore-worker-1[15757]: pulp: rq.worker:INFO: Cleaning registries for queue: 15757@satellite.gtkcentral.net

● pulpcore-worker@4.service - Pulp RQ Worker
   Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:15:01 MST; 16h ago
 Main PID: 15817 (rq)
   CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker@4.service
           └─15817 /usr/bin/python3 /usr/bin/rq worker -w pulpcore.tasking.worker.PulpWorker --pid=/var/run/pulpcore-worker-4/reserved-resource-worker-4.pid -c pulpcore.rqconfig --disable-job-desc-logging

Dec 04 09:21:05 satellite.gtkcentral.net pulpcore-worker-4[15817]: pulp: rq.worker:INFO: Cleaning registries for queue: 15817@satellite.gtkcentral.net
Dec 04 09:41:14 satellite.gtkcentral.net pulpcore-worker-4[15817]: pulp: rq.worker:INFO: Cleaning registries for queue: 15817@satellite.gtkcentral.net
Dec 04 10:01:23 satellite.gtkcentral.net pulpcore-worker-4[15817]: pulp: rq.worker:INFO: Cleaning registries for queue: 15817@satellite.gtkcentral.net
Dec 04 10:21:32 satellite.gtkcentral.net pulpcore-worker-4[15817]: pulp: rq.worker:INFO: Cleaning registries for queue: 15817@satellite.gtkcentral.net
Dec 04 10:41:41 satellite.gtkcentral.net pulpcore-worker-4[15817]: pulp: rq.worker:INFO: Cleaning registries for queue: 15817@satellite.gtkcentral.net
Dec 04 11:01:50 satellite.gtkcentral.net pulpcore-worker-4[15817]: pulp: rq.worker:INFO: Cleaning registries for queue: 15817@satellite.gtkcentral.net
Dec 04 11:21:59 satellite.gtkcentral.net pulpcore-worker-4[15817]: pulp: rq.worker:INFO: Cleaning registries for queue: 15817@satellite.gtkcentral.net
Dec 04 11:42:08 satellite.gtkcentral.net pulpcore-worker-4[15817]: pulp: rq.worker:INFO: Cleaning registries for queue: 15817@satellite.gtkcentral.net
Dec 04 12:02:17 satellite.gtkcentral.net pulpcore-worker-4[15817]: pulp: rq.worker:INFO: Cleaning registries for queue: 15817@satellite.gtkcentral.net
Dec 04 12:22:26 satellite.gtkcentral.net pulpcore-worker-4[15817]: pulp: rq.worker:INFO: Cleaning registries for queue: 15817@satellite.gtkcentral.net

● pulpcore-worker@7.service - Pulp RQ Worker
   Loaded: loaded (/etc/systemd/system/pulpcore-worker@.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:15:02 MST; 16h ago
 Main PID: 15878 (rq)
   CGroup: /system.slice/system-pulpcore\x2dworker.slice/pulpcore-worker@7.service
           └─15878 /usr/bin/python3 /usr/bin/rq worker -w pulpcore.tasking.worker.PulpWorker --pid=/var/run/pulpcore-worker-7/reserved-resource-worker-7.pid -c pulpcore.rqconfig --disable-job-desc-logging

Dec 04 09:21:07 satellite.gtkcentral.net pulpcore-worker-7[15878]: pulp: rq.worker:INFO: Cleaning registries for queue: 15878@satellite.gtkcentral.net
Dec 04 09:41:16 satellite.gtkcentral.net pulpcore-worker-7[15878]: pulp: rq.worker:INFO: Cleaning registries for queue: 15878@satellite.gtkcentral.net
Dec 04 10:01:24 satellite.gtkcentral.net pulpcore-worker-7[15878]: pulp: rq.worker:INFO: Cleaning registries for queue: 15878@satellite.gtkcentral.net
Dec 04 10:21:33 satellite.gtkcentral.net pulpcore-worker-7[15878]: pulp: rq.worker:INFO: Cleaning registries for queue: 15878@satellite.gtkcentral.net
Dec 04 10:41:42 satellite.gtkcentral.net pulpcore-worker-7[15878]: pulp: rq.worker:INFO: Cleaning registries for queue: 15878@satellite.gtkcentral.net
Dec 04 11:01:51 satellite.gtkcentral.net pulpcore-worker-7[15878]: pulp: rq.worker:INFO: Cleaning registries for queue: 15878@satellite.gtkcentral.net
Dec 04 11:22:01 satellite.gtkcentral.net pulpcore-worker-7[15878]: pulp: rq.worker:INFO: Cleaning registries for queue: 15878@satellite.gtkcentral.net
Dec 04 11:42:10 satellite.gtkcentral.net pulpcore-worker-7[15878]: pulp: rq.worker:INFO: Cleaning registries for queue: 15878@satellite.gtkcentral.net
Dec 04 12:02:19 satellite.gtkcentral.net pulpcore-worker-7[15878]: pulp: rq.worker:INFO: Cleaning registries for queue: 15878@satellite.gtkcentral.net
Dec 04 12:22:28 satellite.gtkcentral.net pulpcore-worker-7[15878]: pulp: rq.worker:INFO: Cleaning registries for queue: 15878@satellite.gtkcentral.net
- displaying tomcat
● tomcat.service - Apache Tomcat Web Application Container
   Loaded: loaded (/usr/lib/systemd/system/tomcat.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-12-04 11:13:27 MST; 1h 17min ago
 Main PID: 16768 (java)
    Tasks: 78
   CGroup: /system.slice/tomcat.service
           └─16768 /usr/lib/jvm/jre/bin/java -Xms1024m -Xmx4096m -Djava.security.auth.login.config=/usr/share/tomcat/conf/login.config -classpath /usr/share/tomcat/bin/bootstrap.jar:/usr/share/tomcat/bin/tomcat-juli.jar:/usr/share/java/commons-daemon.jar -Dcatalina.base=/usr/share/tomcat -Dcatalina.home=/usr/share/tomcat -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/cache/tomcat/temp -Djava.util.logging.config.file=/usr/share/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager org.apache.catalina.startup.Bootstrap start

Dec 04 11:13:37 satellite.gtkcentral.net server[16768]: Dec 04, 2020 11:13:37 AM com.google.inject.internal.ProxyFactory <init>
Dec 04 11:13:37 satellite.gtkcentral.net server[16768]: WARNING: Method [public org.candlepin.model.Persisted org.candlepin.model.RulesCurator.create(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@190ff8ba]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
Dec 04 11:13:37 satellite.gtkcentral.net server[16768]: Dec 04, 2020 11:13:37 AM com.google.inject.internal.ProxyFactory <init>
Dec 04 11:13:37 satellite.gtkcentral.net server[16768]: WARNING: Method [public void org.candlepin.model.EntitlementCertificateCurator.delete(org.candlepin.model.Persisted)] is synthetic and is being intercepted by [com.google.inject.persist.jpa.JpaLocalTxnInterceptor@190ff8ba]. This could indicate a bug.  The method may be intercepted twice, or may not be intercepted at all.
Dec 04 11:13:43 satellite.gtkcentral.net server[16768]: Dec 04, 2020 11:13:43 AM org.apache.catalina.startup.HostConfig deployDirectory
Dec 04 11:13:43 satellite.gtkcentral.net server[16768]: INFO: Deployment of web application directory /var/lib/tomcat/webapps/candlepin has finished in 14,593 ms
Dec 04 11:13:43 satellite.gtkcentral.net server[16768]: Dec 04, 2020 11:13:43 AM org.apache.coyote.AbstractProtocol start
Dec 04 11:13:43 satellite.gtkcentral.net server[16768]: INFO: Starting ProtocolHandler ["http-bio-127.0.0.1-8443"]
Dec 04 11:13:43 satellite.gtkcentral.net server[16768]: Dec 04, 2020 11:13:43 AM org.apache.catalina.startup.Catalina start
Dec 04 11:13:43 satellite.gtkcentral.net server[16768]: INFO: Server startup in 14639 ms
- displaying dynflow-sidekiq@orchestrator
● dynflow-sidekiq@orchestrator.service - Foreman jobs daemon - orchestrator on sidekiq
   Loaded: loaded (/usr/lib/systemd/system/dynflow-sidekiq@.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Wed 2020-12-02 18:57:05 MST; 1 day 17h ago
     Docs: https://theforeman.org
 Main PID: 27544 (code=killed, signal=TERM)

Dec 02 18:56:52 satellite.gtkcentral.net systemd[1]: Starting Foreman jobs daemon - orchestrator on sidekiq...
Dec 02 18:56:54 satellite.gtkcentral.net dynflow-sidekiq@orchestrator[27544]: 2020-12-03T01:56:54.521Z 27544 TID-76po0 INFO: GitLab reliable fetch activated!
Dec 02 18:56:54 satellite.gtkcentral.net dynflow-sidekiq@orchestrator[27544]: 2020-12-03T01:56:54.521Z 27544 TID-b954w INFO: Booting Sidekiq 5.2.7 with redis options {:id=>"Sidekiq-server-PID-27544", :url=>"redis://localhost:6379/0"}
Dec 02 18:57:00 satellite.gtkcentral.net dynflow-sidekiq@orchestrator[27544]: /usr/share/foreman/lib/foreman.rb:8: warning: already initialized constant Foreman::UUID_REGEXP
Dec 02 18:57:00 satellite.gtkcentral.net dynflow-sidekiq@orchestrator[27544]: /usr/share/foreman/lib/foreman.rb:8: warning: previous definition of UUID_REGEXP was here
Dec 02 18:57:02 satellite.gtkcentral.net dynflow-sidekiq@orchestrator[27544]: Apipie cache enabled but not present yet. Run apipie:cache rake task to speed up API calls.
Dec 02 18:57:05 satellite.gtkcentral.net systemd[1]: Stopped Foreman jobs daemon - orchestrator on sidekiq.
- displaying goferd
● goferd.service - Gofer Agent
   Loaded: loaded (/usr/lib/systemd/system/goferd.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Dec 02 15:11:56 satellite.gtkcentral.net systemd[1]: Started Gofer Agent.
Dec 02 15:11:56 satellite.gtkcentral.net goferd[4256]: [INFO][Thread-1] gofer.rmi.store:108 - Using: /var/lib/gofer/messaging/pending/demo
Dec 02 15:11:56 satellite.gtkcentral.net goferd[4256]: [WARNING][MainThread] gofer.agent.plugin:647 - plugin:demo, DISABLED
Dec 02 15:11:56 satellite.gtkcentral.net goferd[4256]: [INFO][MainThread] gofer.agent.main:92 - agent started.
Dec 02 18:57:05 satellite.gtkcentral.net systemd[1]: Stopping Gofer Agent...
Dec 02 18:57:05 satellite.gtkcentral.net systemd[1]: Stopped Gofer Agent.
- displaying httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/httpd.service.d
           └─limits.conf, satellite.conf
   Active: inactive (dead)
     Docs: man:httpd(8)
           man:apachectl(8)

Dec 02 18:17:53 satellite.gtkcentral.net pulp[4262]: gofer.messaging.adapter.connect:INFO: connected: qpid+ssl://localhost:5671
Dec 02 18:17:53 satellite.gtkcentral.net pulp[4263]: gofer.messaging.adapter.qpid.connection:INFO: opened: qpid+ssl://localhost:5671
Dec 02 18:17:53 satellite.gtkcentral.net pulp[4263]: gofer.messaging.adapter.connect:INFO: connected: qpid+ssl://localhost:5671
Dec 02 18:17:53 satellite.gtkcentral.net pulp[4264]: gofer.messaging.adapter.qpid.connection:INFO: opened: qpid+ssl://localhost:5671
Dec 02 18:17:53 satellite.gtkcentral.net pulp[4264]: gofer.messaging.adapter.connect:INFO: connected: qpid+ssl://localhost:5671
Dec 02 18:57:01 satellite.gtkcentral.net systemd[1]: Stopping The Apache HTTP Server...
Dec 02 18:57:03 satellite.gtkcentral.net pulp[4262]: gofer.messaging.adapter.qpid.connection:INFO: closed: qpid+ssl://localhost:5671
Dec 02 18:57:03 satellite.gtkcentral.net pulp[4263]: gofer.messaging.adapter.qpid.connection:INFO: closed: qpid+ssl://localhost:5671
Dec 02 18:57:03 satellite.gtkcentral.net pulp[4264]: gofer.messaging.adapter.qpid.connection:INFO: closed: qpid+ssl://localhost:5671
Dec 02 18:57:05 satellite.gtkcentral.net systemd[1]: Stopped The Apache HTTP Server.
- displaying puppetserver
● puppetserver.service - puppetserver Service
   Loaded: loaded (/usr/lib/systemd/system/puppetserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:15:25 MST; 16h ago
 Main PID: 16048 (java)
    Tasks: 58 (limit: 4915)
   CGroup: /system.slice/puppetserver.service
           └─16048 /usr/bin/java -Xms2G -Xmx2G -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger -XX:OnOutOfMemoryError="kill -9 %p" -XX:ErrorFile=/var/log/puppetlabs/puppetserver/puppetserver_err_pid%p.log -cp /opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar:/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/facter.jar:/opt/puppetlabs/server/data/puppetserver/jars/* clojure.main -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetserver/conf.d --bootstrap-config /etc/puppetlabs/puppetserver/services.d/,/opt/puppetlabs/server/apps/puppetserver/config/services.d/ --restart-file /opt/puppetlabs/server/data/puppetserver/restartcounter

Dec 03 20:15:05 satellite.gtkcentral.net systemd[1]: Starting puppetserver Service...
Dec 03 20:15:25 satellite.gtkcentral.net systemd[1]: Started puppetserver Service.
- displaying dynflow-sidekiq@worker
● dynflow-sidekiq@worker.service - Foreman jobs daemon - worker on sidekiq
   Loaded: loaded (/usr/lib/systemd/system/dynflow-sidekiq@.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Wed 2020-12-02 18:57:00 MST; 1 day 17h ago
     Docs: https://theforeman.org
 Main PID: 27559 (code=killed, signal=TERM)

Dec 02 18:56:53 satellite.gtkcentral.net systemd[1]: Starting Foreman jobs daemon - worker on sidekiq...
Dec 02 18:56:55 satellite.gtkcentral.net dynflow-sidekiq@worker[27559]: 2020-12-03T01:56:55.221Z 27559 TID-4cp5j INFO: GitLab reliable fetch activated!
Dec 02 18:56:55 satellite.gtkcentral.net dynflow-sidekiq@worker[27559]: 2020-12-03T01:56:55.222Z 27559 TID-8c2un INFO: Booting Sidekiq 5.2.7 with redis options {:id=>"Sidekiq-server-PID-27559", :url=>"redis://localhost:6379/0"}
Dec 02 18:57:00 satellite.gtkcentral.net systemd[1]: Stopped Foreman jobs daemon - worker on sidekiq.
- displaying dynflow-sidekiq@worker-hosts-queue
● dynflow-sidekiq@worker-hosts-queue.service - Foreman jobs daemon - worker-hosts-queue on sidekiq
   Loaded: loaded (/usr/lib/systemd/system/dynflow-sidekiq@.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Wed 2020-12-02 18:57:00 MST; 1 day 17h ago
     Docs: https://theforeman.org
 Main PID: 27622 (code=killed, signal=TERM)

Dec 02 18:56:55 satellite.gtkcentral.net systemd[1]: Starting Foreman jobs daemon - worker-hosts-queue on sidekiq...
Dec 02 18:56:57 satellite.gtkcentral.net dynflow-sidekiq@worker-hosts-queue[27622]: 2020-12-03T01:56:57.371Z 27622 TID-5jw12 INFO: GitLab reliable fetch activated!
Dec 02 18:56:57 satellite.gtkcentral.net dynflow-sidekiq@worker-hosts-queue[27622]: 2020-12-03T01:56:57.371Z 27622 TID-9ljo2 INFO: Booting Sidekiq 5.2.7 with redis options {:id=>"Sidekiq-server-PID-27622", :url=>"redis://localhost:6379/0"}
Dec 02 18:57:00 satellite.gtkcentral.net systemd[1]: Stopped Foreman jobs daemon - worker-hosts-queue on sidekiq.
- displaying foreman-proxy
● foreman-proxy.service - Foreman Proxy
   Loaded: loaded (/usr/lib/systemd/system/foreman-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-03 20:15:31 MST; 16h ago
 Main PID: 16451 (ruby)
    Tasks: 10
   CGroup: /system.slice/foreman-proxy.service
           └─16451 ruby /usr/share/foreman-proxy/bin/smart-proxy --no-daemonize

Dec 04 10:13:28 satellite.gtkcentral.net smart-proxy[16451]: 167.248.133.39 - - [04/Dec/2020:10:13:28 MST] "GET /api/v1/labels HTTP/1.1" 404 27
Dec 04 10:13:28 satellite.gtkcentral.net smart-proxy[16451]: - -> /api/v1/labels
Dec 04 10:13:28 satellite.gtkcentral.net smart-proxy[16451]: 167.248.133.39 - - [04/Dec/2020:10:13:28 MST] "GET /api/v1/label/__name__/values HTTP/1.1" 404 27
Dec 04 10:13:28 satellite.gtkcentral.net smart-proxy[16451]: - -> /api/v1/label/__name__/values
Dec 04 10:13:28 satellite.gtkcentral.net smart-proxy[16451]: 167.248.133.39 - - [04/Dec/2020:10:13:28 MST] "GET /api/v1/label/__name__/values HTTP/1.1" 404 27
Dec 04 10:13:28 satellite.gtkcentral.net smart-proxy[16451]: - -> /api/v1/label/__name__/values
Dec 04 10:13:30 satellite.gtkcentral.net smart-proxy[16451]: 167.248.133.39 - - [04/Dec/2020:10:13:30 MST] "GET / HTTP/1.1" 404 27
Dec 04 10:13:30 satellite.gtkcentral.net smart-proxy[16451]: - -> /
Dec 04 10:13:30 satellite.gtkcentral.net smart-proxy[16451]: 167.248.133.39 - - [04/Dec/2020:10:13:30 MST] "GET / HTTP/1.1" 404 27
Dec 04 10:13:30 satellite.gtkcentral.net smart-proxy[16451]: - -> /
- All services displayed                                              [FAIL]
Some services are not running (dynflow-sidekiq@orchestrator, goferd, httpd, dynflow-sidekiq@worker, dynflow-sidekiq@worker-hosts-queue)
--------------------------------------------------------------------------------
Scenario [Status Services] failed.

The following steps ended up in failing state:

  [service-status]

Resolve the failed steps and rerun
the command. In case the failures are false positives,
use --whitelist="service-status"

Which shows the following as not started:

  • dynflow-sidekiq@orchestrator
  • dynflow-sidekiq@worker
  • dynflow-sidekiq@worker-hosts-queue
  • goferd
    • This one has always bothered me. It seems to be required no matter what,
    • yet it is not listed as a dependency of foreman or katello.
    • You therefore have to install it yourself alongside katello and enable it in systemd (see the sketch after this list).
    • My guess is that it's only a requirement on CentOS and was omitted for that reason? Even then, these packages are built per distribution, so I don't know if that theory has any legs to stand on.
  • httpd
  • smart_proxy_dynflow_core
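
For reference, here's a minimal sketch of how I've been checking and (re)starting these after an installer run, including the manual gofer install/enable. This is my own ad-hoc loop, not anything from the docs; service and package names are taken from the output above, so adjust as needed:

# check each service foreman-maintain reported as down, start it if it isn't running
for svc in dynflow-sidekiq@orchestrator dynflow-sidekiq@worker \
           dynflow-sidekiq@worker-hosts-queue httpd smart_proxy_dynflow_core; do
    systemctl is-active --quiet "$svc" || systemctl start "$svc"
done

# goferd is not pulled in as a dependency, so install and enable it explicitly
yum -y install gofer
systemctl enable --now goferd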

This should have shown up in the archive I uploaded, in the foreman-maintain_service_status file; but foreman-debug is broken, in that the command failed as noted earlier.



To be clear, everything I am mentioning pertains only to the install process. Whatever initially led me to this install matters, but not at the moment, as I can't even get this thing back up and running to attempt to replicate it (trust me, there's more to that initial crash that I'd love to delve into). All in all, this whole experience shifted my focus to the install process itself, so that I can move forward with any issues regarding general-use crashes/bugs. foreman-installer is, for all intents and purposes, a huge part of the Foreman/Katello experience, and it's important that its process is as smooth and straightforward as the docs make it look.

I implore anyone to really dig into that archive. Any insight into what is happening here would help immensely, and would either reveal my own idiocy (fingers crossed that it's my own failings) or help improve foreman-installer.

It does indeed look like I missed one path so we end up logging to an incorrect path.

https://github.com/voxpupuli/puppet-redis/pull/379
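
Until that fix lands in a release, pre-creating the directory the installer currently points at (as you already did) should be a safe workaround. Something along these lines, assuming the service runs as the redis user/group:

# pre-create the log directory the installer mistakenly configures for rh-redis5
# (assumes the service runs as user/group "redis")
mkdir -p /var/log/redis
chown redis:redis /var/log/redis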

This is a very different error and thus a completely unrelated bug. The bug you linked is about the exit code being reported as non-zero, which can be an expected outcome, meaning it shouldn't log an error. Yours indicates your Ruby is busted in some way.

That is for bundler-based installs, which isn't really relevant on CentOS.

No idea, but given that your Ruby installation appears broken, it may be related to that.

I would check if there’s anything odd in your env output (without activating any SCL manually). I’d also check if all the rh-ruby25 packages are correctly installed. For example, rpm -qa | grep rh-ruby25 | xargs -n 1 rpm -qV.
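
Concretely, something along these lines (the grep patterns are only illustrative):

# look for anything odd in the environment, from a fresh login shell with no SCL enabled
env | grep -iE 'ruby|gem|scl|ld_library_path'

# which ruby does a plain shell resolve to, and what version is it?
command -v ruby && ruby --version

# verify every installed rh-ruby25 package against the RPM database
rpm -qa | grep rh-ruby25 | xargs -n 1 rpm -qV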

Thanks for the response. I need to reiterate: my Ruby installation was done by foreman-installer, not me. Every attempt I made started from a VM snapshot of a fresh CentOS 7.9.2009 install. It's anyone's guess what happened there at this point.

That said, I definitely see why forklift is the preferred install method; it does seem to be the best way to do it. Though the docs call it the “recommended” method, they do so in a rather nonchalant way, and I think putting forklift front and center in the install/upgrade section is something worth looking at in the future. Its documentation is minimal, but I've figured out some pretty decent deployment strategies during my time with it; a rough sketch of the flow I settled on is below.
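
This is only the bare-bones workflow; the box name here is just an example from my setup, so check forklift's own box listing for what's actually available for your target release:

# grab forklift and bring up a Katello box with Vagrant
git clone https://github.com/theforeman/forklift.git
cd forklift
vagrant status                   # list the boxes forklift defines
vagrant up centos7-katello-3.17  # example box name; pick one matching your target release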

Perhaps when I have some time away from work I can write something up (how-tos, recipes, and the like) and get that ball rolling, if that's something anyone's interested in.

Thanks again for the help.
