I know this thread is a little older now, but I wanted to share a summary of my setup and its typical resource usage, in case it is still useful to this community.
Below are my Foreman and Katello versions. I know these are already “old” at this point. My challenge is that I’m in the middle of a large CentOS deployment to 5000+ hosts in a retail environment. It is a very slow rollout (such is the nature of retail), so I’m trying not to do any upgrades until it is complete.
- Foreman 2.2.3
- Katello 3.17.3
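If you want to pull your own versions for comparison, the package database has them on an RPM-based install like this one:
# rpm -q foreman katello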
Foreman host data at this very moment:
- Hosts: 896
- Content Hosts: 896
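For anyone curious, one way to get that host count from the CLI is via hammer (assuming hammer is configured on the server; the CSV output just makes the rows easy to count):
# hammer --output csv host list | tail -n +2 | wc -l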
System CPU summary:
# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 4
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz
Stepping: 0
CPU MHz: 2693.671
BogoMIPS: 5387.34
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 33792K
NUMA node0 CPU(s): 0-3
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid xsaveopt arat md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
System memory summary:
# free
              total        used        free      shared  buff/cache   available
Mem:       65808684    20394692    34458596     1808904    10955396    43073212
Swap:       2097148      341692     1755456
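Two quick variants on the same data: free -h prints it in human-readable units, and a little awk turns the Mem: line into a utilization percentage:
# free -h
# free | awk '/^Mem:/ {printf "%.1f%% used\n", $3/$2*100}'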
Top 20 CPU-intensive processes (note: the biggest offender, xagt, is not always this CPU-hungry and will drop back down shortly):
# ps aux --sort -pcpu | head -n 20
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 4164 23.6 0.4 1680316 312752 ? SLl Dec02 8886:18 /opt/fireeye/bin/xagt --mode Eventor --iofd 3 --cmname 12 --log INFO --logfd 4
puppet 30469 3.8 2.3 6039376 1557304 ? Sl Dec27 62:40 /usr/bin/java -Xms2G -Xmx2G -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger -XX:OnOutOfMemoryError=kill -9 %p -XX:ErrorFile=/var/log/puppetlabs/puppetserver/puppetserver_err_pid%p.log -cp /opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar:/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/facter.jar:/opt/puppetlabs/server/data/puppetserver/jars/* clojure.main -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetserver/conf.d --bootstrap-config /etc/puppetlabs/puppetserver/services.d/,/opt/puppetlabs/server/apps/puppetserver/config/services.d/ --restart-file /opt/puppetlabs/server/data/puppetserver/restartcounter
tomcat 30285 1.2 1.6 8128324 1083008 ? Ssl Dec27 20:40 /usr/lib/jvm/jre/bin/java -Xms1024m -Xmx4096m -Djava.security.auth.login.config=/usr/share/tomcat/conf/login.config -classpath /usr/share/tomcat/bin/bootstrap.jar:/usr/share/tomcat/bin/tomcat-juli.jar:/usr/share/java/commons-daemon.jar -Dcatalina.base=/usr/share/tomcat -Dcatalina.home=/usr/share/tomcat -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/cache/tomcat/temp -Djava.util.logging.config.file=/usr/share/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager org.apache.catalina.startup.Bootstrap start
foreman 31025 1.2 1.1 1134780 763928 ? Sl Dec27 19:55 puma: cluster worker 0: 30291 [foreman]
qdroute+ 29844 1.1 2.5 2119100 1694816 ? Ssl Dec27 18:24 /usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf
foreman 31028 1.0 1.1 1136520 786280 ? Sl Dec27 17:37 puma: cluster worker 1: 30291 [foreman]
root 4727 0.4 6.5 6617028 4312048 ? Sl Dec02 163:52 /var/IBM/ITM/lx8266/lz/bin/klzagent
mongodb 29783 0.4 6.0 5047564 3987272 ? Sl Dec27 7:59 /opt/rh/rh-mongodb34/root/usr/bin/mongod -f /etc/opt/rh/rh-mongodb34/mongod.conf run
root 878 0.2 0.0 349376 3588 ? Ssl Dec02 85:14 /usr/bin/vmtoolsd
apache 29921 0.2 0.1 708460 80392 ? Ssl Dec27 3:37 /usr/bin/python /usr/bin/celery worker -A pulp.server.async.app -n resource_manager@%h -Q resource_manager -c 1 --events --umask 18 --pidfile=/var/run/pulp/resource_manager.pid
apache 29936 0.2 0.1 708484 78372 ? Ssl Dec27 3:26 /usr/bin/python /usr/bin/celery worker -n reserved_resource_worker-0@%h -A pulp.server.async.app -c 1 --events --pidfile=/var/run/pulp/reserved_resource_worker-0.pid
apache 29939 0.2 0.1 708436 78364 ? Ssl Dec27 3:28 /usr/bin/python /usr/bin/celery worker -n reserved_resource_worker-1@%h -A pulp.server.async.app -c 1 --events --pidfile=/var/run/pulp/reserved_resource_worker-1.pid
apache 29942 0.2 0.1 708452 78376 ? Ssl Dec27 3:26 /usr/bin/python /usr/bin/celery worker -n reserved_resource_worker-2@%h -A pulp.server.async.app -c 1 --events --pidfile=/var/run/pulp/reserved_resource_worker-2.pid
apache 29950 0.2 0.1 708436 80416 ? Ssl Dec27 3:26 /usr/bin/python /usr/bin/celery worker -n reserved_resource_worker-3@%h -A pulp.server.async.app -c 1 --events --pidfile=/var/run/pulp/reserved_resource_worker-3.pid
foreman 30975 0.2 0.6 733484 436164 ? Ssl Dec27 3:29 sidekiq 5.2.7 [0 of 5 busy]
root 3834 0.1 0.1 320772 121960 ? Sl Dec02 62:18 splunkd -p 8089 start
postgres 14930 0.1 0.1 855632 87272 ? Ss 11:45 0:00 postgres: foreman foreman [local] idle
postgres 15713 0.1 0.0 863412 44628 ? Ss 11:48 0:00 postgres: foreman foreman [local] idle
redis 29811 0.1 0.0 156616 8028 ? Ssl Dec27 2:29 /opt/rh/rh-redis5/root/usr/bin/redis-server 127.0.0.1:6379
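If the long Java COMMAND lines make that hard to scan, the same sort works with just the executable names (GNU ps):
# ps -eo user,pid,%cpu,%mem,comm --sort=-%cpu | head -n 20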
Top 20 memory-intensive processes:
# ps aux --sort -pmem | head -n 20
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 4727 0.4 6.5 6618148 4313168 ? Sl Dec02 163:53 /var/IBM/ITM/lx8266/lz/bin/klzagent
mongodb 29783 0.4 6.0 5047564 3987268 ? Sl Dec27 7:59 /opt/rh/rh-mongodb34/root/usr/bin/mongod -f /etc/opt/rh/rh-mongodb34/mongod.conf run
qdroute+ 29844 1.1 2.5 2119100 1694948 ? Ssl Dec27 18:25 /usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf
puppet 30469 3.8 2.3 6039376 1557304 ? Sl Dec27 62:41 /usr/bin/java -Xms2G -Xmx2G -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger -XX:OnOutOfMemoryError=kill -9 %p -XX:ErrorFile=/var/log/puppetlabs/puppetserver/puppetserver_err_pid%p.log -cp /opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar:/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/facter.jar:/opt/puppetlabs/server/data/puppetserver/jars/* clojure.main -m puppetlabs.trapperkeeper.main --config /etc/puppetlabs/puppetserver/conf.d --bootstrap-config /etc/puppetlabs/puppetserver/services.d/,/opt/puppetlabs/server/apps/puppetserver/config/services.d/ --restart-file /opt/puppetlabs/server/data/puppetserver/restartcounter
tomcat 30285 1.2 1.6 8128324 1083008 ? Ssl Dec27 20:41 /usr/lib/jvm/jre/bin/java -Xms1024m -Xmx4096m -Djava.security.auth.login.config=/usr/share/tomcat/conf/login.config -classpath /usr/share/tomcat/bin/bootstrap.jar:/usr/share/tomcat/bin/tomcat-juli.jar:/usr/share/java/commons-daemon.jar -Dcatalina.base=/usr/share/tomcat -Dcatalina.home=/usr/share/tomcat -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/cache/tomcat/temp -Djava.util.logging.config.file=/usr/share/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager org.apache.catalina.startup.Bootstrap start
foreman 31028 1.0 1.1 1149320 786516 ? Sl Dec27 17:38 puma: cluster worker 1: 30291 [foreman]
foreman 31025 1.2 1.1 1142972 764056 ? Sl Dec27 19:56 puma: cluster worker 0: 30291 [foreman]
qpidd 29847 0.1 0.8 936964 566816 ? Ssl Dec27 2:31 /usr/sbin/qpidd --config /etc/qpid/qpidd.conf
postgres 29827 0.0 0.6 835064 450892 ? Ss Dec27 0:18 postgres: checkpointer
foreman 30975 0.2 0.6 733484 436164 ? Ssl Dec27 3:29 sidekiq 5.2.7 [0 of 5 busy]
foreman 30288 0.1 0.6 718404 418076 ? Ssl Dec27 2:36 sidekiq 5.2.7 [0 of 1 busy]
foreman 30979 0.1 0.6 731924 413748 ? Ssl Dec27 1:55 sidekiq 5.2.7 [0 of 5 busy]
foreman 30291 0.0 0.5 694196 355864 ? Ssl Dec27 0:26 puma 4.3.3 (tcp://127.0.0.1:3000) [foreman]
root 4164 23.6 0.4 1680316 312272 ? SLl Dec02 8886:28 /opt/fireeye/bin/xagt --mode Eventor --iofd 3 --cmname 12 --log INFO --logfd 4
foreman+ 30985 0.0 0.2 2424340 144568 ? Ssl Dec27 0:14 ruby /usr/share/foreman-proxy/bin/smart-proxy --no-daemonize
root 3834 0.1 0.1 320772 122348 ? Sl Dec02 62:18 splunkd -p 8089 start
pulp 29865 0.0 0.1 1984764 117848 ? Sl Dec27 0:48 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --bind 127.0.0.1:24816 --worker-class aiohttp.GunicornWebWorker -w 2 --access-logfile -
pulp 29869 0.0 0.1 1984896 117832 ? Sl Dec27 0:48 /usr/bin/python3 /usr/bin/gunicorn pulpcore.content:server --bind 127.0.0.1:24816 --worker-class aiohttp.GunicornWebWorker -w 2 --access-logfile -
root 535 0.0 0.1 187500 115276 ? Ss Dec02 7:29 /usr/lib/systemd/systemd-journald
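Since most of these processes map to systemd units, a per-unit view is a nice complement to the per-PID one. systemd-cgtop can order cgroups by memory use, and foreman-maintain (if it is installed on your box) reports the state of all the Katello services:
# systemd-cgtop -m
# foreman-maintain service status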
I hope this data helps in some way. If there is any additional information I can provide, please just let me know!