[katello] Katello 2.2 java/ruby invoked oom-killer

Hi,

I'm running katello-2.2.1, upgraded from Katello 2.1.

The server has 8GB RAM and is dedicated to running Katello, apart from a
few additional applications like the Splunk agent and PBIS for AD
integration. We have the main Katello server, a remote capsule in one
remote office, and a number of virt-who instances reporting back on our
ESXi environments. All in all, around 20-30 clients are registered to this
system, split over 4 organizations, each syncing with Red Hat only.

Over the last few days Candlepin has been dying, and when I run
katello-service status it comes back with:

qpidd dead but pid file exists

Looking in /var/log/messages, I can see the oom-killer being invoked for
java and ruby:

Sep 8 21:29:25 KATSERV pulp: celery.worker.job:INFO: Task
pulp.server.managers.consumer.applicability.regenerate_applicability_for_consumers[920b6bf5-44d4-48c5-99b8-fa2cd1f9c06f]
succeeded in 0.70295603s: None
Sep 8 21:29:25 KATSERV pulp: celery.worker.job:INFO: Task
pulp.server.async.tasks._release_resource[f4517b8b-fe89-419e-b5f6-4a24693f2bff]
succeeded in 0.155693524997s: None
Sep 8 21:30:18 KATSERV kernel: java invoked oom-killer: gfp_mask=0x201da,
order=0, oom_adj=0, oom_score_adj=0
Sep 8 21:30:18 KATSERV kernel: java cpuset=/ mems_allowed=0
Sep 8 21:30:18 KATSERV kernel: Pid: 53218, comm: java Not tainted
2.6.32-504.23.4.el6.x86_64 #1
Sep 8 21:30:18 KATSERV kernel: Call Trace:
Sep 8 21:30:18 KATSERV kernel: [<ffffffff810d4241>] ?
cpuset_print_task_mems_allowed+0x91/0xb0
Sep 8 21:30:18 KATSERV kernel: [<ffffffff81127500>] ?
dump_header+0x90/0x1b0
Sep 8 21:30:18 KATSERV kernel: [<ffffffff8122ee7c>] ?
security_real_capable_noaudit+0x3c/0x70
Sep 8 21:30:18 KATSERV kernel: [<ffffffff81127982>] ?
oom_kill_process+0x82/0x2a0
Sep 8 21:30:18 KATSERV kernel: [<ffffffff811278c1>] ?
select_bad_process+0xe1/0x120
Sep 8 21:30:18 KATSERV kernel: [<ffffffff81127dc0>] ?
out_of_memory+0x220/0x3c0
Sep 8 21:30:18 KATSERV kernel: [<ffffffff811346ff>] ?
__alloc_pages_nodemask+0x89f/0x8d0
Sep 8 21:30:18 KATSERV kernel: [<ffffffff8116c9aa>] ?
alloc_pages_current+0xaa/0x110
Sep 8 21:30:18 KATSERV kernel: [<ffffffff811248f7>] ?
__page_cache_alloc+0x87/0x90
Sep 8 21:30:18 KATSERV kernel: [<ffffffff811242de>] ?
find_get_page+0x1e/0xa0
Sep 8 21:30:18 KATSERV kernel: [<ffffffff81125897>] ?
filemap_fault+0x1a7/0x500
Sep 8 21:30:18 KATSERV kernel: [<ffffffff8114ed04>] ? __do_fault+0x54/0x530
Sep 8 21:30:18 KATSERV kernel: [<ffffffff810b3536>] ?
futex_wait+0x1e6/0x310
Sep 8 21:30:18 KATSERV kernel: [<ffffffff8114f2d7>] ?
handle_pte_fault+0xf7/0xb00
Sep 8 21:30:18 KATSERV kernel: [<ffffffff810a3ef4>] ?
hrtimer_start_range_ns+0x14/0x20
Sep 8 21:30:18 KATSERV kernel: [<ffffffff8114ff79>] ?
handle_mm_fault+0x299/0x3d0
Sep 8 21:30:18 KATSERV kernel: [<ffffffff8104d096>] ?
__do_page_fault+0x146/0x500
Sep 8 21:30:18 KATSERV kernel: [<ffffffff8100bc0e>] ?
apic_timer_interrupt+0xe/0x20
Sep 8 21:30:18 KATSERV kernel: [<ffffffff8100bc0e>] ?
apic_timer_interrupt+0xe/0x20
Sep 8 21:30:18 KATSERV kernel: [<ffffffff8153001e>] ?
do_page_fault+0x3e/0xa0
Sep 8 21:30:18 KATSERV kernel: [<ffffffff8152d3d5>] ? page_fault+0x25/0x30
Sep 8 21:30:18 KATSERV kernel: Mem-Info:
Sep 8 21:30:18 KATSERV kernel: Node 0 DMA per-cpu:
Sep 8 21:30:18 KATSERV kernel: CPU 0: hi: 0, btch: 1 usd: 0
Sep 8 21:30:18 KATSERV kernel: CPU 1: hi: 0, btch: 1 usd: 0
Sep 8 21:30:18 KATSERV kernel: Node 0 DMA32 per-cpu:
Sep 8 21:30:18 KATSERV kernel: CPU 0: hi: 186, btch: 31 usd: 31
Sep 8 21:30:18 KATSERV kernel: CPU 1: hi: 186, btch: 31 usd: 23
Sep 8 21:30:18 KATSERV kernel: Node 0 Normal per-cpu:
Sep 8 21:30:18 KATSERV kernel: CPU 0: hi: 186, btch: 31 usd: 6
Sep 8 21:30:18 KATSERV kernel: CPU 1: hi: 186, btch: 31 usd: 30
Sep 8 21:30:18 KATSERV kernel: active_anon:1617794 inactive_anon:322664
isolated_anon:0
Sep 8 21:30:18 KATSERV kernel: active_file:161 inactive_file:333
isolated_file:0
Sep 8 21:30:18 KATSERV kernel: unevictable:0 dirty:5 writeback:0 unstable:0
Sep 8 21:30:18 KATSERV kernel: free:25731 slab_reclaimable:4539
slab_unreclaimable:9278
Sep 8 21:30:18 KATSERV kernel: mapped:3525 shmem:4203
pagetables:13671 bounce:0
Sep 8 21:30:18 KATSERV kernel: Node 0 DMA free:15420kB min:120kB low:148kB
high:180kB active_anon:0kB inactive_anon:0kB active_file:0kB
inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB
present:15024kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB
slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB
unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0
all_unreclaimable? yes
Sep 8 21:30:18 KATSERV kernel: lowmem_reserve[]: 0 3000 8050 8050
Sep 8 21:30:18 KATSERV kernel: Node 0 DMA32 free:45228kB min:25140kB
low:31424kB high:37708kB active_anon:2182908kB inactive_anon:560684kB
active_file:0kB inactive_file:52kB unevictable:0kB isolated(anon):0kB
isolated(file):0kB present:3072096kB mlocked:0kB dirty:0kB writeback:0kB
mapped:3180kB
shmem:3268kB slab_reclaimable:756kB slab_unreclaimable:500kB
kernel_stack:168kB pagetables:4004kB unstable:0kB bounce:0kB
writeback_tmp:0kB pages_scanned:256 all_unreclaimable? no
Sep 8 21:30:18 KATSERV kernel: lowmem_reserve[]: 0 0 5050 5050
Sep 8 21:30:18 KATSERV kernel: Node 0 Normal free:42276kB min:42316kB
low:52892kB high:63472kB active_anon:4288268kB inactive_anon:729972kB
active_file:684kB inactive_file:1280kB unevictable:0kB isolated(anon):0kB
isolated(file):0kB present:5171200kB mlocked:0kB dirty:20kB writeback:0kB
mapped:10920kB shmem:13544kB slab_reclaimable:17400kB
slab_unreclaimable:36612kB kernel_stack:4432kB pagetables:50680kB
unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1536
all_unreclaimable? no
Sep 8 21:30:18 KATSERV kernel: lowmem_reserve[]: 0 0 0 0
Sep 8 21:30:18 KATSERV kernel: Node 0 DMA: 1*4kB 1*8kB 1*16kB 1*32kB
0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15420kB
Sep 8 21:30:18 KATSERV kernel: Node 0 DMA32: 95*4kB 76*8kB 109*16kB
36*32kB 16*64kB 11*128kB 12*256kB 8*512kB 15*1024kB 8*2048kB 0*4096kB =
45228kB
Sep 8 21:30:18 KATSERV kernel: Node 0 Normal: 1355*4kB 933*8kB 571*16kB
271*32kB 89*64kB 28*128kB 5*256kB 0*512kB 1*1024kB 0*2048kB 0*4096kB =
42276kB
Sep 8 21:30:18 KATSERV kernel: 26640 total pagecache pages
Sep 8 21:30:18 KATSERV kernel: 21862 pages in swap cache
Sep 8 21:30:18 KATSERV kernel: Swap cache stats: add 3150046, delete
3128184, find 1422899/1587533
Sep 8 21:30:18 KATSERV kernel: Free swap = 0kB
Sep 8 21:30:18 KATSERV kernel: Total swap = 3170300kB
Sep 8 21:30:18 KATSERV kernel: 2097136 pages RAM
Sep 8 21:30:18 KATSERV kernel: 83744 pages reserved
Sep 8 21:30:18 KATSERV kernel: 20821 pages shared
Sep 8 21:30:18 KATSERV kernel: 1976405 pages non-shared
Sep 8 21:30:18 KATSERV kernel: [ pid ] uid tgid total_vm rss cpu
oom_adj oom_score_adj name
Sep 8 21:30:18 KATSERV kernel: [ 415] 0 415 2769 30
-17 -1000 udevd
Sep 8 21:30:18 KATSERV kernel: [ 709] 0 709 2769 31
-17 -1000 udevd
Sep 8 21:30:18 KATSERV kernel: [ 1244] 0 1244 1540
40 0 0 portreserve
Sep 8 21:30:18 KATSERV kernel: [ 1252] 0 1252 62799
710 0 0 rsyslogd
Sep 8 21:30:18 KATSERV kernel: [ 1282] 0 1282 2707
350 0 0 irqbalance
Sep 8 21:30:18 KATSERV kernel: [ 1302] 32 1302 4744
151 0 0 rpcbind
Sep 8 21:30:18 KATSERV kernel: [ 1324] 29 1324 5837
40 0 0 rpc.statd
Sep 8 21:30:18 KATSERV kernel: [ 1351] 0 1351 116627
50 0 0 lwsmd
Sep 8 21:30:18 KATSERV kernel: [ 1364] 0 1364 273941
1620 0 0 lwsmd
Sep 8 21:30:18 KATSERV kernel: [ 1425] 0 1425 141865
221 0 0 lwsmd
Sep 8 21:30:18 KATSERV kernel: [ 1516] 0 1516 172493
1021 0 0 lwsmd
Sep 8 21:30:18 KATSERV kernel: [ 1541] 0 1541 206822
50 0 0 lwsmd
Sep 8 21:30:18 KATSERV kernel: [ 1558] 0 1558 389734
3351 0 0 lwsmd
Sep 8 21:30:18 KATSERV kernel: [ 1587] 0 1587 160296
51 0 0 lwsmd
Sep 8 21:30:18 KATSERV kernel: [ 1651] 0 1651 5773
51 0 0 rpc.idmapd
Sep 8 21:30:18 KATSERV kernel: [ 1762] 81 1762 8826
70 0 0 dbus-daemon
Sep 8 21:30:18 KATSERV kernel: [ 1778] 0 1778 1020
30 0 0 acpid
Sep 8 21:30:18 KATSERV kernel: [ 1789] 68 1789 13460
1281 0 0 hald
Sep 8 21:30:18 KATSERV kernel: [ 1790] 0 1790 5100
61 0 0 hald-runner
Sep 8 21:30:18 KATSERV kernel: [ 1823] 0 1823 5630
50 0 0 hald-addon-inpu
Sep 8 21:30:18 KATSERV kernel: [ 1838] 68 1838 4502
51 0 0 hald-addon-acpi
Sep 8 21:30:18 KATSERV kernel: [ 1848] 0 1848 2768 30
-17 -1000 udevd
Sep 8 21:30:18 KATSERV kernel: [ 1854] 0 1854 96534
380 0 0 automount
Sep 8 21:30:18 KATSERV kernel: [ 1871] 0 1871 1570
30 0 0 mcelog
Sep 8 21:30:18 KATSERV kernel: [ 1899] 0 1899 16554 270
-17 -1000 sshd
Sep 8 21:30:18 KATSERV kernel: [ 1910] 38 1910 11151
411 0 0 ntpd
Sep 8 21:30:18 KATSERV kernel: [ 1947] 26 1947 53994 450
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [ 2009] 26 2009 44747 220
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [ 2020] 26 2020 54048 13650
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [ 2021] 26 2021 53994 550
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [ 2022] 26 2022 54131 1220
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [ 2023] 26 2023 44849 970
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [ 2068] 0 2068 20215
250 0 0 master
Sep 8 21:30:18 KATSERV kernel: [ 2079] 89 2079 23183
200 0 0 qmgr
Sep 8 21:30:18 KATSERV kernel: [ 2178] 0 2178 28661
50 0 0 abrtd
Sep 8 21:30:18 KATSERV kernel: [ 2197] 0 2197 28131
131 0 0 abrt-dump-oops
Sep 8 21:30:18 KATSERV kernel: [ 3428] 0 3428 29216
271 0 0 crond
Sep 8 21:30:18 KATSERV kernel: [ 3790] 0 3790 5276
70 0 0 atd
Sep 8 21:30:18 KATSERV kernel: [ 3847] 0 3847 34577
94770 0 0 puppet
Sep 8 21:30:18 KATSERV kernel: [ 3868] 0 3868 1016
40 0 0 mingetty
Sep 8 21:30:18 KATSERV kernel: [ 3870] 0 3870 1016
40 0 0 mingetty
Sep 8 21:30:18 KATSERV kernel: [ 3872] 0 3872 1016
40 0 0 mingetty
Sep 8 21:30:18 KATSERV kernel: [ 3874] 0 3874 1016
40 0 0 mingetty
Sep 8 21:30:18 KATSERV kernel: [ 3876] 0 3876 1016
40 0 0 mingetty
Sep 8 21:30:18 KATSERV kernel: [ 3878] 0 3878 1016
40 0 0 mingetty
Sep 8 21:30:18 KATSERV kernel: [52620] 494 52620 164833
902810 0 0 qdrouterd
Sep 8 21:30:18 KATSERV kernel: [52652] 498 52652 2047509
14610211 0 0 qpidd
Sep 8 21:30:18 KATSERV kernel: [52859] 91 52859 916392
840641 0 0 java
Sep 8 21:30:18 KATSERV kernel: [52911] 26 52911 54806 1320
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [53059] 48 53059 164130
35800 0 0 python
Sep 8 21:30:18 KATSERV kernel: [53075] 48 53075 137084
14951 0 0 python
Sep 8 21:30:18 KATSERV kernel: [53096] 48 53096 163972
17600 0 0 python
Sep 8 21:30:18 KATSERV kernel: [53147] 48 53147 222195
36071 0 0 celery
Sep 8 21:30:18 KATSERV kernel: [53164] 48 53164 103198
3941 0 0 python
Sep 8 21:30:18 KATSERV kernel: [53203] 496 53203 428853
462670 0 0 java
Sep 8 21:30:18 KATSERV kernel: [53318] 495 53318 40092
11841 0 0 ruby
Sep 8 21:30:18 KATSERV kernel: [53350] 184 53350 2202850
61201 0 0 mongod
Sep 8 21:30:18 KATSERV kernel: [53462] 48 53462 164026
23151 0 0 python
Sep 8 21:30:18 KATSERV kernel: [53522] 48 53522 144997
23490 0 0 python
Sep 8 21:30:18 KATSERV kernel: [53604] 497 53604 387412
270001 0 0 ruby
Sep 8 21:30:18 KATSERV kernel: [53606] 497 53606 107696
4211 0 0 ruby
Sep 8 21:30:18 KATSERV kernel: [53670] 26 53670 54439 4030
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [53678] 26 53678 54370 3350
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [53679] 26 53679 54507 470
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [53689] 26 53689 54380 3761
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [53706] 26 53706 54677 5820
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [53771] 0 53771 42050
2510 0 0 httpd
Sep 8 21:30:18 KATSERV kernel: [53773] 0 53773 53517 71
-17 -1000 PassengerWatchd
Sep 8 21:30:18 KATSERV kernel: [53777] 0 53777 263367
1811 0 0 PassengerHelper
Sep 8 21:30:18 KATSERV kernel: [53785] 99 53785 56901
141 0 0 PassengerLoggin
Sep 8 21:30:18 KATSERV kernel: [53792] 48 53792 261831
46811 0 0 httpd
Sep 8 21:30:18 KATSERV kernel: [53793] 48 53793 75403
5451 0 0 httpd
Sep 8 21:30:18 KATSERV kernel: [53794] 48 53794 75383
6080 0 0 httpd
Sep 8 21:30:18 KATSERV kernel: [53795] 48 53795 75385
6140 0 0 httpd
Sep 8 21:30:18 KATSERV kernel: [53796] 48 53796 46555
5430 0 0 httpd
Sep 8 21:30:18 KATSERV kernel: [53797] 48 53797 46571
4861 0 0 httpd
Sep 8 21:30:18 KATSERV kernel: [53798] 48 53798 75319
5480 0 0 httpd
Sep 8 21:30:18 KATSERV kernel: [53799] 48 53799 75417
6051 0 0 httpd
Sep 8 21:30:18 KATSERV kernel: [53800] 48 53800 46560
5481 0 0 httpd
Sep 8 21:30:18 KATSERV kernel: [54103] 48 54103 75319
4471 0 0 httpd
Sep 8 21:30:18 KATSERV kernel: [54335] 48 54335 75416
5441 0 0 httpd
Sep 8 21:30:18 KATSERV kernel: [54338] 48 54338 75404
5571 0 0 httpd
Sep 8 21:30:18 KATSERV kernel: [54775] 26 54775 54507 850
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [56623] 52 56623 43052
145381 0 0 ruby
Sep 8 21:30:18 KATSERV kernel: [59778] 48 59778 46557
5141 0 0 httpd
Sep 8 21:30:18 KATSERV kernel: [22036] 0 22036 28190
130 0 0 sshd
Sep 8 21:30:18 KATSERV kernel: [22048] 419432521 22048 28190 42
0 0 0 sshd
Sep 8 21:30:18 KATSERV kernel: [22049] 419432521 22049 31641 84
1 0 0 bash
Sep 8 21:30:18 KATSERV kernel: [24036] 497 24036 235850
731600 0 0 ruby
Sep 8 21:30:18 KATSERV kernel: [24042] 26 24042 54420 8531
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [24052] 26 24052 54901 15121
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [25995] 0 25995 28190
140 0 0 sshd
Sep 8 21:30:18 KATSERV kernel: [26008] 419432521 26008 28190 42
1 0 0 sshd
Sep 8 21:30:18 KATSERV kernel: [26009] 419432521 26009 31641 84
1 0 0 bash
Sep 8 21:30:18 KATSERV kernel: [26639] 89 26639 23166
220 0 0 pickup
Sep 8 21:30:18 KATSERV kernel: [29153] 26 29153 55342 19611
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [30057] 26 30057 55637 24751
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [30099] 26 30099 54547 8170
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [30100] 26 30100 57454 43731
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [30542] 26 30542 54408 7581
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [30543] 26 30543 55150 19760
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [30544] 26 30544 54426 7870
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [30731] 26 30731 54400 7591
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [30732] 26 30732 55637 24420
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [30733] 26 30733 56218 30980
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [30858] 26 30858 54474 7001
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [30859] 26 30859 54474 7000
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [31094] 26 31094 54474 7001
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [31095] 26 31095 54474 6990
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [31135] 26 31135 54474 7010
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [31173] 497 31173 2308
430 0 0 ruby193-ruby
Sep 8 21:30:18 KATSERV kernel: [31182] 497 31182 1017
230 0 0 scl
Sep 8 21:30:18 KATSERV kernel: [31183] 497 31183 2308
441 0 0 bash
Sep 8 21:30:18 KATSERV kernel: [31186] 497 31186 101375
432500 0 0 ruby
Sep 8 21:30:18 KATSERV kernel: [31224] 26 31224 54545 12000
-17 -1000 postmaster
Sep 8 21:30:18 KATSERV kernel: [31243] 497 31243 39197
1811 0 0 crond
Sep 8 21:30:18 KATSERV kernel: [31244] 497 31244 39197
1811 0 0 crond
Sep 8 21:30:18 KATSERV kernel: [31253] 497 31253 26524
421 0 0 sh
Sep 8 21:30:18 KATSERV kernel: [31255] 497 31255 26524
431 0 0 sh
Sep 8 21:30:18 KATSERV kernel: [31256] 497 31256 27050
481 0 0 foreman-rake
Sep 8 21:30:18 KATSERV kernel: [31257] 497 31257 27050
481 0 0 foreman-rake
Sep 8 21:30:18 KATSERV kernel: [31262] 497 31262 26524
430 0 0 ruby193-rake
Sep 8 21:30:18 KATSERV kernel: [31263] 497 31263 26524
430 0 0 ruby193-rake
Sep 8 21:30:18 KATSERV kernel: [31268] 497 31268 1017
240 0 0 scl
Sep 8 21:30:18 KATSERV kernel: [31269] 497 31269 1017
240 0 0 scl
Sep 8 21:30:18 KATSERV kernel: [31270] 4

So it seems the qpidd process grows constantly until it consumes all the
system memory.
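The growth rate is easy to make visible with a small sampling loop; a
minimal sketch (the process name, interval, and log path here are
assumptions to adjust, not anything Katello ships):

```shell
#!/bin/sh
# Sample a process's resident memory (kB) so steady growth is visible.
# rss_kb_of works with stock procps ps; "qpidd" and the 60s interval
# below are assumptions - adjust to taste.
rss_kb_of() {
    ps -o rss= -p "$1" | tr -d ' '
}

sample_loop() {
    # Append one timestamped sample per interval until the process exits.
    name=$1; interval=$2
    while pid=$(pgrep -o "$name"); do
        echo "$(date '+%F %T') $name pid=$pid rss=$(rss_kb_of "$pid")kB"
        sleep "$interval"
    done
}

# Usage (uncomment on the Katello server):
# sample_loop qpidd 60 >> /var/log/qpidd-rss.log
```

Graphing that log (or just eyeballing it) shows whether the leak is linear
with time or correlates with capsule check-ins.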

The only discrepancy I can see is that Katello itself is running 2.2.1
while the 2 capsules connected to it are running 2.2.3.

I plan to upgrade the katello server to 2.2.3 shortly as well.

Just wondered if this was a known issue?

If I restart the capsules then the qpidd process drops memory usage for a
short period of time.

I'm running on RedHat 6.7 and the system has 16GB RAM. Eventually all that
will be consumed and qpidd process will be killed by oom-killer.

The capsules seem to be able to connect as I can see all the ESXi hosts and
the RedHat clients with all the errata available.

Thanks

Marc

Well, it may just be luck, but it appears that if I restart the services
on the main Katello server but don't then restart the main processes on
the capsules, qpidd begins consuming large amounts of memory.

If I restart katello-service on all the proxies after restarting Katello,
then the qpidd process seems stable in the amount of memory it consumes.

It's been restarted for 20 minutes now and memory consumption is around
223M. Normally it would be around 1G by now, so hopefully it's resolved.
I still plan to upgrade the main Katello server to 2.2.3 soon.

Thanks

Marc

Hi Stephen,

We have a total of 59 content hosts. Roughly half of these are ESXi hosts,
as we are using virt-who to map ESXi -> Red Hat VMs. We're really only
starting the deployment, so there aren't too many just yet.

We have 5 different organizations, one for each geographic region we are
in (3 in the US and 2 in the UK). The capsules serve 2 sites in the US and
have a total of 21 content hosts between them.

Currently we only have 3 repositories being synced: the main Red Hat 6
repository plus the RH 6 optional and common repositories.

This only seemed to start after a network interruption of several hours
whilst we performed maintenance for a network cutover.

Thanks

Marc

How many clients do you have? You may be hitting this memory leak in
qpid dispatch router:
https://issues.jboss.org/browse/ENTMQ-1149

··· On Wed, Sep 09, 2015 at 05:05:58AM -0700, mfarr1981@gmail.com wrote:

Sounds like you were able to create a reproducible scenario that causes
the qpidd memory to increase? If so, would you mind filing an issue in our
Redmine so that we can take a look at it?

Thanks,
Eric

··· On Mon, Sep 14, 2015 at 4:26 PM, wrote:



Eric D. Helms
Red Hat Engineering
Ph.D. Student - North Carolina State University

I just read your attachment. We see that error on all the clients when
they can't connect for some time; we've ended up implementing a Splunk
monitor for it. On the clients we look for things like "ModelError:
maximum recursion depth exceeded while calling a Python object".

It could be qdrouterd, I guess. It wasn't that process that consumed
large amounts of memory but qpidd itself, though I assume qdrouterd does
forward traffic on to qpidd?
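If it is queue build-up on the broker side, qpid-stat from the qpid-tools
package should show it; a hedged sketch (the certificate path is the
client cert a standard Katello install lays down, so verify it locally):

```shell
#!/bin/sh
# Inspect broker-side state with qpid-stat (from the qpid-tools package).
# CERT is the client cert Katello normally installs - verify it exists
# locally before relying on this.
CERT=/etc/pki/katello/qpid_client_striped.crt
BROKER=amqps://localhost:5671

if command -v qpid-stat >/dev/null 2>&1; then
    # Queues: one whose message count only ever grows is a likely culprit.
    qpid-stat -q --ssl-certificate="$CERT" -b "$BROKER"
    # Connections: each capsule/agent should appear once; stale duplicates
    # left over from a network cutover would match the symptom.
    qpid-stat -c --ssl-certificate="$CERT" -b "$BROKER"
else
    echo "qpid-stat not found; install qpid-tools first"
fi
```

Running the -q form before and after restarting the capsules would show
whether the restart actually drains anything.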

We did get some errors on the capsules, mainly these:

katello-reverse-proxy_error_ssl.log-20150913:[Sat Sep 12 20:39:00 2015]
[error] (110)Connection timed out: proxy: HTTPS: attempt to connect to
IP:443 (HOSTNAME) failed
katello-reverse-proxy_error_ssl.log-20150913:[Sat Sep 12 20:39:00 2015]
[error] ap_proxy_connect_backend disabling worker for (HOSTNAME)
katello-reverse-proxy_error_ssl.log-20150913:[Sat Sep 12 20:54:01 2015]
[error] (110)Connection timed out: proxy: HTTPS: attempt to connect to
10.122.1.99:443 (HOSTNAME) failed
katello-reverse-proxy_error_ssl.log-20150913:[Sat Sep 12 20:54:01 2015]
[error] ap_proxy_connect_backend disabling worker for (HOSTNAME)
katello-reverse-proxy_error_ssl.log-20150913:[Sat Sep 12 21:09:01 2015]
[error] (110)Connection timed out: proxy: HTTPS: attempt to connect to
10.122.1.99:443 (HOSTNAME) failed
katello-reverse-proxy_error_ssl.log-20150913:[Sat Sep 12 21:09:01 2015]
[error] ap_proxy_connect_backend disabling worker for (HOSTNAME)
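Given those (110)Connection timed out entries, a plain reachability probe
run from the capsule can separate a network/firewall problem from an
application-level one; a minimal sketch (KATELLO_HOST is a placeholder for
the real server name):

```shell
#!/bin/sh
# Probe capsule -> server connectivity behind the "(110)Connection timed
# out" reverse-proxy errors. KATELLO_HOST is a placeholder - substitute
# the real hostname the capsule proxies to.
KATELLO_HOST=katello.example.com
PORT=443

# -k: certificate validity doesn't matter here, only reachability does.
if curl -sk --connect-timeout 10 -o /dev/null "https://$KATELLO_HOST:$PORT/"; then
    echo "connect: OK"
else
    echo "connect: FAILED (matches ap_proxy_connect_backend disabling the worker)"
fi
```

If the probe fails while the cutover is supposedly complete, the problem
is still in the path (routing/firewall), not in Katello itself.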