Out of memory - Foreman ver. 3.13.0-1

Problem:
May 27 03:06:08 foremanprd kernel: Out of memory: Killed process 1994344 (pulpcore-api) total-vm:2199176kB, anon-rss:2144756kB, file-rss:384kB, shmem-rss:0kB, UID:977 pgtables:4340kB oom_score_adj:0
Jun 3 03:06:34 foremanprd kernel: Out of memory: Killed process 1613 (java) total-vm:10333940kB, anon-rss:2184648kB, file-rss:0kB, shmem-rss:0kB, UID:53 pgtables:5144kB oom_score_adj:0
Jun 4 03:50:15 foremanprd kernel: Out of memory: Killed process 3448896 (pulpcore-api) total-vm:7381228kB, anon-rss:7257028kB, file-rss:384kB, shmem-rss:0kB, UID:977 pgtables:14480kB oom_score_adj:0
Jun 5 03:45:28 foremanprd kernel: Out of memory: Killed process 3603741 (pulpcore-api) total-vm:5213120kB, anon-rss:5141352kB, file-rss:256kB, shmem-rss:0kB, UID:977 pgtables:10240kB oom_score_adj:0

We have suffered a few OOM events lately. Is there a memory management tuning recommendation somewhere that can help us better manage memory usage, or do we have something configured wrongly?
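For reference, the kernel lines above already name the victims; a quick awk sketch summarizes them (the embedded sample line is copied from the log above — on a live system the input could come from `journalctl -k | grep -F 'Out of memory'` instead, an assumption about the logging setup):

```shell
# Sketch: summarize kernel OOM kills (process name, PID, resident memory).
# Input is a sample line from the log above, embedded so this is self-contained.
summary=$(awk '/Out of memory: Killed process/ {
    for (i = 1; i <= NF; i++) {
        if ($i == "process") { pid = $(i + 1); name = $(i + 2) }
        if ($i ~ /^anon-rss:/) { rss = $i; sub(/^anon-rss:/, "", rss); sub(/,$/, "", rss) }
    }
    gsub(/[()]/, "", name)   # strip the parentheses around the process name
    printf "%-16s pid=%s anon-rss=%s\n", name, pid, rss
}' <<'EOF'
May 27 03:06:08 foremanprd kernel: Out of memory: Killed process 1994344 (pulpcore-api) total-vm:2199176kB, anon-rss:2144756kB, file-rss:384kB, shmem-rss:0kB, UID:977 pgtables:4340kB oom_score_adj:0
EOF
)
printf '%s\n' "$summary"
```

The `anon-rss` value is the resident memory the process held when it was killed, so pulpcore-api was sitting on ~2–7 GB each time.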

Expected outcome:
no OOM events

Foreman and Proxy versions:
foreman-3.13.0-1.el9.noarch

Foreman and Proxy plugin versions:
katello-4.15.0-1.el9.noarch
Distribution and version:

Other relevant data:

As a temporary mitigation, we have added swap to the system; previously the server was running without it…

[root@foremanprd ~]# free -h
total used free shared buff/cache available
Mem: 31Gi 18Gi 577Mi 612Mi 12Gi 12Gi
Swap: 15Gi 9.8Gi 6.2Gi
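With 9.8Gi of swap in use, it can help to see which processes that swap actually belongs to. A hedged sketch (Linux-specific, reads `VmSwap` from `/proc/<pid>/status`; values are in kB):

```shell
# Sketch: rank processes by swap usage from /proc/<pid>/status (Linux only).
# Each output line is "<swapped-kB> <command-name>", largest first.
top_swap=$(for f in /proc/[0-9]*/status; do
    awk '/^Name:/ {name = $2} /^VmSwap:/ {print $2, name}' "$f" 2>/dev/null
done | sort -rn | head -10)
printf '%s\n' "$top_swap"
```

(`2>/dev/null` covers processes that exit between the glob and the read.)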

If you need more data, please let me know, I will supply all I can.

Thanks!

Regards
Jan

What process is eating up the RAM?

Have you seen the tuning guide?

Hello ikinia, sadly I was OOO and have limited information: only those messages from the log and a few words from colleagues. The web service was restarted after the swap was added.

I am looking into this document:

https://docs.theforeman.org/nightly/Tuning_Performance/index-katello.html

I hope it is the right place.

I wanted to add that we have only about 2,800 hosts connected to the server, and we have no Smart Proxies at the moment.

That’s a good guide as a starter. You can do some basic profiling of the box, but the key is working out where the memory usage is: is it in Pulp and content, in the Puppet server, in the database, etc.?

There are some built-in metrics you could pull, or use system tools like sar, etc.
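As a starting point for that profiling, even plain `ps` narrows it down — a sketch with no Foreman-specific assumptions, summing resident memory per command name:

```shell
# Sketch: total resident memory (kB) grouped by command name, largest first.
top_rss=$(ps -eo rss=,comm= | awk '
    { rss[$2] += $1 }    # sum RSS kB per command (e.g. all Puma/Pulp workers)
    END { for (c in rss) printf "%10d kB  %s\n", rss[c], c }
' | sort -rn | head -10)
printf '%s\n' "$top_rss"
```

Running this a few times during the sync-plan window would show whether pulpcore, the Rails/Puma workers, PostgreSQL, or Java (Candlepin) is growing.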

I am reading that tuning guide, thanks…

But I also looked at the situation from the last time the OOM occurred: it is happening at the time when we run the daily sync plans.
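Staggering the sync plans so fewer repositories sync at once would be the cleaner fix. As a stop-gap, a systemd resource cap could at least confine the damage to the one service instead of letting the kernel pick a victim host-wide. A hedged sketch of a drop-in for the pulpcore-api unit named in the OOM lines (the values are illustrative, not recommendations):

```ini
# /etc/systemd/system/pulpcore-api.service.d/memory.conf  (illustrative values)
[Service]
# Above MemoryHigh, the kernel throttles and reclaims from this service...
MemoryHigh=4G
# ...and MemoryMax is a hard cap: the cgroup OOM killer then targets only
# pulpcore-api rather than whatever the host-wide OOM killer picks.
MemoryMax=6G
```

Applied with `systemctl daemon-reload && systemctl restart pulpcore-api`. This does not reduce the sync-plan load itself, it only changes who dies.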

[root@foremanprd ~]# sar -r -f /var/log/sa/sa05
Linux 5.14.0-570.12.1.el9_6.x86_64 (foremanprd) 06/05/2025 x86_64 (8 CPU)

12:00:00 AM kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
[…]
12430440 16981492 312
02:40:00 AM 1347644 10359736 20466224 62.78 20 9901040 28006216 85.92 12431376 16983892 196
02:50:00 AM 1342028 10356664 20469396 62.79 20 9903724 28007068 85.92 12435896 16986536 224
03:00:00 AM 546004 4986804 25832184 79.25 20 5374928 33529340 102.86 14202248 15981952 71140
03:10:00 AM 1536452 4455568 26359628 80.86 0 3904872 34092100 104.58 16135460 13126608 37828
03:20:00 AM 841652 5472068 25346312 77.76 0 5615156 32948348 101.08 15361916 14627096 352
03:30:00 AM 1719764 6808084 24010920 73.66 0 6068248 31572004 96.85 14037776 15054260 848
03:40:00 AM 1291032 4321916 26496076 81.28 0 4029208 34118840 104.67 16213524 13344844 520
03:50:00 AM 10251532 10503852 20369952 62.49 0 1606548 28485796 87.39 11227792 9937564 192
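Reading that table, the %commit column (kbcommit relative to RAM+swap) jumps past 100% right in the 03:00–03:50 window, matching the sync-plan timing. A small awk filter makes those intervals easy to spot; the sample rows below are copied from the sar output above:

```shell
# Sketch: flag sar -r intervals where %commit (field 10) exceeds 100%,
# i.e. more memory promised to processes than RAM+swap can back.
# Sample rows embedded; live input would be `sar -r -f /var/log/sa/saNN`.
overcommit=$(awk '$10 ~ /^[0-9.]+$/ && $10 + 0 > 100 { print $1, $2, $10 }' <<'EOF'
02:50:00 AM 1342028 10356664 20469396 62.79 20 9903724 28007068 85.92 12435896 16986536 224
03:00:00 AM 546004 4986804 25832184 79.25 20 5374928 33529340 102.86 14202248 15981952 71140
03:10:00 AM 1536452 4455568 26359628 80.86 0 3904872 34092100 104.58 16135460 13126608 37828
EOF
)
printf '%s\n' "$overcommit"
```

Before swap was added there was nothing to absorb that overcommit, which is consistent with the kills landing around 03:00–03:50.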