Hi @heyitsgary ,
I found some good debugging steps in another post. Take a look at these two messages and consider trying what they suggest; then we can go from there:
Can you tell from production.log which controllers are being used? What's the workload? If you are running a monitoring system, can you tell when a process saw a spike, so we can correlate it with entries in production.log?
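To get a quick picture of which controllers are being hit, something like this works (a sketch, assuming the default Rails log format with `Processing by FooController#action` lines; the sample log content below is made up for illustration):

```shell
# Hypothetical excerpt of a Rails production.log, just for demonstration
cat > /tmp/production.log <<'EOF'
Processing by HostsController#index as HTML
Processing by HostsController#index as HTML
Processing by ReportsController#create as JSON
EOF

# Count requests per controller#action to see the workload mix,
# sorted with the busiest endpoint first
grep -oE 'Processing by [A-Za-z:]+#[a-z_]+' /tmp/production.log | sort | uniq -c | sort -rn
```

On a real system you'd point the grep at your actual production.log instead of the sample file.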
We really need to narrow this down a bit. Foreman, like many other Ruby apps, does have memory leaks, and we are pretty successful at finding and fixing them. But there must be something special about your setup; 3.1 is going through heavy testing at the m…
A cheap trick for finding which controller/action was slow first (typically, when something goes wrong, e.g. all records are accidentally loaded from the DB, the offender is also very slow) is to restart, tail production.log, and grep for records with NNNNNms (5+ digit milliseconds). Those are typically the offenders consuming all the memory.
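That grep could look something like this (a sketch, assuming the standard Rails `Completed ... in NNNms` completion lines; the sample log content is invented for illustration):

```shell
# Hypothetical excerpt of a Rails production.log, just for demonstration
cat > /tmp/production.log <<'EOF'
Completed 200 OK in 312ms (Views: 120.1ms | ActiveRecord: 80.4ms)
Completed 200 OK in 45123ms (Views: 31000.2ms | ActiveRecord: 14000.9ms)
Completed 500 Internal Server Error in 98ms
EOF

# Keep only requests that took 5+ digit milliseconds (10 s or more) --
# these are the likely memory/performance offenders
grep -E 'in [0-9]{5,}ms' /tmp/production.log
```

Against a live instance you'd run `tail -f production.log | grep -E --line-buffered '[0-9]{5,}ms'` instead, so the slow requests show up as they happen.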