We are looking to upgrade to Foreman 2.5. By upgrade I don’t mean updating the packages in place, but installing a new 2.5 instance. Previously we were using Passenger with Foreman 2.2 with no issues. With 2.5 I’ve been unable to get the webserver up and running. The logs show:
[Wed Sep 22 20:03:04.306313 2021] [proxy:error] [pid 281] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:3000 (127.0.0.1) failed
[Wed Sep 22 20:03:04.306376 2021] [proxy:error] [pid 281] AH00959: ap_proxy_connect_backend disabling worker for (127.0.0.1) for 0s
[Wed Sep 22 20:03:04.306384 2021] [proxy_http:error] [pid 281] [client ::1:54452] AH01114: HTTP: failed to make connection to backend: 127.0.0.1
These errors appear whenever anything tries to access the web front end. I cannot figure out where this port 3000 is coming from, but I do see that my 05-foreman-ssl.conf on the 2.2 host has this bit:
With 2.5 Foreman switched to the Puma app server, which in this version may have been listening on port 3000. I was under the impression that 2.5 already used a unix socket, though. In any case, Puma is a separate service; it does not start automatically with httpd the way Passenger did. Try systemctl start foreman, which starts the Puma server. Apache should “only” proxy requests to it.
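For context, the switch shows up in the Apache vhost roughly like this. These lines are a sketch of the two proxy styles, not the poster’s actual config; the socket path is the one recent Foreman packaging uses, but verify it on your own install:

```apache
# TCP style: Apache proxies to Puma listening on a local port
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/

# Unix-socket style: Apache proxies to Puma's domain socket
ProxyPass / unix:///run/foreman.sock|http://foreman/
ProxyPassReverse / unix:///run/foreman.sock|http://foreman/
```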
Ahh, good to know. We run Foreman in a Docker container, so I don’t have access to systemctl (or probably the DBus it’s always yelling at me about), but I can run:
How was this ever http://127.0.0.1:3000 in my config? Was this changed in some version? Perhaps I accidentally built with an old version of Foreman, but it’s strange that when I ran the Rails server manually in the container I saw it working, then when I rebuilt the container the value changed. I can easily sed it into working shape, or if you have some ideas why it’s failing to connect to the unix domain socket, that would work as well (though I suspect some systemd dependency is needed here).
As for why we use Docker: we use Docker for everything. Nothing in our environment runs bare on machines; it’s all containerized. We’ve been running Foreman this way for 5+ years. We run the installer in the build container to get most of what is needed, then run a start.sh script that fixes up any configs with ENV values appropriate for the deployed location.
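That kind of start.sh fixup can be as simple as templating placeholders with sed. A minimal sketch; the placeholder, the FOREMAN_FQDN variable, and the file path are all made up for illustration (the stub file is created here only so the snippet runs standalone; in the real container the config comes from the installer):

```shell
#!/bin/sh
# Replace a build-time placeholder with a deploy-time ENV value.
# CONF defaults to a local file here; a real install would point at
# something like /etc/httpd/conf.d/05-foreman-ssl.conf.
CONF="${CONF:-./05-foreman-ssl.conf}"
printf 'ServerName placeholder.example.com\n' > "$CONF"
sed -i "s/placeholder\.example\.com/${FOREMAN_FQDN:-foreman.example.com}/" "$CONF"
grep '^ServerName' "$CONF"   # prints: ServerName foreman.example.com
```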
We changed it to a unix socket, initially for security: from localhost you can MITM Foreman by sending spoofed headers that suggest you’re authenticated. We use a unix socket that only Apache can write to in order to prevent this. Later it turned out to solve some reliability issues as well.
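That restriction can be expressed at the socket level. A sketch of a systemd socket unit; the directives are standard systemd options, but the specific user/group/mode values here are guesses for illustration, not copied from Foreman’s shipped unit:

```
[Socket]
ListenStream=/run/foreman.sock
SocketUser=foreman
SocketGroup=apache
SocketMode=0660
```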
Interesting, but I don’t think we ever considered this use case for the installer. We heavily assume system packages and services are used. Not saying you can’t run Foreman inside Docker, just that using the installer this way is surprising.
I’ve also had to change the ProxyPass a bit more. The default was timing out connections: whenever an EC2 instance was provisioned, the proxied connection would drop while waiting for the instance to come online to run the finish scripts.
I was getting errors like:
Backtrace for 'Action failed' error (ActiveRecord::ConnectionTimeoutError): could not obtain a connection from the pool within 5.000 seconds (waited 5.117 seconds); all pooled connections were in use
and
<html><head>
<title>502 Proxy Error</title>
</head><body>
<h1>Proxy Error</h1>
<p>The proxy server received an invalid
response from an upstream server.<br />
The proxy server could not handle the request <em><a href="/api/v2/hosts">POST /api/v2/hosts</a></em>.<p>
Reason: <strong>Error reading from remote server</strong></p></p>
</body></html>
So I modified 05-foreman.conf and 05-foreman-ssl.conf with:
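Something along these lines; the exact values are illustrative, not the poster’s actual edit (timeout is in seconds, and retry=0 tells Apache to retry a failed backend immediately instead of blacklisting the worker for a while):

```apache
ProxyPass / unix:///run/foreman.sock|http://foreman/ retry=0 timeout=900
ProxyPassReverse / unix:///run/foreman.sock|http://foreman/
ProxyTimeout 900
```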
I’ve been noticing some pretty heavy memory consumption and slow performance with Puma, and I’m wondering how to configure it in single mode vs. cluster mode to see if it makes much of a difference in performance and memory consumption.
I started Foreman with FOREMAN_PUMA_WORKERS=0 and that seems to have worked. So far I’m not seeing a gradual loss of memory, and nothing has gone into swap.
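For anyone reproducing the single-mode setup on a systemd host, a drop-in is one way to pin that variable. A sketch; the drop-in file name is arbitrary, and DROPIN_DIR defaults to a local directory so the snippet can run anywhere, where a real host would use /etc/systemd/system/foreman.service.d:

```shell
#!/bin/sh
# Force Puma into single mode (no worker processes) via the service env.
DROPIN_DIR="${DROPIN_DIR:-./foreman.service.d}"
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/puma-single.conf" <<'EOF'
[Service]
Environment=FOREMAN_PUMA_WORKERS=0
EOF
cat "$DROPIN_DIR/puma-single.conf"
# then, on a real host: systemctl daemon-reload && systemctl restart foreman
```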
There is something really off with this version. I’m running a 2.2 box, a 2.5 in cluster mode, and a 2.5 in single mode, and you can see the slow memory leak on the boxes running 2.5, versus the 2.2 box with a pretty flat line for how much memory it’s consuming. The green line is the box I restarted after changing cluster mode to single mode, the orange (yellowish?) line is the 2.2 box, and the blue is the other 2.5 box.
Perhaps a new thread would make sense for this, as it’s less about being unable to get Foreman to talk on the correct port and more about getting Foreman to not forget how memory works.
I think the above issue, plus Bug #33639: Reduce allocations in FactImporter#update_facts - Foreman, should be sufficient for now, and I would ask you to re-try after all the relevant patches have been released; if you then still have memory concerns, we can take another look on the new codebase.
I am not an authoritative source on that, but so far the patches weren’t huge, so I’d think they are definitely backportable. Whether the release managers think that’s a good idea (in terms of possible other regressions) is another question.
The fact import patches are probably safe to pick into 2.5; the settings one might be more complicated, since there were a lot of changes to the settings registry after 2.5 came out. If you are able to manually pick them and test with 2.5, that would be a great indication of their viability.
cd /usr/share/foreman
curl -sSL https://github.com/theforeman/foreman/pull/8820.patch | patch -p1
systemctl restart foreman.service
Just skip the test files (which we don’t ship in production). I’m going to watch how memory changes over time; last time it took about two days to reach a stable level.
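If you’d rather not answer the skip prompts interactively, filtering test paths out of the patch before applying also works. A self-contained demo of the mechanics using git apply --exclude; the tiny repo and file names are fabricated purely for illustration, and the real patch would be the one fetched with curl in the commands above:

```shell
#!/bin/sh
set -e
work=$(mktemp -d); cd "$work"
mkdir -p app test
printf 'old\n' > app/code.rb
printf 'old\n' > test/code_test.rb
git init -q .
git add .
git -c user.email=demo@example.com -c user.name=demo commit -qm init
# simulate an upstream patch touching both shipped code and test files
sed -i 's/old/new/' app/code.rb test/code_test.rb
git diff > demo.patch
git checkout -q -- .
# apply everything except hunks under test/
git apply --exclude='test/*' demo.patch
grep -q new app/code.rb && grep -q old test/code_test.rb && echo skipped-tests-ok
```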
I’ll note that I upgraded straight from 2.4 to 3.0, so I don’t have numbers for 2.5.