Foreman install 2.5, HTTP: attempt to connect to 127.0.0.1:3000 (127.0.0.1) failed

We are looking to upgrade to Foreman 2.5. By upgrade I don’t mean updating the packages in place, but installing a new 2.5 instance. Previously we were using Passenger with Foreman 2.2 with no issues. With 2.5 I’ve been unable to get the web server up and running. The logs show:

[Wed Sep 22 20:03:04.306313 2021] [proxy:error] [pid 281] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:3000 (127.0.0.1) failed
[Wed Sep 22 20:03:04.306376 2021] [proxy:error] [pid 281] AH00959: ap_proxy_connect_backend disabling worker for (127.0.0.1) for 0s
[Wed Sep 22 20:03:04.306384 2021] [proxy_http:error] [pid 281] [client ::1:54452] AH01114: HTTP: failed to make connection to backend: 127.0.0.1

These appear whenever anything tries to access the web front end. I cannot figure out where this port 3000 is coming from, but in 05-foreman-ssl.conf on the 2.2 host there is this bit:

PassengerPreStart https://localhost:443

On the 2.5 host, the same file instead has:

  ProxyPass / http://127.0.0.1:3000/ retry=0
  ProxyPassReverse / http://127.0.0.1:3000/

But I don’t think anything is running on that port. Previously we’d start the Foreman web server with:

/usr/sbin/httpd -DFOREGROUND

This is Foreman 2.5 on CentOS 7.

With 2.5 we started using the Puma app server, which in this version may have been listening on port 3000. I was under the impression that 2.5 already used a unix socket, though. In any case, Puma is a separate service; it does not start automatically with httpd the way Passenger did. Try systemctl start foreman, which starts the Puma server. Apache should “only” proxy requests to it.
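As a quick check (a sketch of my own; `ss` comes from iproute2 and port 3000 is the Puma default assumed in this thread), you can verify whether anything is actually listening before blaming the Apache proxy:

```shell
# Check whether any process is listening on Puma's assumed default port 3000.
# Prints a hint if nothing is bound (or if ss is unavailable).
if ss -tln 2>/dev/null | grep -q ':3000'; then
  echo "listener found on port 3000"
else
  echo "no listener on port 3000 - start the foreman (Puma) service first"
fi
```

If nothing is listening, the AH00957 "Connection refused" lines above are exactly what you would expect.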

Ahh, good to know. We run Foreman in a Docker container, so I don’t have access to systemctl (or, probably, the DBus it’s always yelling at me about), but I can set:

FOREMAN_ENV=production 
FOREMAN_BIND=tcp://0.0.0.0:3000

and then

/usr/share/foreman/bin/rails server --environment $FOREMAN_ENV

And now I’m getting the login prompt (well, the whole site really; I was able to log in just fine) without issue. Good to know; much appreciated.
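For anyone else in the same spot, a minimal entrypoint sketch for a container without systemd (the paths and variable names come from this thread; the defaults and the script itself are my own assumptions, not Foreman’s shipped tooling):

```shell
#!/bin/sh
# Hypothetical container entrypoint: run Foreman's Rails/Puma server directly,
# since systemd (and `systemctl start foreman`) is unavailable in the container.
FOREMAN_ENV="${FOREMAN_ENV:-production}"
FOREMAN_BIND="${FOREMAN_BIND:-tcp://0.0.0.0:3000}"
export FOREMAN_ENV FOREMAN_BIND

echo "starting Foreman ($FOREMAN_ENV) bound to $FOREMAN_BIND"
# exec /usr/share/foreman/bin/rails server --environment "$FOREMAN_ENV"
```

The final exec is commented out here since it needs an actual Foreman install; in a real entrypoint it would replace the shell so Puma receives container signals directly.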


Well, shoot. I’ve changed something in the Foreman install, and now the ProxyPass settings in 05-foreman-ssl.conf are different:

  ProxyPass / unix:///run/foreman.sock|http://foreman/ retry=0 timeout=900
  ProxyPassReverse / unix:///run/foreman.sock|http://foreman/

I can get this working with:

  ProxyPass / http://127.0.0.1:3000/ retry=0
  ProxyPassReverse / http://127.0.0.1:3000/

and then restarting httpd, but with the settings as now laid down (the unix: socket) I get:

[Thu Sep 23 20:20:30.558559 2021] [proxy:error] [pid 270] (111)Connection refused: AH02454: HTTP: attempt to connect to Unix domain socket /run/foreman.sock (foreman) failed
[Thu Sep 23 20:20:30.558602 2021] [proxy:error] [pid 270] AH00959: ap_proxy_connect_backend disabling worker for (foreman) for 0s
[Thu Sep 23 20:20:30.558610 2021] [proxy_http:error] [pid 270] [client ::1:59058] AH01114: HTTP: failed to make connection to backend: httpd-UDS
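When Apache reports AH02454 against the unix socket, the two usual causes are that Puma never created the socket, or that the apache user cannot reach it. A small diagnostic sketch (the socket path is the one from the config above):

```shell
# Diagnose "Connection refused ... /run/foreman.sock" from mod_proxy:
# check that the socket exists and that its owner/group/mode let Apache in.
SOCK=/run/foreman.sock
if [ -S "$SOCK" ]; then
  ls -l "$SOCK"   # the apache user must be able to connect to this socket
else
  echo "$SOCK is missing - is Puma running with a unix:// bind?"
fi
```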

Any ideas on what in foreman-installer determines how the ProxyPass gets laid down?

It is hard coded here:

We’ve done this since the module isn’t really aimed at supporting your use case.

The actual class to configure Apache can work but it doesn’t support IPA.

Why do you use a Docker container with our regular installer?

How was this ever http://127.0.0.1:3000 in my config? Was this changed in some version? Perhaps I accidentally built with an old version of Foreman, but it’s strange that when I ran the rails server manually in the container and saw it working, then went to rebuild the container, the value changed. I can easily sed it to work, or if you have some ideas why it’s failing to connect to the unix domain socket, that would work as well (though I suspect some systemd need here).

As for why we use Docker: we use Docker for everything; nothing in our environment runs bare on machines, it’s all containerized. We’ve been running Foreman this way for 5+ years. We run the installer in the build container to get most of what is needed, then run a start.sh script that fixes up any configs with ENV values that make sense for the deployed location.

We changed it to a unix socket, initially for security: if you’re on localhost, you can MITM Foreman by sending spoofed headers that suggest you’re authenticated. We use a unix socket that only Apache can write to, which prevents this. Later it turned out to solve some reliability issues as well.
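If you’re starting Puma by hand in the container, the unix-socket equivalent of the earlier TCP bind would look roughly like this (a sketch, assuming the FOREMAN_BIND variable from earlier in the thread also accepts a unix:// URL; verify against your Foreman version’s puma config):

```shell
# Bind Puma to the socket Apache's ProxyPass expects, instead of TCP 3000.
FOREMAN_BIND="unix:///run/foreman.sock"
export FOREMAN_BIND
echo "would bind Puma to $FOREMAN_BIND"
# exec /usr/share/foreman/bin/rails server --environment production
# Note: the socket's owner/group must allow the apache user to connect.
```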

Interesting, but I don’t think we ever considered this use case for the installer. We heavily assume system packages and services are used. Not saying you can’t run Foreman inside Docker, just that the use with the installer is surprising.

I’ve also had to change the ProxyPass a bit more. The default was timing out connections, so whenever an EC2 instance was provisioned, the connection/proxy would drop while waiting for the instance to come online to run the finish scripts.

I was getting errors like:

Backtrace for 'Action failed' error (ActiveRecord::ConnectionTimeoutError): could not obtain a connection from the pool within 5.000 seconds (waited 5.117 seconds); all pooled connections were in use

and

<html><head>
<title>502 Proxy Error</title>
</head><body>
<h1>Proxy Error</h1>
<p>The proxy server received an invalid
response from an upstream server.<br />
The proxy server could not handle the request <em><a href="/api/v2/hosts">POST&nbsp;/api/v2/hosts</a></em>.<p>
Reason: <strong>Error reading from remote server</strong></p></p>
</body></html>

So I modified 05-foreman.conf and 05-foreman-ssl.conf with:

ProxyPass / http://127.0.0.1:3000/ retry=1 acquire=3000 timeout=600 keepalive=On

And that seems to have taken care of the timeouts and disconnects. Mostly just sharing my experiences in case anyone else runs into this.
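Worth noting: the ActiveRecord::ConnectionTimeoutError above is the Rails database connection pool being exhausted, which the proxy settings don’t touch. If that one comes back under concurrent API load, the pool size in database.yml is the relevant knob (a sketch; the path is the usual Foreman location, the value is purely illustrative, and the rule of thumb is pool >= Puma’s max threads per worker):

```yaml
# /etc/foreman/database.yml (fragment, illustrative values)
production:
  adapter: postgresql
  database: foreman
  pool: 16   # raise from the small default so it covers Puma's thread count
```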

I’ve been noticing some pretty heavy memory consumption and slow performance with Puma, and I’m wondering how to configure it in single mode vs. cluster mode to see whether it makes much of a difference in performance and memory consumption.

I started Foreman with FOREMAN_PUMA_WORKERS=0 and that seems to have worked. So far I’m not seeing a gradual loss of memory, and nothing has gone into swap.
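For anyone else experimenting: the worker/thread knobs are plain environment variables read by Foreman’s Puma config. FOREMAN_PUMA_WORKERS comes from this thread; the thread variables below are, to my knowledge, FOREMAN_PUMA_THREADS_MIN/MAX, but verify the names against your config/puma/production.rb:

```shell
# Single-mode Puma: 0 workers means one process, concurrency via threads only.
FOREMAN_PUMA_WORKERS=0
FOREMAN_PUMA_THREADS_MIN=0
FOREMAN_PUMA_THREADS_MAX=16
export FOREMAN_PUMA_WORKERS FOREMAN_PUMA_THREADS_MIN FOREMAN_PUMA_THREADS_MAX
echo "puma: workers=$FOREMAN_PUMA_WORKERS threads=$FOREMAN_PUMA_THREADS_MIN:$FOREMAN_PUMA_THREADS_MAX"
```

Cluster mode is the same variables with workers > 0; each worker is a full copy of the app, which is where the per-worker memory multiplies.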

There is something really off with this version. We are running a 2.2 box, a 2.5 box in cluster mode, and a 2.5 box in single mode, and you can see the slow memory leak on the boxes running 2.5, versus the 2.2 box, which holds a pretty flat line for memory consumption. The green line is when I restarted that box after changing from cluster mode to single mode, the orange (yellowish?) line is the 2.2 box, and the blue is the other 2.5 box.

Wondering if this is related to https://bugzilla.redhat.com/show_bug.cgi?id=1976051

Perhaps a new thread would make sense for this, as it is less about being unable to get Foreman to talk on the correct port and more about getting Foreman to not forget how memory works.

Please open a new thread since it’s indeed different. I know @evgeni is also looking at memory consumption.

The memory consumption is the new SettingsRegistry, tracked in Bug #33640: Improve caching of SettingRegistry#load_values - Foreman

@evgeni, do you still need a new thread on this given what we are seeing, or should I just track the bug and see how this goes?

I think the above issue, plus Bug #33639: Reduce allocations in FactImporter#update_facts - Foreman, should be sufficient for now. I would ask you to re-try after all the relevant patches have been released; if you still have memory concerns then, we can take another look on the new codebase.

Are there any plans for backporting the fixes into a 2.5 release, or will these only be resolved in the 3.x versions?

I am not an authoritative source for that, but so far the patches weren’t huge, so I’d think they are definitely backportable. Whether the release managers think that’s a good idea (in terms of possible other regressions), is another question.

Maybe @tbrisker or @upadhyeammit can chime in on that.

The fact import patches are probably safe to pick into 2.5; the settings one might be more complicated, since there were a lot of changes to the settings registry after 2.5 came out. If you are able to manually pick them and test with 2.5, that would be a great indication of their viability.

For 3.0 I’ve prepared this PR:

I’ve just applied it to my server:

cd /usr/share/foreman
curl https://github.com/theforeman/foreman/pull/8820.patch | patch -p1
systemctl restart foreman.service

Just skip the test files (which we don’t ship in production). I’m going to see how memory changes over time; last time it took about two days to reach a stable level.

I’ll note that I upgraded straight from 2.4 to 3.0 so I don’t have numbers for 2.5.
