Foreman Installer (v2.0) fails with external database

Problem: When using foreman-installer to install a fresh Katello scenario with an external database, the installer fails during the DB migrations.
When I restore a dump of a local installation (also without data; taken directly after running foreman-installer) onto the target database server, the installer is able to migrate the existing schema and use the external database server.
It also creates some tables, which indicates that there is no permission problem.

Expected outcome: Installer migrates and seeds the database successfully

Foreman and Proxy versions: Foreman 2.0 / Katello 3.15 / PostgreSQL 10 or 11 / otherwise defaults from the installer

Foreman and Proxy plugin versions: Defaults from installer

Distribution and version: CentOS 7.8

Other relevant data:
I’ve opened a bug here: Bug #29689: Katello Scenario with external database fails during migration - Installer - Foreman
There you can find the complete log output.

I’m running the database on Azure Database for PostgreSQL. Latency between the database and the Foreman VM might be the cause - I will investigate this next…

We have seen this error intermittently in katello tests.
Bug #27286: katello tests on foreman prs fail with ERROR: relation "http_proxies" does not exist - Katello - Foreman might also be related.

When I run the database on a different host, e.g. in a Docker container, migration and seeding work as expected. I’ve also tried PostgreSQL 11 and 12 - both work using Docker.

So I guess this is a problem specific to Azure Database for PostgreSQL, either latency-related or caused by some unusual default configuration.

I will continue to investigate and report back anyway - even though this does not seem to be a problem of the installer.

Looks like there is a validation that tries to load the HttpProxy table, which might not yet be present during migration, here:

That validation should get an early return if the HttpProxy table isn’t there yet - PRs welcome :wink:
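A minimal sketch of that early return, assuming the validation has access to the ActiveRecord connection. The method and class names below are illustrative only, not the actual Foreman code; the key idea is checking `table_exists?` before touching the table:

```ruby
# Hypothetical sketch of the early-return guard; names are illustrative,
# not Foreman's actual methods.
def load_http_proxy_settings(connection)
  # Bail out while migrations haven't created the table yet, instead of
  # raising ERROR: relation "http_proxies" does not exist.
  return [] unless connection.table_exists?('http_proxies')

  connection.select_all('http_proxies')
end

# Minimal stand-in for an ActiveRecord connection, just to demonstrate
# the guard without a database.
FakeConnection = Struct.new(:tables) do
  def table_exists?(name)
    tables.include?(name)
  end

  def select_all(name)
    ["row from #{name}"]
  end
end

puts load_http_proxy_settings(FakeConnection.new([])).inspect
# => []
puts load_http_proxy_settings(FakeConnection.new(['http_proxies'])).inspect
# => ["row from http_proxies"]
```

In real Foreman code the guard would use the model's connection (ActiveRecord exposes `connection.table_exists?`), so settings initialization simply skips work until the migration has run.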

After some trial and error it seems like this problem is the result of high network latency to the database.

If I use the exact same configuration for Azure Database for PostgreSQL but pick a region close to the Foreman VM (about 8 ms latency), the migration works fine. As soon as latency to the database is relatively high (~80 ms), the migrations fail.
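For reference, a rough way to probe that round trip from Ruby. The server below is a local stand-in so the snippet runs anywhere; in practice you would point host and port at the real PostgreSQL endpoint (e.g. the Azure hostname and 5432):

```ruby
require 'socket'
require 'benchmark'

# Local stand-in server so the snippet is self-contained; replace host
# and port with your real PostgreSQL endpoint to measure actual latency.
server = TCPServer.new('127.0.0.1', 0)
host = '127.0.0.1'
port = server.addr[1]

# One TCP connect approximates a single network round trip. Every
# statement ActiveRecord issues pays at least one such round trip, so
# the many small queries run by migrations and db:seed multiply an
# ~80 ms latency into very long (or failing) runs.
elapsed = Benchmark.realtime { TCPSocket.new(host, port).close }
puts format('TCP connect took %.2f ms', elapsed * 1000)

server.close
```

Comparing this number between the near and far Azure regions should roughly match the 8 ms vs ~80 ms difference seen above.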

So I’m wondering whether a fix is even needed. With such high network latency, performance would be poor anyway…

It would be good to fix anyway, as we’re sometimes seeing the same error on CI runs. We can’t assume the HttpProxy table already exists when running the settings initializations.