Foreman 3.0 - Redis Remote URL not honored for orchestrator

Problem:
When configuring Redis, we are trying to use a remote shared cluster, since we have many nodes. We have configured both
dynflow_redis_url and rails_cache_store (we use the Puppet modules for Foreman).
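
Roughly, the Puppet/Hiera side looks something like this (the parameter names are the ones above; the exact keys inside the rails_cache_store hash are from memory, so treat them as an approximation, and the hostname is a placeholder):

# Hiera sketch - verify key names against the puppet-foreman class parameters
foreman::dynflow_redis_url: 'redis://redis.cluster.address:6379/6'
foreman::rails_cache_store:
  type: 'redis'
  urls:
    - 'redis.cluster.address:6379/0'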

The “worker” services seem to work correctly - they pick up our remote URL, start up, and run without issue.
The “orchestrator” service, however, ignores our clustered URL.

settings.yaml

[me@myserver ~]$ sudo cat /etc/foreman/settings.yaml | grep redis
  :redis_url: redis://my.correct.reds.url:6379/6
  :type: redis
    - redis://my.correct.reds.url:6379/0
[me@myserver~]$
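
For context, those matches come from the :dynflow: and :rails_cache_store: sections, roughly like this (a trimmed sketch reconstructed from the indentation above, not the full file):

:dynflow:
  :redis_url: redis://my.correct.reds.url:6379/6
:rails_cache_store:
  :type: redis
  :urls:
    - redis://my.correct.reds.url:6379/0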

It “looks” like both the worker and the orchestrator use the systemd unit file below. The file has a localhost default; the “workers” correctly override it with the URL from settings.yaml, but the “orchestrator” does not. Furthermore, the orchestrator then fails to start, because Redis isn’t even installed locally when a remote URL is specified: https://github.com/theforeman/puppet-foreman/blob/d84fb8edb340904f9afcbf869fa66ea391f267ed/manifests/config.pp#L10-L16

[Unit]
Description=Foreman jobs daemon - %i on sidekiq
Documentation=https://theforeman.org
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=notify
User=foreman
TimeoutSec=300
PrivateTmp=true
Environment=RAILS_ENV=production
Environment=DYNFLOW_SIDEKIQ_SCRIPT=/usr/share/foreman/extras/dynflow-sidekiq.rb
Environment=DYNFLOW_REDIS_URL=redis://localhost:6379/0
Environment=REDIS_PROVIDER=DYNFLOW_REDIS_URL
# Greatly reduce Ruby memory fragmentation and heap usage
# https://www.mikeperham.com/2018/04/25/taming-rails-memory-bloat/
Environment=MALLOC_ARENA_MAX=2
WorkingDirectory=/usr/share/foreman
ExecStart=/usr/libexec/foreman/sidekiq-selinux -e ${RAILS_ENV} -r ${DYNFLOW_SIDEKIQ_SCRIPT} -C /etc/foreman/dynflow/%i.yml
ExecReload=/usr/bin/kill -TSTP $MAINPID

SyslogIdentifier=dynflow-sidekiq@%i

# if we crash, restart
RestartSec=1
Restart=on-failure

[Install]
WantedBy=multi-user.target
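
A quick way to confirm what each instance actually ends up with (the instance names here are the usual ones; check /etc/foreman/dynflow/ for what exists on your box):

# Environment= entries systemd passes to a given instance (unit file plus any drop-ins)
systemctl show dynflow-sidekiq@orchestrator -p Environment
systemctl show dynflow-sidekiq@worker -p Environment

# live TCP connections of the sidekiq processes, to see which Redis they really talk to
sudo ss -tnp | grep 6379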

I’m not clear on:

  • Can the orchestrator be configured to use a remote Redis cluster, or is there some technical reason why it must be local? The workers seem to spin up using remote Redis just fine.

I’m toying with (for now) simply overriding the Environment=DYNFLOW_REDIS_URL=redis://localhost:6379/0 line in the unit file with the correct value, but I want to confirm I’m not going to break something by having all my orchestrator sidekiq services use a single Redis URL, similar to the worker nodes…
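
(Side note: systemctl cat prints the template together with any drop-ins already in place, which makes it easy to see exactly what a given instance is getting:)

systemctl cat dynflow-sidekiq@orchestrator.service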

Expected outcome:
Remote Redis configuration should work when specified.

Foreman and Proxy versions:
Foreman 3.0
Foreman and Proxy plugin versions:
Foreman 3.0
Distribution and version:
RHEL8.4
Other relevant data:

It definitely can be done, at least by hand. There is no real reason why it wouldn’t be possible. I’m not sure how the installer handles this, but apparently not too well.

This is rather odd, I’ll have to look into it. I’d expect both to behave the same.

The solution will be something along these lines, but instead of touching the original file, I’d suggest putting it in an override drop-in:

# /etc/systemd/system/dynflow-sidekiq@.service.d/override.conf
[Service]
Environment=DYNFLOW_REDIS_URL=redis://redis.cluster.address:6379/0

All the dynflow-sidekiq service instances should be using the same redis instance, so you definitely won’t break things by that.
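
To apply it, something along these lines should do (systemctl edit creates the drop-in for the template, so it affects every instance):

# opens an editor on /etc/systemd/system/dynflow-sidekiq@.service.d/override.conf
sudo systemctl edit dynflow-sidekiq@.service
# systemctl edit reloads units on save, but an explicit reload doesn't hurt
sudo systemctl daemon-reload
# globs match loaded units, so this restarts every running dynflow-sidekiq instance
sudo systemctl restart 'dynflow-sidekiq@*'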

Hey @aruzicka - thanks for the reply!

Good to know I’m not completely crazy!

Two other clarifying questions I “think” I know the answers to:
  • I should be able to point all worker and orchestrator nodes to the same remote address, something like redis://redis.cluster.address:6379/2, correct? I don’t need the “worker” and “orchestrator” nodes pointed at different namespaces per server. I believe the answer is “no, you don’t”, because that’s what Redis is “for”, but I wanted to be sure.
  • I see that Sidekiq strongly recommends a “persistent” instance for the orchestrator/workers, while Rails prefers a “non-persistent” instance. That would translate to me running two Redis instances, which can still be on the same infrastructure if I’m reading it properly?

That’s correct, just point everything at the same URL.

Sounds about right. AFAIK Sidekiq recommends a persistent instance so your jobs don’t get lost.
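
As a rough sketch of what the “two instances” split could look like in redis.conf terms (values are placeholders, not tuned recommendations):

# jobs instance (Dynflow/Sidekiq) - persist so queued jobs survive a restart
appendonly yes

# cache instance (rails_cache_store) - the data is disposable, so skip persistence
# and let Redis evict old cache entries once it hits a memory cap
save ""
appendonly no
maxmemory 2gb
maxmemory-policy allkeys-lru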