Foreman 3.0 - Redis Remote URL not honored for orchestrator

When configuring Redis, we are trying to use a remote shared cluster, since we have many nodes. We have configured both dynflow_redis_url and rails_cache_store (we use the Puppet modules for Foreman).

The “worker” services seem to work correctly, using our remote URL and starting up without issue.
The “orchestrator” service, however, ignores our clustered URL.


[me@myserver ~]$ sudo cat /etc/foreman/settings.yaml | grep redis
  :redis_url: redis://my.correct.reds.url:6379/6
  :type: redis
    - redis://my.correct.reds.url:6379/0

It looks like both the worker and the orchestrator use the systemd unit file below. The file has a localhost default; the “workers” correctly override it with the URL from settings.yaml, but the “orchestrator” does not. Furthermore, the orchestrator then fails to start, because Redis isn’t even installed locally when a remote URL is specified:

Description=Foreman jobs daemon - %i on sidekiq

# Greatly reduce Ruby memory fragmentation and heap usage
ExecStart=/usr/libexec/foreman/sidekiq-selinux -e ${RAILS_ENV} -r ${DYNFLOW_SIDEKIQ_SCRIPT} -C /etc/foreman/dynflow/%i.yml
ExecReload=/usr/bin/kill -TSTP $MAINPID


# if we crash, restart


I’m not clear on:

  • Can orchestrator be configured to use a remote redis cluster, or is there some technical reason why it must be local? Worker seems to spin up using remote redis just fine.

I’m toying with (for now) simply overriding the Environment=DYNFLOW_REDIS_URL=redis://localhost:6379/0 line in the unit file with the correct default, but I want to confirm I’m not going to break something by having all my orchestrator sidekiq services use a single Redis URL, similar to the worker nodes…

Expected outcome:
Remote Redis configuration should be honored when specified.

Foreman and Proxy versions:
Foreman 3.0
Foreman and Proxy plugin versions:
Foreman 3.0
Distribution and version:
Other relevant data:

It definitely can be done, at least by hand. There is no real reason why it wouldn’t be possible. I’m not sure how the installer handles this, but apparently not too well.

This is rather odd; I’ll have to look into it. I’d expect both to behave the same.

The solution will be something along these lines, but instead of touching the original file, I’d suggest putting it in an override drop-in:

# /etc/systemd/system/dynflow-sidekiq@.service.d/override.conf
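A sketch of what that drop-in file might contain, reusing the (redacted) remote URL from the settings.yaml excerpt above — substitute your actual host and database number:

```ini
[Service]
# Override the localhost default baked into the shipped unit file.
# The hostname below is the redacted example from this thread.
Environment=DYNFLOW_REDIS_URL=redis://my.correct.reds.url:6379/0
```

After creating the drop-in, run systemctl daemon-reload and restart the dynflow-sidekiq@ instances so they pick up the new environment.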

All the dynflow-sidekiq service instances should be using the same Redis instance, so you definitely won’t break anything that way.

Hey @aruzicka - thanks for the reply!

Good to know I’m not completely crazy!

One other clarifying question I “think” I know the answer to:
I should be able to point all worker and orchestrator nodes to the same remote address, something like redis://redis.cluster.address:6379/2, correct? I don’t need the “worker” and “orchestrator” nodes pointed at separate namespaces per server. I believe the answer is no, because that’s what Redis is “for”, but I wanted to be sure.
I also see that Sidekiq strongly recommends a “persistent” instance for the orchestrator/workers, while Rails would prefer a “non-persistent” instance. This would translate to my running two Redis instances, which can still live on the same infrastructure, if I’m reading it properly?

That’s correct; just point everything to the same URL.

Sounds about right. AFAIK Sidekiq recommends a persistent instance so your jobs won’t get lost.
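To illustrate the two-instance split discussed above, here is a minimal sketch of two Redis configurations on the same host: a persistent one for Dynflow/Sidekiq jobs and a non-persistent one for the Rails cache. The ports, paths, and memory limit are assumptions for illustration, not anything Foreman mandates:

```conf
# /etc/redis/redis-sidekiq.conf -- persistent instance for Dynflow/Sidekiq
port 6379
appendonly yes                   # AOF persistence so queued jobs survive a restart
dir /var/lib/redis-sidekiq

# /etc/redis/redis-cache.conf -- non-persistent instance for the Rails cache
port 6380
save ""                          # disable RDB snapshots; cache data is disposable
appendonly no
maxmemory 256mb
maxmemory-policy allkeys-lru     # evict least-recently-used keys under memory pressure
```

You would then point dynflow_redis_url at the first instance and rails_cache_store at the second.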