Very slow report generation from templates

Problem:
Good afternoon, dear community.
I am facing the problem of slow report generation from templates: I go to Monitor - Report Templates - Host - Statuses and click Generate. The browser shows the message “Report Host - Statuses is now being generated, the download will start once it’s done. You can come to this page later to get the results. The result is available for 24 hours.”.
Generation then takes a very long time (so long that the user’s session times out).
Where is the bottleneck in this procedure?
What can be optimized?
Number of hosts - about 200 VMs (for testing purposes)
The server has 20GB of RAM and 16 processors.

PS: on a test bench with a smaller number of managed hosts (about 10-20), generating this report takes less than a minute.

Thanks in advance!
Expected outcome:
Fast report generation.
Foreman and Proxy versions:
Foreman 2.1.3
Foreman and Proxy plugin versions:

Distribution and version:

Other relevant data:

Hello, 200 VMs should be nothing. The Host - Statuses report results in just a table of hosts and all their sub-statuses. It would be interesting to turn on the debug log level, enable the SQL logger, restart the Foreman server and dynflowd, and watch production.log or the Dynflow logs while you render it. It may be that the includes :host_statuses optimization does not work for some reason.
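For reference, this is roughly what that looks like on a 2.1-era install. The keys below are the usual ones in /etc/foreman/settings.yaml, but double-check them against the manual for your version, and merge them by hand rather than appending if the :logging:/:loggers: sections already exist:

# sketch: enable the debug level and the SQL logger, then restart the services that render reports
cat <<'EOF' >> /etc/foreman/settings.yaml
:logging:
  :level: debug
:loggers:
  :sql:
    :enabled: true
EOF
systemctl restart foreman dynflowd   # or restart httpd instead of foreman if you still run under Passenger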

However, testing this on my dev setup, I see it only creates 2 SQL queries. Your Foreman version is quite old, but the code hasn’t changed since 2.1.

Given that a smaller batch works fine, can you isolate it to a specific host that makes it slow? Or does the time grow linearly with the number of hosts?
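If it helps with bisecting, you can also render the same template from the CLI against a host search and time it for subsets of hosts. A sketch, assuming hammer is installed and that the template’s host-filter input is called hosts (check hammer report-template generate --help on your version):

# hypothetical bisecting run: generate the report for a subset of hosts and time it
time hammer report-template generate --name "Host - Statuses" --inputs hosts="name ~ web"
# repeat with progressively larger subsets to see whether the time grows linearly or one host dominates
time hammer report-template generate --name "Host - Statuses" --inputs hosts="os = CentOS"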

Thank you very much for your reply. I checked the logs (production.log in /var/log/foreman) and found entries like:

2021-07-14T09:56:11 [I|app|227678d3] Started GET "/api/v2/report_templates/125-Host%20-%20Statuses/report_data/b351ff9c-70be-40de-aced-aa756e8ac69a" for 127.0.0.1 at 2021-07-14 09:56:11 +0300
2021-07-14T09:56:11 [I|app|227678d3] Processing by Api::V2::ReportTemplatesController#report_data as JSON
2021-07-14T09:56:11 [I|app|227678d3]   Parameters: {"apiv"=>"v2", "id"=>"125-Host - Statuses", "job_id"=>"b351ff9c-70be-40de-aced-aa756e8ac69a"}
2021-07-14T09:56:11 [I|app|227678d3] Completed 204 No Content in 64ms (ActiveRecord: 29.5ms | Allocations: 15698)

What could be the reason for error 204?

The response is OK; it just informs your browser that the report has not been generated yet. See Foreman :: Manual for how to enable the debug log level and the SQL logger, then look at the log again (after restarting the services I mentioned).

Thanks for the explanation.
I did what you indicated above and got the following messages in the log:

2021-07-15T12:42:55 [I|app|1d512c24] Started GET "/api/v2/report_templates/126-User%20-%20Registered%20Users/report_data/8c1cfd0a-b94c-44ad-ae30-e4bda2f2de75" for 127.0.0.1 at 2021-07-15 12:42:55 +0300
2021-07-15T12:42:55 [I|app|1d512c24] Processing by Api::V2::ReportTemplatesController#report_data as JSON
2021-07-15T12:42:55 [I|app|1d512c24]   Parameters: {"apiv"=>"v2", "id"=>"126-User - Registered Users", "job_id"=>"8c1cfd0a-b94c-44ad-ae30-e4bda2f2de75"}
2021-07-15T12:42:55 [D|sql|1d512c24]   User Load (0.7ms)  SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT $2  [["id", 4], ["LIMIT", 1]]
2021-07-15T12:42:55 [D|sql|1d512c24]   AuthSource Load (0.2ms)  SELECT "auth_sources".* FROM "auth_sources" WHERE "auth_sources"."id" = $1 LIMIT $2  [["id", 1], ["LIMIT", 1]]
2021-07-15T12:42:55 [D|tax|1d512c24] Current location set to none
2021-07-15T12:42:55 [D|sql|1d512c24]   Organization Load (0.2ms)  SELECT "taxonomies".* FROM "taxonomies" WHERE "taxonomies"."type" = $1 AND "taxonomies"."id" = $2 ORDER BY "taxonomies"."title" ASC LIMIT $3  [["type", "Organization"], ["id", 67], ["LIMIT", 1]]
2021-07-15T12:42:55 [D|tax|1d512c24] Current organization set to PC
2021-07-15T12:42:55 [D|sql|1d512c24]   Location Load (0.2ms)  SELECT "taxonomies".* FROM "taxonomies" WHERE "taxonomies"."type" = $1 ORDER BY "taxonomies"."title" ASC  [["type", "Location"]]
2021-07-15T12:42:55 [D|sql|1d512c24]    (0.2ms)  SELECT templates.id FROM "templates"
2021-07-15T12:42:55 [D|sql|1d512c24]    (0.3ms)  SELECT "taxonomies"."id" FROM "taxonomies" WHERE (("taxonomies"."ancestry" LIKE '67/%' OR "taxonomies"."ancestry" = '67') OR "taxonomies"."id" = 67) ORDER BY "taxonomies"."title" ASC
2021-07-15T12:42:55 [D|sql|1d512c24]    (0.2ms)  SELECT DISTINCT "taxable_taxonomies"."taxable_id" FROM "taxable_taxonomies" WHERE "taxable_taxonomies"."taxable_type" = $1 AND "taxable_taxonomies"."taxonomy_id" = $2  [["taxable_type", "ReportTemplate"], ["taxonomy_id", 67]]
2021-07-15T12:42:55 [D|sql|1d512c24]   CACHE Location Load (0.0ms)  SELECT "taxonomies".* FROM "taxonomies" WHERE "taxonomies"."type" = $1 ORDER BY "taxonomies"."title" ASC  [["type", "Location"]]
2021-07-15T12:42:55 [D|sql|1d512c24]   CACHE  (0.0ms)  SELECT templates.id FROM "templates"
2021-07-15T12:42:55 [D|sql|1d512c24]   CACHE  (0.0ms)  SELECT "taxonomies"."id" FROM "taxonomies" WHERE (("taxonomies"."ancestry" LIKE '67/%' OR "taxonomies"."ancestry" = '67') OR "taxonomies"."id" = 67) ORDER BY "taxonomies"."title" ASC
2021-07-15T12:42:55 [D|sql|1d512c24]   CACHE  (0.0ms)  SELECT DISTINCT "taxable_taxonomies"."taxable_id" FROM "taxable_taxonomies" WHERE "taxable_taxonomies"."taxable_type" = $1 AND "taxable_taxonomies"."taxonomy_id" = $2  [["taxable_type", "ReportTemplate"], ["taxonomy_id", 67]]
2021-07-15T12:42:55 [D|sql|1d512c24]   ReportTemplate Exists? (0.3ms)  SELECT 1 AS one FROM "templates" WHERE (templates.id IN (124,125,126,128)) AND "templates"."type" = $1 AND (templates.id IN (124,125,126,128)) LIMIT $2  [["type", "ReportTemplate"], ["LIMIT", 1]]
2021-07-15T12:42:55 [D|sql|1d512c24]   ReportTemplate Load (0.4ms)  SELECT "templates".* FROM "templates" WHERE (templates.id IN (124,125,126,128)) AND "templates"."type" = $1 AND (templates.id IN (124,125,126,128)) AND "templates"."id" = $2 ORDER BY templates.name LIMIT $3  [["type", "ReportTemplate"], ["id", 126], ["LIMIT", 1]]
2021-07-15T12:42:55 [D|app|1d512c24] Body:
2021-07-15T12:42:55 [I|app|1d512c24] Completed 204 No Content in 17ms (ActiveRecord: 2.7ms | Allocations: 6815)
2021-07-15T12:42:55 [D|sql|1d512c24]    (0.1ms)  BEGIN
2021-07-15T12:42:55 [D|sql|1d512c24]   ActiveRecord::SessionStore::Session Update (0.3ms)  UPDATE "sessions" SET "data" = $1, "updated_at" = $2 WHERE "sessions"."id" = $3  [["data", "BAh7DkkiCXVzZXIGOgZFRmkJSSIUb3JnYW5pemF0aW9uX2lkBjsAVGlISSIQ\nbG9jYXRpb25faWQGOwBUSSIABjsAVEkiD2V4cGlyZXNfYXQGOwBGbCsHLxHw\nYEkiEmxvZ2dpbmdfdG9rZW4GOwBUSSIpMzRiYjE0NzctNmMzNi00NmI4LThh\nYTYtN2FmMzQ2OWU2MjNjBjsARkkiC2xvY2FsZQY7AEYiB3J1SSIQX2NzcmZf\ndG9rZW4GOwBGSSIxenlhL2FGS1Nka2dGN0dsNzVsQmhGM1oxeE5NZkVZd0F4\nQ1MyNlRHd09lcz0GOwBGSSIecmVkaXJlY3RfdG9fdXJsX2xvY2F0aW9ucwY7\nAFQiI2h0dHBzOi8vdXByLW1haW4uY3JwLnJ6ZC9ob3N0c0kiInJlZGlyZWN0\nX3RvX3VybF9vcmdhbml6YXRpb25zBjsAVCIjaHR0cHM6Ly91cHItbWFpbi5j\ncnAucnpkL2hvc3Rz\n"], ["updated_at", "2021-07-15 09:42:55.468502"], ["id", 14011385]]
2021-07-15T12:42:55 [D|sql|1d512c24]    (0.7ms)  COMMIT

At the same time, filtering the log by the job_id produces a bunch of records of the form:

2021-07-15T13:00:58 [I|app|741cd5ae] Started GET "/api/v2/report_templates/126-User%20-%20Registered%20Users/report_data/8c1cfd0a-b94c-44ad-ae30-e4bda2f2de75" for 127.0.0.1 at 2021-07-15 13:00:58 +0300
2021-07-15T13:00:58 [I|app|741cd5ae]   Parameters: {"apiv"=>"v2", "id"=>"126-User - Registered Users", "job_id"=>"8c1cfd0a-b94c-44ad-ae30-e4bda2f2de75"}

Where should I look next?

OK, that does not indicate anything too bad so far. Let’s look at /var/log/foreman/dynflow_executor.*, which I think is the right log file on your older version. You may need to run service dynflowd restart first so that it picks up the same logging configuration.
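In other words, something like:

# restart the legacy executor so it picks up the new log level, then watch its logs while you render the report
service dynflowd restart
tail -f /var/log/foreman/dynflow_executor*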

Thanks for your reply, Marek!
I found a lot of records like this in the log file:

2021-07-19T09:29:45 [E|dyn|] Pruned 1 envelopes for invalidated world ff2f6cbb-fd9c-4252-a63e-5909f7e7316a
2021-07-19T09:29:45 [E|dyn|] invalid worlds found {"ff2f6cbb-fd9c-4252-a63e-5909f7e7316a"=>:invalidated, "5cb3b6af-2340-43d8-8e07-d174b920b5e4"=>:valid, "e7e10cf0-32e3-4721-a76a-82f4188148f1"=>:valid, "e15f3cd3-20e6-4404-8117-8757179eaa35"=>:valid, "7418c1c6-e769-41c1-8045-0c12144108ae"=>:valid, "eb425af9-07d1-48fa-a5bf-c514bc2ab7f9"=>:valid, "802d3d1d-122c-4752-b658-611c7582bd33"=>:valid, "ff2c4faf-945c-404b-abb9-8774b3d8c819"=>:valid, "3ce6e550-672a-410f-8d09-e5a7c752335d"=>:valid, "a1d9e8eb-51bb-4269-9637-cb39c9a7d571"=>:valid}

and journalctl -u dynflow-sidekiq@worker.service also shows a lot of records like this:

 2021-07-17T05:57:21.326Z 536432 TID-gs9amykp4 ERROR: Heartbeat thread error: Connection lost (ECONNRESET)

What can this mean, and in what direction should I continue investigating?

Thanks in advance for the answers

That’s interesting. It could be related; this is the background processing tool in which we render the report. It seems that for some reason it restarts or crashes. @aruzicka, do you have ideas where we’d find more information about this? Is there some guard that would restart Dynflow, e.g. if it eats too much memory?


Rebooting Dynflow, or even the whole server, does not help.

Starting with 2.0, the Sidekiq-based implementation should be used. If you restarted dynflowd (the old implementation), turn it off.

The logs from the processing engine are split between /var/log/foreman/production.log and whatever ends up in journalctl -u 'dynflow-sidekiq@*'
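Roughly, with the unit names as they show up in your journal:

# make sure only the sidekiq-based workers are running, then follow their logs
systemctl disable --now dynflowd
systemctl status 'dynflow-sidekiq@*'
journalctl -fu 'dynflow-sidekiq@*'   # plus tail -f /var/log/foreman/production.log for the other half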

No, there’s nothing like that.

There is a thread inside the worker process which wakes up every now and then and lets the framework (Sidekiq) know that it is still alive. Looking at the implementation, if it happens occasionally, it should not matter on its own.

My guess would be that report generation is CPU-heavy, up to the point where it prevents the heartbeat from working reliably. It feels like it is just a symptom of reporting being slow, not a cause.
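One way to sanity-check that would be to watch the worker processes while the report renders; a rough sketch (pgrep matches on the sidekiq process title, and pidstat needs the sysstat package):

# watch CPU/memory of the sidekiq workers while the slow report is being generated
top -p "$(pgrep -d, -f sidekiq)"
# or sample them every 5 seconds
pidstat -u -p "$(pgrep -d, -f sidekiq)" 5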

Could you run tail -f /var/log/foreman/production.log and journalctl -fu 'dynflow-sidekiq@*', render a report (the one that takes a long time), and then post the logs produced by the two commands?

Thanks a lot for your reply, @aruzicka!

I just ran journalctl -u 'dynflow-sidekiq@*' and, even without rendering the report, I got a lot of records like:

Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: 2021-07-20T08:23:01.093Z 2297423 TID-gmxw01f33 ERROR: Heartbeat thread error: Connection lost (ECONNRESET)
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: 2021-07-20T08:23:01.650Z 2297423 TID-gmxvonx8b INFO: Running in ruby 2.5.5p157 (2019-03-15 revision 67260) [x86_64-linux]
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: 2021-07-20T08:23:01.650Z 2297423 TID-gmxvonx8b INFO: See LICENSE and the LGPL-3.0 for licensing details.
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: 2021-07-20T08:23:01.650Z 2297423 TID-gmxvonx8b INFO: Upgrade to Sidekiq Pro for more features and support: http://sidekiq.org
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: Connection lost (ECONNRESET)
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:267:in `rescue in io'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:259:in `io'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:271:in `read'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:125:in `block in call'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:241:in `block (2 levels) in process'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:384:in `ensure_connected'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:231:in `block in process'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:319:in `logging'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:230:in `process'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:125:in `call'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:109:in `block in connect'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:306:in `with_reconnect'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:105:in `connect'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:381:in `ensure_connected'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:231:in `block in process'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:319:in `logging'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:230:in `process'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis/client.rb:125:in `call'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis.rb:286:in `block in info'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis.rb:52:in `block in synchronize'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/ruby/monitor.rb:226:in `mon_synchronize'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis.rb:52:in `synchronize'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/redis-4.1.2/lib/redis.rb:285:in `info'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq.rb:115:in `block in redis_info'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq.rb:97:in `block in redis'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/connection_pool-2.2.2/lib/connection_pool.rb:65:in `block (2 levels) in with'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/connection_pool-2.2.2/lib/connection_pool.rb:64:in `handle_interrupt'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/connection_pool-2.2.2/lib/connection_pool.rb:64:in `block in with'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/connection_pool-2.2.2/lib/connection_pool.rb:61:in `handle_interrupt'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/connection_pool-2.2.2/lib/connection_pool.rb:61:in `with'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq.rb:94:in `redis'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq.rb:108:in `redis_info'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/sidekiq-5.2.7/lib/sidekiq/cli.rb:73:in `run'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/share/gems/gems/sidekiq-5.2.7/bin/sidekiq:12:in `<top (required)>'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/bin/sidekiq:23:in `load'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: /usr/bin/sidekiq:23:in `<main>'
Jul 20 11:23:01 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: I, [2021-07-20T11:23:01.683112 #2297423]  INFO -- /default_dead_letter_handler: got dead letter #<Concurrent::Actor::Envelope:46935639579400> @message=[:start_termination, #<Concurrent::Promises::ResolvableFuture:0x000055601c3fbbd8 pending>], @sender=#<Thread:0x000055601c485ec8@io-worker-6@/usr/share/gems/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:471 sleep>, @address=#<Concurrent::Actor::Reference:0x0000556011aa5f20 /parallel-executor-core (Dynflow::Executors::Sidekiq::Core)>>
Jul 20 11:23:02 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: 2021-07-20T08:23:02.094Z 2297423 TID-gmxw01f33 ERROR: Heartbeat thread error: Connection lost (ECONNRESET)
Jul 20 11:23:03 srv.test.dev dynflow-sidekiq@orchestrator[2297423]: 2021-07-20T08:23:03.096Z 2297423 TID-gmxw01f33 ERROR: Heartbeat thread error: Connection lost (ECONNRESET)

That doesn’t look healthy. Anything in /var/log/redis/redis.log?
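You can also poke Redis directly to see whether it is reachable at all; a quick sketch (the socket path is the usual default and may differ on your box):

# basic reachability checks for the local Redis that sidekiq talks to
redis-cli ping
redis-cli -s /var/run/redis/redis.sock ping
systemctl status redis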

The current redis.log is clean, but the previous file contains records like these:

1532:C 16 Jul 2021 11:15:13.092 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1532:C 16 Jul 2021 11:15:13.092 # Redis version=5.0.3, bits=64, commit=00000000, modified=0, pid=1532, just started
1532:C 16 Jul 2021 11:15:13.092 # Configuration loaded
1532:C 16 Jul 2021 11:15:13.092 * supervised by systemd, will signal readiness
1532:M 16 Jul 2021 11:15:13.115 * Running mode=standalone, port=6379.
1532:M 16 Jul 2021 11:15:13.115 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1532:M 16 Jul 2021 11:15:13.115 # Server initialized
1532:M 16 Jul 2021 11:15:13.115 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1532:M 16 Jul 2021 11:15:13.115 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1532:M 16 Jul 2021 11:15:13.117 * DB loaded from disk: 0.003 seconds
1532:M 16 Jul 2021 11:15:13.117 * Ready to accept connections
1532:M 16 Jul 2021 11:15:13.117 * The server is now ready to accept connections at /var/run/redis/redis.sock
1532:M 16 Jul 2021 11:30:14.061 * 1 changes in 900 seconds. Saving...
1532:M 16 Jul 2021 11:30:14.064 * Background saving started by pid 9740
9740:C 16 Jul 2021 11:30:14.129 * DB saved on disk
9740:C 16 Jul 2021 11:30:14.132 * RDB: 4 MB of memory used by copy-on-write
1532:M 16 Jul 2021 11:30:14.168 * Background saving terminated with success
1532:M 16 Jul 2021 11:34:47.339 # User requested shutdown...
1532:M 16 Jul 2021 11:34:47.358 * Saving the final RDB snapshot before exiting.
1532:M 16 Jul 2021 11:34:48.234 * DB saved on disk
1532:M 16 Jul 2021 11:34:48.240 * Removing the pid file.
1532:M 16 Jul 2021 11:34:48.244 * Removing the unix socket file.
1532:M 16 Jul 2021 11:34:48.247 # Redis is now ready to exit, bye bye...
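Side note: the warnings above spell out their own remedies; a sketch of the usual Redis host tuning (the warnings are unrelated to the connection resets themselves):

# kernel settings suggested by the Redis startup warnings above
sysctl -w net.core.somaxconn=512
sysctl -w vm.overcommit_memory=1
echo never > /sys/kernel/mm/transparent_hugepage/enabled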

The cause of the error has been found.
It turned out to be in the antivirus settings and policies.
After disabling the antivirus, reports started to generate correctly and quickly.
Colleagues, thank you very much for responding to my question; working through this with you gave me new experience with the setup.
The topic can be closed.

How can an antivirus cause this? Do you mean a firewall instead? Something was closing the Redis connections. Anyway, interesting stuff.

Yes, of course, we are talking about a network traffic control tool that is part of a comprehensive information security solution, but out of habit I simply called it an antivirus. I apologize for any misunderstanding this may have caused.
