Slow products page on 2.0

Could you try running the following SQL and see if it improves the situation:

CREATE INDEX temp_tasks_test ON foreman_tasks_locks (resource_type, task_id);

Afterwards you can remove it using

DROP INDEX temp_tasks_test;
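(If this is a busy production database, it may be worth adding `CONCURRENTLY` so the index build does not block writes to the table while it runs; this is a standard PostgreSQL option, just a suggestion for this test, not something the reproducer requires:)

```sql
-- Builds the index without taking a write-blocking lock on the table.
-- Note: CREATE INDEX CONCURRENTLY cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY temp_tasks_test
    ON foreman_tasks_locks (resource_type, task_id);

-- Cleanup later:
DROP INDEX CONCURRENTLY temp_tasks_test;
```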

That helped A LOT. Output:

Unique  (cost=145.34..145.49 rows=4 width=272) (actual time=62.299..62.314 rows=8 loops=1)
  ->  Sort  (cost=145.34..145.35 rows=4 width=272) (actual time=62.299..62.299 rows=16 loops=1)
        Sort Key: foreman_tasks_tasks.label, foreman_tasks_tasks.started_at, foreman_tasks_tasks.ended_at, foreman_tasks_tasks.state, foreman_tasks_tasks.result, foreman_tasks_tasks.external_id, foreman_tasks_tasks.parent_task_id, foreman_tasks_tasks.start_at, foreman_tasks_tasks.start_before, foreman_tasks_tasks.action, foreman_tasks_tasks.state_updated_at, foreman_tasks_tasks.user_id, foreman_tasks_locks.resource_id
        Sort Method: quicksort  Memory: 33kB
        ->  Nested Loop  (cost=62.62..145.30 rows=4 width=272) (actual time=37.792..62.245 rows=16 loops=1)
              Join Filter: (foreman_tasks_locks.resource_id = locks_foreman_tasks_tasks.resource_id)
              Rows Removed by Join Filter: 24
              ->  Nested Loop  (cost=62.20..70.10 rows=2 width=272) (actual time=37.778..62.171 rows=8 loops=1)
                    ->  GroupAggregate  (cost=44.44..44.46 rows=1 width=12) (actual time=34.151..34.648 rows=8 loops=1)
                          Group Key: foreman_tasks_locks.resource_id
                          ->  Sort  (cost=44.44..44.45 rows=1 width=12) (actual time=34.078..34.312 rows=3170 loops=1)
                                Sort Key: foreman_tasks_locks.resource_id
                                Sort Method: quicksort  Memory: 245kB
                                ->  Nested Loop  (cost=8.88..44.43 rows=1 width=12) (actual time=0.998..33.560 rows=3170 loops=1)
                                      ->  Bitmap Heap Scan on foreman_tasks_tasks foreman_tasks_tasks_1  (cost=4.44..12.28 rows=2 width=24) (actual time=0.803..3.458 rows=5614 loops=1)
                                            Recheck Cond: (((type)::text = 'ForemanTasks::Task::DynflowTask'::text) AND ((label)::text = 'Actions::Katello::Repository::Sync'::text))
                                            Heap Blocks: exact=1088
                                            ->  Bitmap Index Scan on index_foreman_tasks_tasks_on_type_and_label  (cost=0.00..4.43 rows=2 width=0) (actual time=0.681..0.681 rows=5614 loops=1)
                                                  Index Cond: (((type)::text = 'ForemanTasks::Task::DynflowTask'::text) AND ((label)::text = 'Actions::Katello::Repository::Sync'::text))
                                      ->  Bitmap Heap Scan on foreman_tasks_locks  (cost=4.45..16.07 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=5614)
                                            Recheck Cond: (((resource_type)::text = 'Katello::Repository'::text) AND (task_id =
                                            Filter: (resource_id = ANY ('{59,516,517,518,520,519,8203,8241}'::integer[]))
                                            Rows Removed by Filter: 1
                                            Heap Blocks: exact=5623
                                            ->  Bitmap Index Scan on temp_tasks_test  (cost=0.00..4.45 rows=3 width=0) (actual time=0.003..0.003 rows=2 loops=5614)
                                                  Index Cond: (((resource_type)::text = 'Katello::Repository'::text) AND (task_id =
                    ->  Bitmap Heap Scan on foreman_tasks_tasks  (cost=17.76..25.60 rows=2 width=268) (actual time=3.437..3.437 rows=1 loops=8)
                          Recheck Cond: ((started_at = (max(foreman_tasks_tasks_1.started_at))) AND ((type)::text = 'ForemanTasks::Task::DynflowTask'::text))
                          Heap Blocks: exact=8
                          ->  BitmapAnd  (cost=17.76..17.76 rows=2 width=0) (actual time=3.433..3.433 rows=0 loops=8)
                                ->  Bitmap Index Scan on index_foreman_tasks_tasks_on_started_at  (cost=0.00..6.65 rows=315 width=0) (actual time=0.003..0.003 rows=1 loops=8)
                                      Index Cond: (started_at = (max(foreman_tasks_tasks_1.started_at)))
                                ->  Bitmap Index Scan on index_foreman_tasks_tasks_on_type  (cost=0.00..10.78 rows=315 width=0) (actual time=3.429..3.429 rows=63028 loops=8)
                                      Index Cond: ((type)::text = 'ForemanTasks::Task::DynflowTask'::text)
              ->  Index Scan using index_foreman_tasks_locks_on_task_id on foreman_tasks_locks locks_foreman_tasks_tasks  (cost=0.42..30.63 rows=558 width=20) (actual time=0.005..0.006 rows=5 loops=8)
                    Index Cond: (task_id =
Planning Time: 0.742 ms
Execution Time: 62.399 ms
(38 rows)


That is great! I’ve opened up a PR to add this index in the tasks plugin :slight_smile:


Question: can I keep it on for now? Can it cause any issues, or are there any precautions I should take when I upgrade to the next version?

You can keep it for now and remove it when upgrading. Even if you forget and it is left in place after the upgrade, it shouldn't cause too many issues - you'll just have a duplicate index.
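(If you do forget, the leftover index is easy to spot later. The standard PostgreSQL catalog view `pg_indexes` lists every index on the table, so a query like the following, run in `psql` against the Foreman database, would show both `temp_tasks_test` and any index the upgrade adds:)

```sql
-- Lists all indexes on the locks table, including a leftover temp_tasks_test.
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'foreman_tasks_locks';
```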

Thanks, I will keep it in mind. About the other issue, where I was unable to run the "foreman_tasks:cleanup" task: should I open another ticket for this?

[root@ foreman]# foreman-rake foreman_tasks:cleanup TASK_SEARCH='label ~ *' AFTER='1d' VERBOSE=true
rake aborted!
The Dynflow world was not initialized yet. If your plugin uses it, make sure to call Rails.application.dynflow.require! in some initializer
/opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-1.4.3/lib/dynflow/rails.rb:75:in `world'
/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-1.1.1/lib/foreman_tasks/cleaner.rb:80:in `initialize'
/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-1.1.1/lib/foreman_tasks/cleaner.rb:8:in `new'
/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-1.1.1/lib/foreman_tasks/cleaner.rb:8:in `run'
/opt/theforeman/tfm/root/usr/share/gems/gems/foreman-tasks-1.1.1/lib/foreman_tasks/tasks/cleanup.rake:37:in `block (3 levels) in <top (required)>'
/opt/rh/rh-ruby25/root/usr/share/gems/gems/rake-12.3.0/exe/rake:27:in `<top (required)>'
Tasks: TOP => foreman_tasks:cleanup => foreman_tasks:cleanup:run
(See full trace by running task with --trace)

That does look like an unrelated issue. Is Dynflow running?

A suggestion for even improving the index further was brought up on IRC, can you run

DROP INDEX temp_tasks_test;
CREATE INDEX temp_tasks_test ON foreman_tasks_locks (task_id, resource_type, resource_id);

And check if it still leads to the same results, or perhaps even better ones?
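(A hedged sketch of why the column order might matter: with `resource_id` as a trailing index column, the planner can check the `resource_id = ANY (...)` filter from the index entries themselves instead of fetching heap tuples. You can see the difference with `EXPLAIN (ANALYZE, BUFFERS)` on the locks lookup; the task id and resource ids below are hypothetical placeholders, not values from this system:)

```sql
-- Hypothetical probe query; the task_id and resource ids are placeholders.
-- With (task_id, resource_type, resource_id), the resource_id condition can
-- be evaluated in the index; with (resource_type, task_id) it needs a heap
-- fetch per matching row (visible as "Filter" vs "Index Cond" in the plan).
EXPLAIN (ANALYZE, BUFFERS)
SELECT resource_id
FROM foreman_tasks_locks
WHERE resource_type = 'Katello::Repository'
  AND task_id = '00000000-0000-0000-0000-000000000000'
  AND resource_id = ANY ('{59,516,517}'::integer[]);
```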

According to "foreman-maintain service status" all services are running, so I assume Dynflow is running as well.

It's hard to say if it's better; I would say it's about the same:

CREATE INDEX temp_tasks_test ON foreman_tasks_locks (resource_type, task_id);


Planning Time: 0.742 ms
Execution Time: 62.399 ms

CREATE INDEX temp_tasks_test ON foreman_tasks_locks (task_id, resource_type, resource_id);


Planning Time: 0.680 ms
Execution Time: 60.201 ms

How long does the subscription page take to load now?
I would suggest opening a new thread about the task cleanup; I'm not sure why that's failing, and it isn't directly related to this issue.

The subscription page loads about the same or a bit faster with resource_id.

Without resource_id:

2020-06-11T13:08:21 [I|app|fa3ebe3d]   Rendered /opt/theforeman/tfm/root/usr/share/gems/gems/katello- within katello/api/v2/layouts/collection (194.4ms)

With resource_id:

2020-06-11T13:09:10 [I|app|5dd8a328]   Rendered /opt/theforeman/tfm/root/usr/share/gems/gems/katello- within katello/api/v2/layouts/collection (145.6ms)

One of the pages that I see has a more significant load time is the repository view on the products page. It takes about ~14 seconds to load with 9 repositories (url: /products/10/repositories?page=1&per_page=40).

Is the 14 seconds a new slowness after the added index, or was it already occurring previously and is just another place we need to investigate and improve?

Or actually, I just realized I was asking about the wrong page - I meant to ask about the impact on the products page, not the subscriptions page.

As far as I can remember it has always been this slow, so it's not anything new. This is just something that can be improved, but it's not as major an issue as this products page was.

I would say that the products page with or without resource_id is the same; I tested it multiple times and the difference is 50-100 ms, sometimes in favor of resource_id and sometimes not.

With resource_id:

2020-06-11T14:07:50 [I|app|5706c61d]   Rendered /opt/theforeman/tfm/root/usr/share/gems/gems/katello- within katello/api/v2/layouts/collection (459.6ms)

Without resource_id:

2020-06-11T14:11:32 [I|app|849f5dd3]   Rendered /opt/theforeman/tfm/root/usr/share/gems/gems/katello- within katello/api/v2/layouts/collection (443.6ms)