Problem:
I’m on high-speed WiFi with medium-to-slow internet speeds (500 Kb/s to 1.8 Mb/s when downloading). I’m running a VMware cluster and a KVM server with about 250 total virtual machines in my HomeLab. I wanted to use Katello/Foreman to sync packages from various repositories to a local server so I’m not constantly banging on my WiFi.
I expected downloads to go reasonably fast. I was able to use reposync to pull down all the CentOS 7, EPEL, Elasticsearch, and Kubernetes packages in about 24 hours, and I previously had a Spacewalk server that pulled down repos relatively quickly. I moved to Katello because Spacewalk doesn’t support CentOS 8 Streams, which I also need practice with.
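For comparison, the reposync mirror was along these lines (the repo ID and download path here are illustrative, not the exact values I used):

```shell
# Illustrative reposync mirror (reposync from yum-utils on CentOS 7).
# Repo ID and download path are examples, not the exact ones used.
reposync --repoid=epel --download_path=/var/repos/
createrepo /var/repos/epel/   # regenerate repo metadata for local clients
```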
As such, after adding all the repositories, I put them on a sync schedule spread over a week and expected they’d all be in place by the end of it. However, it’s been two weeks now, and while I can see the drive slowly filling, at a few kilobytes per second, it’s certainly not as fast as expected.
Because I used reposync, I was going to try pointing the repositories at the local copies I’d already retrieved; however, I can’t seem to stop or cancel the currently running sync processes. When I cancel one, it holds its locks and recommends letting the sync complete, with no clear way to remove these Pending/Warning/Error conditions.
What is the correct way to shut down syncs, remove tasks, and generally clean up the environment, short of totally rebuilding the server? I can do that, but I don’t want to deploy this into an environment where I have to rebuild the server every time something weird happens, even though it’s a Terraform/Ansible build.
Expected outcome:
Sync processes complete in a timely manner, or there’s a clear process to stop and then remove the tasks and locks so I can try again.
Foreman and Proxy versions:
2.3.5
Foreman and Proxy plugin versions:
foreman-tasks 3.0.6
foreman_remote_execution 4.2.2
katello 3.18.4
Distribution and version:
I’m guessing this field means the OS: CentOS 7, most current (7.9 2009, I believe).
Other relevant data:
As you can see here, over the course of 28 one-second samples, the space used grows by less than 100 kilobytes (df reports 1K blocks):
/dev/mapper/vg00-var 494237464 119518912 354505936 26% /var
/dev/mapper/vg00-var 494237464 119518912 354505936 26% /var
/dev/mapper/vg00-var 494237464 119518912 354505936 26% /var
/dev/mapper/vg00-var 494237464 119518912 354505936 26% /var
/dev/mapper/vg00-var 494237464 119518912 354505936 26% /var
/dev/mapper/vg00-var 494237464 119518912 354505936 26% /var
/dev/mapper/vg00-var 494237464 119518916 354505932 26% /var
/dev/mapper/vg00-var 494237464 119518924 354505924 26% /var
/dev/mapper/vg00-var 494237464 119518924 354505924 26% /var
/dev/mapper/vg00-var 494237464 119518932 354505916 26% /var
/dev/mapper/vg00-var 494237464 119518932 354505916 26% /var
/dev/mapper/vg00-var 494237464 119518932 354505916 26% /var
/dev/mapper/vg00-var 494237464 119518932 354505916 26% /var
/dev/mapper/vg00-var 494237464 119518936 354505912 26% /var
/dev/mapper/vg00-var 494237464 119518940 354505908 26% /var
/dev/mapper/vg00-var 494237464 119518944 354505904 26% /var
/dev/mapper/vg00-var 494237464 119518944 354505904 26% /var
/dev/mapper/vg00-var 494237464 119518952 354505896 26% /var
/dev/mapper/vg00-var 494237464 119518964 354505884 26% /var
/dev/mapper/vg00-var 494237464 119518964 354505884 26% /var
/dev/mapper/vg00-var 494237464 119518972 354505876 26% /var
/dev/mapper/vg00-var 494237464 119518976 354505872 26% /var
/dev/mapper/vg00-var 494237464 119518976 354505872 26% /var
/dev/mapper/vg00-var 494237464 119518976 354505872 26% /var
/dev/mapper/vg00-var 494237464 119518980 354505868 26% /var
/dev/mapper/vg00-var 494237464 119518980 354505868 26% /var
/dev/mapper/vg00-var 494237464 119518980 354505868 26% /var
/dev/mapper/vg00-var 494237464 119518984 354505864 26% /var
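The samples above were captured with roughly this loop, printing just the df data line for /var once per second:

```shell
# Print the data line of `df /var` once per second.
# The output above spans 28 such samples; 3 here to keep the example short.
sample_var_usage() {
  n=$1                    # number of one-second samples to take
  i=0
  while [ "$i" -lt "$n" ]; do
    df /var | tail -n 1   # drop the header, keep device/used/avail/use%/mount
    i=$((i + 1))
    sleep 1
  done
}

sample_var_usage 3
```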