Can I safely remove tmpfiles?

Problem:
Excessive tmp files in /var/lib/pulp.

My total pulp filesystem usage:

du -hs /var/lib/pulp/
658G /var/lib/pulp/

Breaking that down I have:

du -hs /var/lib/pulp/*/
3.1M /var/lib/pulp/assets/
147G /var/lib/pulp/content/
0 /var/lib/pulp/exports/
0 /var/lib/pulp/gpg-home/
0 /var/lib/pulp/imports/
0 /var/lib/pulp/katello-export/
271G /var/lib/pulp/media/
0 /var/lib/pulp/packages/
974M /var/lib/pulp/published/
0 /var/lib/pulp/static/
0 /var/lib/pulp/sync_imports/
148G /var/lib/pulp/tmp/
0 /var/lib/pulp/upload/
0 /var/lib/pulp/uploads/

So 148G in /var/lib/pulp/tmp seems like quite a lot on its own. On top of that there is a truck load of tmp files sitting directly in /var/lib/pulp itself, mainly rpm files but all named tmpblahblah, which would also account for the roughly 90G gap between the directory totals above and the 658G overall figure:

ls /var/lib/pulp/tmp???* | wc -l
56370
ls -FaGl /var/lib/pulp/tmp???* | awk '{ total += $4 }; END { print total }'
97588514502
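
For what it's worth, a possibly more robust way of totalling these up than counting ls columns (a rough sketch using GNU find to print each file's size in bytes; the tmp* pattern is just my guess at matching the same set of files):

find /var/lib/pulp -maxdepth 1 -type f -name 'tmp*' -printf '%s\n' | awk '{ total += $1 } END { print total }'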

Expected outcome:
tmpfiles are temporary and should get cleaned up in some way. I have had a high number of reposync failures for one reason or another. Could that be the cause? If so, can we look at cleaning these files up after a failed sync? (I'm only guessing at a possible cause here.)

Foreman and Proxy versions:
Foreman 2.3.3

Foreman and Proxy plugin versions:
Katello 3.18.1

Distribution and version:
CentOS 7.9.2009


I posted a link to this thread on the Pulp IRC channel, and somebody suggested this may in fact be a Pulp bug, where the working directory is not cleaned up under certain failure conditions.

Did you experience any failed syncs leading up to this?

Perhaps keep an eye on this ticket: Issue #8295: Disc Usage during Repository Sync - Pulp


Copious failures due to a shonky proxy server.

OK, so I took a snapshot of the server, deleted the tmpfiles and everything carried on working…

One failed sync later and I now have 20G of tmpblah files in /var/lib/pulp, which are rpm and xml files. Shouldn't these really live somewhere like /var/cache/pulp? And they should certainly get cleaned up after a failed sync.
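
For now I am just identifying the leftovers with a dry run along these lines (the 3-day age cutoff is purely my own assumption about what cannot still belong to a running sync; review the output before re-running with -delete):

find /var/lib/pulp -maxdepth 1 -type f -name 'tmp*' -mtime +3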

I have the same problem. You would have thought there would be at least an official workaround.
Might it just have to be a weekly job to carry out?
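
If it does end up being a weekly job, something along these lines might do as a starting point (the /etc/cron.weekly/pulp-tmp-cleanup name and the 7-day cutoff are just my own guesses at something safe, not anything official):

#!/bin/bash
# /etc/cron.weekly/pulp-tmp-cleanup (hypothetical name and location)
# Remove tmp files left directly in /var/lib/pulp and under /var/lib/pulp/tmp
# that are over a week old, assuming nothing that old still belongs to a
# sync that is in progress.
find /var/lib/pulp -maxdepth 1 -type f -name 'tmp*' -mtime +7 -delete
find /var/lib/pulp/tmp -type f -mtime +7 -delete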