PG::SequenceGeneratorLimitExceeded: ERROR: nextval: reached maximum value of sequence "logs_id_seq" (2147483647)

I thought I had covered most edge cases for our Foreman infrastructure, but one I didn't consider was the number of log entries that 70,000 hosts would create. After a year in business, we've finally hit it. I am seeing this in our logs:

2025-11-25T17:52:33 [E|app|e6e863ae] Error importing log messages for report ID: 26772210: PG::SequenceGeneratorLimitExceeded: ERROR:  nextval: reached maximum value of sequence "logs_id_seq" (2147483647)
e6e863ae |
2025-11-25T17:52:33 [E|app|e6e863ae] Error processing normal report for host: qaregauto.ssnc.global: PG::SequenceGeneratorLimitExceeded: ERROR:  nextval: reached maximum value of sequence "logs_id_seq" (2147483647)
e6e863ae |
2025-11-25T17:52:33 [E|app|e6e863ae] Error importing report for qaregauto.ssnc.global: PG::SequenceGeneratorLimitExceeded: ERROR:  nextval: reached maximum value of sequence "logs_id_seq" (2147483647)
e6e863ae |
2025-11-25T17:52:33 [E|app|e6e863ae] Failed to import reports: PG::SequenceGeneratorLimitExceeded: ERROR:  nextval: reached maximum value of sequence "logs_id_seq" (2147483647)
e6e863ae |
2025-11-25T17:52:33 [E|bac|e6e863ae] PG::SequenceGeneratorLimitExceeded: ERROR:  nextval: reached maximum value of sequence "logs_id_seq" (2147483647)
e6e863ae |  (ActiveRecord::StatementInvalid)

We seem to have hit the maximum value of the logs table's ID sequence. I've tried running:

foreman-rake reports:expire days=10

But that doesn't actually seem to delete the log rows associated with the reports?

Can I just set logs_id_seq to bigint and call it a day?


The answer is yes. Flipping it to bigint, then restarting the services, fixed it.

ALTER SEQUENCE logs_id_seq
  AS bigint;

SELECT setval(
  'logs_id_seq',
  COALESCE((SELECT MAX(id) FROM logs), 1),
  true
);
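One caveat worth checking first (my assumption, not something stated in this thread): ALTER SEQUENCE ... AS bigint only widens the sequence itself. If the logs.id column is still a plain integer, it needs converting too, or inserts will overflow the column instead of the sequence. A sketch, assuming the column is named id:

```sql
-- Check the current column type first (psql's \d logs also shows this):
SELECT data_type
FROM information_schema.columns
WHERE table_name = 'logs' AND column_name = 'id';

-- If it is still 'integer', widen the column as well.
-- Note: this rewrites the table under an exclusive lock, so on a
-- large logs table it belongs in a maintenance window.
ALTER TABLE logs ALTER COLUMN id TYPE bigint;
```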

70,000 minions all running 1-5 times a day = lots of report lines. :slight_smile:

Although I'm still questioning whether the rake command removes them from the DB, because we expired everything down to just the last day of reports and the max ID in the DB never went down.
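For what it's worth (this is my understanding of PostgreSQL sequences, not something confirmed in this thread): a sequence never rewinds when rows are deleted, so the sequence value staying put doesn't tell you whether reports:expire deleted anything. Comparing the row count before and after the rake task is a more direct check:

```sql
-- The row count drops if the expire task really deletes log rows:
SELECT count(*) FROM logs;

-- The sequence only ever moves forward; deletes never lower it:
SELECT last_value FROM logs_id_seq;
```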


Hi @singularity001 ,

Would you mind filing a bug about this here? Foreman issues

We really should be using bigint for all ID fields by this point. It has been the default for primary keys in Rails migrations for some time now anyway.

Updating this to bigint should be okay. If (when) a future migration does this properly, it should just become a no-op.


No worries, I just created a new bug report here. Thanks for finding + fixing this issue, Jeff!


Oh, thank you so much. I was just following up on this post and saw you had made one. Thanks!


In Satellite we’ve found a category of these bugs. https://issues.redhat.com/browse/SAT-39074 tracks them. https://issues.redhat.com/browse/SAT-35478 in particular mentions a neat trick:

ALTER SEQUENCE katello_host_installed_packages_id_seq RESTART WITH 2147482000;

That looks a bit more natural than SELECT setval(), but it doesn't let you pick up the current maximum value from the table. It just restarts at a fixed point just below 2^31.

I’m not that familiar with this area, but isn’t PostgreSQL smart enough to keep the next value when converting from int to bigint?
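As far as I understand the PostgreSQL documentation, yes: ALTER SEQUENCE ... AS bigint keeps the sequence's current value, and it widens the min/max bounds automatically only when they were the old type's defaults. A quick sketch with a throwaway sequence (demo_seq is hypothetical):

```sql
CREATE SEQUENCE demo_seq AS integer;
SELECT nextval('demo_seq');        -- advances the sequence to 1

-- Converting the type preserves the current value; maxvalue grows
-- from 2^31-1 to 2^63-1 because it was the integer default.
ALTER SEQUENCE demo_seq AS bigint;

SELECT last_value FROM demo_seq;   -- still 1
SELECT nextval('demo_seq');        -- continues with 2

DROP SEQUENCE demo_seq;
```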
