The primary key should be there automatically; it is part of the DB schema and should have been created via ActiveRecord or generated through the import of your dump.
I would assume the cause of the breakage is that you have duplicate ids in that table, with id being the primary key field. That table itself does not hold any valuable data, but it is referenced by the filterings table, which in turn is a "database glue layer" between the filters table (which holds the permissions assigned to roles) and the permissions table you looked at.
Maybe you can piece it back together manually by comparing those tables and the roles table, seeing which ids are missing from the permissions table, and figuring out which ids those should be. But I'm afraid I can't help you very much from here.
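If it helps, these are roughly the kinds of queries I mean. This is only a sketch: it assumes the database is called foreman, that you run psql as the postgres user, and that filterings references permissions via a permission_id column, so double-check the names against your schema first.

# Look for duplicate ids in the permissions table
sudo -u postgres psql foreman -c "SELECT id, COUNT(*) FROM permissions GROUP BY id HAVING COUNT(*) > 1;"

# Find permission ids referenced by filterings that no longer exist in permissions
sudo -u postgres psql foreman -c "SELECT DISTINCT permission_id FROM filterings WHERE permission_id NOT IN (SELECT id FROM permissions);"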
thank you so much @areyus
I can go back to version 10, isn't that better? I can enable it again and start with version 10…
The new agents are signed, so will they report in again and will Foreman add them to the database, or not?
If the hosts register to Foreman through Puppet and no manual changes like ENC parameters have been made to those hosts, the hosts should register back just fine.
Since you have updated Foreman after switching to PostgreSQL 12, you will need to run the DB migrations by hand.
Make sure you have a backup at hand before doing so.
After switching back to PostgreSQL 10, you will need to run:
foreman-rake db:migrate
foreman-rake db:seed
This should apply all pending migrations to the postgres DB. If you then start Foreman, you should at least be back up and running (I hope).
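As a concrete sketch of what I mean by a backup (this assumes the database is named foreman and a local postgres superuser; adjust to your setup):

# Dump the Foreman database before migrating, so it can be restored if something goes wrong
sudo -u postgres pg_dump foreman > /root/foreman-before-migrate.sql
# If needed, restore it later with psql (into a freshly created, empty database)
# sudo -u postgres psql foreman < /root/foreman-before-migrate.sql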
I have backups of version 10 and 12.
I get the error below:
# foreman-rake db:migrate --trace
** Invoke db:migrate (first_time)
** Invoke db:load_config (first_time)
** Invoke environment (first_time)
** Execute environment
** Execute db:load_config
** Invoke plugin:refresh_migrations (first_time)
** Invoke environment
** Execute plugin:refresh_migrations
** Execute db:migrate
** Invoke db:_dump (first_time)
** Execute db:_dump
** Invoke dynflow:migrate (first_time)
** Invoke environment
** Execute dynflow:migrate
rake aborted!
Sequel::Migrator::Error: More than 1 row in migrator table
/usr/share/foreman/vendor/ruby/2.7.0/gems/sequel-5.71.0/lib/sequel/extensions/migration.rb:635:in `schema_dataset'
/usr/share/foreman/vendor/ruby/2.7.0/gems/sequel-5.71.0/lib/sequel/extensions/migration.rb:466:in `initialize'
/usr/share/foreman/vendor/ruby/2.7.0/gems/sequel-5.71.0/lib/sequel/extensions/migration.rb:530:in `initialize'
/usr/share/foreman/vendor/ruby/2.7.0/gems/sequel-5.71.0/lib/sequel/extensions/migration.rb:413:in `new'
/usr/share/foreman/vendor/ruby/2.7.0/gems/sequel-5.71.0/lib/sequel/extensions/migration.rb:413:in `run'
/usr/share/foreman/vendor/ruby/2.7.0/gems/dynflow-1.7.0/lib/dynflow/persistence_adapters/sequel.rb:285:in `migrate_db'
/usr/share/foreman/lib/tasks/dynflow.rake:23:in `block (2 levels) in <top (required)>'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:281:in `block in execute'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:281:in `each'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:281:in `execute'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:219:in `block in invoke_with_call_chain'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:199:in `synchronize'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:199:in `invoke_with_call_chain'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:188:in `invoke'
/usr/share/foreman/lib/tasks/dynflow.rake:42:in `block (2 levels) in <top (required)>'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:281:in `block in execute'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:281:in `each'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:281:in `execute'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:219:in `block in invoke_with_call_chain'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:199:in `synchronize'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:199:in `invoke_with_call_chain'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:188:in `invoke'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:160:in `invoke_task'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:116:in `block (2 levels) in top_level'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:116:in `each'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:116:in `block in top_level'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:125:in `run_with_threads'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:110:in `top_level'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:83:in `block in run'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:186:in `standard_exception_handling'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:80:in `run'
/usr/share/foreman/vendor/ruby/2.7.0/gems/rake-13.0.6/exe/rake:27:in `<top (required)>'
/usr/bin/rake:23:in `load'
/usr/bin/rake:23:in `<main>'
Tasks: TOP => dynflow:migrate
But when I then start Foreman, it works like it should.
But when I run db:migrate, I get the error above.
And the new agents pull data, but I don't see an entry for them in Foreman. Do I have to add them manually to Foreman?
I've never seen such an error before, and I also don't know how to recover from it.
Maybe they are without an organization/location; try the Any/Any view.
In general, you seem to be having a lot of DB problems. I cannot tell you how to recover from this, or even whether these problems were caused by everything that has happened during this thread or whether at least some of them stem from before. To be completely honest with you: if it works now, I recommend running with it for the time being and considering building a new stack from scratch and then migrating the hosts over.
That's my plan!
But one last question, please: how does Foreman create the entries for the agents? Automatically, or do I have to do it manually? For example, I added some hosts yesterday evening and signed them via Foreman. I can see in Foreman that the certificates are signed, but when I click on Hosts and change organization/location to Any/Any, I still don't see them. What can I do in this case?
/etc/puppetlabs/puppet/node.rb srv1.local returns all the information, but I cannot see the host in the UI.
And many, many thanks!
The systems should be created via fact import. Maybe you have turned that off in the settings?
If that does not work, you will probably need to recreate those hosts by hand. I assume these are VMs, so simply re-creating them might lead to errors because the VMs already exist. So you might want to create them as non-VMs and then associate them with their VMs via the compute resource.
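If you want to check that fact-import setting from the CLI and you have hammer configured, something like this should do it. Note that the setting name below is from memory, so verify it under Administer > Settings first:

# Show the setting that controls automatic host creation from uploaded facts
hammer settings list | grep -i create_new_host
# Re-enable it if it has been turned off (assuming the setting name is correct for your version)
hammer settings set --name create_new_host_when_facts_are_uploaded --value true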
Yes, I removed the key on the puppetserver/Foreman and created a new key on the client. I signed the key, so far so good, but when I run puppet agent -t for the second time I get the error below, also related to the node:
>puppet agent -t
Info: Using environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Failed when searching for node srv1.local: Failed to find srv1.local via exec: Execution of 'srv1.local' returned 1:
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
I know this is supposed to work somehow, but I guess you are seeing a chicken-and-egg problem here: the hosts cannot be found in Foreman, so Puppet fails, which causes the facts not to be uploaded to create the hosts. Re-creating the hosts by hand is probably the easiest way out.
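One thing you could try before recreating them by hand: if you are using the stock Foreman ENC script, it can push the facts cached on the puppetserver to Foreman itself, which should create the host records if fact import is enabled. The option name is from memory, so check the script's help output first:

# On the puppetserver, upload the locally cached facts for all nodes to Foreman
/etc/puppetlabs/puppet/node.rb --push-facts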
If I go back to yesterday morning's full machine backup and keep a backup of the current Puppet SSL folder (the keys), I can copy them back after the recovery, right? Because I see a lot of errors in production.log now.
I guess this should work, but it would probably be safer to just regenerate the certs on the affected clients.
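For regenerating, something along these lines usually works. This is a sketch assuming a current Puppet setup with the default paths and that srv1.local is the affected client; adjust names and paths to your environment:

# On the puppetserver: remove the old certificate for the client
puppetserver ca clean --certname srv1.local

# On the client: throw away the local SSL data and request a new certificate
rm -rf /etc/puppetlabs/puppet/ssl
puppet agent -t

# Back on the puppetserver (or via the Foreman UI): sign the new request
puppetserver ca sign --certname srv1.local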