Settings in memory, only changed values in the database

While I was trying to come up with an easier way to extend the settings in an existing category from a plugin, it occurred to me that the setting definitions don't need to live in the database.

My proposal is to keep only the name, category and value in the database; the rest would live in code and be loaded into memory at application startup.
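To make the split concrete, here is a minimal sketch of the idea: full definitions (type, default, description) live in an in-memory registry populated from code, while the database keeps only the values a user has actually overridden. All class and method names here are hypothetical, not Foreman's actual API.

```ruby
# Sketch of the proposed split (names are hypothetical): definitions live in
# code, the database stores only user-changed values.
class SettingRegistry
  Definition = Struct.new(:name, :category, :type, :default, :description)

  def initialize
    @definitions = {}  # populated from code at application startup
    @db_overrides = {} # stand-in for the settings table (name => value)
  end

  def register(name, category:, type:, default:, description: nil)
    @definitions[name] = Definition.new(name, category, type, default, description)
  end

  # Only changed values ever touch the database; everything else falls back
  # to the default defined in code.
  def [](name)
    definition = @definitions.fetch(name)
    @db_overrides.fetch(name, definition.default)
  end

  def override(name, value)
    @db_overrides[name] = value # in Foreman this would be an INSERT/UPDATE
  end
end

registry = SettingRegistry.new
registry.register(:entries_per_page, category: 'General', type: :integer, default: 20)
registry[:entries_per_page]           # => 20 (default from code, no DB row)
registry.override(:entries_per_page, 50)
registry[:entries_per_page]           # => 50 (override read from the database)
```

A nice property of this shape is that a setting with no override has no database row at all, so renaming a default in code needs no migration.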

Keeping all the setting information in the database is not actually the best approach. AFAIK the only good reason to keep all of it there is that it works great with our search. It makes our lives easier while searching, but we have to keep the database info in sync with what we have in code.

This effort is not mandatory to make the settings extendable, but it is a great opportunity to make such a change.

I made a draft PR of the change. It is by no means complete (the search still needs to be fixed), but it shows the path towards this goal.


How would you handle multi-host deployments? If you change a setting on one instance, would you need to restart the whole cluster?


If I understand the proposed PR correctly, the overridden values would still be stored in the DB. Just the defaults, type, description etc. would remain in the setting definition file, meaning on the FS. I can see a mismatch if you e.g. load balance two different minor versions of Foreman, but that could cause many other problems anyway.

@ezr-ondrej I like the idea (if I got it right). How does that help plugins? Do you also have some plugin PR to demonstrate that? Or is that unrelated, a side-effect thought during the plugin work? :slight_smile:

How do you do that today? I mean, settings are written without any explicit expiration time, therefore they will never expire. Unless you set some global option or configure your redis/memcached to auto-expire everything coming from Foreman at a global level.

You got that right, but I was thinking of loading the values into memory as well on startup, and then we would hit the issue Timo is mentioning. But that can be solved later, and the current approach for reading stored values (newly, only the values changed by the user) can be kept.

This is just a side effect, but it eases the registration of new settings: we don't need to know about a setting at any particular point during startup, so we don't care when a plugin hooks up and adds its settings to a category.
Compare that to the current approach, where we need to know about all the settings before the defaults are written to the database, which happens very early (on initialize).

Hm right, even today the Rails cache gets invalidated only on the server that is saving the record, right?

I think that’s why you use a shared cache like memcache, or more recently Redis.


Yes, that’s how we do it. I do wonder if the caching brings a significant performance improvement at all. The db queries should be quite fast already (and are cached by rails).

Hmm, that’s an interesting idea. If we avoided AR with format_setting_value(Setting.where(:name => ).pluck(:value).first) it could be quite fast :thinking:
But it would depend on how often we are querying settings.

Well, in my testing the other day (the patch I mention) it was actually the root cause, and I saw a huge improvement, by an order of magnitude. So it does matter quite a lot; we use settings extensively in some loops (parsing etc.).

Yes, and I think that’s fine. I am wondering if some kind of simple memoization that caches the settings per request in memory is faster than always using redis or some other network cache to retrieve the value. There might be no difference if the file store cache is used on a single node deployment and Linux caches the file system access. But if it’s a network service, I believe we could also query the db.

I believe we could be ok with just something like: https://github.com/steveklabnik/request_store storing the used values just for the time of one request.
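For illustration, here is a stdlib-only sketch of that per-request memoization idea. The request_store gem itself keeps its store in `Thread.current` and clears it via a Rack middleware after each request; the sketch below mimics that with hypothetical names rather than using the gem's actual API.

```ruby
# Minimal sketch of per-request memoization in the spirit of request_store:
# cached values live in a thread-local Hash that a middleware would clear in
# an ensure block at the end of every request. All names are hypothetical.
module SettingCache
  def self.store
    Thread.current[:setting_cache] ||= {}
  end

  # Memoize a (possibly expensive) lookup for the duration of one request.
  def self.fetch(name)
    store.fetch(name) { store[name] = yield }
  end

  # A Rack middleware would call this after each request, so stale values
  # never leak into the next request.
  def self.clear!
    Thread.current[:setting_cache] = nil
  end
end

lookups = 0
expensive = -> { lookups += 1; 42 }

SettingCache.fetch(:entries_per_page, &expensive) # hits the "database"
SettingCache.fetch(:entries_per_page, &expensive) # served from memory
lookups # => 1

SettingCache.clear!                               # simulates end of request
SettingCache.fetch(:entries_per_page, &expensive)
lookups # => 2
```

Since the store only lives for one request, the multi-host invalidation problem discussed above goes away: the next request always re-reads the current value.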

I mentioned this in an offline discussion, but I think that we’ll still need migration helpers. Let’s say a user changed some value, but the setting definition changes later; this should be reflected. Rather than copying the code, like we do now in various forms, @Shimon_Shtein suggested a SettingsMigration class with helper methods.

That would help the situation for sure.
I still feel that keeping something you’ve already got in code in the database is wrong, though. It’s just the concept that irritates me; I don’t have the data to back up that it would be better not to have it in the database.

That may be useful, but IIUC there will only be three types of changes that will need migration after this:

  • Change setting name - very straightforward, Setting.where(name: $old_name).update_all(name: $new_name)
  • Drop setting - also simple, Setting.where(name: $name).delete_all
  • Change setting type - this will depend on the type change and be a tad more tricky. For example, changing an integer to an array could look something like this:
setting = Setting.find_by_name($name)
return unless setting.present?
setting.value = [setting.value]
setting.save(validate: false)

This assumes serialization is done by the setting model. In some cases it may be a bit more complicated (e.g. if you want to validate that the user-defined value matches some new validation), but I’m not sure a helper can be generic enough to cover these without being too complex.
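A rough sketch of what the suggested SettingsMigration helpers could look like, covering the three cases above. To keep the example self-contained it runs against a tiny in-memory stand-in for the settings table; in Foreman the same methods would issue the ActiveRecord calls shown in the list. The class shape and method names are only a suggestion, not an agreed design.

```ruby
# Hypothetical SettingsMigration helper operating on an in-memory stand-in
# (an array of { name:, value: } hashes) instead of the Setting AR model.
class SettingsMigration
  def initialize(rows)
    @rows = rows
  end

  # Equivalent of Setting.where(name: old_name).update_all(name: new_name)
  def rename(old_name, new_name)
    @rows.each { |row| row[:name] = new_name if row[:name] == old_name }
  end

  # Equivalent of Setting.where(name: name).delete_all
  def drop(name)
    @rows.reject! { |row| row[:name] == name }
  end

  # Type changes take a block, e.g. retype('foo') { |v| [v] } for int -> array.
  def retype(name)
    @rows.each { |row| row[:value] = yield(row[:value]) if row[:name] == name }
  end
end

rows = [{ name: 'trusted_hosts', value: 'proxy.example.com' }]
migration = SettingsMigration.new(rows)
migration.rename('trusted_hosts', 'trusted_proxies')
migration.retype('trusted_proxies') { |v| [v] }
rows.first # => { name: 'trusted_proxies', value: ['proxy.example.com'] }
```

Note that since only user-changed values live in the database in the proposed scheme, these migrations would only ever touch the handful of rows a user actually overrode.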

I like this, when used carefully. Together with normal Rails caching this should work nicely. However, make sure to do some benchmarks; it might not matter at all, and then we would simply not bother introducing this into the codebase.