>
>> Hello,
>>
>> as a recent discussion was started on pull request [1], I wanted to move it
>> back to the mailing list in order to restart the discussion.
>>
>
> Thanks, I think it's much easier to discuss things here 
>
>
>> I would like to propose a notification/callback from the proxy to
>> foreman, initially to address the following needs:
>>
>> 1. a way to invalidate the smart-proxy-related cache (in 1.11 a lot of
>> data is now shown on the smart proxy show page, most of it cached
>> to avoid multiple API calls to the proxy).
>> 2. a way to detect if new features are enabled on the proxy.
>>
>> The user stories I would like to address are:
>>
>> - as a user, I've restarted my proxy and I see out-of-date data in
>> the UI/API. (covered by 1)
>> - as a user, I've run the installer (e.g. added the remote execution
>> plugin), yet I have no idea I need to go to the smart proxies and click
>> refresh. (covered partly by 2, more below)
>> - as a user, I might want to see older errors from the proxy yet know
>> they are from pre-restart (related to 1 and @lzap
>> <https://github.com/lzap>'s work on logs at [2]).
>>
>> Let's break each one down a bit:
>> For 1, I could see one implementation: simply keep a timestamp of
>> when the proxy was started, and provide that timestamp in one of the API
>> calls (for example, in one of the ping/status calls).
>>
> When it was started? Surely you want a timestamp of when the cache was
> last refreshed? This is what we've done for years on the certificates page,
> and I see no harm in using that mechanism again. I do think a button to
> clear the cache would be good - today you can clear it by appending a
> parameter to the URL, but it's a hidden feature.
>
No, if the timestamp is newer than the cache, a restart happened and the
cache is invalidated.
There is no longer a URL for it; git pull, please.
Also, the refresh button on the proxy now invalidates the cache as well.
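To make the idea concrete, here is a minimal Ruby sketch (the names are hypothetical, not the actual Foreman cache API) of invalidating cached proxy data when the proxy's reported start time is newer than the cache entry:

```ruby
# Hypothetical cache entry for data fetched from a smart proxy.
ProxyCacheEntry = Struct.new(:data, :fetched_at)

class ProxyCache
  def initialize
    @entries = {}
  end

  # Store data for a proxy, stamping it with the current time.
  def write(proxy_id, data)
    @entries[proxy_id] = ProxyCacheEntry.new(data, Time.now)
  end

  # Return cached data only if the proxy has not restarted since we
  # cached it; `started_at` would come from the proxy's ping/status call.
  def read(proxy_id, started_at)
    entry = @entries[proxy_id]
    return nil if entry.nil?
    if started_at > entry.fetched_at
      @entries.delete(proxy_id) # restart happened => cache invalidated
      nil
    else
      entry.data
    end
  end
end
```

A cache miss (nil) then triggers a fresh API call to the proxy, exactly as the refresh button does today.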
>
>
>> For 2, I think Foreman needs to know a restart happened; what it does
>> with that can be handled differently. For example:
>> if a new feature was added, I think it's fair to simply add it and report
>> it in the UI.
>> If a feature was removed, we can probably detect whether it was due to a
>> configuration error (via the plugin init logging [3]) or not; in either case
>> we don't remove it for the user, but rather report it (e.g. in the status icon).
>>
> I still disagree with any kind of automatic update here, the potential for
> affecting users' networks is huge. If you're going to the effort of adding
> some kind of notification or status report for the user about removed
> features, you may as well do it for added features too. This leads to
> consistent behaviour, and the user can choose to refresh features at that
> point.
>
I'm not sure I'm following; today we refresh the features for you from the
installer, regardless of whether you wanted it or not (including removing a
feature).
I think adding a feature to a proxy is a non-issue; it allows us to report
the feature initialization and provide useful feedback even before it's
configured.
We would like to show/indicate that on both the proxy index and show
pages, and later on, potentially via notifications.
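To illustrate the "add and report, don't silently remove" behaviour, a small Ruby sketch (the function name is hypothetical, not existing Foreman code) that diffs the features Foreman has recorded against what the proxy currently advertises:

```ruby
# Compare the features Foreman has recorded for a proxy with the ones
# the proxy currently advertises. New features are safe to add and
# report; missing ones are kept but flagged (e.g. in the status icon)
# rather than removed automatically.
def reconcile_features(known, advertised)
  {
    added:   advertised - known, # new on the proxy: add and report in the UI
    missing: known - advertised, # gone from the proxy: keep, but flag it
    kept:    known & advertised  # unchanged
  }
end
```

The `missing` bucket is where a per-feature accept/reject UI (as suggested below) would hook in.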
>
> As an extension, perhaps we should re-use the puppetclass import page, and
> allow the user to select which features are added/removed when refreshing
> the features?
>
+1, ATM it's all or nothing.
>
> On the subject of restarts, firstly, knowing about a restart isn't enough
> - it assumes the proxy came back up successfully. Dmitri suggests a
> heartbeat instead, this seems much better, and could be driven from the UI
> side easily (see network topology, below). Even if we have just some basic
> ping (absolute worst - hit /features once per minute) it still could scale
> to hundreds of proxies without impact on the UI. So I agree with Dmitri
> here, the UI should drive this, the proxy should just have an API to
> retrieve events.
>
Other projects did that, and IMHO it scales poorly. If we are going to
introduce events/streams, let's do it right and use a message bus or another
streaming protocol.
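For reference, the "absolute worst case" polling described above could look roughly like this in Ruby (a hypothetical helper; a real implementation would run from a scheduled task on the Foreman side):

```ruby
require 'net/http'
require 'uri'

# Poll each smart proxy's /features endpoint once (e.g. driven by a
# one-minute timer) and record whether it responded. Failures are
# recorded rather than raised, so one dead proxy can't break the sweep.
def heartbeat(proxy_urls)
  proxy_urls.each_with_object({}) do |url, status|
    begin
      response = Net::HTTP.get_response(URI.join(url, '/features'))
      status[url] = response.is_a?(Net::HTTPSuccess) ? :up : :error
    rescue StandardError
      status[url] = :down
    end
  end
end
```

This is the simple-but-chatty approach; the message-bus alternative mentioned above would push events instead of polling.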
> I think the only "allowed" automatic update in this case would be adding new
>> features to the proxy, and I would consider it an improvement, as it's the
>> default process a user usually takes.
>>
> As stated, I would consider that dangerous, especially in large networks.
> The combination of new features, taxonomies, multiple subnets/domains, and
> the various ways in which we query the proxies make me unsure that we could
> guarantee not having unintended changes. Safer to notify the user of
> detected features and then give them the power to update or not.
>
You might have a point there, but the alternative of refreshing everything
is not great either. IMHO, a notification that there is a change, plus a way
to accept it (using your suggestion of accepting per feature), is probably a
good way forward.
> I do think that over time, the proxy will need to send an event stream
>> back to foreman, in order to enrich the visibility / management in the
>> application and enable a rich provisioning state machine.
>>
> There's no guarantee the proxy has any access to Foreman. Network
> segregation may prevent it, and today only the template plugin needs it -
> everything else is one-way (although technically a puppetmaster would need
> it for reports, that's not actually the proxy making the connection).
>
That's not true; it depends on the feature. I can think of the following
features that need to initiate a connection from the proxy:
- puppet (facts, ENC and reports)
- templates (provisioning templates - kickstart etc.)
- scap (similar to puppet - send reports in various formats)
- remote execution (AFAIR there are proxy -> Foreman updates on changes?)
and I'm sure that's only part of the list.
>
> I have no issue with being able to read events, but if we stick to our
> current architecture, then we need to drive it from the Foreman side.
>
>> Regarding the implementation details that @witlessbird
>> <https://github.com/witlessbird> and @GregSutcliffe
>> <https://github.com/GregSutcliffe> mentioned, I fully agree it should not
>> block the proxy at any level, e.g. by using a different thread to reach out
>> to Foreman, doing nothing if it fails, etc.
>>
>> I do not think the use case of multiple Foreman hosts using the
>> same proxy is that common; if we do wish to support it, we can simply
>> provide an array of Foreman URLs (which, AFAIK, we don't support today anyway).
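The non-blocking callback described above could be sketched as follows (the endpoint name and payload are hypothetical): run the notification on its own thread and swallow any failure, so the proxy is never blocked by a slow or unreachable Foreman:

```ruby
require 'net/http'
require 'uri'

# Fire-and-forget notification from the proxy to one or more Foreman
# hosts. Runs on a separate thread and silently ignores failures, per
# the discussion: the proxy must never block on Foreman being reachable.
def notify_foreman(foreman_urls, payload)
  Thread.new do
    foreman_urls.each do |url|
      begin
        uri = URI.join(url, '/api/smart_proxy_events') # hypothetical endpoint
        Net::HTTP.post(uri, payload, 'Content-Type' => 'application/json')
      rescue StandardError
        nil # do nothing if it fails
      end
    end
  end
end
```

Taking an array of URLs covers the (admittedly uncommon) multiple-Foreman case with no extra machinery.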
>>
> Agreed, it was something in my head at the time as I had just been
> discussing it on IRC with a user for other reasons. In any case, if the
> above stories are solved from the UI side, then this case also works.
>
> TL;DR - no auto-updates in the UI, use notifications, and get the
> logs/features/restarts from the UI side.
>
I'm fine with that; as long as the proxy tries to notify us, we can decide
how to notify the user from the Foreman side (and of course, revoke the
cache).
On Tue, Jan 26, 2016 at 12:06 PM, Greg Sutcliffe wrote:
> On 25 January 2016 at 15:42, Ohad Levy wrote: