erp5_officejs_support_request_ui: Speed up the Support Request worklist searching.
Use a custom ERP5 script instead of global worklist searching.
-
Owner
I do not see how this could improve performance any better than erp5_worklist_cache.
Please explain and provide performance figures showing that it is indeed beneficial.
-
mentioned in merge request !760 (merged)
-
Developer
It has been a long time since I wrote this commit, so I tried to recall the scenario and checked the current implementation. If I recall correctly, the script `SupportRequestModule_getWorklistAsJson` fetches only the worklists specific to Support Request, whereas the normal way fetches all worklists. We thought this could "save" some time. The improvement written by @jerome (!760 (merged)) is better: he uses `ERP5Site_getTicketWorkflowWorklistInfoDict` instead of a hardcoded query string. This is the first time I have met the `getVarMatchKeys` API... But is it worth looping over all objects in `workflow.worklists`?

cc @tc
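For context, `getVarMatchKeys` on a DCWorkflow worklist definition returns the names of the variables the worklist matches on. Here is a rough, self-contained sketch of the kind of loop a script like `ERP5Site_getTicketWorkflowWorklistInfoDict` could perform; the classes below are stand-ins for illustration, not the real ERP5/DCWorkflow objects:

```python
# Hypothetical sketch: iterate over a workflow's worklists and collect
# their matching criteria via getVarMatchKeys(). The Worklist class is
# a mock stand-in for DCWorkflow's WorklistDefinition.

class Worklist:
    """Minimal stand-in for a DCWorkflow worklist definition."""
    def __init__(self, var_matches):
        # var_matches maps a variable name (e.g. 'simulation_state')
        # to the tuple of values the worklist matches.
        self.var_matches = var_matches

    def getVarMatchKeys(self):
        return list(self.var_matches.keys())

    def getVarMatch(self, key):
        return self.var_matches[key]

# The ticket workflow only has a handful of worklists, so looping over
# them is cheap compared to the catalog query that follows.
worklists = {
    'submitted_ticket_list': Worklist({'simulation_state': ('submitted',),
                                       'portal_type': ('Support Request',)}),
    'open_ticket_list': Worklist({'simulation_state': ('validated',),
                                  'portal_type': ('Support Request',)}),
}

def build_worklist_query_dict(worklists):
    """Collect, per worklist, the criteria needed to build a catalog query."""
    info = {}
    for worklist_id, worklist in worklists.items():
        info[worklist_id] = {key: worklist.getVarMatch(key)
                             for key in worklist.getVarMatchKeys()}
    return info

print(build_worklist_query_dict(worklists))
```

The point is that the loop itself touches only a few in-memory objects; the expensive part of worklist computation is elsewhere.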
-
Owner
Thanks for the feedback @Daetalus

> But is it worth looping over all objects in `workflow.worklists`?

In `ERP5Site_getTicketWorkflowWorklistInfoDict`, only the worklists from the ticket workflow are considered (there are 4 worklists), so I don't think it's an issue.

From my experience with user feedback, even if worklists with portal_workflow are very efficient in terms of server resources, users will say "it's slow" because they have to wait 5 minutes because of the cache. Some users admitted to me that, as a consequence, they were not using worklists at all. And that's a pity, because worklists are really excellent in a well-configured ERP5.
I feel that what we should try to improve is reducing this 5 minute delay. One idea I had was to index, in a `worklist` table, all documents that are currently in a worklist for a user, and to delete records from this table when documents are no longer in any worklist. As long as users "do their job" and worklists are well configured it should be fast, but otherwise it would become very slow, so it's not really better...

Another thing, which @tc already applied in the Nexedi ERP5, is that we don't need to calculate worklists on all pages, only when users request them. But it would also be nice if there could be some kind of notification ("hey, there's a new support request to close") based on worklists, or if the worklist view could highlight the "new" entries in worklists.
Anyway, thanks for the feedback, and hopefully one day we'll improve the user experience on worklists globally.
-
Owner
To reduce the effect of the cache, in a worklist SQL-cache setup (as opposed to the basic "each user gets their own 5 minute Zope-level cache"), a JP-validated but never implemented idea is to feed a new table with deltas each time a document changes state, and to use that table to update worklists. This table would then be flushed whenever we refresh the worklist cache, to catch any incremental inconsistency. This should improve the user experience.
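A minimal sketch of that delta idea, using an in-memory sqlite3 database as a stand-in for the catalog database; the table and column names here are illustrative, not the actual erp5_worklist_cache schema:

```python
# Sketch of the delta-table idea: a base count table refreshed by the
# periodic alarm, plus a delta table fed on every state change. Reads
# merge both, so counts stay current between alarm runs.
import sqlite3

db = sqlite3.connect(':memory:')
db.executescript("""
CREATE TABLE worklist_count (portal_type TEXT, state TEXT, count INTEGER);
CREATE TABLE worklist_delta (portal_type TEXT, state TEXT, delta INTEGER);
INSERT INTO worklist_count VALUES ('Support Request', 'submitted', 10);
""")

def record_transition(db, portal_type, old_state, new_state):
    """Feed the delta table each time a document changes state."""
    db.execute("INSERT INTO worklist_delta VALUES (?, ?, -1)",
               (portal_type, old_state))
    db.execute("INSERT INTO worklist_delta VALUES (?, ?, 1)",
               (portal_type, new_state))

def get_count(db, portal_type, state):
    """Current count = cached base count + pending deltas."""
    row = db.execute("""
        SELECT COALESCE((SELECT SUM(count) FROM worklist_count
                         WHERE portal_type=? AND state=?), 0)
             + COALESCE((SELECT SUM(delta) FROM worklist_delta
                         WHERE portal_type=? AND state=?), 0)
    """, (portal_type, state, portal_type, state)).fetchone()
    return row[0]

def refresh_cache(db, fresh_counts):
    """The periodic alarm: rebuild the base table and drop all deltas."""
    db.execute("DELETE FROM worklist_count")
    db.execute("DELETE FROM worklist_delta")
    db.executemany("INSERT INTO worklist_count VALUES (?, ?, ?)", fresh_counts)

record_transition(db, 'Support Request', 'submitted', 'open')
print(get_count(db, 'Support Request', 'submitted'))  # 9
print(get_count(db, 'Support Request', 'open'))       # 1
```

The periodic refresh is what keeps the two tables from drifting apart: any delta that was wrong or lost is overwritten by the full recount.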
-
Owner
Yes, there was also this idea. There's one thing I don't understand though: when we index a document in its new state, how can we calculate the delta against the old state?
If we take the example of a system state where we have 10 submitted support requests:

| portal_type     | state     | count |
|-----------------|-----------|-------|
| Support Request | submitted | 10    |

when one of those 10 submitted support requests is opened, we want to insert two lines so that we have:

| portal_type     | state     | count |
|-----------------|-----------|-------|
| Support Request | submitted | 10    |
| Support Request | submitted | -1    |
| Support Request | open      | 1     |

but when indexing the support request it is already in the open state, so I don't know how we can figure out that we should insert -1 for the submitted state. Maybe by selecting it from the catalog table before updating this table?
I could not find a good way, so I started thinking about other ideas, but there might be a way. The delta approach seems much better.
-
Owner
There is another complication for the delta approach: security. If any catalog security column changes, the lines must also change. While I can see how state could work (pre-transition interaction + post-transition interaction), for security I have no idea.
-
Owner
pre-transition interaction + post-transition interaction ... ah yes, that was the missing point for me. Then, if we assume all changes to security on the document go through the same API call, we could probably do the same dance of pre/post interactions.
-
Owner
Also, these approaches would need an alarm to "compress" the table, wouldn't they?
-
Owner
To me, compression would just be the normal SQL worklist cache alarm: every 5 minutes, refresh the "normal" table and remove all deltas; we are back in sync.
Another idea: update the table in place instead of inserting deltas elsewhere. But this may cause divergences, as I don't know whether "x = x + 1" is atomic (what if the commit-time "x" is different from the query-time "x"?). [EDIT]: by which I mean, what if another transaction made a change on the same row?
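The concern splits into two cases. A read-modify-write done in application code can indeed lose an update, whereas a relative SQL `UPDATE` is evaluated against the row's current value under the statement's row lock. A small demonstration, simulating the interleaving sequentially on one sqlite3 connection (real concurrency would involve two transactions, but the arithmetic of the lost update is the same):

```python
# "Query-time x vs commit-time x": two transactions that each read x and
# write back x + 1 can lose an increment; a relative UPDATE cannot,
# because it is applied to whatever value the row holds at execution.
import sqlite3

db = sqlite3.connect(':memory:')
db.execute("CREATE TABLE t (x INTEGER)")
db.execute("INSERT INTO t VALUES (10)")

# Read-modify-write: both "transactions" read x = 10 before either writes.
x1 = db.execute("SELECT x FROM t").fetchone()[0]
x2 = db.execute("SELECT x FROM t").fetchone()[0]
db.execute("UPDATE t SET x = ?", (x1 + 1,))
db.execute("UPDATE t SET x = ?", (x2 + 1,))
print(db.execute("SELECT x FROM t").fetchone()[0])  # 11: one increment lost

# Relative update: each statement increments the row's current value.
db.execute("UPDATE t SET x = x + 1")
db.execute("UPDATE t SET x = x + 1")
print(db.execute("SELECT x FROM t").fetchone()[0])  # 13: both applied
```

Whether this is safe in practice still depends on the database's isolation level and on how the surrounding transaction handles conflicts (e.g. deadlock or serialization retries), so the caution above is warranted.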
-
Owner
> To me, compression would just be the normal SQL worklist cache alarm: every 5 minutes, refresh the "normal" table and remove all deltas; we are back in sync.

Nice. I feel we also have to take care of concurrent transactions in that case, so that nothing modifies the worklist table between the moment the alarm reads from the catalog and the moment it writes the worklist table. But since this happens only every 5 minutes, a big lock on the table during alarm processing is probably OK, if that's needed.