5 minutes of effort = 1600% increase in performance

So, the other day I was humming along for one of our larger clients – putting the finishing touches on one of their applications. The app in question handles matching of vacancies against applications. The administrator needs to be able to quickly browse which vacancies have which status.

One vacancy can have one of three different statuses:

  1. View – Show the selected application for this vacancy.
  2. List – Lists all matching applications for this particular vacancy.
  3. Missing – No current applications exist that match the criteria for this vacancy.

Don't worry too much about the business logic – I'm only providing a quick background to give you some idea of what I was trying to achieve.


Oh, I nearly forgot! Of course this is an IBM Domino solution, as we are a "True Blue" IBM Premium Business Partner. Unfortunately this client was stuck with a fairly old IBM Domino server: 8.5.2, so I wasn't too keen on an XPages solution. Why? Because in my humble opinion XPages just hadn't matured enough yet. (Missing HTML5 features, no SSJS debugger, old Dojo release, loooong "first hit" boot times and so on – yes, all of this can be mended, worked around, patched etc. But sometimes you can just say enough is enough and go with old-school Domino development.)

Anywho, my first inclination was to create a WebQuerySave agent that did the matching server-side before presenting the result to the user – a very common approach. But the performance was really bad: the WQS agent took around 4000 ms even though the workload was light and the number of documents fairly low (in the hundreds). The page itself took around a second to load with an empty cache, so we were looking at a total page load time in the neighborhood of five seconds – not acceptable.

So I used an approach known as perceived performance. Take a clock as an example: if you use a clock that doesn't show seconds, time seems to move more slowly. If there's a lot going on, the general experience is that things are happening quicker. So I extracted my LotusScript code and placed it in an agent that gets called through AJAX. This way the page loads quickly, the user can start to interact with it immediately, and the state of each vacancy trickles in as the server sees fit.

Below is a screenshot of the application, waiting for the AJAX request to come back with the data. (The only vacancies that know their state are the ones that are booked – they're marked with "Visa" in the column to the right; the rest are pending.)


Finally, here we have the first incarnation of the code:

[gist id="10656980"]

Nothing too odd about the above – I would even dare to say a fairly common approach. I do what I can to speed up the process by using NotesViewEntryCollections and the ColumnValues property. I use a single view as the source, with a column that combines the particular criteria for each vacancy. The key can look something like this:

2015-05-27DSurgeon (Date + Slot + Role)

We use this key to find any matching requests; if no match is found, the vacancy is added to the string that's returned to the AJAX request. We only need the ones that don't have a match (Missing), since the first state (View) is stored with the vacancy and the second state (List) is everything that isn't missing. Ideally "Missing" should always be very few documents, so the data transferred over the wire should also be small, increasing performance further.
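Since the gist itself isn't reproduced here, the first approach can be sketched roughly like this. View names, column positions and the separator are assumptions for illustration – the real code lives in the gist above:

```lotusscript
' Sketch of the first incarnation: an agent called via AJAX that
' walks all vacancies and reports the ones with no matching request.
' View names and column positions are illustrative assumptions.
Sub Initialize
	Dim session As New NotesSession
	Dim db As NotesDatabase
	Dim vVacancies As NotesView
	Dim vMatchVacancy As NotesView
	Dim col As NotesViewEntryCollection
	Dim entry As NotesViewEntry
	Dim match As NotesViewEntry
	Dim key As String
	Dim missing As String

	Set db = session.CurrentDatabase
	Set vVacancies = db.GetView("(LookupVacancies)")
	Set vMatchVacancy = db.GetView("(MatchVacancy)")

	Set col = vVacancies.AllEntries
	Set entry = col.GetFirstEntry()
	Do Until entry Is Nothing
		' Column 0 is assumed to hold the combined key
		' (Date + Slot + Role), e.g. "2015-05-27DSurgeon"
		key = entry.ColumnValues(0)
		Set match = vMatchVacancy.GetEntryByKey(key, True)
		If match Is Nothing Then
			' No matching request: report this vacancy as "Missing".
			' Column 1 is assumed to hold the vacancy's ID.
			missing = missing & entry.ColumnValues(1) & ";"
		End If
		Set entry = col.GetNextEntry(entry)
	Loop

	' Return the result to the AJAX caller as plain text
	Print "Content-Type: text/plain"
	Print missing
End Sub
```

Reading ColumnValues off NotesViewEntry objects avoids opening the backend documents, which is already much cheaper than a GetFirstDocument/GetNextDocument loop.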


So, everything was "hunky-dory" then. The application loaded quickly and felt responsive to the user, but… the bad performance of the LotusScript code was gnawing at me…

After a murky night of coding I came up with the following:

[gist id="c2c7c7a60adfe7d801b6"]

The big difference here is the use of the NotesViewNavigator. When using the NotesViewNavigator object you have the opportunity to use its cache via "BufferMaxEntries". I set it to 100 in my case, as the view will show no more than 100 rows at a time. I also set the EntryOptions to VN_ENTRYOPT_NOCOUNTDATA, as I have no parent/child relationships in the view.
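The navigator part can be sketched like this. Again, the view name is an illustrative assumption, and the constant value (1) is taken from the Domino documentation for EntryOptions:

```lotusscript
' VN_ENTRYOPT_NOCOUNTDATA = 1 per the Domino documentation:
' skip child/descendant/sibling counts, which we don't need in a flat view.
Const VN_ENTRYOPT_NOCOUNTDATA = 1

Sub Initialize
	Dim session As New NotesSession
	Dim db As NotesDatabase
	Dim vVacancies As NotesView
	Dim nav As NotesViewNavigator
	Dim entry As NotesViewEntry

	Set db = session.CurrentDatabase
	Set vVacancies = db.GetView("(LookupVacancies)")
	vVacancies.AutoUpdate = False        ' don't refresh the view while reading

	Set nav = vVacancies.CreateViewNav()
	nav.BufferMaxEntries = 100           ' read-ahead cache: the view shows
	                                     ' at most 100 rows at a time
	nav.EntryOptions = VN_ENTRYOPT_NOCOUNTDATA

	Set entry = nav.GetFirst()
	Do Until entry Is Nothing
		' ...same key lookup as before, using entry.ColumnValues...
		Set entry = nav.GetNext(entry)
	Loop
End Sub
```

The cache means the navigator fetches entries from the server in blocks instead of one round trip per entry, which is where the bulk of the speed-up comes from.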

All in all, the running time of the agent went from 4000 ms to 25 ms! Pretty darn impressive if anyone were to ask me!


This technique is nothing new, but the performance gains are so huge I thought it was well worth repeating.

I was heavily influenced by this article: "Fast Retrieval of View Data Using the ViewNavigator Cache – V8.52", and I highly recommend you check it out for more details.



After re-running the performance tests in my local test environment I've "only" managed a 200% increase in performance. Beware, YMMV….



  1. Sean Haggerty  May 19, 2017

    If the views are automatically refreshing themselves to show new entries/reservations, perhaps use a Folder.

    The Folder has no refresh issues, and then schedule for reservations to be added to the (static) Folder.


  2. Tomas Nielsen  June 9, 2014

    I think the fact that you removed the refresh of the two views made a huge impact on performance as well:


    Call vRequest.refresh()

    Call vMatchVacancy.refresh()


    Refreshing views at runtime is expensive and not always needed. (There are some special cases, though.)

    • Joacim Boive  June 11, 2014

      Good catch! =)

      Actually, the production code contains these rows as well. I’ve updated the Gist.

      The operation is expensive, as you point out, but not that bad in our use case with relatively few documents (a couple of hundred at most). I did a quick test just now to add them back: the first hit showed a response time of ≈300 ms, but on the next request it dropped to ≈25 ms again, so somewhat inconclusive. Nothing to lose any sleep over anyway – anything below 300 ms is perceived as instant.

  3. Patrick Kwinten  June 9, 2014

    Can you make any estimation how much development time you would have gained/saved by upgrading to 8.5.3 first and then build the solution in XPages?

    8.5.3 + XPages seems enough mature for me.

    • Joacim Boive  June 11, 2014

      Hard to say. It would have been two totally different approaches – right now it reloads the entire page when you move between the tabs; an XPages solution would have done that with a partial refresh. Of course I could have rolled my own partial refresh using plain JS, but it wasn't a requirement and the page is already very quick – at least in a modern browser (read Chrome, Firefox and IE >= 9).

      I've used CSS3Pie to achieve the rounded corners in IE, and the drawback here is that between tab switches you can clearly see the corners being redrawn on old IE versions. It would've been more efficient to use a CSS sliding-doors approach for old IE, but then newer browsers are punished and the design would be less flexible. With XPages you would only see the layout change on the first hit, which would have been preferred of course. But, on the other hand, this page is way leaner than any XPages app and we don't have the issue XPages has with the first hit (when it needs to bring the app into memory).

      As you can see, the design is all custom, so I wouldn't have any benefit from using XPages here either.


      So, to conclude: I don’t know. =)

      I suppose the development time would have been roughly the same, because I wouldn't have been able to use the ready-made components of XPages anyway.

