The initial target was to reduce the volume of cart expirations (timeout attrition) by shortening the time between creating the cart and presenting the user with conversion incentive options. To improve the win rate, the API team approached the issue from two angles: the server side and the partner side. From that, two strategies were employed:

  1. Reduce the number of API calls into the inventory system, and
  2. Set up a partner feed that can cache up-sell and incentive items to reduce delivery time.

Because of the caching strategy in the partner API, initial requests (even before the user added any item to the cart) were slow during the first seconds of a sale event, because the cache had not yet been populated with values that could serve other requests. A value cached on one server benefitted only that specific instance, so each value had to be queried out of the SOLR search index a total of 16 times (once from each application server) before cached values could be served during those early seconds. This had a cascade effect: the SOLR index was “hit” many more than 16 times across the tier, as the spike in initial requests outpaced each server’s ability to recognize that a cached value was available. In essence, the cache could not “catch up” with demand until the request rate had fallen significantly.
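The cold-start cascade can be sketched with a small simulation (a hypothetical model, not the production code): each of the 16 servers caches independently, and any request that lands on a server whose first SOLR fetch is still in flight also falls through to the index.

```python
import random

NUM_SERVERS = 16      # application servers, each with an independent cache
FETCH_LATENCY = 0.5   # assumed seconds for a SOLR round trip

def simulate_burst(arrival_times):
    """Count how many requests fall through to the SOLR index.

    A request arriving while its server's first fetch is still in
    flight also misses, so during a spike the index is hit many more
    than NUM_SERVERS times.
    """
    cache_ready_at = [None] * NUM_SERVERS   # when each server's cache is populated
    index_hits = 0
    for t in sorted(arrival_times):
        s = random.randrange(NUM_SERVERS)   # load balancer picks a server
        if cache_ready_at[s] is None:       # first request: start the fetch
            cache_ready_at[s] = t + FETCH_LATENCY
            index_hits += 1
        elif t < cache_ready_at[s]:         # fetch still in flight: another miss
            index_hits += 1
        # otherwise the value is served from that instance's cache
    return index_hits

# 500 requests in the first second of a sale event: typically far more than 16 hits
print(simulate_burst([random.uniform(0, 1.0) for _ in range(500)]))
```

Spacing the same requests out leaves at most one miss per server; it is the burst that multiplies the index load.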

So we created a cache “warming” method to request sale-event data before it was generally available to potential customers. I created a user that was provisioned with access to “sale” event information before the item was officially active, then programmatically called each application cache server directly to request that information just before the official start time of the sale event.
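A minimal sketch of that warming loop, re-created in Python (the original ran inside soapUI; the host names, URL path, and credentials here are placeholders, not the real deployment):

```python
import base64
import urllib.request

# Placeholders – the real deployment enumerated the 16 application cache
# servers, and the user below stands in for the one provisioned with
# early access to "sale" event data.
APP_SERVERS = [f"app{i:02d}.internal.example.com" for i in range(1, 17)]
WARM_USER, WARM_PASS = "cache-warmer", "not-the-real-secret"

def build_warm_urls(event_id):
    """One warming URL per application server (path is illustrative)."""
    return [f"http://{host}/api/events/{event_id}/incentives" for host in APP_SERVERS]

def warm_cache(event_id):
    """Hit every app server directly so each instance caches the sale event."""
    token = base64.b64encode(f"{WARM_USER}:{WARM_PASS}".encode()).decode()
    for url in build_warm_urls(event_id):
        req = urllib.request.Request(url, headers={"Authorization": "Basic " + token})
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                resp.read()   # body is discarded; the cache fill is the point
        except OSError:
            pass              # one cold instance is better than aborting the warm-up
```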

I created this process in the soapUI framework, as I was managing infrastructure for the partner API group at the time – and test management was part of that purview. The advantage of using soapUI was that the process could be compiled as a jar and run from any server. So the network operations group deployed the jar to a server that had access to the cache servers and scheduled it to run just before the hour.

First, I would call directly into the database that resides behind the SOLR tier and extract the upcoming items that would be listed in the next hour. (Side note – this is actually MySQL, but the markup for that language is broken, so I’m using the Transact-SQL markup for this example.)
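A sketch of that seed-data query, with sqlite3 standing in for the MySQL database and with table and column names that are illustrative guesses rather than the real schema:

```python
import sqlite3
from datetime import datetime, timedelta

# sqlite3 stands in for the MySQL database behind the SOLR tier;
# the table and column names are illustrative, not the real schema.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sale_events
                (item_id INTEGER, title TEXT, start_time TEXT, status TEXT)""")
conn.executemany("INSERT INTO sale_events VALUES (?, ?, ?, ?)", [
    (1, "Front row",  "2013-06-01 12:30:00", "scheduled"),
    (2, "Balcony",    "2013-06-01 14:00:00", "scheduled"),   # outside the hour
    (3, "VIP add-on", "2013-06-01 12:30:00", "active"),      # already live
])

def upcoming_items(conn, now):
    """Seed query: scheduled items whose sale starts within the next hour."""
    window_end = now + timedelta(hours=1)
    return conn.execute(
        """SELECT item_id, title, start_time
             FROM sale_events
            WHERE status = 'scheduled'
              AND start_time BETWEEN ? AND ?
            ORDER BY start_time""",
        (now.isoformat(sep=" "), window_end.isoformat(sep=" ")),
    ).fetchall()

print(upcoming_items(conn, datetime(2013, 6, 1, 12, 0, 0)))
# → [(1, 'Front row', '2013-06-01 12:30:00')]
```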

This became the seed data that fed the SOLR requests processed against each server. Below is a re-creation of the filter the SOLR service uses to retrieve up-sell and conversion items; it is concatenated into the request that’s sent for each item from the soapUI jar. For all intents and purposes, these requests “look like” the same form of request that’s received directly from the API request layer, except for the time band I’m imposing on the request.
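Sketched here as a Python helper that builds the SOLR request; the field names (`related_item_id`, `event_start`, etc.) and the collection path are illustrative assumptions, but the shape shows the filter plus the imposed time band:

```python
from datetime import datetime, timedelta
from urllib.parse import urlencode

def build_solr_query(item_id, sale_start):
    """Build the up-sell/incentive retrieval request for one seed item.

    Field names and collection path are illustrative; the final filter
    is the time band imposed on the warming request.
    """
    band = (sale_start - timedelta(minutes=5), sale_start)
    params = {
        "q": f"related_item_id:{item_id}",
        "fq": [
            "item_type:(upsell OR incentive)",
            "inventory_count:[1 TO *]",
            # the time band imposed on the warming request:
            f"event_start:[{band[0]:%Y-%m-%dT%H:%M:%SZ} TO {band[1]:%Y-%m-%dT%H:%M:%SZ}]",
        ],
        "wt": "json",
        "rows": "10",
    }
    return "/solr/incentives/select?" + urlencode(params, doseq=True)
```

The jar iterates over the seed rows from the previous query and sends one such request per item to each application server.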

From that, the cache is populated with the values from the SOLR index just before the sale event occurs. With a cache refresh interval of 2 minutes, this gives the values sufficient “shelf life” that the early spike of requests can be served directly from the cache. While the tool does verify the response, deep validation is not necessary – the function below simply checks that a valid cart item was returned.
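A Python re-creation of that shallow check; the SOLR-style `response`/`docs`/`item_id` shape is an assumption, not the exact production schema:

```python
import json

def is_valid_cart_item(body):
    """Shallow check only: did we get back at least one item-shaped doc?

    Assumes a SOLR-style JSON response; no deep validation is done.
    """
    try:
        doc = json.loads(body)
    except ValueError:          # not JSON at all (e.g. an HTML error page)
        return False
    if not isinstance(doc, dict):
        return False
    docs = doc.get("response", {}).get("docs", [])
    return bool(docs) and all(isinstance(d, dict) and "item_id" in d for d in docs)
```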