Cloud Caching

What is Cloud Caching?

Many of our customers use Data Links to securely access data on their legacy systems. These systems are often slow, sometimes taking tens of seconds to fetch data. In today’s mobile-centric world, fast data access matters more than ever: users quickly become frustrated when they have to wait for your app to display their data.

To address this problem, we created Cloud Caching -- a simple way to provide a better experience for your users by letting Kinvey automatically maintain a cached version of your data. This guide shows you how to configure and use Cloud Caching.

Cloud Caching is a new feature and currently not part of our base offering. Please contact your representative to enable Cloud Caching for you.

Enabling Cloud Caching

Cloud Caching can be configured for any collection backed by a Service (Custom or RAPID). After you have created a Service and configured a collection to use it, navigate to the Collection Settings page (Collections->Settings icon on card) and click Cloud Caching in the menu.

Configuration

Check the Use cache for this collection box to enable Cloud Caching for your collection.

Each entry in your collection cache is valid only for a specific duration before it expires. Once an entry expires, the next time a user requests it from Kinvey, the data is fetched from the Service and re-cached, enabling quick access the next time it is requested. The default Time-To-Live (TTL) for your cached data -- that is, the maximum amount of time an entry remains in the cache before it must be refreshed -- is 300 seconds. To change this default, enter a different value in the TTL in seconds box.
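Kinvey manages this expiry logic server-side, but the TTL concept can be illustrated with a short sketch (the class and field names here are hypothetical, not part of any Kinvey API):

```python
import time

class CacheEntry:
    """A cached value stamped with the time it was stored."""
    def __init__(self, value, ttl_seconds=300):
        self.value = value
        self.ttl_seconds = ttl_seconds
        self.cached_at = time.time()

    def is_expired(self):
        # An entry is valid for at most `ttl_seconds` after being cached;
        # after that, the next request triggers a fresh fetch from the Service.
        return time.time() - self.cached_at > self.ttl_seconds

entry = CacheEntry({"_id": "abc123", "name": "example"}, ttl_seconds=300)
print(entry.is_expired())  # False immediately after caching
```

Note that expiry is lazy: an expired entry is not proactively deleted, it simply no longer counts as a valid cache hit.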

Effects of Time-To-Live on data freshness

The value of TTL determines the balance between performance and staleness of data. A higher value means that data will be quickly retrieved from the cache (rather than from your Service) more often, but that data may be out of date. A lower value means that your data is more likely to be up-to-date, but since your cache entries will expire more frequently, more requests will require a round-trip to your slower Service.

For example, assume that your TTL is set to 300 seconds (5 minutes). The first time one of your users requests data, Kinvey will fetch it from your Service, return it to the user, and cache the data. Within the next 5 minutes, any user requesting this same piece of data will receive the cached version, avoiding the costly trip to your Service. Note, however, that this data may be up to 5 minutes (your TTL value) old.

It’s important to consider the implications of shorter vs. longer TTL values and make the best choice for your use case: set the TTL to the highest value that still meets your data freshness requirements.

Cloud Caching behavior

Cloud Caching is completely invisible to your users. Once enabled, it works behind the scenes to reduce total round-trip times: users make the same requests but receive significantly quicker responses. This requires no change in your client app -- the only thing you need to do is enable Cloud Caching for your collection, and Kinvey does the rest.

Fetch behavior

Kinvey caches both individual entities and whole queries. When a user requests data from a Cloud Caching-enabled, Service-backed collection, Kinvey first checks whether the cache contains the data needed to fulfill the request. When doing so, it ignores any data that has expired -- that is, any data added to the cache more than TTL seconds ago. If the data is found in the cache (a “cache hit”), it is simply returned to the user. If not (a “cache miss”), Kinvey requests the data from the Service, adds it to the cache so that future requests benefit from quicker round-trip times, and returns it to the user.

Update behavior

When a user makes a request to update or delete data in a Cloud Caching-enabled collection, Kinvey invalidates the corresponding cache entry. The next time this data is requested, Kinvey fetches the most recent copy from the Service and updates the cache. This behavior lowers the risk that your users will receive stale data from the cache, while still keeping Service-bound requests to a minimum.
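The fetch and update behavior described above amounts to a read-through cache with invalidation on write. Here is a minimal sketch of that pattern (all class and method names are hypothetical; the real cache is managed entirely on Kinvey’s servers):

```python
import time

class ReadThroughCache:
    """Read-through cache with TTL expiry and invalidation on update."""

    def __init__(self, backing_service, ttl_seconds=300):
        self.service = backing_service   # stand-in for the slow Service
        self.ttl = ttl_seconds
        self.store = {}                  # key -> (value, cached_at)

    def get(self, key):
        hit = self.store.get(key)
        if hit is not None:
            value, cached_at = hit
            if time.time() - cached_at <= self.ttl:
                return value             # cache hit: skip the Service
            del self.store[key]          # expired: treat as a miss
        value = self.service.fetch(key)  # cache miss: round-trip to the Service
        self.store[key] = (value, time.time())
        return value

    def update(self, key, value):
        self.service.save(key, value)
        self.store.pop(key, None)        # invalidate; next get() refetches

class SlowService:
    """Stand-in for a Service-backed collection."""
    def __init__(self):
        self.data = {"abc": {"_id": "abc", "n": 1}}
        self.fetch_count = 0
    def fetch(self, key):
        self.fetch_count += 1
        return self.data[key]
    def save(self, key, value):
        self.data[key] = value

svc = SlowService()
cache = ReadThroughCache(svc)
cache.get("abc"); cache.get("abc")  # second read is served from the cache
print(svc.fetch_count)              # 1
```

The key property to notice: repeated reads within the TTL never touch the Service, while an update immediately drops the cached copy so stale data is not served afterwards.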

Cache size limits and replacement policy

Each Cloud Caching-enabled collection is allocated its own cache, and each collection cache can retain up to 100 MB of data. As data is retrieved from your Service and the per-collection cache fills up, entities are evicted using a least-recently-used (LRU) policy to make room for new data. This policy gives frequently accessed data a greater chance of remaining in the cache for longer periods of time, maximizing the performance benefits of Cloud Caching.

Kinvey tracks the most recent time each entry was retrieved, updating this timestamp every time the entry is successfully fetched from the cache. When caching new data would exceed the cache’s size limit, entries are removed until there is enough room for the new data. When choosing which entries to remove, the cache prioritizes those that were retrieved least recently.

Manual cache invalidation

In certain cases, you may need to empty the entire cache for a collection. Kinvey allows you to perform this administrative action from the console.

To do so, navigate to the Collection Settings page (Collections->Settings icon on card), and scroll down to find the Cloud Caching section. Click the Invalidate Cache button to invalidate the cache for this collection, removing all cached entities and queries.

Further considerations

Each collection cache is shared among all users of that collection, using the _id field as the cache key for entities, and the query string (including query modifiers) as the cache key for queries. Our assumption is that the backing Service will return a consistent set of data when a specific entity is requested or a query is executed, regardless of user context. If your Service returns data that differs based on which user is making the request, your use case may not be compatible with Cloud Caching in its current form. Carefully consider this based on the behavior of your Service and your app’s use case.
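The user-independence of cache keys can be made concrete with a sketch. The key formats below are hypothetical (Kinvey does not document its internal key layout); the point is that neither key incorporates any user identity, so two different users issuing the same request share one cache entry:

```python
from urllib.parse import urlencode

def entity_cache_key(collection, entity_id):
    """Entities are keyed by their _id (hypothetical key format)."""
    return f"{collection}:entity:{entity_id}"

def query_cache_key(collection, query, modifiers=None):
    """Queries are keyed by the full query string, including
    modifiers such as sort, limit, and skip (hypothetical format)."""
    params = {"query": query}
    params.update(modifiers or {})
    return f"{collection}:query:{urlencode(sorted(params.items()))}"

print(entity_cache_key("books", "abc123"))
# Same query + same modifiers -> same key, regardless of which user asks
k1 = query_cache_key("books", '{"author":"Kafka"}', {"limit": 10})
k2 = query_cache_key("books", '{"author":"Kafka"}', {"limit": 10})
print(k1 == k2)  # True
```

If per-user results are returned by the Service, the first user's response would be cached under this shared key and then served to everyone else until the entry expires -- which is exactly the incompatibility the paragraph above warns about.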

Got a question?