Data Store

The simplest use case of Kinvey is storing and retrieving data to and from your cloud backend.

The basic unit of data is an entity and entities of the same kind are organized in collections. An entity is a set of key-value pairs which are stored in the backend in JSON format. Kinvey's libraries automatically translate your native objects to JSON.

Kinvey's data store provides simple CRUD operations on data, as well as powerful filtering and aggregation.

Collections

To start working with a collection, you need to instantiate a DataStore in your application. Each DataStore represents a collection on your backend.

// we use the example of a "Book" datastore
DataStore<Book> dataStore = DataStore<Book>.Collection("books");

Through the remainder of this guide, we will refer to the instance we just retrieved as dataStore.

Optionally, you can configure the DataStore type when creating it. To understand the types of data stores and when to use each type, refer to the Data Store Types section.

Data Store Types

The Cache store type has been deprecated, and will be removed from the SDK entirely as early as December 1, 2019. If you are using the Cache store, please make changes to move to a different store type. The Auto store type has been introduced as a replacement for the Cache store.

When you get an instance of a data store in your application, you can optionally select a data store type. The type determines how the library handles intermittent or prolonged interruptions in connectivity to the backend.

Choose a type that most closely resembles your data requirements.

  • DataStoreType.Sync: You want a copy of some (or all) of the data from your backend to be available locally on the device, and you would like to sync it periodically with the backend.
  • DataStoreType.Auto: You want to read and write data primarily from the backend, but want to be more robust against varying network conditions and short periods of network loss.
  • DataStoreType.Network: You want the data in your backend to never be stored locally on the device, even if it means that the app will not offer offline use.
  • DataStoreType.Cache: Deprecated You want data stored on the device to optimize your app's performance and/or provide offline support for short periods of network loss. This is the default data store type if a type is not explicitly set.

Head to the Specify Data Store Type section to learn how to request the data store type that you selected.

Entities

The Kinvey service has the concept of entities, which represent a single resource.

using Newtonsoft.Json;
using Kinvey;

[JsonObject(MemberSerialization.OptIn)]
public class Book : Kinvey.Entity
{
  [JsonProperty("title")]
  public string Title { get; set; }

  [JsonProperty("details")]
  public string Details { get; set; }

  [JsonProperty("author")]
  public string Author { get; set; }
}

Fetching

You can retrieve entities by either looking them up using an ID, or by querying a collection.

Regardless of the chosen fetch method, results from a single query are limited to 10,000 entities or 100 MB of data, whichever is reached first. If your query produces more results than fit within these limits, only the first 10,000 entities or 100 MB are returned.

To overcome these limitations, use paging.
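For example, a paging loop can be built with the Skip and Take operators described later in this guide (a sketch; the page size and filter are arbitrary):

const int pageSize = 1000;
int skip = 0;
List<Book> allBooks = new List<Book>();

while (true)
{
    // Fetch one page of results at a time
    var page = await dataStore.FindAsync(
        dataStore.Where(x => x.Title != null)
                 .Skip(skip)
                 .Take(pageSize));
    allBooks.AddRange(page);

    if (page.Count < pageSize) break; // last page reached
    skip += pageSize;
}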

Note that for Sync, Auto, and Cache stores the entity limit does not apply if the result is entirely returned from the local cache.

Fetching by Id

To fetch a single entity by ID, call dataStore.findById.

// entityId holds the _id of the entity to fetch
Book book = await dataStore.FindByIDAsync(entityId);

Fetching by Query

To fetch all entities in a collection, call dataStore.find.

List<Book> books = new List<Book>();

books = await dataStore.FindAsync();

To fetch multiple entities using a query, call dataStore.find and pass in a query.

List<Book> booksByQuery = new List<Book>();

var q = dataStore.Where(x => x.Title.StartsWith("The", StringComparison.Ordinal));

booksByQuery = await dataStore.FindAsync(q);

Note that an empty list is returned when the query matches zero entities.

Saving

You can save an entity by calling dataStore.save.

try
{
    Book savedBook = await dataStore.SaveAsync(new Book() {
        Title = "MyBook"
    });
}
catch (KinveyException ke)
{
  // Handle error
}

The save method acts as upsert. The library uses the _id property of the entity to distinguish between updates and inserts.

  • If the entity has an _id, the library treats it as an update to an existing entity.

  • If the entity does not have an _id, the library treats it as a new entity. The Kinvey backend assigns an automatic _id for a new entity.
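For example, the two cases can be sketched as follows (the titles shown are arbitrary):

// Insert: no _id is set, so the backend generates one
Book newBook = await dataStore.SaveAsync(new Book() {
    Title = "New Book"
});

// Update: the saved entity now has an _id, so saving it again
// updates the existing entity instead of creating a new one
newBook.Title = "New Book, Second Edition";
Book updatedBook = await dataStore.SaveAsync(newBook);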

When using custom values for _id, you should avoid values that are exactly 12 or 24 characters in length. Such values are automatically converted to BSON ObjectID and are not stored as strings in the database, which can interfere with querying and other operations.

To ensure that no items with a 12- or 24-character _id are stored in the collection, you can create a pre-save hook that either prevents saving such items or appends an additional character (for example, an underscore) to the _id:

function onPreSave(request, response, modules) {
  var _id = request.body._id;
  if (_id && (_id.length === 12 || _id.length === 24)) {
    request.body._id = _id + "_";
  }
  response.continue();
}

Creating Multiple Entities at Once

To create multiple entities at once, call dataStore.SaveAsync and pass in a collection containing the entities that you want to create:

try
{
  var result = await dataStore.SaveAsync(listOfBooks);

  // Contains the data about the entities that were created successfully
  var entities = result.Entities;

  // Contains errors for entities that were not created successfully
  var errors = result.Errors;
}
catch (KinveyException ke)
{
  // Handle error
}

The response comprises an entities array listing the items that were created successfully and an errors array that appears when some items failed to be created. Here is an example response for a request that attempts to create two entities but succeeds for only one of them:

{
  "entities": [
    {
      "_id": "entity-id",
      "field": "value1",
      "_acl": {
        "creator": "5ef49b7576723200150be295"
      },
      "_kmd": {
        "lmt": "2020-07-09T16:16:37.597Z",
        "ect": "2020-07-09T16:16:37.597Z"
      }
    },
    null
  ],
  "errors": [
    {
      "error": "KinveyInternalErrorRetry",
      "description": "The Kinvey server encountered an unexpected error. Please retry your request.",
      "debug": "An entity with that _id already exists in this collection",
      "index": 1
    }
  ]
}

Multi-insert save behaves differently from single-entity save. When given a list of entities, save always attempts to create each entity, even if it has an _id field, rather than updating an existing entity with the same _id. In other words, multi-insert save is a pure insert operation, not an upsert like single-entity save. If an entity in the list already exists in the backend collection, the operation returns a "KinveyInternalErrorRetry - An entity with that _id already exists in this collection" error for that entity, as shown in the response example above.

Deleting

To delete an entity, call dataStore.removeById and pass in the entity _id.

KinveyDeleteResponse kdr = await dataStore.RemoveAsync(book.ID);

Deleting Multiple Entities at Once

To delete multiple entities at once, call dataStore.remove. Optionally, you can pass in a query to delete only the entities matching the query.

Support for Delete by Query is not currently available, but is coming soon.

Querying

Queries to the backend are performed through LINQ. LINQ provides a number of methods to add filters and modifiers to the query, although the Kinvey-Xamarin library only supports a subset of these operators. Querying is performed on the DataStore by using the FindAsync(query) method, where the query that is passed into the method can be constructed using LINQ syntax.

Under most circumstances, you want to fetch data from your backend that match a specific set of conditions. LINQ provides a mechanism to build up a query for retrieval of a list of items from a collection. For example, you may want to retrieve a sorted list from your "books" collection:

DataStore<Book> dataStore = DataStore<Book>.Collection("books");

var query = from book in dataStore
  orderby book.Title
  select book;

List<Book> booksFromCache = new List<Book>();

// The KinveyDelegate is used to obtain the intermediate results
// of a find operation. In the Cache data store case, the `onSuccess()` callback
// of the delegate will be called back with results returned from the cache
// and then the method will return with data from the network.
KinveyDelegate<List<Book>> cacheDelegate = new KinveyDelegate<List<Book>>()
{
  onSuccess = (List<Book> results) => booksFromCache.AddRange(results),
  onError = (Exception e) => Console.WriteLine(e.Message)
};

List<Book> booksFromNetwork = await dataStore.FindAsync(query, cacheDelegate);

Operators

In addition to exact match and null queries, you can query based upon several types of expressions: comparison, set match, string and array operations.

Logical Operators

  • && - matches records where both conditions are true
  • || - matches records where either of the conditions are true

Ordering operators

  • orderby <field> - orders the result set by the ascending value of the specified field.

  • orderby <field> descending - orders the result set by the descending value of the specified field.

Kinvey Query Strings

The Kinvey DataStore also provides a FindWithMongoQueryAsync(string kinveyQuery) method, which takes a raw Kinvey-style query string as an alternative to the LINQ syntax. Kinvey uses the MongoDB query syntax, and this method provides complete flexibility for expressing queries.

Modifiers

Modifiers can be applied to a query in order to control how query results are presented. This includes returning sections of the results, sorting results, and only returning specific fields of an entity.

Limit and Skip

Depending on the size of the results from a query, you may choose to receive the results in smaller sections of the total result set. This is where the concept of limit and skip comes into play. Providing a limit size ensures that the number of results passed back does not exceed a certain number. In conjunction with limit, providing a skip count allows you to offset where your results start inside the total result set. Used together, these modifiers can achieve paging of results.

In LINQ, limit is achieved using the Take operator and skip is done using the Skip operator. The example below shows how to take the first 30 books from the book store in blocks of 10:

var queryBlock1 = bookStore
    .Where(x => x.Title.StartsWith("How To", StringComparison.Ordinal))
    .Skip(0)
    .Take(10);
var queryBlock2 = bookStore
    .Where(x => x.Title.StartsWith("How To", StringComparison.Ordinal))
    .Skip(10)
    .Take(10);
var queryBlock3 = bookStore
    .Where(x => x.Title.StartsWith("How To", StringComparison.Ordinal))
    .Skip(20)
    .Take(10);

Kinvey imposes a limit of 10,000 entities on a single request to fetch data stored in the backend. If you specify a Take value greater than 10,000, the backend will silently limit the results to only the first 10,000 entities. For this reason, we strongly recommend fetching your data in pages.

Note that for Auto, Sync, and Cache stores the entity limit does not apply if the result is entirely returned from the local cache.

Sort

As mentioned above in the Ordering Operators section, results from a query can be modified in order to be returned in sorted order (either ascending or descending), based on a particular field.

Order by Ascending

// ascending
var query = bookStore
    .Where(x => x.Title.StartsWith("How To", StringComparison.Ordinal))
    .OrderBy(x => x.Title);

Order by Descending

// descending
var query = bookStore
    .Where(x => x.Title.StartsWith("How To", StringComparison.Ordinal))
    .OrderByDescending(x => x.Title);

Field Selection

There may be instances where you do not want to retrieve entire entities, but only partial entities containing the fields that you are interested in. To do this, a field selection modifier can be added to a query. In LINQ, this can be accomplished using the select operator to return only a single field.

// single field selection
var query = from book in bookStore
            where book.Details.StartsWith("How To", StringComparison.Ordinal)
            select book.Title;

// the projection returns titles (strings), not full Book entities
var listOfHowToTitles = await bookStore.FindAsync(query);

In the case where multiple fields are required, this can be accomplished in C# using anonymous types.

// multiple field selection using anonymous types
var query = from book in bookStore
            where book.Details.StartsWith("How To", StringComparison.Ordinal)
            select new { book.Title, book.Author };

// the anonymous-type projection cannot be stored in a List<Book>
var listOfHowToBooks = await bookStore.FindAsync(query);

Saving entities after retrieving them using Field Selection will result in the loss of all fields not selected. In addition, these partial entities will not be available for use with Caching & Offline Saving.

Counting

To count the number of objects in a collection, use the GetCountAsync() method.

uint count = await dataStore.GetCountAsync();

To count the number of objects in a collection that satisfy a query, use GetCountAsync(query).
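For example (the query field and value shown are arbitrary):

// Count only the books by a particular author
var q = dataStore.Where(x => x.Author == "Jane Doe");
uint matchingCount = await dataStore.GetCountAsync(q);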

Caching and Offline

A key aspect of good mobile apps is their ability to render responsive UIs, even under conditions of poor or missing network connectivity. The Kinvey library provides caching and offline capabilities for you to easily manage data access and synchronization between the device and the backend.

Kinvey's DataStore provides configuration options to solve common caching and offline requirements. If you need better control, you can utilize the options described in Granular Control.

Specify Data Store Type

When initializing a new DataStore to work with a Kinvey collection, you can optionally specify the DataStore type. The type determines how the SDK handles data reads and writes in full or intermittent absence of connection to the app backend.


Sync

Configuring your data store as Sync allows you to pull a copy of your data to the device and work with it completely offline. The library provides APIs to synchronize local data with the backend.

This type of data store is ideal for apps that need to work for long periods without a network connection.

Here is how you should use a Sync store:

// Get an instance
string collectionName = "books";
DataStore<Book> dataStore = DataStore<Book>.Collection(collectionName,
                                                       DataStoreType.SYNC
);

// Pull data from your backend and save it locally on the device.
var pullResponse = await dataStore.PullAsync();

// Find data locally on the device.
var books = new List<Book>();

books = await dataStore.FindAsync();

// Save an entity locally on the device. This will add the item to the
// sync table to be pushed to your backend at a later time.
var book = new Book()
{
    Title = "My First Book"
};
await dataStore.SaveAsync(book);

// Sync local data with the backend
// This will push data that is saved locally on the device to your backend; 
// and then pull any new data on the backend and save it locally.
var syncResponse = await dataStore.SyncAsync();

The Pull, Push, Sync, and Sync Count APIs allow you to synchronize data between the application and the backend.

Auto

Configuring your data store as Auto allows it to primarily work directly with the backend. However, during short network interruptions, the library falls back to a local data store and returns any results available on the device. With this store type, results from previous read requests over the network are stored on the device, which is what makes it possible to retrieve results locally when the network cannot be reached.

When your app makes a save request during a network outage, this store type saves the data locally and creates a "pending writes queue" entry to represent the save. You can use the Push operation as soon as the network connection resumes to save the data in the data store.

The following example shows how to use an Auto store:

// Get an instance
string collectionName = "books";
DataStore<Book> dataStore = DataStore<Book>.Collection(collectionName,
                                                       DataStoreType.AUTO);

List<Book> autoStoreResults = await dataStore.FindAsync();

// Save an entity. The entity will be saved to the device and then the backend. 
// If you do not have a network connection, the entity will be stored
// in local storage and will be available for you to push to the backend
// when network becomes available.
Book book = new Book()
{
    Title = "My First Book"
};
await dataStore.SaveAsync(book);

You can think of this data store as a more robust Network data store, where local storage is used to provide more continuity when the network is temporarily unavailable.
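Once the network connection resumes, the queued writes can be pushed explicitly:

// Push any writes that were queued while the device was offline
var pushResponse = await dataStore.PushAsync();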

Cache

The Cache store type has been deprecated, and will be removed from the SDK entirely as early as December 1, 2019. If you are using the Cache store, please make changes to move to a different store type. The Auto store type has been introduced as a replacement for the Cache store.

Configuring your data store as Cache allows you to use the performance optimizations provided by the library. In addition, the cache allows you to work with data when the device goes offline.

The Cache mode is the default mode for the data store. Most of the time you don't need to set it explicitly.

This type of data store is ideal for apps that are generally used with an active network, but may experience short periods of network loss.

Here is how you should use a Cache store:

// Get an instance
string collectionName = "books";
DataStore<Book> dataStore = DataStore<Book>.Collection(collectionName,
                                                       DataStoreType.CACHE);

List<Book> books = new List<Book>();

// The KinveyDelegate onSuccess() method will be called with cache results.
// The method will then return with the network results.
KinveyDelegate<List<Book>> cacheDelegate = new KinveyDelegate<List<Book>>()
{
  onSuccess = (List<Book> results) => books.AddRange(results),
  onError = (Exception e) => Console.WriteLine(e.Message)
};

List<Book> networkResults = await dataStore.FindAsync(cacheResults: cacheDelegate);
books.AddRange(networkResults);

// Save an entity. The entity will be saved to the device and your backend. 
// If you do not have a network connection, the entity will be stored
// in local storage to get pushed to the backend when network becomes available.
Book book = new Book()
{
    Title = "My First Book"
};
await dataStore.SaveAsync(book);

The Cache data store executes all CRUD requests against local storage as well as the backend. Any data retrieved from the backend is stored in the cache. This allows the app to work offline by fetching data that has been cached from past usage.

The Cache data store also stores pending write operations when the app is offline. However, the developer is required to push these pending operations to the backend when the network resumes. The Push API should be used to accomplish this.

The Pull, Push, Sync, and Sync Count APIs allow you to synchronize data between the application and the backend.

Network

Configuring your data store as Network turns off all caching in the library. All requests to fetch and save data are sent to the backend.

We don't recommend this type of data store for apps in production, since the app will not work without network connectivity. However, it may be useful in a development scenario to validate backend data without a device cache.

Here is how you should use a Network store:

// Get an instance
string collectionName = "books";
DataStore<Book> dataStore = DataStore<Book>.Collection(collectionName,
                                                       DataStoreType.NETWORK
);

// Retrieve data directly from your backend
List<Book> books = new List<Book>();

books = await dataStore.FindAsync();

// Save an entity directly to your backend
Book book = new Book()
{
    Title = "My First Book"
};
await dataStore.SaveAsync(book);

Pull Operation

This operation can be used on Sync, Auto, and Cache stores only.

Calling pull() retrieves data from the backend and stores it locally in the cache.

By default, pulling retrieves the entire collection to the device. Optionally, you can provide a query parameter to pull to restrict what entities are retrieved. If you prefer to only retrieve the changes since your last pull operation, you should enable Delta Sync.

The pull API needs a network connection in order to succeed.

string collectionName = "books";
DataStore<Book> dataStore = DataStore<Book>.Collection(collectionName,
                                                       DataStoreType.SYNC
);

// In this example, we pull all the data in the collection
// from the backend to the Sync Store
try
{
  var books = await dataStore.PullAsync();
}
catch (KinveyException ke)
{
  // handle exception
}
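You can also restrict what gets pulled (and therefore cached locally) by passing a query to pull (the filter shown is arbitrary):

var q = dataStore.Where(x => x.Title.StartsWith("The", StringComparison.Ordinal));

try
{
  // Pull only the entities matching the query
  var filteredBooks = await dataStore.PullAsync(q);
}
catch (KinveyException ke)
{
  // handle exception
}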

If your Sync Store has pending local changes, they must be pushed to the backend before pulling data to the store.

Kinvey imposes a limit of 10,000 entities on a single request to fetch data stored in the backend. For pulls that result in more than 10,000 entities, the backend silently limits the results to only the first 10,000 entities. For this reason, we strongly recommend pulling your data in pages.

Push Operation

This operation can be used on Sync, Auto and Cache stores only.

Calling push() kicks off a uni-directional push of data from the library to the backend.

The library goes through the following steps to push entities modified in local storage to the backend:

  • Reads from the "pending writes queue" to determine what entities have been changed locally. The "pending writes queue" maintains a reference for each entity in local storage that has been modified by the app. For an entity that gets modified multiple times in local storage, the queue only references the last modification on the entity.

  • Creates a REST API request for each pending change in the queue. The type of request depends on the type of modification that was performed locally on the entity.

    • If an entity is newly created, the library builds a POST request.
    • If an entity is modified, the library builds a PUT request.
    • If an entity is deleted, the library builds a DELETE request.
  • Makes the REST API requests against the backend concurrently. Requests are batched to avoid hitting platform limits on the number of open network requests.

    • For each successful request, the corresponding reference in the queue is removed.
    • For each failed request, the corresponding reference remains persisted in the queue. The library adds information in the push/sync response to indicate that a failure occurred. Push failures are discussed in Handling Push Failures.
  • Returns a response to the application indicating the count of entities that were successfully synced and a list of errors for entities that failed to sync.

string collectionName = "books";
DataStore<Book> dataStore = DataStore<Book>.Collection(collectionName,
                                                       DataStoreType.SYNC
);

// In this example, we push all pending local changes in the collection
// from the Sync Store to the backend
try
{
  var pushResponse = await dataStore.PushAsync();
}
catch (KinveyException ke)
{
  // handle exception
}

Handling Push Failures

The push response contains information about the entities that failed to push to the backend. For each failed entity, the corresponding reference in the pending writes queue is retained. This is to prevent any data loss during the push operation. Consider these options for handling failures:

  • Retry pushing your changes at a later time. You can simply call push again on the data store to attempt again.
  • Ignore the failed changes. You can call purge on the data store, which will remove all pending writes from the queue. The failed entity remains in your local cache, but the library will not attempt to push it again to the backend.
  • Destroy the local cache. You call clear on the data store, which destroys the local cache for the store. You will need to pull fresh data from the backend to start using the cache again.
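As a sketch, the three options might look as follows (the KinveyExceptions property and the Purge and ClearCache method names are assumptions based on the operations described above; consult the API reference for the exact members):

var pushResponse = await dataStore.PushAsync();

if (pushResponse.KinveyExceptions.Count > 0)
{
    // Option 1: retry later by simply calling PushAsync() again

    // Option 2: drop all pending writes from the queue (assumed method name)
    dataStore.Purge();

    // Option 3: destroy the local cache entirely (assumed method name)
    dataStore.ClearCache();
}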

Each of the APIs mentioned in this section are described in more detail in the data store API reference.

Sync Operation

This operation can be used on Sync, Auto, and Cache stores only.

Calling sync() kicks off a bi-directional synchronization of data between the library and the backend. First, the library calls push to send local changes to the backend. Subsequently, the library calls pull to fetch data in the collection from the backend and stores it on the device.

You can provide a query as a parameter to the sync API to restrict the data that is pulled from the backend. The query does not affect what data gets pushed to the backend.

If you prefer to only retrieve the changes since your last sync operation, you should enable Delta Sync.

string collectionName = "books";
DataStore<Book> dataStore = DataStore<Book>.Collection(collectionName,
                                                       DataStoreType.SYNC
);

// In this example, we sync all the data in the collection from the
// backend to the Sync Store, which does a push followed by a pull
try
{
  var syncResponse = await dataStore.SyncAsync();
}
catch (KinveyException ke)
{
  // handle exception
}


// In this example, we restrict the data that is pulled from the backend
// to local storage by specifying a query in the sync API

var q = dataStore.Where(x => x.Title.StartsWith("The", StringComparison.Ordinal));

try
{
  var syncResponse = await dataStore.SyncAsync(q);
}
catch (KinveyException ke)
{
  // handle exception
}

Kinvey imposes a limit of 10,000 entities on a single request to fetch data stored in the backend. For syncs that result in more than 10,000 entities, the backend silently limits the results to only the first 10,000 entities. For this reason, we strongly recommend syncing your data in pages.

Sync Count Operation

This operation can be used on Sync, Auto, and Cache stores only.

You can retrieve a count of entities modified locally and pending a push to the backend.

// Number of entities modified offline
int syncStoreCount = dataStore.GetSyncCount();

Granular Control

Selecting a DataStoreType is usually sufficient to solve the caching and offline needs of most apps. However, should you desire more control over how data is managed in your app, you can use the granular configuration options provided by the library. The following sections discuss the advanced options available on the DataStore.


Data Reads

The way to control Data Reads is to set a ReadPolicy.

Read Policy

ReadPolicy controls how the library fetches data. When you select a store configuration, the library sets the appropriate ReadPolicy on the store. However, individual reads can override the default read policy of the store by specifying a ReadPolicy.

The following read policies are available:

  • ForceLocal - forces the datastore to read data from local storage. If no valid data is found in local storage, the request fails.
  • ForceNetwork - forces the datastore to read data from the backend. If network is unavailable, the request fails.
  • Both - reads first from the local cache and then attempts to get data from the backend.

Examples

Assume that you are using a datastore with caching enabled (the default), but want to force a certain find request to fetch data from the backend. This can be achieved by specifying a ReadPolicy when you call the find API on your store.
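A sketch of such an override (assuming the find API accepts a ReadPolicy argument as described; the exact overload may differ, so consult the API reference):

// Force this particular read to bypass the cache and hit the backend
var q = dataStore.Where(x => x.Title.StartsWith("The", StringComparison.Ordinal));
List<Book> freshBooks = await dataStore.FindAsync(q, ReadPolicy.ForceNetwork);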

Data Writes

Write Policy

When you save data, the type of datastore you use determines how the data gets saved.

For a store of type Sync, data is written to your local copy. In addition, the library maintains additional information in a "pending writes queue" to recognize that this data needs to be sent to the backend when you decide to sync or push.

For a store of type Auto, data is written to your local copy. Then an attempt is made to write this to the backend. If the write to the backend fails, the library maintains additional information in a "pending writes queue" to recognize that this data needs to be sent to the backend when you decide to sync or push.

For a store of type Cache, data is written first to your local copy and sent immediately to be written to the backend. If the write to the backend fails (e.g. because of network connectivity), the library maintains information to recognize that this data needs to be sent to the backend when connectivity becomes available again. Due to platform limitations, this does not happen automatically, but needs to be initiated from the user by calling the push() or sync() methods.

For a store of type Network, data is sent directly to the backend. If the write to the backend fails, the library does not persist the data for a future write.

Timeout

When performing any datastore operations, you can pass a timeout value as an option to stop the datastore operation after some amount of time if it hasn't already completed.

Conflict Resolution

When using sync and cache stores, you need to be aware of situations where multiple users could be working on the same entity simultaneously offline. Consider the following scenario:

  1. User X edits entity A offline.
  2. User Y edits entity A offline.
  3. Network connectivity is restored for X, and A is synchronized with Kinvey.
  4. Network connectivity is restored for Y, and A is synchronized with Kinvey.

In the above scenario, the changes made by user X are overwritten by Y.

The libraries and backend implement a default mechanism of "client wins", which implies that the data in the backend reflects the last client that performed a write. Custom conflict management policies can be implemented with Business Logic.

Delta Sync

When your app handles large amounts of data, syncing entire collections can be expensive in terms of both bandwidth and speed, especially on slower networks. Rather than syncing the entire collection, fetching only new and updated entities can save bandwidth and improve your app's response times.

To help optimize fetching collection data, Kinvey implements Delta Sync, also known as data differencing. When an app performs a pull or find request for a collection that has the Delta Sync feature turned on, the library asks the backend only for those entities that have been created, modified, or deleted since the app last made that same request. This allows the backend to return only a small subset of data rather than the entire set of query results. The library then processes the data and updates its local cache appropriately.

Delta Sync requires the data store to be running in Auto, Cache, or Sync mode.

Calculating the delta is offloaded to the backend for better performance.

Limitations

Delta Sync can bring significant read performance improvements in most situations, but keep the following limitations in mind:

  • Delta Sync does not guarantee data consistency between the server and the client:
    • If, on the server, you update an entity, changing the field on which you have previously queried the entity, the entity will not appear as updated in the data delta. This leaves a data discrepancy between the server and the client that you can rectify by making a full sync.
    • If, on the server, you use permissions to deny the user read access to an entity that is already cached on the user device, the data delta will not return the entity as updated or deleted.
  • External data coming from FlexData or RapidData is not supported.
  • Delta Sync is not supported for the User and Files collections.
  • If your collection has a Before Business Logic collection hook that calls response.complete(), Delta Sync requests will not execute and the response from your hook will be returned.
  • If the request features skip or limit modifiers, the library does a normal Find or Pull and does not utilize Delta Sync.
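For example, a paged request like the following is served by a regular Find rather than a delta request. This is a hypothetical sketch using the `dataStore` instance from the beginning of this guide; the LINQ-style query building and the `FindAsync(query)` overload are assumptions about the SDK surface, so adjust them to your SDK version:

```csharp
// Hypothetical sketch: a query with skip/limit modifiers bypasses Delta Sync,
// so the library performs a regular Find against the backend.
var pagedQuery = dataStore.Where(b => b.Title.StartsWith("A"))
                          .Skip(20)   // skip modifier
                          .Take(10);  // limit modifier

List<Book> page = await dataStore.FindAsync(pagedQuery);
```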

Configuring Delta Sync

The Delta Sync feature is configured per collection. The performance benefits of Delta Sync will be most noticeable on large collections that update infrequently. On the other hand, it may make sense to keep this feature turned off for small collections. This is because fetching the entire collection, if it's small, is expected to be faster than waiting for the server to calculate the delta and send it back.

Delta Sync is turned off by default for collections.

To turn on Delta Sync for a collection:

  1. Log in to the Console.
  2. Navigate to your app and select an environment to work with.
  3. Under Data, click Collections.
  4. On the collection card you want to configure, click the Settings icon.
  5. From the Settings menu, click Delta Set.
  6. Click Enable Delta Set for this collection.
  7. Optionally, change the default Deleted TTL in days value.

The Deleted TTL in days option specifies the length of the change history, that is, the maximum period for which information about deleted collection entities is stored. This change history is required for building a delta. Delta Sync queries requesting changes that precede this period return an error, after which the library automatically requests a full sync.

The maximum Deleted TTL in days you can set is 30 days.

Because Kinvey starts collecting data for Delta Sync only when you turn on the feature for a collection, the actual period for which a data delta can be retrieved can be shorter than the configured value. For example, if you set Deleted TTL in days to 15 but turned on Delta Sync for the collection only yesterday, you have one day's worth of change history instead of 15. The change history only reaches and maintains its full size after the 15th day.

[Screenshot: the Delta Set settings, showing the Enable Delta Set for this collection checkbox and the Deleted TTL in days field]

Delta Sync cannot be turned on for the User and Files collections.

Keeping and returning information about deletions is important because, without it, the client cannot determine why an entity is missing from the data delta: whether it has been deleted or has simply stayed unchanged.

Turning off Delta Sync for a collection results in permanently removing all information about deleted entities from the server. If you turn on Delta Sync again for the collection at a later stage, the accumulation of information about deleted entities starts from the beginning.

Using Delta Sync

To use Delta Sync, you need to set a flag on the data store instance you are working with.

// Create a data store
DataStore<ToDo> todoStore = DataStore<ToDo>.Collection(collectionName,
                                                       DataStoreType.SYNC);

// Enable Delta Sync on this store
todoStore.DeltaSetFetchingEnabled = true;

After that, data deltas are requested automatically by the library for this data store but only under certain conditions. The library only sends a delta request if all of the following requirements are met. Otherwise it performs a regular pull or find.

  • Delta Sync is turned on for the underlying collection on the backend.
  • The data store you are working with is in Auto, Cache, or Sync mode.
  • The request that you are making is cached, or in other words, it's not the first time you are making it.
  • The request does not feature skip or limit modifiers unless it is the library doing autopaging.
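When these conditions hold, the delta flows automatically through the normal pull path. A sketch, assuming the `todoStore` configured above and the SDK's `PullAsync` method:

```csharp
// First pull: the request is not cached yet, so the library fetches
// the full result set and records the request in the local cache.
var first = await todoStore.PullAsync();

// Second, identical pull: the library now sends a delta request and
// receives only the entities created, modified, or deleted in between.
var second = await todoStore.PullAsync();

// With Delta Sync on, the reported count reflects the size of the
// returned delta, not the total number of entities in the collection.
```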

On receiving the delta, the library takes care of deleting those local entities that the delta marked as deleted and creating or updating the respective new or modified entities from the delta.

Note that Delta Sync changes the behavior of find and pull operations. Instead of returning the full count of entities inside the collection, each operation returns the number of entities contained in the returned data delta.

Error Handling and Troubleshooting

The library makes using Delta Sync transparent to you, handling Delta Sync-related errors internally. If you need to track errors linked to this feature, you can enable library logging at the debug level.

The library still propagates any errors that are not specific to Delta Sync, such as network connectivity issues and authentication errors, as well as errors from the regular Find and Pull requests that the library falls back to when a delta request fails.
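To surface Delta Sync-related messages during development, you can attach a logger when building the client. This is a sketch; the `setLogger` builder call is an assumption based on the SDK's client builder pattern, so check it against your SDK version:

```csharp
// Hypothetical sketch: route library debug logging to the console.
Client.Builder builder = new Client.Builder(appKey, appSecret)
    .setLogger(delegate(string msg) { Console.WriteLine(msg); });

Client kinveyClient = builder.Build();
```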

Forced Network Requests

Even in Auto, Sync, and Cache data store modes, you can send forced Network requests by setting the respective request-level option. Such requests still get Delta Sync support because the local request cache is already available on the device.

How it Works

Delta Sync builds on top of information about previous read requests kept in the local cache maintained in Auto, Cache, and Sync data store modes. For this reason, Delta Sync does not operate in Network mode.

To make Delta Sync possible, the backend stores records of deleted entities (a change history) for a configurable amount of time. Records are stored for each collection that has the Delta Sync option turned on.

When your app code sends a read request, the library checks the local cache to see if the request has been executed before. If it has, the library makes a request for the data delta instead of executing the request directly.

On the backend, the server executes the query normally, but also uses the change history to determine which entities that had matched the query the previous time have been deleted. This way, the server can return information to help the library determine which entities to delete from the local cache.

The backend runs any Before or After Business Logic hooks that might be in place (see Limitations).

The server response contains a pair of arrays: one listing entities created or modified since the last execution time, and another listing entities deleted since that time.

Using the returned data, the library reconstructs the data on the server locally, taking the current state of the cache as a basis. It first deletes all entities listed in the deleted array, so that if any entity was deleted and then re-created with the same ID, it would not be lost. After that, the library caches any newly-created entities and updates existing ones, completing the process.
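The reconstruction step above can be sketched as a small, self-contained program. This is not SDK code, just an illustration of the order of operations: deletions are applied first, then the changed set is created or updated, so an entity deleted and re-created under the same ID survives.

```csharp
using System;
using System.Collections.Generic;

// Self-contained sketch (not SDK code) of applying a delta response
// to a local cache, modeled as a simple id -> entity dictionary.
class DeltaSyncSketch
{
    public static void ApplyDelta(
        Dictionary<string, string> cache,     // local cache: id -> entity
        IEnumerable<string> deletedIds,       // the "deleted" part of the delta
        IDictionary<string, string> changed)  // the "changed" part of the delta
    {
        // 1. Delete first, so an entity deleted and then re-created under
        //    the same ID is not lost when the changed set is applied.
        foreach (string id in deletedIds)
            cache.Remove(id);

        // 2. Create new entities and update existing ones.
        foreach (var entity in changed)
            cache[entity.Key] = entity.Value;
    }

    static void Main()
    {
        var cache = new Dictionary<string, string>
        {
            ["1"] = "Moby Dick",
            ["2"] = "Dracula"
        };

        // Delta: entity 2 deleted, entity 1 modified, entity 3 created.
        ApplyDelta(cache,
                   new[] { "2" },
                   new Dictionary<string, string> { ["1"] = "Moby-Dick", ["3"] = "Emma" });

        var keys = new List<string>(cache.Keys);
        keys.Sort();
        Console.WriteLine(string.Join(",", keys)); // prints "1,3"
        Console.WriteLine(cache["1"]);             // prints "Moby-Dick"
    }
}
```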

Related Samples