Data Store

The simplest use case of Kinvey is storing and retrieving data to and from your cloud backend.

The basic unit of data is an entity and entities of the same kind are organized in collections. An entity is a set of key-value pairs which are stored in the backend in JSON format. Kinvey's libraries automatically translate your native objects to JSON.

Kinvey's data store provides simple CRUD operations on data, as well as powerful filtering and aggregation.

Collections

To start working with a collection, you need to instantiate a DataStore in your application. Each DataStore represents a collection on your backend.

// we use the example of a "Book" datastore
DataStore<Book> dataStore = DataStore.collection(
                "books",
                Book.class,
                StoreType.SYNC,
                client);

If the active user logs out, all DataStore references are invalid, because the data store is cleared on logout. The DataStore must be re-instantiated after login.

Through the remainder of this guide, we will refer to the instance we just retrieved as dataStore.

Optionally, you can configure the DataStore type when creating it. To understand the types of data stores and when to use each type, refer to the Data Store Types section.

Data Store Types

The Cache store type has been deprecated, and will be removed from the SDK entirely as early as December 1, 2019. If you are using the Cache store, please make changes to move to a different store type. The Auto store type has been introduced as a replacement for the Cache store.

When you get an instance of a data store in your application, you can optionally select a data store type. The type determines how the library handles intermittent or prolonged interruptions in connectivity to the backend.

Choose a type that most closely resembles your data requirements.

  • DataStoreType.Sync: You want a copy of some (or all) of the data from your backend to be available locally on the device, and you'd like to sync it periodically with the backend.
  • DataStoreType.Auto: You want to read and write data primarily from the backend, but want to be more robust against varying network conditions and short periods of network loss.
  • DataStoreType.Network: You want the data in your backend to never be stored locally on the device, even if it means that the app will not offer offline use.
  • DataStoreType.Cache (Deprecated): You want data stored on the device to optimize your app's performance and/or provide offline support for short periods of network loss. This is the default data store type if a type is not explicitly set.

Head to the Specify Data Store Type section to learn how to request the data store type that you selected.

Entities

The Kinvey service is organized around entities, each of which represents a single resource.

public class Book extends GenericJson {
    @Key("name")
    public String name;
}

Fetching

You can retrieve entities by either looking them up using an ID, or by querying a collection.

Regardless of the chosen fetch method, results from a single query are limited to 10,000 entities or 100 MB of data, whichever is reached first. If your query produces more results than fit within these limits, only the first 10,000 entities or 100 MB are returned.

To overcome these limitations, use paging.

Note that for Sync, Auto, and Cache stores the entity limit does not apply if the result is entirely returned from the local cache.

Fetching by Id

To fetch a single entity by ID, call dataStore.find and pass in the entity's _id.

// entityId holds the _id of the entity you want to fetch
dataStore.find(entityId, new KinveyClientCallback<Book>(){
  @Override
  public void onSuccess(Book result) {
    // Place your code here
  }

  @Override
  public void onFailure(Throwable error) {
    // Place your code here
  }
});

Fetching by Query

To fetch all entities in a collection, call dataStore.find.

dataStore.find(new KinveyListCallback<Book>(){
  @Override
  public void onSuccess(List<Book> result) {
      // Place your code here
  }

  @Override
  public void onFailure(Throwable error) {
      // Place your code here
  }
});

To fetch multiple entities using a query, call dataStore.find and pass in a query.

Query query = client.query().in("_id", new String[]{"1", "2"});

dataStore.find(query, new KinveyListCallback<Book>(){
  @Override
  public void onSuccess(List<Book> result) {
      // Place your code here
  }

  @Override
  public void onFailure(Throwable error) {
      // Place your code here
  }
});

Note that an empty array is returned when the query matches zero entities.

Saving

You can save an entity by calling dataStore.save.

try
{
  dataStore.save(new Book("MyBook"), new KinveyClientCallback<Book>(){
    @Override
    public void onSuccess(Book result) {
      // Place your code here
      // here we have a Book object with defined unique `_id`
    }

    @Override
    public void onFailure(Throwable error) {
      // Place your code here
    }
  });
}
catch (KinveyException ke)
{
  // Handle error
}

The save method acts as an upsert. The library uses the _id property of the entity to distinguish between updates and inserts.

  • If the entity has an _id, the library treats it as an update to an existing entity.

  • If the entity does not have an _id, the library treats it as a new entity. The Kinvey backend assigns an automatic _id for a new entity.
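
The upsert rule above can be sketched with a plain in-memory map standing in for the backend collection. This is an illustration only; UpsertSketch and its UUID-based id generation are hypothetical stand-ins, not SDK code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustration only: a minimal in-memory model of the upsert rule.
// The map stands in for the backend collection, and the UUID stands in
// for the backend's automatic _id assignment.
class UpsertSketch {
    private final Map<String, Map<String, Object>> collection = new HashMap<>();

    public String save(Map<String, Object> entity) {
        String id = (String) entity.get("_id");
        if (id == null) {
            // No _id: treat as a new entity and assign an id, as the backend would.
            id = UUID.randomUUID().toString();
            entity.put("_id", id);
        }
        // With an _id (supplied or generated), the entity replaces any existing one.
        collection.put(id, entity);
        return id;
    }

    public Map<String, Object> get(String id) {
        return collection.get(id);
    }
}
```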

When using custom values for _id, avoid values that are exactly 12 or 24 characters in length. Such values are automatically converted to BSON ObjectID and are not stored as strings in the database, which can interfere with querying and other operations.

To ensure that no items with a 12- or 24-character _id are stored in the collection, you can create a pre-save hook (server-side JavaScript) that either prevents saving such items or appends an additional character (for example, an underscore) to the _id:

if (_id.length === 12 || _id.length === 24) {
  _id += "_";
}

Creating Multiple Entities at Once

To create multiple entities at once, call dataStore.create and pass in an ArrayList containing the entities that you want to create:

try {
  dataStore.create(listOfBooks,
    new KinveyClientCallback<KinveySaveBatchResponse<Book>>() {
      @Override
      public void onSuccess(KinveySaveBatchResponse<Book> result) {

        // Handle the created entities
        List<Book> entities = result.getEntities();

        // Handle errors for entities that were not saved successfully
        List<KinveyBatchInsertError> errors = result.getErrors();
      }
      @Override
      public void onFailure(Throwable throwable) {
        // Place your code here
      }
    });
}
catch (KinveyException ke)
{
  // Handle error
}

The response consists of an entities array listing the items that were created successfully and an errors array that appears when some items failed to be created. The entities array contains null at each position where creation failed. Here is an example response for a request that attempts to create two entities but succeeds for only one of them:

{
  "entities": [
    {
      "_id": "entity-id",
      "field": "value1",
      "_acl": {
        "creator": "5ef49b7576723200150be295"
      },
      "_kmd": {
        "lmt": "2020-07-09T16:16:37.597Z",
        "ect": "2020-07-09T16:16:37.597Z"
      }
    },
    null
  ],
  "errors": [
    {
      "error": "KinveyInternalErrorRetry",
      "description": "The Kinvey server encountered an unexpected error. Please retry your request.",
      "debug": "An entity with that _id already exists in this collection",
      "index": 1
    }
  ]
}

Multi-insert save behaves differently from single-entity save. When called with a list of entities that have an _id field, save always attempts to create a new entity rather than update the existing entity with the same _id. In other words, multi-insert save is a pure insert operation, not an upsert like single-entity save. If an entity in the list is already present in the backend collection, that entity errors out with "KinveyInternalErrorRetry - An entity with that _id already exists in this collection", as shown in the response example above.
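
One practical consequence is that you can use each error's index to map failures back to the inputs you passed in, for example to retry them. Here is a minimal sketch in plain Java; the types are simplified stand-ins for the SDK's response classes, not SDK code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustration only: pairing a batch-create response back to its inputs.
// The `index` field of each error (see the example response above) points
// at the position of the failed input, and the `entities` array holds null
// at that same position.
class BatchResponseSketch {
    public static <T> List<T> failedInputs(List<T> inputs, List<Integer> errorIndexes) {
        List<T> failed = new ArrayList<>();
        for (int index : errorIndexes) {
            failed.add(inputs.get(index));
        }
        return failed;
    }
}
```

The returned list can then be passed to another create call once the cause of the failure (for example, a duplicate _id) has been addressed.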

Deleting

To delete an entity, call dataStore.delete and pass in the entity _id.

dataStore.delete(id, new KinveyDeleteCallback() {
  @Override
  public void onSuccess(Integer integer) {
    // Place your code here
  }

  @Override
  public void onFailure(Throwable throwable) {
    // Place your code here
  }
});

Deleting Multiple Entities at Once

To delete multiple entities at once, call dataStore.delete. Optionally, you can pass in a query to delete only the entities matching the query.

Query query = myClient.query().in("_id", new String[]{"1", "2"});

dataStore.delete(query, new KinveyDeleteCallback() {
  @Override
  public void onSuccess(Integer integer) {
    // Place your code here
  }

  @Override
  public void onFailure(Throwable throwable) {
    // Place your code here
  }
});

Querying

Queries to the backend are performed through the library's internal query implementation, which provides a number of methods for adding filters and modifiers, although the Kinvey-Android library supports only a subset of these operators. Querying is performed on the DataStore using the find(query, KinveyListCallback) method, where the query passed into the method can be constructed using Client.query().

Under most circumstances, you want to fetch data from your backend that matches a specific set of conditions. The library provides a mechanism to build up a query for retrieving a list of items from a collection. For example, you may want to retrieve a sorted list from your "books" collection:

List<Book> books;

DataStore<Book> dataStore = DataStore.collection(
                "books",
                Book.class,
                StoreType.SYNC,
                myClient);

Query query = myClient.query().addSort("title", SortOrder.ASC);

// The KinveyListCallback is used to obtain the results of a find operation.
// Its onSuccess() callback is invoked with results returned from the source
// determined by the StoreType you passed when creating the DataStore.

KinveyListCallback<Book> kinveyCallback = new KinveyListCallback<Book>()
{
  @Override
  public void onSuccess(List<Book> result) {
    // Place your code here
    books = result;
  }

  @Override
  public void onFailure(Throwable error) {
    // Place your error handler here
  }
};

dataStore.find(query, kinveyCallback);

Operators

In addition to exact match and null queries, you can query based upon several types of expressions: comparison, set match, string and array operations.

Logical Operators

  • and(query) - matches records where both conditions are true
  • or(query) - matches records where either of the conditions are true

Ordering operators

addSort(fieldName, SortOrder) - orders the result set by chosen field and Sort order.

Comparison operators

  • equals(key, value) - the field value passed as first parameter must be equal to the second parameter
  • greaterThanEqualTo(key, value) - the field value passed as first parameter must be greater than or equal to the second parameter
  • lessThanEqualTo(key, value) - the field value passed as first parameter must be less than or equal to the second parameter
  • greaterThan(key, value) - the field value passed as first parameter must be greater than the second parameter
  • lessThan(key, value) - the field value passed as first parameter must be less than the second parameter
  • notEqual(key, value) - the field value passed as first parameter must be not equal to the second parameter
  • in(key, value[]) - the field value passed as the first parameter must equal one of the elements of the array passed as the second parameter
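
Kinvey's backend query language is MongoDB-style, so these operators correspond roughly to Mongo operators such as $gt, $lte, and $in. As an illustration only (not SDK code), here is how two of the filters above might be rendered as plain JSON strings; the library builds these for you from the Query methods:

```java
// Illustration only: hand-building MongoDB-style filter strings that
// correspond to the greaterThan and in operators described above.
class OperatorSketch {
    public static String greaterThan(String key, int value) {
        return String.format("{\"%s\": {\"$gt\": %d}}", key, value);
    }

    public static String in(String key, String[] values) {
        StringBuilder sb = new StringBuilder();
        for (String v : values) {
            if (sb.length() > 0) sb.append(", ");
            sb.append('"').append(v).append('"');
        }
        return String.format("{\"%s\": {\"$in\": [%s]}}", key, sb);
    }
}
```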

Modifiers

Modifiers can be applied to a query in order to control how query results are presented. This includes returning sections of the results, sorting results, and only returning specific fields of an entity.

Limit and Skip

Depending on the size of the results from a query, you may choose to receive the results in smaller sections of the total result set. This is where the concept of limit and skip come into play. Providing a limit size will ensure that the number of results passed back do not exceed the limit given. In conjunction with this, providing a skip count will allow you to offset where your results start, relative to the total result set. Used together, this can achieve pagination of results.

In the current Query implementation, limit is set using the setLimit() modifier and skip is set using, appropriately enough, the setSkip() modifier. The example below shows how you would take the first 30 books from the book store in chunks of 10:

// first 10 books
Query query1 = myClient.query()
                       .startsWith("title", "How To")
                       .setSkip(0)
                       .setLimit(10);
// next 10 books
Query query2 = myClient.query()
                       .startsWith("title", "How To")
                       .setSkip(10)
                       .setLimit(10);
// final 10 books
Query query3 = myClient.query()
                       .startsWith("title", "How To")
                       .setSkip(20)
                       .setLimit(10);

Kinvey imposes a limit of 10,000 entities on a single request to fetch data stored in the backend. If you specify setLimit() > 10,000 in the example above, the backend will silently limit the results to only the first 10,000 entities. For this reason, we strongly recommend fetching your data in pages or enabling autopaging.

Note that for Auto, Sync, and Cache stores the entity limit does not apply if the result is entirely returned from the local cache.
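
The skip/limit arithmetic behind such paging can be sketched in isolation. In this illustration (not SDK code), fetchPage stands in for a find call with setSkip/setLimit applied and simply slices an in-memory list; the loop stops when a page comes back smaller than the page size:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only: paging through a result set with skip/limit.
class PagingSketch {
    // Stand-in for a find call with setSkip(skip).setLimit(limit).
    public static <T> List<T> fetchPage(List<T> source, int skip, int limit) {
        int from = Math.min(skip, source.size());
        int to = Math.min(skip + limit, source.size());
        return new ArrayList<>(source.subList(from, to));
    }

    public static <T> List<T> fetchAll(List<T> source, int pageSize) {
        List<T> all = new ArrayList<>();
        int skip = 0;
        while (true) {
            List<T> page = fetchPage(source, skip, pageSize);
            all.addAll(page);
            if (page.size() < pageSize) {
                break; // a short (or empty) page means we reached the end
            }
            skip += pageSize;
        }
        return all;
    }
}
```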

Sort

As mentioned above in the Ordering Operators section, results from a query can be modified in order to be returned in sorted order (either ascending or descending), based on a particular field.

// ascending
Query query = myClient.query()
                      .startsWith("title", "How To")
                      .addSort("title", SortOrder.ASC);

// descending
Query query = myClient.query()
                      .startsWith("title", "How To")
                      .addSort("title", SortOrder.DESC);

Aggregation/Grouping

Grouping allows you to collect all entities with the same value for a field or fields, and then apply a reduce function (such as count or average) on all those items.

The results are returned as an Aggregation object that represents the list of groups containing the result of the reduce function.
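
Conceptually, a COUNT grouping collects entities by field value and counts each bucket. The same shape can be expressed with plain Java streams; this is an illustration only, since the data store's group() call computes the result for you on the backend (or in the local cache):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustration only: COUNT-style grouping over in-memory data.
class GroupSketch {
    // Each entry is a { author, title } pair; group by author and count.
    public static Map<String, Long> countByAuthor(List<String[]> books) {
        return books.stream()
                .collect(Collectors.groupingBy(b -> b[0], Collectors.counting()));
    }
}
```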

To count all elements in the group, use the dataStore.group() with AggregateType.COUNT:

ArrayList<String> list = new ArrayList<String>();
list.add("bookname");
Query query = myClient.query().equals("field", "value");
dataStore.group(AggregateType.COUNT, list, null, query, new KinveyAggregateCallback() {
  @Override
  public void onFailure(Throwable error) {
    // Place your error handler here
  }

  @Override
  public void onSuccess(Aggregation response) {
    Log.i("TAG",  "got: " + response.getResultsFor("key", "value"));
    // Place your code here
  }
  }, new KinveyCachedAggregateCallback() {
  @Override
  public void onFailure(Throwable error) {
    // Place your error handler here
  }

  @Override
  public void onSuccess(Aggregation response) {
    Log.i("TAG",  "got: " + response.getResultsFor("key", "value"));
    // Place your code here
  }
});

There are group methods with five pre-defined aggregation types:

  • group(AggregateType.COUNT, fields, null, query, kinveyAggregateCallback, kinveyCachedAggregateCallback) - counts all elements in the group.
  • group(AggregateType.SUM, fields, reduceField, query, kinveyAggregateCallback, kinveyCachedAggregateCallback) - sums the numeric values of the input field.
  • group(AggregateType.MIN, fields, reduceField, query, kinveyAggregateCallback, kinveyCachedAggregateCallback) - finds the minimum of the numeric values of the input field.
  • group(AggregateType.MAX, fields, reduceField, query, kinveyAggregateCallback, kinveyCachedAggregateCallback) - finds the maximum of the numeric values of the input field.
  • group(AggregateType.AVERAGE, fields, reduceField, query, kinveyAggregateCallback, kinveyCachedAggregateCallback) - finds the average of the numeric values of the input field.

Caching and Offline

A key aspect of good mobile apps is their ability to render responsive UIs, even under conditions of poor or missing network connectivity. The Kinvey library provides caching and offline capabilities for you to easily manage data access and synchronization between the device and the backend.

Kinvey's DataStore provides configuration options to solve common caching and offline requirements. If you need better control, you can utilize the options described in Granular Control.

Specify Data Store Type

When initializing a new DataStore to work with a Kinvey collection, you can optionally specify the DataStore type. The type determines how the SDK handles data reads and writes in full or intermittent absence of connection to the app backend.

Sync

Configuring your data store as Sync allows you to pull a copy of your data to the device and work with it completely offline. The library provides APIs to synchronize local data with the backend.

This type of data store is ideal for apps that need to work for long periods without a network connection.

Here is how you should use a Sync store:

// Get an instance
DataStore<Book> dataStore = DataStore.collection(
            "books",
            Book.class,
            StoreType.SYNC,
            myClient);

// Pull data from your backend and save it locally on the device.
dataStore.pull(new KinveyPullCallback<Book>(){
  @Override
  public void onSuccess(KinveyPullResponse<Book> result) {
      int bookCount = result.getCount();
  }

  @Override
  public void onFailure(Throwable error) {
      // Place your error handler here
  }
});

// Find data locally on the device.
dataStore.find(new KinveyListCallback<Book>(){
  @Override
  public void onSuccess(List<Book> result) {
    booksFromCache = result;
  }

  @Override
  public void onFailure(Throwable error) {
    // Place your error handler here
  }
});

// Save an entity locally on the device. This will add the item to the
// sync table to be pushed to your backend at a later time.
Book book = new Book("My First Book");
dataStore.save(book, new KinveyClientCallback<Book>(){
  @Override
  public void onSuccess(Book result) {
      savedBook = result;
  }

  @Override
  public void onFailure(Throwable error) {
      // Place your error handler here
  }
});

// Sync local data with the backend
// This will push data that is saved locally on the device to your backend; 
// and then pull any new data on the backend and save it locally.
dataStore.sync(new KinveySyncCallback<Book>(){
  @Override
  public void onSuccess(
                KinveyPushResponse kinveyPushResponse,
                KinveyPullResponse<Book> kinveyPullResponse
  ) {}
  @Override
  public void onPullStarted() {}
  @Override
  public void onPushStarted() {}
  @Override
  public void onPullSuccess(KinveyPullResponse<Book> kinveyPullResponse) {}
  @Override
  public void onPushSuccess(KinveyPushResponse kinveyPushResponse) {}
  @Override
  public void onFailure(Throwable t) {}
});

The Pull, Push, Sync, and Sync Count APIs allow you to synchronize data between the application and the backend.

Auto

Configuring your data store as Auto allows it to work primarily with the backend. In cases of short network interruptions, however, the library falls back to a local data store and returns any results available on the device. With this store type, results from previous read requests to the network are stored locally, which is what makes them retrievable when the network cannot be reached.

When your app makes a save request during a network outage, this store type saves the data locally and creates a "pending writes queue" entry to represent the save. As soon as the network connection resumes, you can use the Push operation to send the queued data to the backend.

The following example shows how to use an Auto store:

// Get an instance
DataStore<Book> dataStore = DataStore.collection(
                "books",
                Book.class,
                StoreType.AUTO,
                myClient);

dataStore.find(new KinveyReadCallback<Book>(){
 @Override
 public void onSuccess(KinveyReadResponse<Book> result) {
   books = result.getResult();
 }

 @Override
 public void onFailure(Throwable error) {
   // Place your error handler here
 }
}, null);

// Save an entity. The entity will be saved to your backend.
// If you do not have a network connection, the entity will be
// stored in local storage to get pushed to the backend when
// network becomes available.
Book book = new Book("My First Book");
dataStore.save(book, new KinveyClientCallback<Book>(){
 @Override
 public void onSuccess(Book result) {
     book = result;
 }

 @Override
 public void onFailure(Throwable error) {
     // Place your error handler here
 }
});

You can think of this data store as a more robust Network data store, where local storage is used to provide more continuity when the network is temporarily unavailable.

Cache

The Cache store type has been deprecated, and will be removed from the SDK entirely as early as December 1, 2019. If you are using the Cache store, please make changes to move to a different store type. The Auto store type has been introduced as a replacement for the Cache store.

Configuring your data store as Cache allows you to use the performance optimizations provided by the library. In addition, the cache allows you to work with data when the device goes offline.

The Cache mode is the default mode for the data store. Most of the time you don't need to set it explicitly.

This type of data store is ideal for apps that are generally used with an active network, but may experience short periods of network loss.

Here is how you should use a Cache store:

// Get an instance
DataStore<Book> dataStore = DataStore.collection(
            "books",
            Book.class,
            StoreType.CACHE,
            myClient);

dataStore.find(new KinveyListCallback<Book>(){
  @Override
  public void onSuccess(List<Book> result) {
    books = result;
  }

  @Override
  public void onFailure(Throwable error) {
    // Place your error handler here
  }
}, new KinveyCachedClientCallback<Book>() {
  @Override
  public void onSuccess(List<Book> result) {
    books = result;
  }

  @Override
  public void onFailure(Throwable error) {
    // Place your error handler here
  }
});

// Save an entity. The entity will be saved to the device and your backend.
// If you do not have a network connection, the entity will be stored
// in local storage to get pushed to the backend when network becomes available.
Book book = new Book("My First Book");
dataStore.save(book, new KinveyClientCallback<Book>(){
  @Override
  public void onSuccess(Book result) {
      book = result;
  }

  @Override
  public void onFailure(Throwable error) {
      // Place your error handler here
  }
});

The Cache data store executes all CRUD requests against local storage as well as the backend. Any data retrieved from the backend is stored in the cache. This allows the app to work offline by fetching data that has been cached from past usage.

The Cache data store also stores pending write operations when the app is offline. However, the developer is required to push these pending operations to the backend when the network resumes. The Push API should be used to accomplish this.

The Pull, Push, Sync, and Sync Count APIs allow you to synchronize data between the application and the backend.

Network

Configuring your data store as Network turns off all caching in the library. All requests to fetch and save data are sent to the backend.

We don't recommend this type of data store for apps in production, since the app will not work without network connectivity. However, it may be useful in a development scenario to validate backend data without a device cache.

Here is how you should use a Network store:

// Get an instance
DataStore<Book> dataStore = DataStore.collection(
            "books",
            Book.class,
            StoreType.NETWORK,
            client);


// Retrieve data directly from your backend
dataStore.find(new KinveyListCallback<Book>(){
  @Override
  public void onSuccess(List<Book> result) {
    books = result;
  }

  @Override
  public void onFailure(Throwable error) {
    // Place your error handler here
  }
});

// Save an entity directly to your backend
Book book = new Book("My First Book");
dataStore.save(book, new KinveyClientCallback<Book>(){
  @Override
  public void onSuccess(Book result) {
    book = result;
  }

  @Override
  public void onFailure(Throwable error) {
    // Place your error handler here
  }
});

Pull Operation

This operation can be used on Sync, Auto, and Cache stores only.

Calling pull() retrieves data from the backend and stores it locally in the cache.

By default, pulling retrieves the entire collection to the device. Optionally, you can provide a query parameter to pull to restrict what entities are retrieved. If you prefer to only retrieve the changes since your last pull operation, you should enable Delta Sync.

The pull API needs a network connection in order to succeed.

DataStore<Book> dataStore = DataStore.collection(
            "books",
            Book.class,
            StoreType.SYNC,
            client);
try
{
  // Pull data from your backend and save it locally on the device.
  dataStore.pull(new KinveyPullCallback<Book>(){
    @Override
    public void onSuccess(KinveyPullResponse<Book> result) {
        int bookCount = result.getCount();
    }

    @Override
    public void onFailure(Throwable error) {
        // Place your error handler here
    }
  });
} catch (KinveyException ke) {
    // handle exception
}

If your Sync Store has pending local changes, they must be pushed to the backend before pulling data to the store.

Kinvey imposes a limit of 10,000 entities on a single request to fetch data stored in the backend. For pulls that result in more than 10,000 entities, the backend silently limits the results to only the first 10,000 entities. For this reason, we strongly recommend pulling your data in pages or enabling autopaging.

Push Operation

This operation can be used on Sync, Auto and Cache stores only.

Calling push() kicks off a uni-directional push of data from the library to the backend.

The library goes through the following steps to push entities modified in local storage to the backend:

  • Reads from the "pending writes queue" to determine what entities have been changed locally. The "pending writes queue" maintains a reference for each entity in local storage that has been modified by the app. For an entity that gets modified multiple times in local storage, the queue only references the last modification on the entity.

  • Creates a REST API request for each pending change in the queue. The type of request depends on the type of modification that was performed locally on the entity.

    • If an entity is newly created, the library builds a POST request.
    • If an entity is modified, the library builds a PUT request.
    • If an entity is deleted, the library builds a DELETE request.
  • Makes the REST API requests against the backend concurrently. Requests are batched to avoid hitting platform limits on the number of open network requests.

    • For each successful request, the corresponding reference in the queue is removed.
    • For each failed request, the corresponding reference remains persisted in the queue. The library adds information in the push/sync response to indicate that a failure occurred. Push failures are discussed in Handling Push Failures.
  • Returns a response to the application indicating the count of entities that were successfully synced and a list of errors for entities that failed to sync.
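
The queue mechanics described in the steps above can be sketched in isolation. This is an illustration only; the class, enum, and method names here are hypothetical stand-ins for the library's internals:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration only: a minimal model of the "pending writes queue".
// One entry per entity (re-recording overwrites the previous entry, so only
// the last local modification is referenced), and each change kind maps to
// the HTTP method the library would use for the REST request.
class PendingWritesSketch {
    enum Change { CREATED, MODIFIED, DELETED }

    private final Map<String, Change> queue = new LinkedHashMap<>();

    public void record(String entityId, Change change) {
        queue.put(entityId, change);
    }

    public String httpMethodFor(String entityId) {
        switch (queue.get(entityId)) {
            case CREATED:  return "POST";
            case MODIFIED: return "PUT";
            default:       return "DELETE";
        }
    }

    public void acknowledge(String entityId) {
        // A successful request removes the entity's reference from the queue;
        // a failed request leaves it in place for a later retry.
        queue.remove(entityId);
    }

    public int pendingCount() {
        return queue.size();
    }
}
```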

DataStore<Book> dataStore = DataStore.collection(
            "books",
            Book.class,
            StoreType.SYNC,
            client);
try
{
  // Push data to your backend.
  dataStore.push(new KinveyPushCallback(){
    @Override
    public void onSuccess(KinveyPushResponse result) {
    }

    @Override
    public void onFailure(Throwable error) {
        // Place your error handler here
    }

    @Override
    public void onProgress(long current, long all) {

    }
  });
} catch (KinveyException ke) {
    // handle exception
}

Handling Push Failures

The push response contains information about the entities that failed to push to the backend. For each failed entity, the corresponding reference in the pending writes queue is retained. This is to prevent any data loss during the push operation. Consider these options for handling failures:

  • Retry pushing your changes at a later time. You can simply call push again on the data store to attempt again.
  • Ignore the failed changes. You can call purge on the data store, which will remove all pending writes from the queue. The failed entity remains in your local cache, but the library will not attempt to push it again to the backend.
  • Destroy the local cache. You call clear on the data store, which destroys the local cache for the store. You will need to pull fresh data from the backend to start using the cache again.

Each of the APIs mentioned in this section are described in more detail in the data store API reference.

Sync Operation

This operation can be used on Sync, Auto, and Cache stores only.

Calling sync() kicks off a bi-directional synchronization of data between the library and the backend. First, the library calls push to send local changes to the backend. Subsequently, the library calls pull to fetch data in the collection from the backend and stores it on the device.

You can provide a query as a parameter to the sync API to restrict the data that is pulled from the backend. The query does not affect what data gets pushed to the backend.

If you prefer to only retrieve the changes since your last sync operation, you should enable Delta Sync.

DataStore<Book> dataStore = DataStore.collection(
                "books",
                Book.class,
                StoreType.SYNC,
                client);
try {
  dataStore.sync(new KinveySyncCallback<Book>(){
    @Override
    public void onSuccess(
        KinveyPushResponse kinveyPushResponse,
        KinveyPullResponse<Book> kinveyPullResponse
    ) {}
    @Override
    public void onPullStarted() {}
    @Override
    public void onPushStarted() {}
    @Override
    public void onPullSuccess(KinveyPullResponse<Book> kinveyPullResponse) {}
    @Override
    public void onPushSuccess(KinveyPushResponse kinveyPushResponse) {}
    @Override
    public void onFailure(Throwable t) {}
  });
} catch (KinveyException ke) {
    // handle exception
}

Kinvey imposes a limit of 10,000 entities on a single request to fetch data stored in the backend. For syncs that result in more than 10,000 entities, the backend silently limits the results to only the first 10,000 entities. For this reason, we strongly recommend syncing your data in pages or enabling autopaging.

Sync Count Operation

This operation can be used on Sync, Auto, and Cache stores only.

You can retrieve a count of entities modified locally and pending a push to the backend.

// Number of entities modified offline
int syncStoreCount = dataStore.syncCount();

Granular Control

Selecting a DataStoreType is usually sufficient to solve the caching and offline needs of most apps. However, should you desire more control over how data is managed in your app, you can use the granular configuration options provided by the library. The following sections discuss the advanced options available on the DataStore.

Data Reads

The way to control Data Reads is to set a ReadPolicy.

Read Policy

ReadPolicy controls how the library fetches data. When you select a store configuration, the library sets the appropriate ReadPolicy on the store. However, individual reads can override the default read policy of the store by specifying a ReadPolicy.

The following read policies are available:

  • ForceLocal - forces the datastore to read data from local storage. If no valid data is found in local storage, the request fails.
  • ForceNetwork - forces the datastore to read data from the backend. If network is unavailable, the request fails.
  • Both - reads first from the local cache and then attempts to get data from the backend.
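The three policies can be sketched as a small decision function. This is an illustrative sketch only (the enum and `read` method are hypothetical, not Kinvey APIs), and it simplifies Both to a local-then-network fallback, whereas the library can deliver both results in sequence.

```java
import java.util.Optional;

public class ReadPolicyDemo {
    enum ReadPolicy { FORCE_LOCAL, FORCE_NETWORK, BOTH }

    // Resolve a read according to policy. `local` is the cached value (if any),
    // `networkUp` says whether the backend is reachable, `remote` is the backend value.
    static String read(ReadPolicy policy, Optional<String> local, boolean networkUp, String remote) {
        switch (policy) {
            case FORCE_LOCAL:
                // fails when no valid data is found in local storage
                return local.orElseThrow(() -> new IllegalStateException("no valid local data"));
            case FORCE_NETWORK:
                // fails when the network is unavailable
                if (!networkUp) throw new IllegalStateException("network unavailable");
                return remote;
            case BOTH:
                // local first, then the backend
                if (local.isPresent()) return local.get();
                if (networkUp) return remote;
                throw new IllegalStateException("no data available");
        }
        throw new AssertionError("unreachable");
    }

    public static void main(String[] args) {
        System.out.println(read(ReadPolicy.BOTH, Optional.empty(), true, "fresh"));        // fresh
        System.out.println(read(ReadPolicy.FORCE_LOCAL, Optional.of("cached"), true, "fresh")); // cached
    }
}
```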

Examples

Assume that you are using a datastore with caching enabled (the default), but want certain requests to always fetch fresh data from the backend. You can achieve this by creating a second data store over the same collection with StoreType.NETWORK and using it for those requests.

//create a Sync store for everyday reads
DataStore<Book> dataStore = DataStore.collection(
            "books",
            Book.class,
            StoreType.SYNC,
            client);
... 

//requests made through this store always go to the backend
DataStore<Book> networkStore = DataStore.collection(
            "books",
            Book.class,
            StoreType.NETWORK,
            client);

Data Writes

Write Policy

When you save data, the type of datastore you use determines how the data gets saved.

For a store of type Sync, data is written to your local copy. In addition, the library maintains additional information in a "pending writes queue" to recognize that this data needs to be sent to the backend when you decide to sync or push.

For a store of type Auto, data is written to your local copy. Then an attempt is made to write this to the backend. If the write to the backend fails, the library maintains additional information in a "pending writes queue" to recognize that this data needs to be sent to the backend when you decide to sync or push.

For a store of type Cache, data is written first to your local copy and sent immediately to be written to the backend. If the write to the backend fails (e.g. because of network connectivity), the library maintains information to recognize that this data needs to be sent to the backend when connectivity becomes available again. Due to platform limitations, this does not happen automatically, but needs to be initiated from the user by calling the push() or sync() methods.

For a store of type Network, data is sent directly to the backend. If the write to the backend fails, the library does not persist the data for a future write.
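The pending-writes behavior described above can be sketched, independently of the SDK, as a local cache plus a queue that is replayed on push. All names here (`WriteQueueDemo`, `save`, `push`, the maps) are hypothetical illustrations, not Kinvey APIs.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class WriteQueueDemo {
    final Map<String, String> localCache = new HashMap<>();
    final Deque<String> pendingWrites = new ArrayDeque<>(); // ids awaiting a push
    final Map<String, String> backend = new HashMap<>();    // stand-in for the cloud backend
    boolean networkUp = false;

    // Auto/Cache-style write: save locally, try the backend, queue on failure.
    void save(String id, String value) {
        localCache.put(id, value);
        if (networkUp) {
            backend.put(id, value);
        } else {
            pendingWrites.add(id); // replayed later by push()/sync()
        }
    }

    // User-initiated push of queued writes once connectivity returns.
    void push() {
        while (networkUp && !pendingWrites.isEmpty()) {
            String id = pendingWrites.poll();
            backend.put(id, localCache.get(id));
        }
    }

    public static void main(String[] args) {
        WriteQueueDemo store = new WriteQueueDemo();
        store.save("b1", "Moby-Dick");               // offline: write is queued
        System.out.println(store.backend.size());     // 0
        store.networkUp = true;
        store.push();                                 // replay the queue
        System.out.println(store.backend.get("b1"));  // Moby-Dick
    }
}
```

A Network-type store, by contrast, would skip the queue entirely and simply fail the write when the backend is unreachable.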

Timeout

When performing any datastore operations, you can pass a timeout value as an option to stop the datastore operation after some amount of time if it hasn't already completed.

The global default timeout in the SDK is set to 60 seconds. You can set the global timeout to your own value in three ways:

  1. When you initialize the Client.

    Client myKinveyClient = new Client.Builder(this)
                                   .setRequestTimeout(120000)
                                   .build();
  2. In the kinvey.properties file, by adding a request.timeout option at the end of the file.

    request.timeout=120000
  3. After the client has been initialized.

    myKinveyClient.setRequestTimeout(120000);

Conflict Resolution

When using sync and cache stores, you need to be aware of situations where multiple users could be working on the same entity simultaneously offline. Consider the following scenario:

  1. User X edits entity A offline.
  2. User Y edits entity A offline.
  3. Network connectivity is restored for X, and A is synchronized with Kinvey.
  4. Network connectivity is restored for Y, and A is synchronized with Kinvey.

In the above scenario, the changes made by user X are overwritten by those made by user Y.

The libraries and backend implement a default mechanism of "client wins", which implies that the data in the backend reflects the last client that performed a write. Custom conflict management policies can be implemented with Business Logic.
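The "client wins" policy amounts to an unconditional overwrite: no merge, no version check. A minimal sketch (the class and method names are hypothetical, not Kinvey APIs):

```java
import java.util.HashMap;
import java.util.Map;

public class ClientWinsDemo {
    // Stand-in for the backend collection.
    static final Map<String, String> backend = new HashMap<>();

    // "Client wins": whichever client syncs last simply overwrites the entity.
    static void syncFromClient(String entityId, String clientValue) {
        backend.put(entityId, clientValue); // no merge, no version check
    }

    public static void main(String[] args) {
        syncFromClient("A", "edit by X"); // X regains connectivity first
        syncFromClient("A", "edit by Y"); // Y syncs later and wins
        System.out.println(backend.get("A")); // edit by Y
    }
}
```

A custom policy implemented in Business Logic could instead compare timestamps or merge fields before writing.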

Delta Sync

When your app handles large amounts of data, syncing entire collections can be expensive in terms of both bandwidth and speed, especially on slower networks. Rather than syncing the entire collection, fetching only new and updated entities can save bandwidth and improve your app's response times.

To help optimize fetching collection data, Kinvey implements Delta Sync, also known as data differencing. When an app performs a pull or find request for a collection that has the Delta Sync feature turned on, the library asks the backend only for those entities that have been created, modified, or deleted since the app last made that same request. This allows the backend to return only a small subset of data rather than the entire set of query results. The library then processes the data and updates its local cache appropriately.

Delta Sync requires the data store to be running in Auto, Cache, or Sync mode.

Calculating the delta is offloaded to the backend for better performance.

Limitations

Delta Sync can bring significant read performance improvements in most situations, but you need to keep the following limitations in mind:

  • Delta Sync does not guarantee data consistency between the server and the client:
    • If, on the server, you update an entity, changing the field on which you have previously queried the entity, the entity will not appear as updated in the data delta. This leaves a data discrepancy between the server and the client that you can rectify by making a full sync.
    • If, on the server, you use permissions to deny the user read access to an entity that is already cached on the user device, the data delta will not return the entity as updated or deleted.
  • External data coming from FlexData or RapidData is not supported.
  • Delta Sync is not supported for the User and Files collections.
  • If your collection has a Before Business Logic collection hook that calls response.complete(), Delta Sync requests will not execute and the response from your hook will be returned.
  • If the request features skip or limit modifiers, the library does a normal Find or Pull and does not utilize Delta Sync.

Configuring Delta Sync

The Delta Sync feature is configured per collection. The performance benefits of Delta Sync will be most noticeable on large collections that update infrequently. On the other hand, it may make sense to keep this feature turned off for small collections. This is because fetching the entire collection, if it's small, is expected to be faster than waiting for the server to calculate the delta and send it back.

Delta Sync is turned off by default for collections.

To turn on Delta Sync for a collection:

  1. Log in to the Console.
  2. Navigate to your app and select an environment to work with.
  3. Under Data, click Collections.
  4. On the collection card you want to configure, click the Settings icon.
  5. From the Settings menu, click Delta Set.
  6. Click Enable Delta Set for this collection.
  7. Optionally, change the default Deleted TTL in days value.

The Deleted TTL in days option specifies the change history, or the maximum period for which information about deleted collection entities is stored. This change history is required for building a delta. Delta Sync queries requesting changes that precede this period return an error. The library then automatically requests a full sync.

The maximum Deleted TTL in days you can set is 30 days.

Because Kinvey starts collecting data for Delta Sync only when you turn on the feature for a collection, the actual period for which a data delta can be retrieved can be shorter than the specified days. For example, if you configured a Deleted TTL of 15 days but turned on Delta Sync for the collection only yesterday, you will have one day's worth of change history instead of 15. The change history will reach and maintain its full size only after the 15th day.
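The amount of change history actually available is simply the smaller of the configured TTL and the days since the feature was enabled. An illustrative calculation (not an SDK API):

```java
public class DeltaHistoryWindow {
    // Change history available today = min(days since Delta Sync was enabled, configured TTL).
    static int availableHistoryDays(int daysSinceEnabled, int deletedTtlDays) {
        return Math.min(daysSinceEnabled, deletedTtlDays);
    }

    public static void main(String[] args) {
        System.out.println(availableHistoryDays(1, 15));  // 1  (feature enabled yesterday)
        System.out.println(availableHistoryDays(40, 15)); // 15 (window has filled up)
    }
}
```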

Delta Sync cannot be turned on for the User and Files collections.

Keeping and returning information about deletions is important because, without it, the client cannot tell why an entity is missing from the data delta: whether it has been deleted or has simply remained unchanged.

Turning off Delta Sync for a collection results in permanently removing all information about deleted entities from the server. If you turn on Delta Sync again for the collection at a later stage, the accumulation of information about deleted entities starts from the beginning.

Using Delta Sync

To use Delta Sync, you need to set a flag on the data store instance you are working with.

// enable Delta Sync on this store
store.setDeltaSetCachingEnabled(true);

After that, data deltas are requested automatically by the library for this data store but only under certain conditions. The library only sends a delta request if all of the following requirements are met. Otherwise it performs a regular pull or find.

  • Delta Sync is turned on for the underlying collection on the backend.
  • The data store you are working with is in Auto, Cache, or Sync mode.
  • The request that you are making is cached, or in other words, it's not the first time you are making it.
  • The request does not feature skip or limit modifiers unless it is the library doing autopaging.
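The conditions above can be collapsed into a single predicate. A hypothetical sketch of the decision (not actual SDK logic or API):

```java
public class DeltaEligibility {
    // Returns true only when the library would send a delta request
    // rather than a regular pull or find.
    static boolean useDeltaRequest(boolean collectionDeltaOn,   // Delta Sync on for the collection
                                   boolean storeSupportsCaching, // Auto, Cache, or Sync mode
                                   boolean requestSeenBefore,    // request already cached
                                   boolean hasSkipOrLimit,
                                   boolean isAutopaging) {
        return collectionDeltaOn
                && storeSupportsCaching
                && requestSeenBefore
                && (!hasSkipOrLimit || isAutopaging);
    }

    public static void main(String[] args) {
        System.out.println(useDeltaRequest(true, true, true, false, false));  // true
        System.out.println(useDeltaRequest(true, true, false, false, false)); // false: first run is a full fetch
        System.out.println(useDeltaRequest(true, true, true, true, true));    // true: autopaging may use skip/limit
    }
}
```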

On receiving the delta, the library takes care of deleting those local entities that the delta marked as deleted and creating or updating the respective new or modified entities from the delta.

Note that Delta Sync changes the behavior of find and pull operations. Instead of returning the full count of entities inside the collection, each operation returns the number of entities contained in the returned data delta.

Error Handling and Troubleshooting

The library makes using Delta Sync transparent to you, handling Delta Sync-related errors internally. In case you need to track errors linked to this feature, you can enable the library logging at debug level.

The library will still propagate any errors that are not specific to Delta Sync. Examples of such errors include network connectivity issues and authentication errors, as well as errors specific to Find and Pull requests made by the library in case the delta request has errored out.

Forced Network Requests

Even in Auto, Sync, and Cache data store modes, you can send out forced Network requests if you set the respective request-level option. Such requests will still get Delta Sync support because the local requests cache is already available on the device.

How it Works

Delta Sync builds on top of information about previous read requests kept in the local cache maintained in Auto, Cache, and Sync data store modes. For this reason, Delta Sync does not operate in Network mode.

To make Delta Sync possible, the backend stores records of deleted entities (a change history) for a configurable amount of time. Records are stored for each collection that has the Delta Sync option turned on.

When your app code sends a read request, the library checks the local cache to see if the request has been executed before. If it has, the library makes a request for the data delta instead of executing the request directly.

On the backend, the server executes the query normally, but also uses the change history to determine which entities that had matched the query the previous time have been deleted. This way, the server can return information to help the library determine which entities to delete from the local cache.

The backend runs any Before or After Business Logic hooks that might be in place (see Limitations).

The server response contains a pair of arrays: one listing entities created or modified since the last execution time, and another listing entities deleted since that time.

Using the returned data, the library reconstructs the data on the server locally, taking the current state of the cache as a basis. It first deletes all entities listed in the deleted array, so that if any entity was deleted and then re-created with the same ID, it would not be lost. After that, the library caches any newly-created entities and updates existing ones, completing the process.
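The apply-order described above (deletions first, then upserts) can be sketched over a simple map-based cache. The class and method names are hypothetical illustrations, not Kinvey APIs:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ApplyDelta {
    // Apply a delta response to the local cache: deletions first, then upserts,
    // so that an entity deleted and re-created with the same ID is not lost.
    static void apply(Map<String, String> cache,
                      List<String> deletedIds,
                      Map<String, String> changed) {
        for (String id : deletedIds) cache.remove(id);
        cache.putAll(changed);
    }

    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<>(Map.of("a", "v1", "b", "v1"));
        // "a" was deleted and re-created server-side; "b" was simply deleted.
        apply(cache, List.of("a", "b"), Map.of("a", "v2", "c", "v1"));
        System.out.println(cache.get("a"));        // v2
        System.out.println(cache.containsKey("b")); // false
    }
}
```

Reversing the order (upserts before deletions) would incorrectly drop the re-created "a".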

Additional Information

The Kinvey data store also comes with other features that are optional or have more limited applications.

Automatic Paging

Autopaging is an SDK feature that lets you query your collection as normal and receive all results without worrying about the 10,000-entity limit imposed by the backend.

If you expect queries to a collection to regularly return a result count that exceeds the backend limit, you may want to enable autopaging instead of using the limit and skip modifiers every time.

Autopaging works by automatically utilizing limit and skip in the background and storing all received pages in the offline cache. For that reason, autopaging does not work with data stores of type Network.

Autopaging only works with pulling. When you pull with autopaging enabled to refresh the local cache, the SDK reads and stores locally all entities in a collection or a subset of them if you pass a limiting query. It automatically uses paging if the entry count exceeds the backend-imposed limit.

To enable autopaging, call Pull with an argument specifying the page size. Pulls that omit the page size don't make use of autopaging.

DataStore<Book> dataStore = DataStore.collection(
            "books",
            Book.class,
            StoreType.SYNC,
            myClient);

Query query = myClient.query().addSort("title", SortOrder.ASC);

KinveyListCallback<Book> kinveyCallback = new KinveyListCallback<Book>()
{
  @Override
  public void onSuccess(List<Book> result) {
    // Place your code here
  }
  @Override
  public void onFailure(Throwable error) {
    // Place your error handler here
  }
};
// pull with a page size of 20 to enable autopaging
dataStore.pull(query, 20, kinveyCallback);

After you have all the needed entities persisted on the local device, you can call Find as normal. Depending on the store type, the operation is executed against the local store only or against both the local store and the backend. For executions against the local cache, the maximum result count as imposed by the backend does not apply.

Autopaging is subject to the following caveats:

  • Autopaging, similarly to manual paging, can miss new entities that are written to the backend collection while a paged Pull is in progress. Each page retrieval works on the latest state of the collection, which may change between the first and the last page.
  • Enabling autopaging may have performance implications on your app depending on the collection size and the device performance. Fetching large amounts of data can be slow and working with it locally increases the memory and storage footprint on the device.
  • When autopaging is enabled, any limit and skip modifiers on outgoing queries are ignored.

Related Samples