Dataverse Resilience

Introduction

Where I work we make extensive use of both Azure Functions and console apps to manipulate data in Dataverse, and to integrate between Dataverse and other systems.

We have found that it’s very easy to run into Dataverse API service protection limits even when working with quite small datasets, so we’ve had to adopt various techniques to keep our applications working.

This “project” will pull together an occasional series of posts documenting the different approaches we have tried, based on our own experience and a trawl through Microsoft example code.

Dataverse API Service Protection limits

These limits are enforced by Microsoft to ensure that no single consumer can degrade the overall performance of the Dataverse platform for everyone else. At the time of writing, the Dataverse service protection API limits are evaluated per user and per web server (see below), and are set at:

  • A cumulative 6,000 requests in a 300 second sliding window
  • A combined execution time of 1,200 seconds (20 minutes) aggregated across requests in a 300 second sliding window
  • A maximum of 52 concurrent requests per user

These limits are enforced per web server; however, the number of web servers servicing a given Dataverse environment is opaque, so it is prudent to plan for only one server when considering the limits.
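To put those numbers in perspective: 6,000 requests in 300 seconds works out at a sustained average of 20 requests per second per user, and 1,200 seconds of execution time in a 300 second window allows, on average, 4 seconds of server processing for every second of wall-clock time.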

Impact of exceeding limits

Depending on which API you are using, the platform will signal that limits have been exceeded in one of two ways:

  • With the Web API, a 429 Too Many Requests response carrying a Retry-After header whose value, in seconds, indicates how long the caller should pause (see the first sketch below this list).

  • With the Dataverse SDK for .NET, an OrganizationServiceFault error with one of three specific error codes (see the second sketch below this list):

    Error code (SDK)   Hex code (Web API)   Message
    -2147015902        0x80072322           Number of requests exceeded the limit of 6000 over time window of 300 seconds
    -2147015903        0x80072321           Combined execution time of incoming requests exceeded limit of 1,200,000 milliseconds over time window of 300 seconds. Decrease number of concurrent requests or reduce the duration of requests and try again later.
    -2147015898        0x80072326           Number of concurrent requests exceeded the limit of 52

    In the OrganizationServiceFault.ErrorDetails collection there will be an entry with key Retry-After containing a TimeSpan value representing the necessary delay.
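To make the Web API case concrete, here is a minimal sketch of a retry wrapper around HttpClient that honours a 429 response. The SendWithRetryAsync helper, the request factory and the 30 second fallback delay are my own illustrative assumptions, not anything prescribed by the platform:

    // A minimal sketch: retry on 429, honouring the Retry-After header.
    // Assumes an HttpClient already authenticated against the environment's Web API.
    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class WebApiRetry
    {
        public static async Task<HttpResponseMessage> SendWithRetryAsync(
            HttpClient client, Func<HttpRequestMessage> requestFactory, int maxRetries = 3)
        {
            for (var attempt = 0; ; attempt++)
            {
                // An HttpRequestMessage cannot be sent twice, hence the factory.
                var response = await client.SendAsync(requestFactory());

                if (response.StatusCode != (HttpStatusCode)429 || attempt >= maxRetries)
                    return response;

                // Dataverse expresses Retry-After in seconds; surfaced here as a delta.
                var delay = response.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(30);
                await Task.Delay(delay);
            }
        }
    }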
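And an equivalent sketch for the SDK case, catching the fault and reading the Retry-After entry from the ErrorDetails collection. Again, the helper name, retry count and fallback delay are illustrative assumptions:

    // A minimal sketch: catch the service protection fault from the
    // Dataverse SDK for .NET and honour the Retry-After value in ErrorDetails.
    using System;
    using System.ServiceModel;
    using System.Threading;
    using Microsoft.Xrm.Sdk;

    public static class SdkRetry
    {
        // The three service protection error codes from the table above.
        private static readonly int[] LimitErrorCodes =
        {
            -2147015902, // number of requests exceeded
            -2147015903, // combined execution time exceeded
            -2147015898, // concurrent requests exceeded
        };

        public static OrganizationResponse ExecuteWithRetry(
            IOrganizationService service, OrganizationRequest request, int maxRetries = 3)
        {
            for (var attempt = 0; ; attempt++)
            {
                try
                {
                    return service.Execute(request);
                }
                catch (FaultException<OrganizationServiceFault> ex)
                    when (attempt < maxRetries &&
                          Array.IndexOf(LimitErrorCodes, ex.Detail.ErrorCode) >= 0)
                {
                    // The platform tells us how long to pause before retrying.
                    var delay = ex.Detail.ErrorDetails.ContainsKey("Retry-After")
                        ? (TimeSpan)ex.Detail.ErrorDetails["Retry-After"]
                        : TimeSpan.FromSeconds(30); // fallback assumption
                    Thread.Sleep(delay);
                }
            }
        }
    }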

Areas I aim to cover

Our experiences include: simple console apps, typically used for data manipulation or bulk import; simple low-volume Azure Functions; and a couple of more complex Azure Functions apps that can scale out significantly and place a heavy parallel load on the Dataverse API.

In this series I aim to touch on all these scenarios and document the techniques we have found to work.

As I publish posts in this series, they will be linked at the bottom of this post.

Example code can be found here.

See also

Posts in this series so far:

  • Dataverse resilience experiment 1: a naive approach to parallel record creation using ServiceClient and example code from the documentation.
  • Dataverse resilience experiment 2: a second approach to parallel record creation using ServiceClient, but with Parallel.ForEachAsync.
  • Dataverse resilience experiment 3: a third approach to parallel record creation in Dataverse using HttpClient without any retry logic (a naive starting point).
  • Dataverse resilience experiment 4: a fourth approach to parallel record creation in Dataverse using HttpClient with retry logic from Polly.
  • Dataverse resilience pause and reflect: a short reflection on the Dataverse resilience tests so far.

Meta

Image credit: Neil Cummings (source) - licensed CC-BY-SA 2.0

#100DaysToOffload 6/100

