Rate Limits

Understand how rate limiting works and our current policies for concurrent requests.

Currently: Unlimited Concurrent Requests

We are currently offering unlimited concurrent requests for all plans. There is no throttling or queuing; send as many parallel requests as you need.

What are Rate Limits?

Rate limits control how many API requests you can make within a given time period. They typically come in two forms:

Requests per Second

Maximum number of requests you can make per second. Higher limits mean faster data retrieval.

Concurrent Requests

Maximum number of simultaneous requests that can be in-flight at the same time.
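Since there is currently no cap on in-flight requests, clients can fan work out freely. As a rough sketch (the `fetch` function below is a hypothetical stand-in for an HTTP GET against your API, not a real client):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for an HTTP GET against the API (e.g. with
# urllib.request or requests); it only simulates a successful response.
def fetch(item_id: int) -> dict:
    return {"id": item_id, "status": "ok"}

def fetch_all(ids, max_workers: int = 20) -> list:
    # With no server-side concurrency cap, the number of simultaneous
    # in-flight requests is bounded only by the size of this worker pool.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, ids))

results = fetch_all(range(10))
```

Here all ten requests may be in flight at once; raising `max_workers` raises the concurrency, which the current policy permits.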

Future Policy Changes

While we currently offer unlimited concurrency, this may change in the future as we scale. However, we guarantee:

  • Existing customers will be grandfathered into their current rate limits
  • Any changes will be communicated at least 30 days in advance
  • Higher-tier plans will always have higher limits

Best Practices

Even with unlimited rate limits, follow these best practices for optimal performance:

  1. Use connection pooling: reuse HTTP connections to reduce overhead and improve throughput.
  2. Implement exponential backoff: if a request fails, wait before retrying with increasing delays.
  3. Cache responses when possible: store frequently accessed data locally to reduce API calls.
  4. Batch requests logically: group related requests together rather than making many small calls.
  5. Monitor your usage: keep an eye on your dashboard to understand your usage patterns.
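The retry practice above can be sketched as follows. This is a minimal illustration, not an official client: `send` stands in for any HTTP request made over a pooled connection (for example via a `requests.Session`), and the schedule uses full jitter, one common backoff variant.

```python
import random
import time

def backoff_delays(max_retries: int = 5, base: float = 0.5, cap: float = 30.0):
    """Yield an exponential backoff schedule with full jitter: the wait
    window doubles each attempt (base * 2**attempt, capped at `cap`), and
    the actual delay is a random point inside that window."""
    for attempt in range(max_retries):
        window = min(cap, base * (2 ** attempt))
        yield random.uniform(0, window)

def request_with_retry(send, max_retries: int = 5, base: float = 0.5):
    """Call `send()` (any callable that raises on failure), sleeping with
    exponential backoff between failed attempts. `send` is a stand-in for
    an HTTP request issued over a reused, pooled connection."""
    last_exc = None
    for delay in backoff_delays(max_retries, base=base):
        try:
            return send()
        except Exception as exc:  # in practice, catch transport errors only
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

Jitter spreads retries out so that many clients failing at the same moment do not all retry in lockstep.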

Response Headers

When rate limits are implemented, responses will include these headers:

Header                   Description
X-RateLimit-Limit        Maximum requests allowed per window
X-RateLimit-Remaining    Requests remaining in current window
X-RateLimit-Reset        Unix timestamp when the window resets
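Once those headers appear, a client might read them like this. `parse_rate_limit` is a hypothetical helper; the header meanings are exactly those in the table above.

```python
import time

def parse_rate_limit(headers: dict) -> dict:
    """Extract rate-limit state from response headers. HTTP header names
    are case-insensitive, so normalize before lookup. (Hypothetical
    helper; 0 is used as a default when a header is absent.)"""
    h = {k.lower(): v for k, v in headers.items()}
    reset = int(h.get("x-ratelimit-reset", 0))
    return {
        "limit": int(h.get("x-ratelimit-limit", 0)),
        "remaining": int(h.get("x-ratelimit-remaining", 0)),
        # X-RateLimit-Reset is a Unix timestamp, so convert it to a
        # relative wait time; clamp at 0 if the window already reset.
        "seconds_until_reset": max(0, reset - int(time.time())),
    }
```

A client could pause when `remaining` reaches 0 and resume after `seconds_until_reset` elapses.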