# Rate Limits

Understand how rate limiting works and our current policies for concurrent requests.
## Currently: Unlimited Concurrent Requests

We currently offer unlimited concurrent requests on all plans. There is no throttling or queuing: send as many parallel requests as you need.
## What are Rate Limits?

Rate limits control how many API requests you can make within a given time period. They typically come in two forms:
### Requests per Second

The maximum number of requests you can make per second. Higher limits mean faster data retrieval.
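To illustrate how a per-second limit behaves on the client side, here is a minimal token-bucket sketch. The class and parameter names are illustrative only, not part of this API:

```python
import time

class TokenBucket:
    """Client-side limiter: allows `rate` requests per second on average."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may be sent now, consuming one token."""
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A bucket with `rate=10, capacity=2` permits short bursts of two requests but sustains only ten per second over time.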
### Concurrent Requests

The maximum number of simultaneous requests that can be in flight at the same time.
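A concurrency cap can be pictured as a semaphore guarding each request. This hypothetical wrapper (not part of this API) shows the mechanics:

```python
import threading

class ConcurrencyLimiter:
    """Caps the number of simultaneous in-flight calls."""

    def __init__(self, max_concurrent: int):
        self._sem = threading.Semaphore(max_concurrent)
        self._lock = threading.Lock()
        self.in_flight = 0
        self.peak = 0  # highest concurrency observed

    def run(self, fn, *args):
        with self._sem:  # blocks while max_concurrent calls are in flight
            with self._lock:
                self.in_flight += 1
                self.peak = max(self.peak, self.in_flight)
            try:
                return fn(*args)
            finally:
                with self._lock:
                    self.in_flight -= 1
```

With unlimited concurrency, the semaphore is effectively infinite; a future cap would simply lower it.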
## Future Policy Changes

While we currently offer unlimited concurrency, this may change as we scale. However, we guarantee:

- Existing customers will be grandfathered into their current rate limits
- Any changes will be communicated at least 30 days in advance
- Higher-tier plans will always have higher limits
## Best Practices

Even with unlimited rate limits, follow these best practices for optimal performance:
### Use connection pooling

Reuse HTTP connections to reduce overhead and improve throughput.
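In practice, HTTP libraries such as `requests` (via `Session`) or `urllib3` pool connections for you. The generic sketch below, using a hypothetical connection factory, shows why pooling cuts overhead: connections are handed out and returned rather than recreated per request.

```python
import queue

class ConnectionPool:
    """Reuse expensive connection objects instead of creating one per request."""

    def __init__(self, factory, size: int):
        self._factory = factory
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())  # create connections up front

    def acquire(self):
        try:
            return self._pool.get_nowait()  # reuse an idle connection
        except queue.Empty:
            return self._factory()          # pool exhausted: create a new one

    def release(self, conn):
        self._pool.put(conn)  # return the connection for reuse
```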
### Implement exponential backoff

If a request fails, wait before retrying with increasing delays.
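A minimal backoff helper might look like this. The function name and defaults are illustrative; jitter is added so that many clients retrying at once do not synchronize:

```python
import random
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Call fn(), retrying on failure with exponentially growing, jittered delays."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # Delay doubles each attempt: base, 2*base, 4*base, ...
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.0))
```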
### Cache responses when possible

Store frequently accessed data locally to reduce API calls.
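A small time-to-live (TTL) cache is often enough. This sketch (class and method names are our own, not part of this API) returns a stored value until it expires, then re-fetches:

```python
import time

class TTLCache:
    """Cache responses for `ttl` seconds to avoid repeated identical API calls."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}  # key -> (expires_at, value)

    def get_or_fetch(self, key, fetch):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]  # fresh cached value, no API call
        value = fetch()      # cache miss or expired: call the API
        self._store[key] = (time.monotonic() + self.ttl, value)
        return value
```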
### Batch requests logically

Group related requests together rather than making many small calls.
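Assuming a hypothetical bulk endpoint that accepts several IDs per call, batching turns N single-record requests into N/batch_size calls:

```python
def chunked(items, batch_size):
    """Split a list of work items into fixed-size batches."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

def fetch_all(ids, fetch_batch, batch_size=100):
    """Fetch many records with one request per batch instead of one per id."""
    results = []
    for batch in chunked(ids, batch_size):
        results.extend(fetch_batch(batch))  # fetch_batch: hypothetical bulk call
    return results
```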
### Monitor your usage

Keep an eye on your dashboard to understand your usage patterns.
## Response Headers

When rate limits are implemented, responses will include these headers:

| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed per window |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp when the window resets |
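If these headers are introduced, a client could read them to decide whether to pause before the next request. A sketch, where the header names come from the table above and a plain dict stands in for a real HTTP response:

```python
def parse_rate_limit(headers, now):
    """Extract rate-limit state from response headers (case-insensitive lookup)."""
    h = {k.lower(): v for k, v in headers.items()}
    remaining = int(h.get("x-ratelimit-remaining", 1))
    reset_at = int(h.get("x-ratelimit-reset", 0))
    # Only wait when the quota is exhausted; otherwise proceed immediately.
    wait = max(0, reset_at - now) if remaining == 0 else 0
    return {"remaining": remaining, "wait_seconds": wait}
```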