Rate limits protect the platform and your users. Here’s how to design clients that feel instant and never fail noisily.
Understand the limits
- Minute and daily limits are enforced per API key; a sketch for checking your remaining quota follows this list
- Premium tiers raise ceilings; see Docs → Rate Limits
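Many APIs report the current quota in response headers. The header names below (X-RateLimit-Remaining, X-RateLimit-Reset) are placeholders, not confirmed names for this platform; check Docs → Rate Limits for the actual ones.

```js
// Sketch: inspect remaining quota after any response.
// Header names here are assumptions; your API's docs list the real ones.
function logQuota(res) {
  const remaining = res.headers.get('X-RateLimit-Remaining');
  const reset = res.headers.get('X-RateLimit-Reset');
  if (remaining !== null) {
    console.log(`Requests left this window: ${remaining}, resets at ${reset}`);
  }
}
```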
Strategies that work
- Client-side caching for identical GETs (5–60s); see the cache sketch after this list
- Queued background jobs for bulk work (a queue sketch appears under the FAQ below)
- Retry with exponential backoff on 429
- Use ETags/If-None-Match when available; the cache sketch below shows conditional revalidation
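Here is a minimal sketch combining the caching and ETag strategies, assuming a plain fetch-based client: a short-lived in-memory cache keyed by URL that replays stored ETags via If-None-Match, so a 304 response refreshes the TTL without re-downloading the body. All names are illustrative, not part of any platform SDK.

```js
const cache = new Map(); // url -> { etag, body, expires }

// Cached GET with a short TTL and conditional revalidation via If-None-Match.
async function cachedGet(url, ttlMs = 30000) {
  const hit = cache.get(url);
  if (hit && Date.now() < hit.expires) return hit.body; // still fresh: no network call

  const headers = {};
  if (hit && hit.etag) headers['If-None-Match'] = hit.etag; // revalidate a stale entry

  const res = await fetch(url, { headers });
  if (res.status === 304 && hit) {
    // Unchanged on the server: keep the cached body and extend its lifetime.
    hit.expires = Date.now() + ttlMs;
    return hit.body;
  }
  if (!res.ok) throw new Error(`GET ${url} failed: ${res.status}`);

  const body = await res.json();
  cache.set(url, { etag: res.headers.get('ETag'), body, expires: Date.now() + ttlMs });
  return body;
}
```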
JavaScript retry helper
```js
async function call(url, options, retries = 3) {
  for (let i = 0; i < retries; i++) {
    const res = await fetch(url, options);
    if (res.status === 429) {
      // Prefer the server's Retry-After header (seconds); fall back to exponential backoff.
      const wait = (+res.headers.get('Retry-After') || Math.pow(2, i)) * 1000;
      await new Promise((r) => setTimeout(r, wait));
      continue;
    }
    if (!res.ok) throw new Error((await res.json()).message);
    return res.json();
  }
  // Every attempt came back 429: surface that instead of returning undefined.
  throw new Error(`Rate limited after ${retries} attempts`);
}
```
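For instance, fetching a hypothetical list endpoint through the helper (the URL and auth header are placeholders, and the snippet assumes an ES module or async context):

```js
// Hypothetical endpoint and auth header; substitute your own values.
const widgets = await call('https://api.example.com/v1/widgets', {
  headers: { Authorization: 'Bearer YOUR_API_KEY' },
});
console.log(widgets);
```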
FAQ
What should my default retry policy be?
3 tries with 1s, 2s, 4s backoff is a good baseline. Respect Retry-After when present.
How do I avoid thrashing?
Batch non-urgent calls and move them to background queues; cache successful responses.
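One way to do that, assuming nothing about the platform's SDK: a tiny in-process queue that drains one job at a time with a small gap between requests, reusing the retry helper above.

```js
const queue = [];
let draining = false;

// Enqueue a non-urgent request; resolves when its turn comes and the call completes.
function enqueue(url, options) {
  return new Promise((resolve, reject) => {
    queue.push({ url, options, resolve, reject });
    if (!draining) drain();
  });
}

// Drain jobs sequentially with a pause to stay well under the per-minute limit.
async function drain() {
  draining = true;
  while (queue.length > 0) {
    const job = queue.shift();
    try {
      job.resolve(await call(job.url, job.options)); // reuses the retry helper above
    } catch (err) {
      job.reject(err);
    }
    await new Promise((r) => setTimeout(r, 250)); // roughly 4 requests per second
  }
  draining = false;
}
```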