"30 requests every 10 seconds per IP"
As I think about it, there are three possibilities for how the request rate limiting could work:
- There is some kind of global, absolute timer that counts to 10 and resets itself. The API lets every user make 30 requests during each of those 10-second periods.
- Every request has a 10-second "cooldown". If there are currently 30 requests "on cooldown", every new request receives a 503 error.
- The first request triggers a 10-second timer. One can make at most 29 further requests during that time.
Well?
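For illustration, here is a rough sketch of the second model (a sliding window of per-request cooldowns). This is a hypothetical implementation, not how the API actually works; the class name and parameters are made up for the example:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Option #2 sketch: each request is "on cooldown" for `window` seconds;
    a new request is rejected while `limit` requests are still cooling down."""

    def __init__(self, limit=30, window=10.0):
        self.limit = limit
        self.window = window
        self.timestamps = deque()  # arrival times of requests still on cooldown

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop requests whose 10-second cooldown has expired
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False  # here a server would send the 503 error
```

Under this model the limit "slides": the 31st request is only allowed once the oldest of the previous 30 has aged out of the window.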
Reply by Travis Bell
on December 14, 2014 at 11:07 AM
Hi lavsprat,
Honestly, at this point rate limiting is barely working. We're just using Nginx's rate limiting features, but since we don't use sticky sessions and there are 8 API instances in our cluster with no global count or timer, the counts get split across all 8, which undermines any real limiting ability.
So how is it supposed to work right now? Mostly like option #3: you can make at most 30 requests in a 10-second span, and every 10 seconds the counter resets.
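That intended behavior (option #3, a fixed window started by the first request) can be sketched roughly like this. This is an assumed illustration of the described scheme, not the actual Nginx configuration or server code:

```python
import time

class FixedWindowLimiter:
    """Option #3 sketch: the first request starts a `window`-second timer;
    up to `limit` requests are allowed before the counter resets."""

    def __init__(self, limit=30, window=10.0):
        self.limit = limit
        self.window = window
        self.window_start = None  # set by the first request
        self.count = 0

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # First request, or 10 seconds elapsed: start a fresh window
        if self.window_start is None or now - self.window_start >= self.window:
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False
```

Note that a per-instance counter like this is exactly what breaks across 8 instances: each instance keeps its own `count`, so a client spread across them could get up to 8 times the intended limit per window.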
I posted this update about our future rate limiting methods, but our ops team and I got busy with other things, and it has yet to make its way into production. That said, this is still a priority for us and has recently seen renewed interest internally. It will go live, I just don't know exactly when. The new version fixes all of the v1 problems mentioned above.
Reply by lavsprat
on December 14, 2014 at 5:12 PM
Thanks for the quick answer.