The Movie Database Support

Hi everyone,

There are going to be some changes to the rate limiting we do on the API coming up shortly. This won't affect the rate limits themselves, but rather how we calculate them and what we return. I'm happy to answer any specific questions should you have one.

Let me first outline the key problems with our current system. Right now our API web servers are load balanced using Amazon's Elastic Load Balancer (ELB). When we first started doing this we only had 2 servers. With Nginx in front taking care of the rate limiting, it worked OK for us. Keep in mind, Nginx doesn't share any kind of hash table, so each IP was technically rate limited separately on each server. At 2 servers we were OK with this, since the way we split traffic was generally by IP to each individual availability zone. This meant that nearly everyone's requests ended up at the same Nginx instance.

Fast forward to 2014 and our API web server cluster is 8 servers, which makes any attempt to rate limit with Nginx almost useless.

The new system will share the state of an IP address across all 8 instances and provide proper balanced rate limiting. The rate limits themselves remain unchanged (max. 30 requests in a 10 second span). The key difference is in the response handling during your requests and when you trip the rate limits. I'll give you some examples so you can make changes to your code before we go live with this change.

Every request will soon return these 3 headers:

X-RateLimit-Limit: 30
X-RateLimit-Remaining: 18
X-RateLimit-Reset: 1394060670
  • X-RateLimit-Limit: The number of requests you're allowed to make in a 10 second span.
  • X-RateLimit-Remaining: The number of requests you have left before the counter resets.
  • X-RateLimit-Reset: The Epoch timestamp when the counter will reset.
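
As a rough sketch of how a client could use these (this is just an example in Ruby, not official client code; the movie ID and API key below are placeholders):

require 'net/http'

# Placeholder API key and request, purely for illustration.
api_key = 'xx'
uri = URI("https://api.themoviedb.org/3/movie/550?api_key=#{api_key}")

res = Net::HTTP.get_response(uri)

limit     = res['X-RateLimit-Limit'].to_i      # requests allowed per 10 second span
remaining = res['X-RateLimit-Remaining'].to_i  # requests left in the current span
reset_at  = res['X-RateLimit-Reset'].to_i      # Epoch time the counter resets

# If the window is used up, wait until the reset time before the next request.
sleep [reset_at - Time.now.to_i, 1].max if remaining <= 0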

Right now when you actually trip the rate limits, we just throw a 503 error, which is really not the right way to do this. Moving forward, we'll be throwing a proper 429 status code along with a Retry-After header telling you how many seconds to wait until you're allowed to make a request again. It looks like so:

HTTP/1.1 429
Content-Length: 104
Date: Wed, 05 Mar 2014 23:08:12 GMT
Retry-After: 7
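
As an example of handling that response (again just a sketch in Ruby; the attempt cap is arbitrary and not part of the API):

require 'net/http'

# Retry helper sketch: back off using the Retry-After header on a 429.
def get_with_backoff(uri, max_attempts = 5)
  max_attempts.times do
    res = Net::HTTP.get_response(uri)
    return res unless res.code == '429'
    # Retry-After is in seconds; never sleep less than 1 second.
    sleep [res['Retry-After'].to_i, 1].max
  end
  nil
end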

Hopefully this will help you guys build better systems around the API. It's important for us to try and provide the best and most complete service we can, and this should help a lot of you out.

Cheers.

20 replies

Happy to see my issue with this is being taken seriously :-), I will be updating php-tmdb-api soon to support it. For any other authors / interested folks, this is the relevant ticket: http://tmdb.lighthouseapp.com/projects/83077/tickets/356-implement-rfc6585-section-4-for-rate-limiting .

Are the servers solid state based now?

I found a huge performance increase in our API transactions by moving MySQL and the OS to them. No more need for rate limiting :D

Are the servers solid state based now?

Our DB and web servers are, yes. The SSDs have close to no effect on the web servers though, as everything is served from memory. We do very, very little IO. The bigger difference we noticed was just bumping to the new c3 instances with their better CPUs.

No more need for rate limiting :D

This has no bearing on us choosing to rate limit. We have had a lot of trouble with people pushing code into the wild that ends up stuck in loops forever and ever (we had one client in particular that was generating over 6,000 requests per second all by itself, looping until we got the developer to push a fix for it). When you process the kind of requests we do, it just becomes a natural requirement; we can't let a few bad developers ruin the experience for everyone.

Has this been implemented yet? I'm not seeing the headers (or the rate-limiting) on the production api.themoviedb.org

-Mike

Hey Mike,

No, not yet. I'm waiting on our ops team to deploy this.

Oh cool. Will check in later then :)

At this point, are the limits being enforced?

I created a test script, but it didn't give me any warnings:

require 'themoviedb'

Tmdb::Api.key("xx")
times_counter = 0 
100.times do
  Tmdb::Movie.find("batman")
  times_counter += 1
  puts times_counter
end

Hey guys,

We deployed this last night, and it is now live in production. We increased the rate limit to 40 requests every 10 seconds too, so there's a little bump.

Thanks Travis. I'll see to it that it gets supported in TMDbLib.

This is a completely reasonable restriction in theory, but not on an API that is so frustratingly limited in methods for retrieving data.

I have a simple app that is basically an alternate view of a user's list. It pulls the list and then displays a table with title, runtime, poster, director, etc. This is what the API currently requires me to do:

  1. GET /list/list_id
  2. For each movie in list, GET /movies/movie_id
  3. For each movie in list, GET /movies/movie_id/credits

For a list with 50 movies, this is 101 API calls just to get a couple kilobytes of data. With no way to get more than one movie at a time (by ID) and such a small selection of attributes returned for a list's movies, I'm already forced to mirror the data in a local database. Now when a user's list has more than a dozen movies I haven't mirrored yet, I hit the API limit.

What are my options here?

Hi dpmccabe,

I'm not sure of a potential feature/plan, but you can trim your movie requests down to a single call with append_to_response. For each movie ID, you can call:

https://api.themoviedb.org/3/movie/{ID}?api_key={API_KEY}&append_to_response=credits
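
For example (a sketch only; the movie ID and API key are placeholders, and the 'Director' lookup assumes the usual credits payload), that one call covers what used to take two:

require 'net/http'
require 'json'

# Placeholder values, purely for illustration.
api_key  = 'xx'
movie_id = 550

# One request returns the movie details with the credits appended.
uri = URI("https://api.themoviedb.org/3/movie/#{movie_id}")
uri.query = URI.encode_www_form(api_key: api_key, append_to_response: 'credits')
movie = JSON.parse(Net::HTTP.get(uri))

runtime  = movie['runtime']
director = movie['credits']['crew'].find { |c| c['job'] == 'Director' }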

Lists are being re-written soon, so I can take that opportunity to think about this problem in more depth.

Travis,

Perhaps a fetch method accepting multiple IDs?

Regards,

Travis,

I'm seeing 429 responses with the header: "Retry-After: 0".

This is counter-productive. Could you always set it to at least 1, or round up to the nearest integer?

Also, the 429 error occurs even when making a request after the time given in X-RateLimit-Reset. Adding 1 more second helps, but even then a 429 still happens sometimes...
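
Until that changes server-side, a rough client-side workaround (just a sketch, nothing official) is to treat a zero Retry-After as at least one second and pad the reset time by an extra second:

require 'net/http'

# Workaround sketch: never sleep less than 1 second, and add a 1 second buffer
# past X-RateLimit-Reset before retrying, as described above.
def wait_out_429(res)
  retry_after = res['Retry-After'].to_i
  until_reset = res['X-RateLimit-Reset'].to_i - Time.now.to_i
  sleep([retry_after, until_reset, 1].max + 1)
end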

Hello, are there any example implementations in Python (or other languages) that show how to respect this rate limit when issuing requests?
