Configure Sidekiq to process only two requests per second

May 2024 · 2 minute read

I'm sending stock updates to the Shopify API. The requests are queued with Sidekiq. Shopify allows 2 requests per second. I can't find a way to configure Sidekiq to process only 2 scheduled jobs per second.


2 Answers

What you are looking to do is slow down the rate of requests that your web application makes to an API, a.k.a. "rate limiting".

Sidekiq Enterprise provides a rate limiting API that gives you three options, two of which can raise an exception once the rate limit has been exceeded.
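For reference, a window limiter in Sidekiq Enterprise might look roughly like the sketch below; the limiter name, the worker class, and the wait_timeout value are my assumptions, and this requires the commercial Enterprise gem:

class StockUpdateWorker
  include Sidekiq::Worker

  # Allow at most 2 calls per one-second window.
  SHOPIFY_LIMITER = Sidekiq::Limiter.window('shopify', 2, :second, wait_timeout: 5)

  def perform(product_id)
    SHOPIFY_LIMITER.within_limit do
      # call the Shopify API here
    end
    # If the window is full, within_limit waits up to wait_timeout seconds
    # and then raises the OverLimit error mentioned in the quote below.
  end
end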

Rate limiters do not slow down Sidekiq's job processing. If you push 1000 jobs to Redis, Sidekiq will run those jobs as fast as possible which may cause many of those jobs to fail with an OverLimit error.

Therefore

If you want to trickle jobs into Sidekiq slowly, the only way to do that is with manual scheduling.

With that said, your options are twofold: you can either slow things down at the Redis level or implement the rate limiting yourself within the context of your web application.

As an example from the same documentation page:

Here's how you can schedule 1 job per second to ensure that Sidekiq doesn't run all jobs immediately:

1000.times do |index|
  SomeWorker.perform_in(index, some_args)
end
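Adapting that snippet to the question's limit of 2 requests per second could look something like the sketch below; SomeWorker and some_args are the same placeholders as in the docs example:

1000.times do |index|
  # Integer division yields delays of 0, 0, 1, 1, 2, 2, ... seconds,
  # so two jobs land in each one-second slot.
  SomeWorker.perform_in(index / 2, some_args)
end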

And you probably don't want to slow things down at the Redis level, because

Rate limiting is unusually hard on Redis for a Sidekiq feature

Instead, you could rescue the exception raised by the API you're working with, similar to the following answer to this question:

class TwitterWorker
  include Sidekiq::Worker

  def perform(status_id)
    status = Twitter.status(status_id)
    # ...
  rescue Twitter::Error::TooManyRequests
    # Reschedule the query to be performed in the next time slot
    TwitterWorker.perform_in(15.minutes, status_id)
  end
end
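Translated to the question's Shopify stock updates, the same pattern might look roughly like this; ShopifyClient and its RateLimitError are hypothetical stand-ins for whatever client library and 429-style error you actually have:

class StockUpdateWorker
  include Sidekiq::Worker

  def perform(product_id, quantity)
    # ShopifyClient is a placeholder for your actual API client.
    ShopifyClient.update_stock(product_id, quantity)
  rescue ShopifyClient::RateLimitError
    # Back off and retry once the per-second window has passed.
    StockUpdateWorker.perform_in(1.second, product_id, quantity)
  end
end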

In conclusion, unless you want to pay for Sidekiq Enterprise and build your solution around it, or deal with Redis directly, I recommend rescuing the rate limit exception and rescheduling the job with perform_in whenever you've exceeded the API's request limit, as in the example above.

I hope that helps!

I got the idea of adding a class variable to my worker that stores the timestamp (in seconds) of the last scheduled job together with the number of jobs already scheduled for that second. When I schedule a job, I look up the last scheduled second and its job count: if that future second already has two jobs, I move on to the next second and schedule my job there; otherwise I increment the count for that second.
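A rough sketch of how that bookkeeping could look (class-level state in a single process, so treat it as an illustration of the idea rather than a finished solution; the worker name and MAX_PER_SECOND constant are mine):

class StockUpdateWorker
  include Sidekiq::Worker

  MAX_PER_SECOND = 2

  # [last scheduled second (epoch), jobs already booked for that second]
  @schedule = [Time.now.to_i, 0]

  class << self
    def schedule_throttled(*args)
      second, count = @schedule
      now = Time.now.to_i

      if second < now
        # The booked slot is in the past; start a fresh slot at "now".
        second, count = now, 0
      elsif count >= MAX_PER_SECOND
        # Current slot is full; move on to the next second.
        second, count = second + 1, 0
      end

      @schedule = [second, count + 1]
      perform_at(Time.at(second), *args)
    end
  end

  def perform(*args)
    # call the Shopify API here
  end
end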
