Hello
Since version 0.5.0, we lost the concurrency limit feature introduced by this PR.
In fact, using GlobalConcurrencyLimitLayer or ConcurrencyLimitLayer does let the worker run at most 1 job at a time, without greedily fetching jobs from Redis, which lets other instances of the application (I have the same app scaled on a K8s cluster) pull jobs. That is good and does not cause starvation!
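For reference, this is roughly how I attach the limit today, reusing `app_state`, `storage` and `handler` from the snippet further down. The `.layer(...)` hook on `WorkerBuilder` and the layer ordering are assumptions on my side and may need adjusting to the apalis version in use:

```rust
use tower::limit::ConcurrencyLimitLayer;

// Sketch of the workaround: cap in-flight jobs for this worker at 1 using
// tower's concurrency limit. The `.layer(...)` call and its position in the
// builder chain are assumptions and may differ between apalis versions.
let worker = WorkerBuilder::new("worker-limited")
    .data(app_state.clone())
    .layer(ConcurrencyLimitLayer::new(1)) // at most one job in flight per worker
    .with_storage(storage)
    .build_fn(handler);
```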
But what I do not understand is: what is the point of having fn register_with_count together with config.buffer_size = 1 if the worker will still run many async jobs concurrently?
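To make the question concrete, here is the shape of setup I mean, sketched against the 0.5-style Monitor API (the exact register_with_count / run signatures are an assumption on my side and may differ between versions): one registered worker and buffer_size = 1, yet that single worker still executes many jobs at once.

```rust
// Sketch only: `worker` stands for the worker built in the snippet below
// (worker_msintuit); the Monitor / register_with_count / run signatures are
// assumptions based on the 0.5-style examples.
Monitor::<TokioExecutor>::new()
    .register_with_count(1, worker) // a single worker instance
    .run()
    .await?;
```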
IMO with a config like the one below, we should not have to add a ConcurrencyLimitLayer because:
- The storage is configured to fetch 1 job at a time
- The ONLY worker present should handle a job, then notify the monitor that it is available for more work, allowing the monitor to fetch more jobs from Redis
```rust
let redis_connection = make_redis_connection(&settings.broker).await?;

// Storage configured to buffer/fetch a single job at a time.
let config = {
    let mut config = Config::default();
    config.set_buffer_size(1);
    config.set_fetch_interval(Duration::from_millis(100));
    config.set_max_retries(3);
    config.set_keep_alive(Duration::from_secs(120));
    config
};
let storage = RedisStorage::new_with_config(redis_connection, config);

// A single worker backed by that storage.
let worker_msintuit =
    WorkerBuilder::new(format!("worker-{}", rng.gen::<u16>()))
        .data(app_state.clone())
        .with_storage(storage)
        .build_fn(handler);
```

Thank you :)