Labels: bug (Something isn't working)
Description
What happened?
done_job.lua stores job results in the {namespace}:data::result hash on every job completion, but vacuum.lua never cleans these entries, so the result hash grows indefinitely.
Expected behavior
vacuum() should also remove result hash entries for vacuumed jobs, or there should be a TTL/cleanup mechanism for results.
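The intended invariant can be sketched with an in-memory model (a HashMap standing in for each Redis hash; the names `data`, `results`, and `vacuum` mirror the issue but are hypothetical, not the crate's actual internals):

```rust
use std::collections::HashMap;

/// Model of the desired vacuum behavior: for every vacuumed job id,
/// remove its entry from BOTH the data hash and the result hash.
/// Returns how many result entries were removed.
fn vacuum(
    data: &mut HashMap<String, String>,
    results: &mut HashMap<String, String>,
    done: &[String],
) -> usize {
    let mut removed = 0;
    for id in done {
        // Current behavior: the data entry is removed.
        data.remove(id);
        // Missing behavior reported here: the result entry should go too.
        if results.remove(id).is_some() {
            removed += 1;
        }
    }
    removed
}

fn main() {
    let mut data = HashMap::new();
    let mut results = HashMap::new();
    data.insert("job-1".to_string(), "payload".to_string());
    results.insert("job-1".to_string(), "ok".to_string());

    vacuum(&mut data, &mut results, &["job-1".to_string()]);
    // After vacuuming, both hashes should be empty (HLEN == 0 in Redis terms).
    assert!(data.is_empty() && results.is_empty());
}
```

Today only the `data.remove` half happens in vacuum.lua; the `results.remove` half is the missing cleanup.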
Steps to reproduce
- Run jobs that complete successfully
- Call storage.vacuum().await
- Check Redis with HLEN {namespace}:data::result: the count remains the same; results are not cleaned
Minimal code example
let mut storage: RedisStorage<MyJob> = RedisStorage::new(conn);
// Push and complete jobs
storage.push(MyJob { ... }).await?;
// ... jobs complete ...
// Vacuum
storage.vacuum().await?;
// Result hash still has entries
// redis-cli: HLEN "my_job:data::result" → still shows count
Version
0.7.x
Environment
- OS: Windows 11
- Rust version: 1.91.1
- Cargo version: 1.91.1
Relevant log output
Additional context
done_job.lua line 5:
redis.call("hmset", KEYS[3].. ns, ARGV[1], ARGV[3])

vacuum.lua only cleans KEYS[1] (done_list) and KEYS[2] (data hash); KEYS[3]::result is never touched.
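One possible fix is a sketch only: the actual key layout and how vacuum.lua receives the vacuumed job ids are assumptions here, so `job_ids` and the `::result` suffix are illustrative, not the script's real names:

```lua
-- Hypothetical addition to vacuum.lua: alongside the existing cleanup
-- of the data hash, delete each job's field from the result hash.
for _, job_id in ipairs(job_ids) do
  redis.call("hdel", KEYS[2], job_id)               -- existing: data hash
  redis.call("hdel", KEYS[3] .. "::result", job_id) -- missing: result hash
end
```

Alternatively, done_job.lua could set a TTL on result entries (e.g. via a per-job result key instead of one shared hash, since Redis cannot expire individual hash fields).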