Replies: 5 comments
-
Hmm, I think |
-
I made a test repository but could not reproduce the issue. After trying several options and narrowing down the differences, it seems to be caused by the usage of MQTT within the worker loop. I'm starting to think I'm introducing odd behavior by having an event loop within a loop on a spawned worker, perhaps? My solution seems a bit more hacky knowing this, but it works as desired lol. Worker snippets:

```rust
// This works properly: infinite "looping" without re-enqueued workers.
loop {
    info!("looping");
    std::thread::sleep(Duration::from_secs(1));
}
```

```rust
// Once the timer hits the reenqueue-orphans threshold, it starts
// mass-reloading all tasks over and over again.
loop {
    info!("looping");
    match event_loop.poll().await {
        Ok(event) => match event {
            Incoming(Publish(publish)) => doTheThing(),
            _ => continue,
        },
        Err(e) => {
            error!("error polling for events: {e:#?}");
        }
    }
}
```

Without revealing too much, the |
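The failure mode being described can be sketched with plain std types: a job that never returns eventually exceeds the orphan threshold (5 minutes by default, per this thread) and gets re-enqueued, producing the duplicates. All names below are illustrative, not actual apalis APIs:

```rust
use std::time::Duration;

// Illustrative only: models the orphan check discussed in this thread.
// `reenqueue_orphaned_after` here is a local variable, not the real API.
fn becomes_orphaned(job_runtime: Duration, reenqueue_orphaned_after: Duration) -> bool {
    job_runtime >= reenqueue_orphaned_after
}

fn main() {
    let threshold = Duration::from_secs(5 * 60); // the default mentioned in this thread
    // A monitor loop that has been polling for six minutes without returning:
    println!("{}", becomes_orphaned(Duration::from_secs(6 * 60), threshold)); // true
    // A short job that finished in ten seconds:
    println!("{}", becomes_orphaned(Duration::from_secs(10), threshold)); // false
}
```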
-
Is the loop inside the job function? I suspect you may be blocking the thread, but I would want to see how you have set up the worker and the job function; the snippets provided are not entirely clear.
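To illustrate the suspicion: if a blocking loop runs directly in the job function, the worker thread can never do anything else. Here is a std-only sketch of moving the blocking work onto its own OS thread so the job function stays responsive (in an async apalis worker you would more likely reach for something like `tokio::task::spawn_blocking`; that choice, and every name below, is an assumption rather than an apalis API):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    // The blocking device-monitor loop lives on its own OS thread:
    let handle = thread::spawn(move || {
        for i in 0..3 {
            // stand-in for event_loop.poll() / doTheThing()
            tx.send(i).unwrap();
        }
    });
    // The job function merely consumes results and never blocks the worker:
    let events: Vec<i32> = rx.iter().take(3).collect();
    handle.join().unwrap();
    println!("{:?}", events); // [0, 1, 2]
}
```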
-
Also, you might just want to wrap the whole loop into something like |
-
I do want to make it easy to run long-running tasks. This might also include a resume feature. |
-
I'm working on a use case that is somewhat atypical, at least compared to the examples within the project.
The application itself uses Actix Web to expose admin endpoints that manage a list of devices, and Apalis automatically spins up workers that continuously monitor them (a 1:1 mapping: each device has a worker running an infinite loop).
While prototyping, I noticed duplicate jobs were launched after a certain amount of time, leading to conflicts in operations. This was caused by the `reenqueue_orphaned_after` value, since long-running operations are considered orphaned. The default value is 5 minutes, which made it very difficult to identify :)
I settled on simply providing a very large duration (>100 years) to avoid having duplicate tasks, but I'm wondering if there's appetite to officially support this type of workflow?
We could accept an `Option<Duration>` value instead, where `None` would skip the check entirely.
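As a sketch of what the proposal could look like (purely illustrative; `Config` and `should_reenqueue` are hypothetical names, not apalis APIs): with an `Option<Duration>`, `None` disables the orphan check entirely instead of requiring a sentinel "100 years" value.

```rust
use std::time::Duration;

// Hypothetical config type sketching the proposed Option<Duration> knob.
struct Config {
    reenqueue_orphaned_after: Option<Duration>,
}

fn should_reenqueue(cfg: &Config, running_for: Duration) -> bool {
    match cfg.reenqueue_orphaned_after {
        Some(threshold) => running_for >= threshold,
        None => false, // check disabled: long-running jobs are never orphaned
    }
}

fn main() {
    let cfg = Config { reenqueue_orphaned_after: None };
    // Even a job running for ten minutes is left alone:
    println!("{}", should_reenqueue(&cfg, Duration::from_secs(600))); // false
}
```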