docs(threading): add multithreading book chapter #465
# Multithreading

RIOT-rs supports multithreading on the Cortex-M, RISC-V, and Xtensa architectures.

## Scheduling

- **Fixed-priority, preemptive scheduling** policy with up to 32 supported priority levels. The highest-priority runnable thread (or threads, in the multicore case) is always the one executed.
- **Thread priorities are dynamic** and can be changed at runtime.
- The **runqueue** is implemented as a circular linked list per priority level, requiring n × m × 8 bits for _n_ maximum threads and _m_ supported priority levels (a worked example follows after this list). All operations on the runqueue are performed in constant time, except for the deletion of a thread that is not the head of its runqueue.
> **Review comment:** This is very detailed, too detailed for this overview IMO. Mentioning the […]

> **Review comment:** When the priority of the thread is changed.
- **Deep sleep when idle**: On single core, no idle threads are created. Instead, if the runqueue is empty, the processor enters deep sleep until the next thread is ready. The context of the previously running thread is only saved once the next thread is ready and the context switch occurs.
> **Review comment:** This I'd drop. It is too inaccurate at the moment, we don't "deep sleep" yet, just idle the core.
- **Same-priority threads are scheduled cooperatively.** The scheduler itself is tickless, so time-slicing isn't supported. [Timers](https://docs.rs/embassy-time/latest/embassy_time/struct.Timer.html) are still available through the timer API of the integrated [Embassy] crate.
> **Review comment:** This needs a previous "Policy: The highest priority runnable thread (or threads in the multi-core case) is executed" or similar to make sense.

> **Review comment:** Maybe drop the timers sentence?
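
As a worked example of the runqueue formula above (n × m × 8 bits), here is a small host-side calculation. The numbers are purely illustrative; the actual RIOT-rs configuration constants are named and set differently.

```rust
// Illustrative values only; not the real RIOT-rs defaults.
const MAX_THREADS: usize = 16; // n: maximum number of threads
const PRIORITY_LEVELS: usize = 8; // m: supported priority levels

// n × m × 8 bits of runqueue bookkeeping.
const RUNQUEUE_BITS: usize = MAX_THREADS * PRIORITY_LEVELS * 8; // 1024 bits
const RUNQUEUE_BYTES: usize = RUNQUEUE_BITS / 8; // 128 bytes

fn main() {
    println!("runqueue bookkeeping: {} bits = {} bytes", RUNQUEUE_BITS, RUNQUEUE_BYTES);
}
```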
### Thread Creation

- Threads can either be declared using a macro, which creates and starts the thread during startup, or spawned dynamically at runtime. In the latter case, the stack memory must still be statically allocated at compile time. (A sketch follows after this list.)
- The **maximum number of threads** is defined by a constant value at compile time. This maximum limits the number of concurrently running threads, but it is still possible to create more threads once earlier ones have finished their execution.
- Multiple **asynchronous tasks** can be spawned within each thread with an executor from the integrated [Embassy] crate. This bridges the gap to async Rust, future-based concurrency, and asynchronous I/O. The executor runs all of its tasks inside the thread's context; when all tasks on the executor are pending, the owning thread is suspended.
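
The declarative style can be sketched as follows. This is loosely based on the examples in the RIOT-rs repository; the exact attribute names (`riot_rs::thread`, `riot_rs::task`, `autostart`) and the `riot_rs::debug::println` macro are assumptions here and may differ from the current API.

```rust
#![no_main]
#![no_std]

use riot_rs::debug::println;

// A thread declared with the attribute macro: it is created and started
// automatically during startup, with a statically allocated stack.
#[riot_rs::thread(autostart)]
fn worker_thread() {
    println!("Hello from a thread!");
}

// An asynchronous task running on the integrated Embassy executor inside a
// thread context. Time-based delays come from the Embassy timer API rather
// than from scheduler time-slicing.
#[riot_rs::task(autostart)]
async fn ticker() {
    loop {
        embassy_time::Timer::after(embassy_time::Duration::from_secs(1)).await;
        println!("tick");
    }
}
```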
## Synchronization Primitives in RIOT-rs

- **Locks** in RIOT-rs are basic non-reentrant locking objects that carry no data and serve as a building block for other synchronization primitives.
> **Review comment:** Maybe add that Locks don't have an owner?
- **Mutexes** are the user-facing variant of locks that wrap shared objects and provide mutual exclusion when accessing the inner data. The mutexes implement the priority inheritance protocol to prevent priority inversion when a higher-priority thread is blocked on a mutex that is held by a lower-priority thread. Access itself is realized through a `MutexGuard`: when the guard goes out of scope, the mutex is automatically released. (A mutex sketch follows after this list.)
- **Channels** facilitate the synchronous transfer of data between threads. They are not bound to one specific sender or receiver, but only one sender and one receiver are possible at a time. (A channel sketch follows at the end of this section.)
> **Review comment:** I'm somewhat confused by this sentence: should sender and receiver be producer and consumer instead (to me sender and receiver refers to the channel handles)? Does the last part mean that the channel is an SPSC channel?

> **Review comment:** We don't have sender and receiver channel handles here. But I am not sure if the producer/consumer terminology fits here either, given that it's from the general producer-consumer pattern that is independent of the underlying synchronization primitive. In our channel implementation, there is just one type […] Wdyt of:
>
> Suggested change: […]

> **Review comment:** So it's an MPMC channel where consumers compete for the received value? The trick is that the channel has to be static so the compiler can prove across threads that the channel is still live (also the inner data has to be […])

> **Review comment:** Yep it must be static, see <https://github.com/future-proof-iot/RIOT-rs/blob/main/examples/threading-channel/src/main.rs>.
- **Thread flags** can be set per thread at any time. A thread is blocked until the flags it is waiting for have been set, which also includes flags that have been set prior to the _wait_ call.
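
The guard-based mutex access described above can be sketched as follows. The module path `riot_rs::thread::sync::Mutex` is assumed here for illustration and may not match the actual crate layout; the point being shown is the scope-based release via the `MutexGuard`.

```rust
// The module path is an assumption for illustration purposes.
use riot_rs::thread::sync::Mutex;

// A shared counter protected by a mutex; it is static so that it can be
// accessed from multiple threads.
static COUNTER: Mutex<u32> = Mutex::new(0);

fn increment() {
    // Blocks the calling thread until the mutex is available (with priority
    // inheritance applied while waiting) and returns a `MutexGuard`.
    let mut counter = COUNTER.lock();
    *counter += 1;
    // The guard is dropped here, which releases the mutex automatically.
}
```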
**All of the above synchronization primitives are blocking.** When a thread is blocked, its execution is paused, freeing the CPU for other tasks. If multiple threads are blocked on the same object, they are entered into a waitlist that is sorted by priority and, within the same priority, by FIFO order.
> **Review comment:** Suggested change: […]
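
As an illustration of blocking channel transfer, here is a sketch loosely modeled on the linked `examples/threading-channel` example. The module path `riot_rs::thread::sync::Channel` and the exact method signatures are assumptions; the real API is in the linked example.

```rust
// Path and method names are assumptions for illustration purposes.
use riot_rs::thread::sync::Channel;

// The channel must be static so that it can be shared across threads.
static CHANNEL: Channel<u8> = Channel::new();

// In one thread: blocks until a receiver is ready to take the value.
fn sender() {
    CHANNEL.send(&42);
}

// In another thread: blocks until a value has been sent.
fn receiver() {
    let value = CHANNEL.recv();
    let _ = value;
}
```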
## Multicore Support

RIOT-rs optionally supports symmetric multiprocessing (SMP).

- **Supported dual-core chips** where SMP is enabled by default:
  - ESP32-S3
  - RP2040
- **Porting from single-core to multicore** requires no changes to the user application.
- **One global runqueue** is shared among all cores. The scheduler assigns the _n_ highest-priority, ready, and non-conflicting threads to the _n_ available cores. The scheduler can be invoked individually on each core. Whenever a higher-priority thread becomes ready, the scheduler is triggered on the core with the lowest-priority running thread to perform a context switch.
- **Core affinities** are optionally supported. They can be set per thread to restrict the thread's execution to one or multiple specific processors.
- **One idle thread per core** is created on multicore systems. This helps avoid conflicts that could occur on multicore if deep sleep were entered directly from within the scheduler. When an idle thread runs, it prompts its core to enter deep sleep.
> **Review comment:** I think you need to change all "deep sleep" to "sleep" (or maybe just "idle mode"?).
## Mutual Exclusion in the Kernel

RIOT-rs uses a single global critical section for all kernel operations.

- **On single-core**, this critical section is implemented by masking all interrupts.
- **On multicore**, an additional hardware spinlock is used on the RP2040, and atomics are used on the ESP32-S3, to ensure that the critical section is enforced on all cores.
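
For application code that needs a short critical section of its own, the `critical-section` crate, which is widely used in the embedded Rust ecosystem, expresses the same pattern portably. Whether the RIOT-rs kernel uses this crate internally is not stated in this chapter; the sketch below only illustrates the concept and assumes a critical-section implementation is provided for the target.

```rust
use core::cell::Cell;
use critical_section::Mutex;

// Data protected by the critical section rather than by a scheduler-level mutex.
static COUNTER: Mutex<Cell<u32>> = Mutex::new(Cell::new(0));

fn bump_counter() {
    // `with` enters the critical section (masking interrupts on single core,
    // and additionally locking out the other core(s) on multicore ports)
    // and leaves it when the closure returns.
    critical_section::with(|cs| {
        let counter = COUNTER.borrow(cs);
        counter.set(counter.get() + 1);
    });
}
```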
[Embassy]: https://embassy.dev/
> **Review comment:** add "tickless"