# docs(threading): add multithreading book chapter #465

Open · wants to merge 1 commit into base: `main`
7 changes: 6 additions & 1 deletion in book/book.toml
@@ -1,5 +1,10 @@
 [book]
-authors = ["Kaspar Schleiser", "Emmanuel Baccelli", "Christian Amsüss"]
+authors = [
+    "Kaspar Schleiser",
+    "Emmanuel Baccelli",
+    "Christian Amsüss",
+    "Elena Frank",
+]
 language = "en"
 multilingual = false
 src = "src"
1 change: 1 addition & 0 deletions in book/src/SUMMARY.md

@@ -3,6 +3,7 @@
 - [Manifesto](./manifesto.md)
 - [Introduction](./introduction.md)
 - [Hardware & Functionality Support](./hardware_functionality_support.md)
+- [Multithreading](./multithreading.md)
 - [Code of Conduct](./CODE_OF_CONDUCT.md)
 - [Reporting Guidelines](./COC_reporting.md)

45 changes: 45 additions & 0 deletions in book/src/multithreading.md

@@ -0,0 +1,45 @@
# Multithreading

RIOT-rs supports multithreading on the Cortex-M, RISC-V, and Xtensa architectures.

## Scheduling

- **Fixed-priority, preemptive scheduling** policy with up to 32 supported priority levels.
> **Review comment:** add "tickless"
- **Thread priorities are dynamic** and can be changed at runtime.
- The **runqueue** is implemented as one circular linked list per priority, requiring n × m × 8 bits for _n_ maximum threads and _m_ supported priorities. All runqueue operations run in constant time, except for the deletion of a thread that is not the head of its queue.
> **Review comment:** This is very detailed, too detailed for this overview IMO. Mentioning the O(1) operations makes sense, though. (When is a thread that is not the head deleted again?)
>
> **Author reply:** When the priority of the thread is changed.

- **Deep sleep when idle**: On single-core systems, no idle thread is created. Instead, if the runqueue is empty, the processor enters deep sleep until the next thread is ready. The context of the previously running thread is only saved once the next thread is ready and the context switch occurs.
> **Review comment:** This I'd drop. It is too inaccurate at the moment; we don't "deep sleep" yet, just idle the core. And the lazy context save is an optimization that we should maybe list bundled elsewhere.

- **Same-priority threads are scheduled cooperatively.** The scheduler itself is tickless, so time-slicing isn't supported. [Timers](https://docs.rs/embassy-time/latest/embassy_time/struct.Timer.html) are still available through the timer API of the integrated [Embassy] crate.
> **Review comment:** This needs a previous "Policy: the highest-priority runnable thread (or threads, in the multi-core case) is executed" or similar to make sense.
>
> **Review comment:** Maybe drop the timers sentence?

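The runqueue described above can be sketched in a few lines of host Rust. This is an illustrative model, not RIOT-rs code: the names (`RunQueue`, `add`, `peek`, `advance`) and sizes are invented for the sketch, and the O(n) deletion of a non-head thread is omitted. A `next` table forms one circular singly-linked list per priority, and a bitmap gives O(1) lookup of the highest non-empty priority.

```rust
const N_THREADS: usize = 8; // assumed maximum thread count
const N_PRIOS: usize = 4;   // assumed number of priority levels

struct RunQueue {
    // tail[p] is the last thread in priority p's circular list (None = empty).
    tail: [Option<u8>; N_PRIOS],
    // next[t] links thread t to the next thread in its circular list.
    next: [u8; N_THREADS],
    // Bit p is set iff priority p has at least one ready thread.
    bitmap: u32,
}

impl RunQueue {
    fn new() -> Self {
        Self { tail: [None; N_PRIOS], next: [0; N_THREADS], bitmap: 0 }
    }

    /// Append thread `t` to priority `p` in O(1).
    fn add(&mut self, t: u8, p: usize) {
        match self.tail[p] {
            None => self.next[t as usize] = t, // single-element circle
            Some(tail) => {
                self.next[t as usize] = self.next[tail as usize];
                self.next[tail as usize] = t;
            }
        }
        self.tail[p] = Some(t);
        self.bitmap |= 1 << p;
    }

    /// Highest-priority ready thread (the head of its list) in O(1).
    fn peek(&self) -> Option<u8> {
        if self.bitmap == 0 {
            return None;
        }
        let p = 31 - self.bitmap.leading_zeros() as usize; // highest set bit
        self.tail[p].map(|tail| self.next[tail as usize])  // head follows tail
    }

    /// Cooperative yield: rotate the head of priority `p` to the back, O(1).
    fn advance(&mut self, p: usize) {
        if let Some(tail) = self.tail[p] {
            self.tail[p] = Some(self.next[tail as usize]);
        }
    }
}

fn main() {
    let mut rq = RunQueue::new();
    rq.add(0, 1); // thread 0, low priority
    rq.add(1, 3); // thread 1, high priority
    rq.add(2, 3); // thread 2, same high priority
    assert_eq!(rq.peek(), Some(1)); // highest priority runs first
    rq.advance(3);                  // thread 1 yields to its peer
    assert_eq!(rq.peek(), Some(2));
    println!("ok");
}
```

Advancing the tail pointer is what makes both enqueue and cooperative yield constant-time: the head is always reachable as `next[tail]`.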

### Thread Creation

- Threads can either be declared using a macro, which creates and starts the thread during startup, or be spawned dynamically at runtime. In the latter case, the stack memory must still be statically allocated at compile time.
- The **maximum number of threads** is defined as a constant at compile time. This maximum limits the number of concurrently running threads, but more threads can still be created once earlier ones have finished their execution.
- Multiple **asynchronous tasks** can be spawned within each thread with an executor from the integrated [Embassy] crate. This bridges the gap to async Rust, future-based concurrency, and asynchronous I/O. The executor runs all of its tasks inside the thread's context. When all tasks on the executor are pending, the owning thread is suspended.

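The compile-time thread limit and slot reuse can be illustrated with a small host-Rust model. This is not the RIOT-rs API; `Threads`, `spawn`, `exit`, and the constants are invented for the sketch. The point is that stacks and thread slots are statically sized, and a slot freed by a finished thread can be reused, so more than `MAX_THREADS` threads can exist over the program's lifetime, just not concurrently.

```rust
const MAX_THREADS: usize = 2;  // assumed compile-time limit
const STACK_SIZE: usize = 512; // assumed per-thread stack size

#[derive(Clone, Copy, PartialEq, Debug)]
enum Slot {
    Free,
    Running,
}

struct Threads {
    slots: [Slot; MAX_THREADS],
    // Stack memory is statically sized at compile time.
    _stacks: [[u8; STACK_SIZE]; MAX_THREADS],
}

impl Threads {
    fn new() -> Self {
        Self {
            slots: [Slot::Free; MAX_THREADS],
            _stacks: [[0; STACK_SIZE]; MAX_THREADS],
        }
    }

    /// Claim a free slot, or None if MAX_THREADS threads are already running.
    fn spawn(&mut self) -> Option<usize> {
        let id = self.slots.iter().position(|s| *s == Slot::Free)?;
        self.slots[id] = Slot::Running;
        Some(id)
    }

    /// A finished thread releases its slot for reuse.
    fn exit(&mut self, id: usize) {
        self.slots[id] = Slot::Free;
    }
}

fn main() {
    let mut t = Threads::new();
    let a = t.spawn();
    let b = t.spawn();
    assert!(a.is_some() && b.is_some());
    assert_eq!(t.spawn(), None); // concurrent limit reached
    t.exit(a.unwrap());          // first thread finishes...
    assert!(t.spawn().is_some()); // ...so a new one can be created
    println!("ok");
}
```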
## Synchronization Primitives in RIOT-rs

- **Locks** in RIOT-rs are basic non-reentrant locking objects that carry no data and serve as a building block for other synchronization primitives.
> **Review comment:** Maybe add that locks don't have an owner?
- **Mutexes** are the user-facing variant of locks: they wrap shared objects and provide mutual exclusion when accessing the inner data. The mutexes implement the priority inheritance protocol to prevent priority inversion when a higher-priority thread is blocked on a mutex held by a lower-priority thread. Access itself is realized through a `MutexGuard`; when the guard goes out of scope, the mutex is automatically released.
- **Channels** facilitate the synchronous transfer of data between threads. They are not bound to one specific sender or receiver, but only one sender and one receiver are possible at a time.
> **Review comment:** I'm somewhat confused by this sentence: should sender and receiver be producer and consumer instead (to me, sender and receiver refer to the channel handles)? Does the last part mean that the channel is an SPSC channel?
>
> **Author reply:** We don't have sender and receiver channel handles here. But I am not sure the producer/consumer terminology fits either, given that it comes from the general producer-consumer pattern, which is independent of the underlying synchronization primitive. In our channel implementation there is just one type, `Channel`, on which any thread with an immutable reference can both send and receive. The last part means that a transmission is always synchronous between exactly one sending thread and one receiving thread. What do you think of:
>
> Suggested change: "**Channels** facilitate the synchronous transfer of data between threads. The channel is not split into a sender and a receiver part, so all threads can both send and receive on the same channel. A transmission is 1:1 between one sending and one receiving thread."
>
> **Review comment:** So it's an MPMC channel where consumers compete for the received value? The trick is that the channel has to be static so the compiler can prove across threads that the channel is still live (also the inner data has to be `Copy`).
>
> **Author reply:** Yep, it must be static; see https://github.com/future-proof-iot/RIOT-rs/blob/main/examples/threading-channel/src/main.rs.
- **Thread flags** can be set per thread at any time. A thread is blocked until the flags it is waiting for have been set, which also includes flags that have been set prior to the _wait_ call.

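The guard pattern behind the mutex bullet can be shown with std's `Mutex` on the host. Note the limits of the analogy: RIOT-rs's own mutex additionally implements priority inheritance, which std's does not; this sketch only illustrates the RAII release when the guard is dropped.

```rust
use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0u32);

    {
        // Locking returns a guard; the inner data is only reachable
        // through it, which is what enforces mutual exclusion.
        let mut guard = counter.lock().unwrap();
        *guard += 1;
    } // the guard goes out of scope here, so the mutex is released

    // Locking succeeds again because the previous guard was dropped.
    assert_eq!(*counter.lock().unwrap(), 1);
    println!("ok");
}
```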
**All of the above synchronization primitives are blocking**. When a thread is blocked, its execution is paused, allowing the CPU to be freed for other tasks. If multiple threads are blocked on the same object, they are entered into a waitlist that is sorted by priority and FIFO order.
> **Review comment:** Suggested change: "allowing the CPU to be freed for other tasks" → "allowing the CPU to be used for other tasks".

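The synchronous, 1:1 handover that the channel bullet describes can be demonstrated on the host with a zero-capacity `sync_channel`. Mind the mismatch: std's channel *is* split into sender and receiver handles, unlike the single `Channel` type discussed above; the sketch only shows the rendezvous behavior, where the send blocks until a receiving thread takes the value.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

fn main() {
    // Capacity 0 means every send must rendezvous with a receive.
    let (tx, rx) = sync_channel::<u32>(0);

    let sender = thread::spawn(move || {
        tx.send(42).unwrap(); // blocks until the other side receives
    });

    assert_eq!(rx.recv().unwrap(), 42);
    sender.join().unwrap();
    println!("ok");
}
```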
## Multicore Support

RIOT-rs optionally supports symmetric multiprocessing (SMP).
- **Supported dual-core chips**, where SMP is enabled by default:
- ESP32-S3
- RP2040
- **Porting from single-core to multicore** requires no changes in the user application.
- **One global runqueue** is shared among all cores. The scheduler assigns the _n_ highest-priority, ready, and non-conflicting threads to the _n_ available cores. The scheduler can be invoked individually on each core. Whenever a higher priority thread becomes ready, the scheduler is triggered on the core with the lowest-priority running thread to perform a context switch.
- **Core affinities** are optionally supported. They can be set per thread to restrict the thread's execution to one or multiple specific processors.
- **One idle thread per core** is created on multicore systems. This helps avoid conflicts that could occur if deep sleep were entered directly from within the scheduler. When an idle thread runs, it prompts its core to enter deep sleep.
> **Review comment:** I think you need to change all "deep sleep" to "sleep" (or maybe just "idle mode"?).

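The global-runqueue rule above, assigning the highest-priority ready threads to the available cores while honoring per-thread core affinities, can be sketched as a greedy selection on the host. This is illustrative only: `Thread`, `assign`, and the affinity bitmask encoding are invented for the sketch, and the real scheduler works incrementally on context switches rather than recomputing a full assignment.

```rust
const N_CORES: usize = 2;

#[derive(Clone, Copy)]
struct Thread {
    id: u8,
    prio: u8,
    affinity: u8, // bit c set iff the thread may run on core c (assumed encoding)
}

/// Returns, per core, the id of the thread chosen to run there (None = idle).
fn assign(ready: &[Thread]) -> [Option<u8>; N_CORES] {
    let mut ready: Vec<Thread> = ready.to_vec();
    ready.sort_by(|a, b| b.prio.cmp(&a.prio)); // highest priority first
    let mut cores = [None; N_CORES];
    for t in ready {
        // Place the thread on the first free core its affinity allows.
        for c in 0..N_CORES {
            if cores[c].is_none() && t.affinity & (1 << c) != 0 {
                cores[c] = Some(t.id);
                break;
            }
        }
    }
    cores
}

fn main() {
    let ready = [
        Thread { id: 0, prio: 1, affinity: 0b11 }, // any core
        Thread { id: 1, prio: 3, affinity: 0b01 }, // pinned to core 0
        Thread { id: 2, prio: 2, affinity: 0b11 },
    ];
    // Thread 1 (highest prio) takes core 0; thread 2 takes core 1;
    // thread 0 stays in the runqueue.
    assert_eq!(assign(&ready), [Some(1), Some(2)]);
    println!("ok");
}
```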

## Mutual Exclusion in the Kernel

RIOT-rs uses a single global critical section for all kernel operations.
- **On single-core** this critical section is implemented by masking all interrupts.
- **On multicore**, an additional hardware spinlock is used in the case of the RP2040, and atomics are used on the ESP32-S3, ensuring that the critical section is enforced across all cores.
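The multicore side of such a global critical section can be sketched on the host as a spinlock built from one atomic flag. This is an illustrative stand-in, not RIOT-rs code: the interrupt masking that the real kernel also performs has no host equivalent and is omitted, and `with_critical_section` is an invented name.

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::thread;

static LOCKED: AtomicBool = AtomicBool::new(false);
static COUNTER: AtomicU32 = AtomicU32::new(0);

fn with_critical_section<R>(f: impl FnOnce() -> R) -> R {
    // Spin until we atomically flip the flag from false to true.
    while LOCKED
        .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
        .is_err()
    {
        std::hint::spin_loop();
    }
    let result = f();
    LOCKED.store(false, Ordering::Release); // leave the critical section
    result
}

fn main() {
    let handles: Vec<_> = (0..4)
        .map(|_| {
            thread::spawn(|| {
                for _ in 0..1000 {
                    // The load-then-store below is not atomic by itself;
                    // the critical section is what makes it race-free.
                    with_critical_section(|| {
                        let v = COUNTER.load(Ordering::Relaxed);
                        COUNTER.store(v + 1, Ordering::Relaxed);
                    });
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(COUNTER.load(Ordering::Relaxed), 4000);
    println!("ok");
}
```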

[Embassy]: https://embassy.dev/