
docs(threading): add multithreading book chapter #465

Open
wants to merge 1 commit into base: main

Conversation

elenaf9
Collaborator

@elenaf9 elenaf9 commented Oct 12, 2024

Description

Add chapter about multithreading in RIOT-rs to the book.

Issues/PRs references

Depends on:

Includes multicore features that are PR'd in #397, but not merged yet.

@ROMemories ROMemories added the docs (Improvements or additions to documentation) and threading labels Oct 16, 2024
@elenaf9 elenaf9 marked this pull request as ready for review October 23, 2024 12:58
@ROMemories
Collaborator

An entry in the SUMMARY.md file has to be added so that this new page gets rendered.


@ROMemories ROMemories left a comment


I know it's been agreed to use "multi-core" instead of "multicore" (without a hyphen) for laze module naming, and I do agree to keep it that way, but I'd prefer we use the "multicore" spelling in prose, as it seems to be the primary spelling in use nowadays.
Only the Wikipedia page uses a hyphen, but the Wiktionary page uses the "multicore" spelling, as do the Merriam-Webster and Oxford dictionaries.


- **Locks** in RIOT-rs are basic non-reentrant locking objects that carry no data and serve as a building block for other synchronization primitives.
- **Mutexes** are the user-facing variant of locks that wrap shared objects and provide mutual exclusion when accessing the inner data. Mutexes implement the priority inheritance protocol to prevent priority inversion when a higher-priority thread is blocked on a mutex that is held by a lower-priority thread. Access to the inner data is realized through a `MutexGuard`; when the guard goes out of scope, the mutex is automatically released.
- **Channels** facilitate the synchronous transfer of data between threads. They are not bound to one specific sender or receiver, but only one sender and one receiver are possible at a time.
Collaborator


I'm somewhat confused by this sentence: should sender and receiver be producer and consumer instead (to me sender and receiver refers to the channel handles)? Does the last part mean that the channel is an SPSC channel?

Collaborator Author


We don't have sender and receiver channel handles here. But I am not sure if the producer/consumer terminology fits here either, given that it comes from the general producer-consumer pattern, which is independent of the underlying synchronization primitive.

In our channel implementation, there is just one type, Channel, on which any thread with an immutable reference can both send and receive. The last part means that a transmission is always synchronous between exactly one sending thread and one receiving thread.

Wdyt of:

Suggested change
- **Channels** facilitate the synchronous transfer of data between threads. They are not bound to one specific sender or receiver, but only one sender and one receiver are possible at a time.
- **Channels** facilitate the synchronous transfer of data between threads. The channel is not split into a sender and receiver part, so all threads can both send and receive on the same channel. A transmission is 1:1 between one sending and one receiving thread.

Collaborator


So it's an MPMC channel where consumers compete for the received value? The trick is that the channel has to be `static` so the compiler can prove across threads that the channel is still live (also the inner data has to be `Copy`).

Collaborator Author


The trick is that the channel has to be static so the compiler can prove across threads that the channel is still live

Yep it must be static, see https://github.com/future-proof-iot/RIOT-rs/blob/main/examples/threading-channel/src/main.rs.
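For readers following along, the semantics discussed here can be sketched in plain Rust. This is a hypothetical stand-in built on std's `Mutex` and `Condvar`, not the actual RIOT-rs `Channel` (see the linked example for the real API): the channel is a single type with no sender/receiver split, it lives in a `static` so every thread can hold a reference to it, the data is `Copy`, and each transmission is a synchronous 1:1 rendezvous.

```rust
use std::sync::{Condvar, Mutex};

// Hypothetical sketch of the channel semantics described above,
// NOT the RIOT-rs implementation.
struct Channel<T: Copy> {
    slot: Mutex<Option<T>>,
    cond: Condvar,
}

impl<T: Copy> Channel<T> {
    const fn new() -> Self {
        Channel { slot: Mutex::new(None), cond: Condvar::new() }
    }

    // Blocks until a receiving thread has taken the value (rendezvous).
    fn send(&self, value: T) {
        let mut slot = self.slot.lock().unwrap();
        // Wait until the slot is free (another send may be in flight).
        while slot.is_some() {
            slot = self.cond.wait(slot).unwrap();
        }
        *slot = Some(value);
        self.cond.notify_all();
        // Wait until a receiver has consumed the value.
        while slot.is_some() {
            slot = self.cond.wait(slot).unwrap();
        }
    }

    // Blocks until a sending thread has provided a value.
    fn recv(&self) -> T {
        let mut slot = self.slot.lock().unwrap();
        loop {
            if let Some(value) = slot.take() {
                self.cond.notify_all();
                return value;
            }
            slot = self.cond.wait(slot).unwrap();
        }
    }
}

// The channel is `static`, so the compiler can prove across threads
// that it outlives every user.
static CHANNEL: Channel<u32> = Channel::new();

fn main() {
    let receiver = std::thread::spawn(|| CHANNEL.recv());
    CHANNEL.send(42);
    assert_eq!(receiver.join().unwrap(), 42);
}
```

Any thread with a reference to `CHANNEL` may call either `send` or `recv`; which sender is paired with which receiver when several compete is whichever pair rendezvous first.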

@emmanuelsearch
Contributor

emmanuelsearch commented Nov 12, 2024

@elenaf9 would be good to have an initial version of this in the book before the 1st release. Is there a lot left to discuss from your point of view? (Implicit question: has some stuff substantially changed in what got merged, which needs update in this documentation?)

@elenaf9 elenaf9 force-pushed the book/multithreading branch 2 times, most recently from d2f5ac5 to 3f7e6c0 on November 13, 2024 10:03
@elenaf9
Collaborator Author

elenaf9 commented Nov 13, 2024

@elenaf9 would be good to have an initial version of this in the book before the 1st release. Is there a lot left to discuss from your point of view? (Implicit question: has some stuff substantially changed in what got merged, which needs update in this documentation?)

Nothing left to discuss from my side apart from the one open discussion. But the Mutexes described in the doc haven't been merged yet: #398


- **Fixed-priority, preemptive scheduling** policy with up to 32 supported priority levels.
- **Thread priorities are dynamic** and can be changed at runtime.
- The Runqueue is implemented as a circular linked list for each priority, requiring n × m × 8 bits for _n_ maximum threads and _m_ supported priorities. All operations on the runqueue are performed in constant time, except for the deletion of a thread that is not the head of the runqueue.
Collaborator


This is very detailed, too detailed for this overview IMO. Mentioning the O(1) operations makes sense though. (When is a thread that is not the head deleted again?)

Collaborator Author


when is a thread that is not head deleted again?

When the priority of the thread is changed.
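For context, the constant-time claim being discussed can be illustrated with a simplified sketch. This is hypothetical code, not the actual RIOT-rs runqueue (the names `RunQueue`, `add`, `pop_head` are made up, and the storage layout differs from the n × m × 8 bits described above): one circular singly-linked list per priority, where keeping only the tail pointer makes both head access and tail insertion O(1), while removing a thread that is not the head (e.g. when its priority is changed) would require walking the circle.

```rust
// Sentinel for "no entry"; thread ids must therefore stay below 255.
const EMPTY: u8 = u8::MAX;

// One circular singly-linked list per priority level.
// `tail[prio]` points at the last thread of that priority;
// `next[thread]` closes the circle, so the head is `next[tail]`.
struct RunQueue<const N_THREADS: usize, const N_PRIOS: usize> {
    tail: [u8; N_PRIOS],
    next: [u8; N_THREADS],
}

impl<const N_THREADS: usize, const N_PRIOS: usize> RunQueue<N_THREADS, N_PRIOS> {
    fn new() -> Self {
        RunQueue { tail: [EMPTY; N_PRIOS], next: [EMPTY; N_THREADS] }
    }

    // O(1): append `thread` to the circle of its priority.
    fn add(&mut self, thread: u8, prio: usize) {
        let tail = self.tail[prio];
        if tail == EMPTY {
            // Single-element circle: the thread points at itself.
            self.next[thread as usize] = thread;
        } else {
            self.next[thread as usize] = self.next[tail as usize];
            self.next[tail as usize] = thread;
        }
        self.tail[prio] = thread;
    }

    // O(1): the head is the element right after the tail.
    fn head(&self, prio: usize) -> Option<u8> {
        let tail = self.tail[prio];
        (tail != EMPTY).then(|| self.next[tail as usize])
    }

    // O(1): remove and return the head of the given priority's circle.
    fn pop_head(&mut self, prio: usize) -> Option<u8> {
        let tail = self.tail[prio];
        if tail == EMPTY {
            return None;
        }
        let head = self.next[tail as usize];
        if head == tail {
            self.tail[prio] = EMPTY; // circle is now empty
        } else {
            self.next[tail as usize] = self.next[head as usize];
        }
        Some(head)
    }
}

fn main() {
    let mut rq: RunQueue<8, 4> = RunQueue::new();
    rq.add(2, 1);
    rq.add(5, 1);
    assert_eq!(rq.head(1), Some(2)); // FIFO within one priority
    assert_eq!(rq.pop_head(1), Some(2));
    assert_eq!(rq.pop_head(1), Some(5));
    assert_eq!(rq.pop_head(1), None);
}
```

Deleting an arbitrary (non-head) thread is the one operation with no O(1) path here, because a singly-linked circle gives no way to reach a node's predecessor without walking the list.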


## Scheduling

- **Fixed-priority, preemptive scheduling** policy with up to 32 supported priority levels.
Collaborator


add "tickless"

- **Fixed-priority, preemptive scheduling** policy with up to 32 supported priority levels.
- **Thread priorities are dynamic** and can be changed at runtime.
- The Runqueue is implemented as a circular linked list for each priority, requiring n × m × 8 bits for _n_ maximum threads and _m_ supported priorities. All operations on the runqueue are performed in constant time, except for the deletion of a thread that is not the head of the runqueue.
- **Deep sleep when idle**: On single core, no idle threads are created. Instead, if the runqueue is empty, the processor enters deep sleep until the next thread is ready. The context of the previously running thread is only saved once the next thread is ready and the context switch occurs.
Collaborator


This I'd drop. It is too inaccurate at the moment: we don't "deep sleep" yet, we just idle the core.
And the lazy context save is an optimization that we should maybe list elsewhere, bundled with others.

- **Thread priorities are dynamic** and can be changed at runtime.
- The Runqueue is implemented as a circular linked list for each priority, requiring n × m × 8 bits for _n_ maximum threads and _m_ supported priorities. All operations on the runqueue are performed in constant time, except for the deletion of a thread that is not the head of the runqueue.
- **Deep sleep when idle**: On single core, no idle threads are created. Instead, if the runqueue is empty, the processor enters deep sleep until the next thread is ready. The context of the previously running thread is only saved once the next thread is ready and the context switch occurs.
- **Same priority threads are scheduled cooperatively.** The scheduler itself is tickless, therefore time-slicing isn't supported. [Timers](https://docs.rs/embassy-time/latest/embassy_time/struct.Timer.html) are still supported through the timer API from the integrated [Embassy] crate.
Collaborator


This needs a previous "Policy: The highest priority runnable thread (or threads in the multi-core case) is executed" or similar to make sense.

Collaborator


Maybe drop the timers sentence?


## Synchronization Primitives in RIOT-rs

- **Locks** in RIOT-rs are basic non-reentrant locking objects that carry no data and serve as a building block for other synchronization primitives.
Collaborator


maybe add that Locks don't have an owner?
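As a side note for readers, the distinction between an ownerless lock and a guard-based mutex can be illustrated with std's `Mutex` (which, unlike the RIOT-rs mutex described in this chapter, does not implement priority inheritance):

```rust
use std::sync::Mutex;

// Illustration only: std's Mutex, not the RIOT-rs one. The guard
// returned by `lock()` is the only way to reach the inner data, and
// dropping the guard releases the mutex automatically (RAII).
fn main() {
    let counter = Mutex::new(0u32);
    {
        let mut guard = counter.lock().unwrap();
        *guard += 1;
        // `guard` goes out of scope here -> mutex released.
    }
    // The mutex can be locked again because the guard was dropped.
    assert_eq!(*counter.lock().unwrap(), 1);
}
```

A bare lock, by contrast, wraps no data and tracks no owner; it is just the blocking building block that such a mutex (and the other primitives) can be built on.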

- **Channels** facilitate the synchronous transfer of data between threads. They are not bound to one specific sender or receiver, but only one sender and one receiver are possible at a time.
- **Thread flags** can be set per thread at any time. A thread is blocked until the flags it is waiting for have been set, which also includes flags that have been set prior to the _wait_ call.

**All of the above synchronization primitives are blocking**. When a thread is blocked, its execution is paused, allowing the CPU to be freed for other tasks. If multiple threads are blocked on the same object, they are entered into a waitlist that is sorted by priority and FIFO order.
Collaborator


Suggested change
**All of the above synchronization primitives are blocking**. When a thread is blocked, its execution is paused, allowing the CPU to be freed for other tasks. If multiple threads are blocked on the same object, they are entered into a waitlist that is sorted by priority and FIFO order.
**All of the above synchronization primitives are blocking**. When a thread is blocked, its execution is paused, allowing the CPU to be used for other tasks. If multiple threads are blocked on the same object, they are entered into a waitlist that is sorted by priority and FIFO order.
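The "sorted by priority and FIFO order" waitlist behavior can be sketched as follows (hypothetical illustration; `Waiter` and `insert` are made-up names, not the RIOT-rs types): a newly blocked thread is inserted after all entries of equal or higher priority, so higher-priority threads wake first and equal-priority threads wake in the order they blocked.

```rust
// Hypothetical sketch of a priority + FIFO ordered waitlist,
// NOT the RIOT-rs data structure.
#[derive(Debug, PartialEq)]
struct Waiter {
    thread_id: u8,
    prio: u8, // higher value = higher priority
}

// Insert before the first entry with strictly lower priority, which
// keeps equal-priority entries in FIFO (insertion) order.
fn insert(waitlist: &mut Vec<Waiter>, waiter: Waiter) {
    let pos = waitlist
        .iter()
        .position(|w| w.prio < waiter.prio)
        .unwrap_or(waitlist.len());
    waitlist.insert(pos, waiter);
}

fn main() {
    let mut waitlist = Vec::new();
    insert(&mut waitlist, Waiter { thread_id: 1, prio: 2 });
    insert(&mut waitlist, Waiter { thread_id: 2, prio: 5 });
    insert(&mut waitlist, Waiter { thread_id: 3, prio: 2 });
    // Highest priority first; FIFO (thread 1 before thread 3) within prio 2.
    let order: Vec<u8> = waitlist.iter().map(|w| w.thread_id).collect();
    assert_eq!(order, vec![2, 1, 3]);
}
```

Waking a blocked thread then just pops the front of the list, which is the highest-priority, longest-waiting entry.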

- **Porting from single-core to multicore** requires no changes in the user application.
- **One global runqueue** is shared among all cores. The scheduler assigns the _n_ highest-priority, ready, and non-conflicting threads to the _n_ available cores. The scheduler can be invoked individually on each core. Whenever a higher priority thread becomes ready, the scheduler is triggered on the core with the lowest-priority running thread to perform a context switch.
- **Core affinities** are optionally supported. They can be set per thread to restrict the thread's execution to one or multiple specific processors.
- **One idle thread per core** is created on multicore systems. This helps avoid conflicts that could occur on multicore if deep sleep were entered directly from within the scheduler. When an idle thread runs, it prompts the current core to enter deep sleep.
Collaborator


I think you need to change all "deep sleep" to "sleep" (or maybe just "idle mode"?).
