49 changes: 49 additions & 0 deletions src/utest/sched_mtx_tc.c
@@ -6,7 +6,56 @@
* Change Logs:
* Date Author Notes
* 2024-01-17 Shell the first version
* 2025-12-12 lhxj Add standardized utest documentation block
*/

/**
* Test Case Name: Scheduler Mutex Stress Test (core.scheduler_mutex)
*
* Test Objectives:
* - Validate the stability of the Mutex subsystem under high contention (Stress Test).
* - Ensure priority inheritance (if supported) or basic blocking/waking works correctly.
* - In multi-core systems, verify data consistency and spinlock mechanisms when multiple cores contend for a single kernel object simultaneously.
* - Functions and APIs under test:
* - rt_mutex_take()
* - rt_mutex_release()
* - rt_thread_create()
*
* Test Scenarios:
* - **Stress Test (mutex_stress_tc):**
* 1. Create `RT_CPUS_NR` threads (e.g., 4 threads in a quad-core setup).
* 2. Assign **staggered priorities** to these threads (`priority_base + i % range`) to simulate contention between high- and low-priority tasks.
* 3. All tester threads execute a tight loop: attempting to take and immediately release the *same* global mutex (`_racing_lock`).
* - In SMP, this simulates true parallel contention.
* 4. The main test thread sleeps for `TEST_SECONDS` (30s), periodically printing progress.
* 5. After time is up, signal threads to exit (`_exit_flag`) and wait for them using a semaphore (`_thr_exit_sem`).
*
* Verification Metrics:
* - **Pass:** The system must remain responsive (no deadlocks, hard faults, or RCU stalls) during the 30-second run.
* - **Pass:** The main thread must successfully wait for all tester threads to exit (`rt_sem_take` returns `RT_EOK`).
* - **Pass:** `uassert_true(1)` is executed periodically, confirming the main loop is alive.
*
* Dependencies:
* - Hardware requirements (e.g., specific peripherals)
* - No specific peripherals required, but a multi-core CPU is recommended for SMP testing.
* (This is met by the qemu-virt64-riscv BSP).
* - Software configuration (e.g., kernel options, driver initialization)
* - `RT_USING_UTEST` must be enabled (`RT-Thread Utestcases`).
* - `Scheduler Test` must be enabled (`RT-Thread Utestcases` -> `Kernel Core` -> `Scheduler Test`).
* - (Optional) Enable SMP for parallel testing:
* - Go to `RT-Thread Kernel` -> `Enable SMP (Symmetric multiprocessing)`.
* - Set `Number of CPUs` to > 1 (e.g., 4).
* - Environmental assumptions
* - Requires sufficient heap memory to allocate stacks for `RT_CPUS_NR` threads.
* - Run the test case from the msh prompt:
* `utest_run core.scheduler_mutex`
*
* Expected Results:
* - The test continues for approximately 30 seconds.
* - The console logs periodic success assertions.
* - Final Output: `[ PASSED ] [ result ] testcase (core.scheduler_mutex)`
*/

#include <rtthread.h>
#include <stdlib.h>
#include "utest.h"
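For illustration, a minimal sketch of the stress pattern described in the documentation block above, assuming the doc text is accurate: `_racing_lock`, `_exit_flag`, `_thr_exit_sem`, `RT_CPUS_NR`, and the staggered-priority formula come from that text; the entry-function name, stack size, priorities, and tick slice are hypothetical, not the actual test source.

#include <rtthread.h>

static rt_mutex_t         _racing_lock;   /* single contended mutex (from doc text)   */
static volatile rt_bool_t _exit_flag;     /* set by the main thread after 30 seconds  */
static rt_sem_t           _thr_exit_sem;  /* released once per exiting tester thread  */

static void mutex_stress_entry(void *param)
{
    while (!_exit_flag)
    {
        /* tight loop: take and immediately release the one shared mutex */
        if (rt_mutex_take(_racing_lock, RT_WAITING_FOREVER) == RT_EOK)
        {
            rt_mutex_release(_racing_lock);
        }
    }
    rt_sem_release(_thr_exit_sem);         /* report exit to the main test thread     */
}

static void mutex_stress_start(void)
{
    /* steps 1-2 of the scenario: RT_CPUS_NR threads with staggered priorities;
     * base priority 20, range 3, stack 4096 and tick 10 are assumed values */
    for (int i = 0; i < RT_CPUS_NR; i++)
    {
        rt_thread_t t = rt_thread_create("mtx_stress", mutex_stress_entry, RT_NULL,
                                         4096, 20 + (i % 3), 10);
        if (t) rt_thread_startup(t);
    }
}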
53 changes: 53 additions & 0 deletions src/utest/sched_sem_tc.c
@@ -6,7 +6,60 @@
* Change Logs:
* Date Author Notes
* 2024-01-17 Shell the first version
* 2025-12-12 lhxj Add standardized utest documentation block
*/

/**
* Test Case Name: Priority Based Semaphore Synchronization Test (core.scheduler_sem)
*
* Test Objectives:
* - Verify the stability and correctness of the scheduler under high concurrency.
* - Verify thread synchronization and execution order using Semaphore chains across different priority levels.
* - Verify SMP (Symmetric Multiprocessing) load balancing and atomic operations in a multi-core environment.
* - Functions and APIs under test:
* - rt_sem_init
* - rt_sem_take
* - rt_sem_release
* - rt_thread_create
* - rt_thread_startup
* - rt_atomic_add
*
* Test Scenarios:
* - **Semaphore Chained Scheduling:**
* 1. Initialize a "thread matrix" where threads are created across multiple priority levels (`TEST_LEVEL_COUNTS`).
* 2. For each priority level, create multiple concurrent threads (`RT_CPUS_NR * 2`).
* 3. Establish a dependency chain (Ring Topology):
* - **Level 0 threads:** Notify Level 1, then wait for their own resource.
* - **Middle Level threads:** Wait for their resource (notified by Level N-1), then notify Level N+1.
* - **Last Level threads:** Wait for their resource, print status (CPU ID), delay, then notify Level 0.
* 4. Each thread increments an atomic load counter for the specific CPU it is running on.
* 5. The main test thread waits for all sub-threads to signal completion via `_thr_exit_sem`.
*
* Verification Metrics:
* - **Pass:** All created threads must complete their execution loops without deadlocking.
* - **Pass:** The sum of execution counts across all CPUs (`_load_average`) must equal the calculated expected total (`KERN_TEST_CONFIG_LOOP_TIMES * TEST_LEVEL_COUNTS * KERN_TEST_CONCURRENT_THREADS`).
*
* Dependencies:
* - Hardware requirements
* - No specific peripherals required, but a multi-core CPU is recommended for SMP verification.
* (This is met by the qemu-virt64-riscv BSP).
* - Software configuration
* - `RT_USING_UTEST` must be enabled (`RT-Thread Utestcases`).
* - `Scheduler Test` must be enabled (`RT-Thread Utestcases` -> `Kernel Core` -> `Scheduler Test`).
* - (Optional) Enable SMP for parallel testing (highly recommended):
* - Go to `RT-Thread Kernel` -> `Enable SMP (Symmetric multiprocessing)`.
* - Set `Number of CPUs` to > 1 (e.g., 2 or 4).
* - Environmental assumptions
* - The system must support enough valid priority levels (`RT_THREAD_PRIORITY_MAX`) to accommodate `TEST_LEVEL_COUNTS`.
* - Run the test case from the msh prompt:
* `utest_run core.scheduler_sem`
*
* Expected Results:
* - The console should print character patterns (e.g., `*0*1...`) indicating thread activity on specific CPUs.
* - The final load statistics per CPU should be printed.
* - Final Output: `[ PASSED ] [ result ] testcase (core.scheduler_sem)`
*/

#define __RT_IPC_SOURCE__

#include <rtthread.h>
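For illustration, a sketch of one pass through the ring-topology semaphore chain described above (the outer loop is omitted). `_thr_exit_sem`, `_load_average`, `TEST_LEVEL_COUNTS`, `RT_CPUS_NR`, and `rt_atomic_add` come from the doc block; the level-semaphore array, entry function, and the value 3 for `TEST_LEVEL_COUNTS` are assumptions, and an RT-Thread 5.x kernel with `rt_atomic_t` and `rt_hw_cpu_id()` is presumed.

#include <rtthread.h>
#include <rthw.h>                                          /* rt_hw_cpu_id()           */

#define TEST_LEVEL_COUNTS 3                                /* assumed value            */
static struct rt_semaphore _level_sem[TEST_LEVEL_COUNTS]; /* one resource per level   */
static struct rt_semaphore _thr_exit_sem;                 /* completion signal        */
static rt_atomic_t         _load_average[RT_CPUS_NR];     /* per-CPU execution count  */

static void chain_thread_entry(void *param)
{
    rt_ubase_t level = (rt_ubase_t)param;
    rt_ubase_t next  = (level + 1) % TEST_LEVEL_COUNTS;    /* ring topology            */

    if (level == 0)
        rt_sem_release(&_level_sem[next]);                 /* level 0 notifies first   */

    rt_sem_take(&_level_sem[level], RT_WAITING_FOREVER);   /* wait for our resource    */
    rt_atomic_add(&_load_average[rt_hw_cpu_id()], 1);      /* account the running CPU  */

    if (level != 0)
        rt_sem_release(&_level_sem[next]);                 /* pass the token onward    */

    rt_sem_release(&_thr_exit_sem);                        /* signal completion        */
}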
51 changes: 51 additions & 0 deletions src/utest/sched_thread_tc.c
@@ -6,7 +6,58 @@
* Change Logs:
* Date Author Notes
* 2024-01-25 Shell init ver.
* 2025-12-12 lhxj Add standardized utest documentation block
*/

/**
* Test Case Name: Scheduler Thread Stability Test (core.scheduler_thread)
*
* Test Objectives:
* - Verify the stability of the scheduler during intensive context switching.
* - Test the interaction between `rt_thread_suspend` and `rt_thread_resume` within critical sections.
* - Verify scheduler robustness in multi-core environments (using `RT_CPUS_NR`) ensuring no deadlocks or race conditions occur during thread state transitions.
* - Functions and APIs under test:
* - rt_thread_create
* - rt_thread_startup
* - rt_thread_suspend
* - rt_thread_resume
* - rt_enter_critical / rt_exit_critical_safe
* - rt_atomic_add
*
* Test Scenarios:
* - **Multi-threaded Ping-Pong Context Switching:**
* 1. Initialize a semaphore `_thr_exit_sem` for completion synchronization.
* 2. Create `TEST_THREAD_COUNT` pairs of threads (based on `RT_CPUS_NR`).
* 3. Each thread pair performs a "ping-pong" operation in a loop (100,000 iterations):
* - Thread A enters critical section, suspends self, resumes Thread B, exits critical section.
* - Thread B enters critical section, suspends self, resumes Thread A, exits critical section.
* 4. An atomic counter `_progress_counter` tracks execution progress, triggering `uassert_true` at intervals.
* 5. The main test thread waits for all worker threads to signal completion via the semaphore.
*
* Verification Metrics:
* - **Pass:** All created threads complete their execution loops without system hangs or crashes.
* - **Pass:** The progress counter increments as expected, validating thread execution flow.
*
* Dependencies:
* - Hardware requirements (e.g., specific peripherals)
* - No specific peripherals required.
* (This is met by the qemu-virt64-riscv BSP).
* - Software configuration (e.g., kernel options, driver initialization)
* - `RT_USING_UTEST` must be enabled (`RT-Thread Utestcases`).
* - `Scheduler Test` must be enabled (`RT-Thread Utestcases` -> `Kernel Core` -> `Scheduler Test`).
* - (Optional) Enable SMP for parallel testing:
* - Go to `RT-Thread Kernel` -> `Enable SMP (Symmetric multiprocessing)`.
* - Set `Number of CPUs` to > 1 (e.g., 4).
* - Environmental assumptions
* - `UTEST_THR_STACK_SIZE` is sufficient for the test threads.
* - Run the test case from the msh prompt:
* `utest_run core.scheduler_thread`
*
* Expected Results:
* - The test proceeds through multiple loops of thread suspension and resumption.
* - Final Output: `[ PASSED ] [ result ] testcase (core.scheduler_thread)`
*/

#define __RT_KERNEL_SOURCE__
#include <rtthread.h>
#include "utest.h"
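For illustration, a sketch of one side of the ping-pong pair described above, under the assumption that each thread suspends itself inside a critical section and then resumes its peer. `_progress_counter`, the 100,000-iteration count, and the API names are from the doc block; the entry function and peer-passing convention are assumptions, and plain rt_exit_critical() stands in for the rt_exit_critical_safe() variant the doc names, whose availability depends on kernel version.

#include <rtthread.h>

static rt_atomic_t _progress_counter;          /* shared progress counter (doc text)  */

static void ping_pong_entry(void *param)
{
    rt_thread_t peer = (rt_thread_t)param;     /* partner thread, passed at creation  */

    for (int i = 0; i < 100000; i++)
    {
        rt_enter_critical();                   /* lock the scheduler for this window  */
        rt_thread_suspend(rt_thread_self());   /* mark ourselves suspended             */
        rt_thread_resume(peer);                /* make the partner ready again         */
        rt_exit_critical();                    /* reschedule happens here: switch away */
        rt_atomic_add(&_progress_counter, 1);  /* progress tracked for uassert checks  */
    }
}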
47 changes: 47 additions & 0 deletions src/utest/sched_timed_mtx_tc.c
@@ -6,7 +6,54 @@
* Change Logs:
* Date Author Notes
* 2024-01-25 Shell init ver.
* 2025-12-12 lhxj Add standardized utest documentation block
*/

/**
* Test Case Name: Timed Mutex Race Condition Test (core.scheduler_timed_mtx)
*
* Test Objectives:
* - Verify mutex behavior when a timeout race condition occurs between the timeout timer (scheduler) and mutex release.
* - Ensure strict round-robin ownership (Producer <-> Consumer) is maintained despite timeouts.
* - Validate that `rt_mutex_take_interruptible` correctly handles timeouts returning `-RT_ETIMEOUT`.
* - Ensure a thread does not hold the mutex if it reports a timeout.
* - Functions and APIs under test:
* - rt_mutex_take_interruptible
* - rt_mutex_take
* - rt_mutex_release
* - rt_tick_get
*
* Test Scenarios:
* - **Timeout vs Release Race:**
* 1. Create a Producer thread and a Consumer thread.
* 2. Producer acquires the mutex, aligns execution to the system tick edge (`_wait_until_edge`) with random latency, and releases the mutex.
* 3. Consumer attempts to acquire the mutex with a short timeout (1 tick) using `rt_mutex_take_interruptible`.
* 4. Verify that if Consumer times out, it does not hold the mutex.
* 5. Verify that if Consumer acquires the mutex, strict ownership order (Producer -> Consumer) was followed using magic flags.
* 6. Repeat for `TEST_LOOP_TICKS`.
*
* Verification Metrics:
* - **Pass:** The mutex ownership sequence (Consumer -> Producer -> Consumer) is never violated.
* - **Pass:** `rt_mutex_get_owner` returns either NULL or a thread other than the current thread if `rt_mutex_take_interruptible` returns `-RT_ETIMEOUT`.
* - **Pass:** Both threads complete their loops and signal exit without asserting failure.
*
* Dependencies:
* - Hardware requirements (e.g., specific peripherals)
* - No specific hardware requirements.
* (This is met by the qemu-virt64-riscv BSP).
* - Software configuration (e.g., kernel options, driver initialization)
* - `RT_USING_UTEST` must be enabled (`RT-Thread Utestcases`).
* - `Scheduler Test` must be enabled (`RT-Thread Utestcases` -> `Kernel Core` -> `Scheduler Test`).
* - Environmental assumptions
* - No specific environmental assumptions.
* - Run the test case from the msh prompt:
* `utest_run core.scheduler_timed_mtx`
*
* Expected Results:
* - The test logs "Total failed times: X(in Y)", indicating that valid timeouts were handled correctly.
* - Final Output: `[ PASSED ] [ result ] testcase (core.scheduler_timed_mtx)`
*/

#define __RT_KERNEL_SOURCE__
#include <rtthread.h>
#include <stdlib.h>
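For illustration, a sketch of the consumer-side timeout check described above, assuming an RT-Thread 5.x kernel where rt_mutex_take_interruptible() and rt_mutex_get_owner() (both named in the doc block) are available. The mutex object name and the helper function are assumptions; the real test uses uassert_true() from utest.h rather than RT_ASSERT().

#include <rtthread.h>

static struct rt_mutex _timed_mtx;             /* mutex raced by producer and consumer */

static void consumer_take_once(void)
{
    rt_err_t rc = rt_mutex_take_interruptible(&_timed_mtx, 1);   /* 1-tick timeout      */

    if (rc == RT_EOK)
    {
        /* we own the mutex: ownership-order checks against the magic flags go here */
        rt_mutex_release(&_timed_mtx);
    }
    else if (rc == -RT_ETIMEOUT)
    {
        /* lost the race to the timeout timer: we must not be left as the owner */
        RT_ASSERT(rt_mutex_get_owner(&_timed_mtx) != rt_thread_self());
    }
}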
49 changes: 49 additions & 0 deletions src/utest/sched_timed_sem_tc.c
@@ -6,7 +6,56 @@
* Change Logs:
* Date Author Notes
* 2024-01-25 Shell init ver.
* 2025-12-12 lhxj Add standardized utest documentation block
*/

/**
* Test Case Name: Scheduler Timed Semaphore Race Test (core.scheduler_timed_sem)
*
* Test Objectives:
* - Verify IPC (Semaphore) behavior under tight timing conditions (tick edge).
* - Stress test the race condition where a timeout routine and a producer thread
* race to wake up a sleeping consumer.
* - Ensure the scheduler handles interruptible semaphore takes correctly without
* returning unexpected error codes during high-contention/edge-case timing.
* - Functions and APIs under test:
* - rt_sem_take_interruptible
* - rt_sem_release
* - rt_tick_get
* - rt_thread_create
*
* Test Scenarios:
* - **Producer-Consumer Tick Edge Race:**
* 1. Initialize two semaphores (`_ipc_sem`, `_thr_exit_sem`).
* 2. Create two threads: a Producer (priority +1) and a Consumer (priority +1).
* 3. **Producer Loop:** Wait specifically for the RT-Thread tick count to change (tick edge),
* add a small random latency, and then release `_ipc_sem`.
* 4. **Consumer Loop:** Attempt to take `_ipc_sem` with a timeout of exactly 1 tick.
* 5. Track "failed times" (valid timeouts) versus "unexpected errors" (assert failure).
* 6. Run this loop for `TEST_SECONDS` (10 seconds).
*
* Verification Metrics:
* - **Pass:** The test completes the duration without triggering `uassert_true(0)`.
* - **Pass:** Consumer receives either `RT_EOK` (success) or `-RT_ETIMEOUT` (expected race loss).
* - **Fail:** Consumer receives any error code other than `RT_EOK` or `-RT_ETIMEOUT`.
*
* Dependencies:
* - Hardware requirements
* - No specific peripheral required.
* (This is met by the qemu-virt64-riscv BSP).
* - Software configuration
* - `RT_USING_UTEST` must be enabled (`RT-Thread Utestcases`).
* - `Scheduler Test` must be enabled (`RT-Thread Utestcases` -> `Kernel Core` -> `Scheduler Test`).
* - Environmental assumptions
* - System tick must be running.
* - Run the test case from the msh prompt:
* `utest_run core.scheduler_timed_sem`
*
* Expected Results:
* - The system logs "Total failed times: X(in Y)" (timeouts are expected and counted, not fatal).
* - Final Output: `[ PASSED ] [ result ] testcase (core.scheduler_timed_sem)`
*/

#define __RT_KERNEL_SOURCE__
#include <rtthread.h>
#include <stdlib.h>
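For illustration, a sketch of the producer's tick-edge alignment and the consumer's 1-tick take described above, assuming rt_sem_take_interruptible() (named in the doc block) is available. `_ipc_sem` and the error classification come from the doc text; the busy-wait edge detection, latency loop, and counters are assumptions, and the real test asserts with uassert_true() rather than RT_ASSERT().

#include <rtthread.h>
#include <stdlib.h>

static struct rt_semaphore _ipc_sem;            /* producer -> consumer semaphore       */

static void producer_step(void)
{
    rt_tick_t start = rt_tick_get();
    while (rt_tick_get() == start)               /* spin until the next tick edge        */
        ;
    for (volatile int i = rand() % 1000; i > 0; i--)  /* small random extra latency      */
        ;
    rt_sem_release(&_ipc_sem);                   /* wake the consumer near the edge      */
}

static void consumer_step(rt_uint32_t *failed, rt_uint32_t *total)
{
    rt_err_t rc = rt_sem_take_interruptible(&_ipc_sem, 1);   /* exactly 1-tick timeout   */

    (*total)++;
    if (rc == -RT_ETIMEOUT)
        (*failed)++;                 /* losing the race is an expected, counted outcome  */
    else
        RT_ASSERT(rc == RT_EOK);     /* any other error code would fail the test         */
}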