In a deployed application, switching between threads requires a finite amount of time depending on the current state of the thread, embedded processor, and OS. Kernel latency defines the time required for the operating system to respond to a trigger signal, stop execution of any running threads, and start the execution of the thread responsible for the trigger signal.
SoC Blockset™ models simulate kernel latency as a delay at the start of execution of a task, applied the first time that task instance moves from the waiting state to the running state. The following diagram shows the execution timing of a high-priority task and a low-priority task on a system that simulates a single processor core.
Other factors that contribute to kernel latency, such as context switch times, are considered negligible and are not modeled in simulation.
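The behavior described above can be sketched in a few lines of Python. This is not SoC Blockset code; it is a minimal two-task preemptive timeline in which a fixed kernel latency is charged only when a task instance first moves from waiting to running, and a preempted instance resumes with no additional delay. The task start times and durations are illustrative.

```python
# Minimal sketch (not SoC Blockset code) of kernel latency in a
# two-task preemptive schedule on a single simulated core.
KERNEL_LATENCY = 0.002  # seconds, charged once per task instance

def simulate(high, low):
    """high/low: (start_time, duration) tuples; high priority preempts low.
    Returns a list of (task, run_start, run_end) execution segments."""
    segments = []
    # Each instance pays the kernel latency the first time it starts running.
    low_start = low[0] + KERNEL_LATENCY
    high_start = high[0] + KERNEL_LATENCY
    high_end = high_start + high[1]
    if high_start < low_start + low[1]:
        # Low-priority task runs until the high-priority task preempts it...
        segments.append(("low", low_start, high_start))
        segments.append(("high", high_start, high_end))
        # ...then resumes with NO additional latency: the instance already exists.
        remaining = low[1] - (high_start - low_start)
        segments.append(("low", high_end, high_end + remaining))
    else:
        segments.append(("low", low_start, low_start + low[1]))
        segments.append(("high", high_start, high_end))
    return segments

# Illustrative timing: low-priority instance at t=0 for 10 ms,
# high-priority instance at t=5 ms for 3 ms.
segments = simulate(high=(0.005, 0.003), low=(0.0, 0.010))
for name, t0, t1 in segments:
    print(f"{name}: {t0:.3f} -> {t1:.3f}")
```

Note that the low-priority task's resume segment begins exactly when the high-priority task ends, reflecting that no latency is modeled on the preempted-to-running transition.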
Setting the kernel latency requires detailed knowledge of the processor specifications; it can generally be set to 0 without affecting the simulation.
This example shows the effect of kernel latency on the behavior and timing of two timer-driven tasks in an SoC application.
The following model simulates a software application with two timer-driven tasks. The task characteristics, specified in the Task Manager block, are as follows:
With these timing conditions, the high-priority task preempts the low-priority task. In the model Configuration Parameters dialog box, Hardware Implementation > Operating system/scheduler > Kernel latency is set to 0.002 seconds.
Run the model and open the Simulation Data Inspector. Selecting the two task signals produces the following display.
In the Simulation Data Inspector, a change in task state from Waiting to Running shows a latency of 0.002 seconds. However, when the task changes from Preempted to Running, no latency occurs. This timing matches the expected behavior: the latency applies at the start of a task execution instance, but not when the task instance already exists.
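This rule can be expressed as a small check. The sketch below is not SoC Blockset code; it uses a hypothetical trace of task-state transitions, with times chosen to match the 0.002-second latency above, and verifies that the observed start delay equals the kernel latency only on a Waiting-to-Running transition.

```python
# Sketch: verify that kernel latency applies only when a new task
# instance starts, not when a preempted instance resumes.
# The trace data below is hypothetical, chosen to match the example.
KERNEL_LATENCY = 0.002  # seconds, as set in the model configuration

def expected_delay(previous_state):
    # New instance (Waiting -> Running) pays the kernel latency;
    # a resume (Preempted -> Running) starts immediately.
    return KERNEL_LATENCY if previous_state == "Waiting" else 0.0

# (previous_state, trigger_time, observed_start_time)
trace = [
    ("Waiting",   0.000, 0.002),  # instance start: delayed by kernel latency
    ("Preempted", 0.010, 0.010),  # resume after preemption: no delay
]

for prev, trigger, start in trace:
    observed = start - trigger
    assert abs(observed - expected_delay(prev)) < 1e-9
    print(f"{prev} -> Running: {observed * 1e3:.1f} ms delay")
```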