[FreeBSD] Clock Interrupts


3.4 Clock Interrupts

The system is driven by a clock that interrupts at regular intervals. Each interrupt is referred to as a tick. On the PC, the clock ticks 1000 times per second. At each tick, the system updates the current time of day as well as user-process and system timers. 

 

Handling 1000 interrupts per second can be time consuming. To reduce the interrupt load, the kernel computes the number of ticks in the future at which an action may need to be taken. It then schedules the next clock interrupt to occur at that time. Thus, clock interrupts typically occur much less frequently than the 1000 ticks-per-second rate implies. This reduced interrupt rate is particularly helpful for systems with limited power budgets such as laptop computers and embedded systems as it allows them to spend much more time in low-power-consumption sleep mode.
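
To make the idea concrete, here is a minimal userspace sketch, not FreeBSD kernel code, of how a kernel might pick the next interrupt time: find the nearest pending deadline and arm a one-shot timer for it. The names pending[] and program_oneshot_timer() are hypothetical.

```c
#include <stdio.h>
#include <limits.h>

#define HZ 1000                                 /* nominal ticks per second on the PC */

static void program_oneshot_timer(int ticks_from_now)
{
    /* A real kernel would arm a hardware event timer here. */
    printf("next clock interrupt in %d ticks (%.3f s)\n",
           ticks_from_now, (double)ticks_from_now / HZ);
}

int main(void)
{
    int now = 5000;                             /* current time, in ticks */
    int pending[] = { 5250, 5020, 6000 };       /* absolute deadlines, in ticks */
    int nearest = INT_MAX;

    for (int i = 0; i < (int)(sizeof(pending) / sizeof(pending[0])); i++)
        if (pending[i] - now < nearest)
            nearest = pending[i] - now;

    program_oneshot_timer(nearest);             /* 20 ticks in this example */
    return 0;
}
```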

 

Interrupts for clock ticks are posted at a high hardware-interrupt priority. After switching to the clock-device process, the hardclock() routine is called. It is important that the hardclock() routine finish its job quickly:

• If hardclock() runs for more than one tick, it will miss the next clock interrupt. Since hardclock() maintains the time of day for the system, a missed interrupt will cause the system to lose time.

• Because of hardclock()’s high interrupt priority, nearly all other activity in the system is blocked while hardclock() is running. This blocking can cause network controllers to miss packets. 

 

So that the time spent in hardclock() is minimized, less critical time-related processing is handled by a lower-priority software-interrupt handler called softclock(). In addition, if multiple clocks are available, some time-related processing can be handled by other routines supported by the alternate clocks. On the PC there are two additional clocks that run at different frequencies from the system clock: the statclock(), which runs at 127 ticks per second to collect system statistics, and the profclock(), which runs at 8128 ticks per second to collect profiling information.

 

The work done by hardclock() is as follows (a code sketch of these steps appears after the list):

• If the currently running process has a virtual or profiling interval timer, it decrements the timer and delivers a signal if the timer has expired.

• It increments the current time of day by the number of ticks since the previous call to hardclock().

• If the system does not have a separate clock for process profiling, the hardclock() routine does the operations normally done by profclock(), as described in the next section (that is, hardclock() can stand in for it).

• If the system does not have a separate clock for statistics gathering, the hardclock() routine does the operations normally done by statclock(), as described in the next section.

• If softclock() needs to be run, it makes the softclock process runnable.
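
The list above might be paraphrased in plain C roughly as follows; this is an illustrative sketch, not the actual FreeBSD hardclock() source, and every type and helper name in it is hypothetical.

```c
#include <stdio.h>

struct itimer { int ticks_left; };          /* 0 means the timer is disarmed */

static long time_of_day_ticks;              /* system time, in ticks */

static void sketch_hardclock(struct itimer *virt, struct itimer *prof,
                             int ticks_elapsed, int have_statclock,
                             int have_profclock, int callouts_pending)
{
    /* 1. Decrement the running process's interval timers; signal on expiry. */
    if (virt->ticks_left && --virt->ticks_left == 0)
        printf("deliver SIGVTALRM\n");
    if (prof->ticks_left && --prof->ticks_left == 0)
        printf("deliver SIGPROF\n");

    /* 2. Advance the time of day by the ticks since the previous call. */
    time_of_day_ticks += ticks_elapsed;

    /* 3. and 4. Stand in for the profiling/statistics clocks if absent. */
    if (!have_profclock)
        printf("do profclock() work\n");
    if (!have_statclock)
        printf("do statclock() work\n");

    /* 5. Wake the softclock thread if any callouts may be due. */
    if (callouts_pending)
        printf("make softclock runnable\n");
}

int main(void)
{
    struct itimer virt = { 1 }, prof = { 0 };
    sketch_hardclock(&virt, &prof, 1, 1, 0, 1);
    return 0;
}
```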

 

 

 

Statistics and Process Scheduling

On historic FreeBSD systems, the hardclock() routine collected resource-utilization statistics about what was happening when the clock interrupted. These statistics were used to do accounting, to monitor what the system was doing, and to determine future scheduling priorities. In addition, hardclock() forced context switches so that all processes would get a share of the CPU. (This is the fundamental purpose of scheduling.)

 

This approach has weaknesses because the clock that drives hardclock() interrupts at regular, predictable intervals. Processes can become synchronized with the system clock, resulting in inaccurate measurements of resource utilization (especially CPU) and inaccurate profiling. It is also possible to write programs that deliberately synchronize with the system clock to outwit the scheduler.

 

On architectures with multiple high-precision, programmable clocks, such as the PC, a statistics clock is run at a different frequency than the time-of-day clock. The FreeBSD statclock() runs at 127 ticks per second and is responsible for charging resource usage to processes. At each tick, it charges the currently running process with a tick; if the process has accumulated four ticks, it recalculates the process's priority. If the new priority is less than the current priority, it arranges for the process to be rescheduled. Thus, processes that become synchronized with the system clock still have CPU time accounted to them.
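
A userspace sketch of this accounting, with a placeholder priority formula rather than FreeBSD's real calculation, might look like the following.

```c
#include <stdio.h>

struct proc_acct {
    int cpu_ticks;      /* statclock ticks charged so far */
    int priority;       /* scheduling priority */
};

static int recalc_priority(const struct proc_acct *p)
{
    return 100 - p->cpu_ticks;      /* placeholder, not FreeBSD's formula */
}

static void sketch_statclock(struct proc_acct *running)
{
    running->cpu_ticks++;                        /* charge this tick */
    if (running->cpu_ticks % 4 == 0) {           /* every fourth tick */
        int new_prio = recalc_priority(running);
        if (new_prio < running->priority)
            printf("arrange for the process to be rescheduled (%d ticks)\n",
                   running->cpu_ticks);
        running->priority = new_prio;
    }
}

int main(void)
{
    struct proc_acct p = { 0, 100 };
    for (int i = 0; i < 8; i++)
        sketch_statclock(&p);        /* eight statclock ticks on one process */
    return 0;
}
```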

 

The statclock() also collects statistics on what the system was doing at the time of the tick (sitting idle, executing in user mode, or executing in system mode). Finally, it collects basic information on system I/O, such as which disk drives are currently active. 

 

To allow the collection of more accurate profiling information, FreeBSD supports a profiling clock. When one or more processes are requesting profiling information, the profiling clock is set to run at a tick rate that is relatively prime to the main system clock (8128 ticks per second on the PC). At each tick, it checks to see if one of the processes that it has been asked to profile is running. If so, it obtains the current location of the program counter and increments a counter associated with that location in the profiling buffer associated with the process.
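
The sampling step can be sketched as follows; the buffer layout and names here are hypothetical, not the kernel's profclock() code: the program counter is mapped to a bucket, and that bucket's counter is incremented.

```c
#include <stdint.h>
#include <stdio.h>

#define NBUCKETS 1024

struct prof_buf {
    uintptr_t text_base;          /* start of the profiled text segment */
    unsigned  bytes_per_bucket;   /* granularity of the histogram */
    unsigned  count[NBUCKETS];    /* one counter per program-counter bucket */
};

static void sketch_profclock(struct prof_buf *pb, uintptr_t pc,
                             int profiled_process_running)
{
    if (!profiled_process_running)
        return;                                   /* nothing to charge */
    uintptr_t bucket = (pc - pb->text_base) / pb->bytes_per_bucket;
    if (bucket < NBUCKETS)
        pb->count[bucket]++;                      /* charge this sample to the PC */
}

int main(void)
{
    static struct prof_buf pb = { 0x1000, 16, { 0 } };
    sketch_profclock(&pb, 0x1040, 1);             /* sample lands in bucket 4 */
    printf("bucket 4 count: %u\n", pb.count[4]);
    return 0;
}
```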

 

 

 

Timeouts

The remaining time-related processing involves handling timeout requests and periodically reprioritizing processes that are ready to run. These functions are handled by the softclock() routine.

When hardclock() completes, if there were any softclock() functions to be done, hardclock() schedules the softclock process to run.

 

The primary task of the softclock() routine is to arrange for the execution of periodic events, such as the following:

• Process real-time timer (see Section 3.6)

• Retransmission of dropped network packets

• Watchdog timers on peripherals that require monitoring

• System process-rescheduling events

 

An important periodic event is rescheduling: the CPU priority of each process in the system is periodically raised or lowered based on that process's recent CPU usage. The rescheduling calculation is done once per second. The scheduler is started at boot time, and each time it runs, it requests that it be invoked again 1 second in the future.
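
The self-rearming pattern can be sketched in a few lines of C; timeout_sketch() and reschedule_sketch() below are hypothetical stand-ins for the kernel's timeout registration and priority-recomputation routines.

```c
#include <stdio.h>

#define HZ 1000                        /* ticks per second */

/* A single pending timeout: function, argument, and due time in ticks. */
static void (*pending_fn)(void *);
static void *pending_arg;
static long due_tick;
static long now_tick;

static void timeout_sketch(void (*fn)(void *), void *arg, int ticks)
{
    pending_fn = fn;
    pending_arg = arg;
    due_tick = now_tick + ticks;
}

static void reschedule_sketch(void *arg)
{
    printf("tick %ld: recompute priorities of runnable processes\n", now_tick);
    timeout_sketch(reschedule_sketch, arg, HZ);    /* run again in 1 second */
}

int main(void)
{
    timeout_sketch(reschedule_sketch, NULL, HZ);   /* started "at boot time" */
    for (now_tick = 1; now_tick <= 3 * HZ; now_tick++)
        if (pending_fn != NULL && now_tick == due_tick) {
            void (*fn)(void *) = pending_fn;
            pending_fn = NULL;                     /* one-shot; fn may re-post */
            fn(pending_arg);
        }
    return 0;
}
```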

 

On a heavily loaded system with many processes, the scheduler may take a long time to complete its job. Posting its next invocation 1 second after each completion may cause scheduling to occur less frequently than once per second. However, as the scheduler is not responsible for any time-critical functions, such as maintaining the time of day, scheduling less frequently than once a second is normally not a problem.

 

The data structure that describes waiting events is called the callout queue. Figure 3.2 shows an example of the callout queue. When a process schedules an event, it specifies a function to be called, a pointer to be passed as an argument to the function, and the number of clock ticks until the event should occur.
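
Those three pieces of information can be pictured as the fields of a single entry; the structure below is a hypothetical sketch, not FreeBSD's actual struct callout (why the absolute due time is stored is explained after Figure 3.2).

```c
/* Hypothetical callout entry, not FreeBSD's actual struct callout. */
struct callout_sketch {
    struct callout_sketch *next;   /* link in the queue header's list */
    void (*func)(void *);          /* function to call when the event occurs */
    void *arg;                     /* pointer passed as the function's argument */
    long  due_tick;                /* absolute time of the event, in ticks,
                                      computed from the requested tick count */
};
```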

 

 

 

Figure 3.2 Timer events in the callout queue.

 

 

 

The kernel maintains an array of queue headers, each representing a particular time. There is a pointer that references the queue header for the current time, marked “now” in Figure 3.2. The queue header that follows the currently referenced one represents events that are one tick in the future. The queue header after that is two ticks in the future. The list wraps around, so if the last queue header in the list represents time t, then the first queue header in the list represents time t + 1.

 

The queue header immediately preceding the currently referenced one represents the time furthest in the future. In Figure 3.2 there are 200 queue headers, so the queue header immediately preceding the one marked “now” represents events that are 199 ticks in the future. (Times in the queue are defined relative to “now”.)

 

Each time the hardclock() routine runs, it increments the callout queue-head pointer. If the queue is not empty, it schedules the softclock() process to run. The softclock() process scans the events in the current queue. It compares the current time to the time stored in the event structure. If the times match, the event is removed from the list and its requested function is called, being passed the argument specified when it was registered.

 

When an event n ticks in the future is to be posted, its queue header is calculated by taking the index of the queue labelled “now” in Figure 3.2, adding n to it, and then taking the resulting value modulo the number of queue headers. If an event is to occur further in the future than the number of queue headers, then it will end up on a list with other events that are to happen sooner. 

 

Thus, the actual time of the event is stored in its entry so that when the queue is scanned by softclock(), it can determine which events are current and which are to occur in the future. In Figure 3.2, the second entry in the “now” queue will be skipped on the current scan of the queue, but it will be handled 200 ticks in the future when softclock() next processes this queue.
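
Putting the preceding paragraphs together, here is a self-contained userspace sketch of the wheel: insertion hashes the absolute due time into a slot with a modulo, and the scan runs only the entries whose stored time matches the current tick. All names are hypothetical and the lists are plain singly linked lists, unlike the kernel's.

```c
#include <stdio.h>
#include <stdlib.h>

#define NHEADERS 200

struct event {
    struct event *next;
    void (*func)(void *);
    void *arg;
    long due_tick;                  /* absolute time of the event, in ticks */
};

static struct event *wheel[NHEADERS];   /* one list head per queue header */
static long now_tick;                   /* the "now" slot is now_tick % NHEADERS */

/* Post an event n ticks in the future. */
static void post_event(void (*func)(void *), void *arg, int n)
{
    struct event *e = malloc(sizeof(*e));
    e->func = func;
    e->arg = arg;
    e->due_tick = now_tick + n;
    int slot = (now_tick + n) % NHEADERS;   /* may share a slot with other events */
    e->next = wheel[slot];
    wheel[slot] = e;
}

/* What softclock() does for the current queue: run due events, skip the rest. */
static void scan_current_slot(void)
{
    int slot = now_tick % NHEADERS;
    struct event **ep = &wheel[slot];
    while (*ep != NULL) {
        struct event *e = *ep;
        if (e->due_tick == now_tick) {      /* due now: remove and call */
            *ep = e->next;
            e->func(e->arg);
            free(e);
        } else {
            ep = &e->next;                  /* due one or more wraps later: skip */
        }
    }
}

static void hello(void *arg)
{
    printf("tick %ld: %s\n", now_tick, (const char *)arg);
}

int main(void)
{
    post_event(hello, "soon", 5);          /* lands in slot 5 */
    post_event(hello, "much later", 205);  /* also slot 5, one full wrap later */
    for (now_tick = 1; now_tick <= 205; now_tick++)
        scan_current_slot();               /* hardclock would wake softclock */
    return 0;
}
```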

 

An argument is provided to the callout-queue function when it is called so that one function can be used by multiple processes. For example, there is a single real-time timer function that sends a signal to a process when a timer expires. Every process that has a real-time timer running posts a timeout request for this function; the argument that is passed to the function is a pointer to the process structure for the process. This argument enables the timeout function to deliver the signal to the correct process.
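
A sketch of that shared-function idea: one callback serves every process because each request carries a pointer to that process's own structure. struct proc_sketch and realitexpire_sketch() are hypothetical stand-ins for the kernel's process structure and real-time-timer handler.

```c
#include <stdio.h>

struct proc_sketch { int pid; };

static void realitexpire_sketch(void *arg)
{
    struct proc_sketch *p = arg;
    printf("deliver SIGALRM to pid %d\n", p->pid);
    /* a periodic real-time timer would also re-post its timeout here */
}

int main(void)
{
    struct proc_sketch a = { 100 }, b = { 200 };
    /* Both processes registered the same function with different arguments. */
    realitexpire_sketch(&a);
    realitexpire_sketch(&b);
    return 0;
}
```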

 

Timeout processing is more efficient when the timeouts are specified in ticks. Time updates require only an integer decrement, and checks for timer expiration require only an integer comparison. If the timers contained time values, decrementing and comparisons would be more complex. The approach used in FreeBSD is based on the work of Varghese & Lauck. Another possible approach is to maintain a heap with the next-occurring event at the top. 

 

 

 
