Tasklets

Tasklets are the preferred way to implement deferrable functions in I/O drivers. As already explained, tasklets are built on top of two softirqs named hi_softirq and tasklet_softirq. Several tasklets may be associated with the same softirq, each tasklet carrying its own function. There is no real difference between the two softirqs, except that do_softirq( ) executes hi_softirq's tasklets before tasklet_softirq's tasklets.

Tasklets and high-priority tasklets are stored in the tasklet_vec and tasklet_hi_vec arrays, respectively. Both of them include nr_cpus elements of type tasklet_head, and each element consists of a pointer to a list of tasklet descriptors. The tasklet descriptor is a data structure of type tasklet_struct, whose fields are shown in Table 4-8.

Table 4-8. The fields of the tasklet descriptor

Field name   Description
next         Pointer to next descriptor in the list
state        Status of the tasklet
count        Lock counter
func         Pointer to the tasklet function
data         An unsigned long integer that may be used by the tasklet function
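The descriptor in Table 4-8 maps naturally onto a C structure. The following is a user-space sketch built only from the fields in the table, not the kernel's exact declaration (the real one lives in include/linux/interrupt.h, where count is an atomic_t); the flag bit positions are likewise shown only to illustrate how the state field is used.

```c
#include <assert.h>

/* Sketch of the tasklet descriptor from Table 4-8. Plain types are used
 * here so the sketch compiles in user space; the kernel uses atomic_t
 * for count and atomic bit operations on state. */
struct tasklet_struct {
    struct tasklet_struct *next;   /* next descriptor in the list          */
    unsigned long state;           /* TASKLET_STATE_SCHED / _RUN flag bits */
    long count;                    /* lock counter: nonzero means disabled */
    void (*func)(unsigned long);   /* tasklet function                     */
    unsigned long data;            /* argument passed to func              */
};

/* Bit positions of the two state flags. */
enum { TASKLET_STATE_SCHED = 0, TASKLET_STATE_RUN = 1 };
```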

The state field of the tasklet descriptor includes two flags:

TASKLET_STATE_SCHED

When set, this indicates that the tasklet is pending (has been scheduled for execution); it also means that the tasklet descriptor is inserted in one of the lists of the tasklet_vec and tasklet_hi_vec arrays.

TASKLET_STATE_RUN

When set, this indicates that the tasklet is being executed; on a uniprocessor system this flag is not used because there is no need to check whether a specific tasklet is running.

Let's suppose you're writing a device driver and you want to use a tasklet: what has to be done? First of all, you should allocate a new tasklet_struct data structure and initialize it by invoking tasklet_init( ); this function receives as parameters the address of the tasklet descriptor, the address of your tasklet function, and its optional integer argument.

Your tasklet may be selectively disabled by invoking either tasklet_disable_nosync( ) or tasklet_disable( ). Both functions increment the count field of the tasklet descriptor, but the latter function does not return until an already running instance of the tasklet function has terminated. To re-enable your tasklet, use tasklet_enable( ).
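The disable/enable logic just described can be sketched in user space as follows. The function names mirror the kernel's, but the bodies are simplified stand-ins: the real versions use atomic operations, and tasklet_disable( ) additionally spins on the TASKLET_STATE_RUN bit until a running instance terminates.

```c
#include <assert.h>

/* Minimal stand-in for the descriptor; only the count field matters here. */
struct tasklet_struct { long count; };

/* Increment the lock counter; the tasklet is disabled while count != 0. */
static void tasklet_disable_nosync(struct tasklet_struct *t) { t->count++; }

/* Like the nosync variant, but the real kernel version also waits until
 * an already running instance of the tasklet function terminates. */
static void tasklet_disable(struct tasklet_struct *t)
{
    tasklet_disable_nosync(t);
    /* kernel: spin while test_bit(TASKLET_STATE_RUN, &t->state) */
}

/* Decrement the counter; the tasklet may run again once it reaches 0. */
static void tasklet_enable(struct tasklet_struct *t) { t->count--; }
```

Because disable calls nest, a tasklet disabled twice must be re-enabled twice before it can run again.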

To activate the tasklet, you should invoke either the tasklet_schedule( ) function or the tasklet_hi_schedule( ) function, according to the priority that you require for your tasklet. The two functions are very similar; each of them performs the following actions:

1. Checks the TASKLET_STATE_SCHED flag; if it is set, returns (the tasklet has already been scheduled)

2. Gets the logical number of the CPU that is executing the function

3. Saves the state of the IF flag and clears it to disable local interrupts

4. Adds the tasklet descriptor at the beginning of the list pointed to by tasklet_vec[cpu] or tasklet_hi_vec[cpu]

5. Invokes cpu_raise_softirq( ) to activate either the tasklet_softirq softirq or the hi_softirq softirq

6. Restores the value of the IF flag saved in Step 3 (local interrupts enabled or disabled)
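The steps above can be condensed into a user-space sketch. Interrupt masking and cpu_raise_softirq( ) are reduced to comments, the per-CPU array is a single-slot stand-in, and the atomic test_and_set_bit( ) of the real kernel becomes a plain check; only the list manipulation is executed literally.

```c
#include <assert.h>
#include <stddef.h>

enum { TASKLET_STATE_SCHED = 0 };

struct tasklet_struct {
    struct tasklet_struct *next;
    unsigned long state;
};

/* Single-CPU stand-in for tasklet_vec[cpu]. */
static struct tasklet_struct *tasklet_vec_head;

static void tasklet_schedule(struct tasklet_struct *t)
{
    /* Step 1: already pending? then nothing to do. The kernel does this
     * check-and-set atomically with test_and_set_bit(). */
    if (t->state & (1UL << TASKLET_STATE_SCHED))
        return;
    t->state |= 1UL << TASKLET_STATE_SCHED;
    /* Steps 2-3: get the CPU number; save the IF flag, disable local irqs. */
    /* Step 4: insert the descriptor at the beginning of the per-CPU list. */
    t->next = tasklet_vec_head;
    tasklet_vec_head = t;
    /* Step 5: cpu_raise_softirq(cpu, TASKLET_SOFTIRQ); */
    /* Step 6: restore the saved IF flag. */
}
```

Scheduling the same tasklet twice before it runs inserts it only once, which is exactly what the TASKLET_STATE_SCHED check buys.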

Finally, let's see how your tasklet is executed. We know from the previous section that, once activated, softirq functions are executed by the do_softirq( ) function. The softirq function associated with the hi_softirq softirq is named tasklet_hi_action( ), while the function associated with tasklet_softirq is named tasklet_action( ). Once again, the two functions are very similar; each of them:

1. Gets the logical number of the CPU that is executing the function.

2. Disables local interrupts, saving the previous state of the IF flag.

3. Stores the address of the list pointed to by tasklet_vec[cpu] or tasklet_hi_vec[cpu] in the list local variable.

4. Puts a NULL address in tasklet_vec[cpu] or tasklet_hi_vec[cpu]; thus, the list of scheduled tasklet descriptors is emptied.

5. Enables local interrupts.

6. For each tasklet descriptor in the list pointed to by list:

a. In multiprocessor systems, checks the TASKLET_STATE_RUN flag of the tasklet. If it is set, a tasklet of the same type is already running on another CPU, so the function reinserts the tasklet descriptor in the list pointed to by tasklet_vec[cpu] or tasklet_hi_vec[cpu] and activates the tasklet_softirq or hi_softirq softirq again. In this way, execution of the tasklet is deferred while other tasklets of the same type are running on other CPUs.

b. If the TASKLET_STATE_RUN flag is not set, the tasklet is not running on other CPUs. In multiprocessor systems, the function sets the flag so that the tasklet function cannot be executed on other CPUs.

c. Checks whether the tasklet is disabled by looking at the count field of the tasklet descriptor. If it is disabled, the function reinserts the tasklet descriptor in the list pointed to by tasklet_vec[cpu] or tasklet_hi_vec[cpu] and activates the tasklet_softirq or hi_softirq softirq again.

d. If the tasklet is enabled, clears the TASKLET_STATE_SCHED flag and executes the tasklet function.

Notice that, unless the tasklet function re-activates itself, every tasklet activation triggers at most one execution of the tasklet function.
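The drain loop just described can be sketched in user space as follows; the SMP-only TASKLET_STATE_RUN handling and the interrupt masking are reduced to comments, while the list detachment, the count check, and the flag clearing are executed literally.

```c
#include <assert.h>
#include <stddef.h>

enum { TASKLET_STATE_SCHED = 0 };

struct tasklet_struct {
    struct tasklet_struct *next;
    unsigned long state;
    long count;                    /* nonzero: tasklet disabled */
    void (*func)(unsigned long);
    unsigned long data;
};

/* Single-CPU stand-in for tasklet_vec[cpu]. */
static struct tasklet_struct *tasklet_vec_head;

static void tasklet_action(void)
{
    /* Steps 1-5: with local irqs disabled, detach the whole per-CPU list. */
    struct tasklet_struct *list = tasklet_vec_head;
    tasklet_vec_head = NULL;

    while (list) {
        struct tasklet_struct *t = list;
        list = list->next;
        /* Steps 6a-6b: on SMP, the kernel first tries to set the
         * TASKLET_STATE_RUN bit and requeues the tasklet on failure. */
        if (t->count) {
            /* Step 6c: disabled; put it back and re-raise the softirq. */
            t->next = tasklet_vec_head;
            tasklet_vec_head = t;
            continue;
        }
        /* Step 6d: clear the pending flag and run the tasklet function. */
        t->state &= ~(1UL << TASKLET_STATE_SCHED);
        t->func(t->data);
    }
}
```

Note that a disabled tasklet stays pending: it is shuffled back onto the list on every pass until someone re-enables it.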

4.7.3 Bottom Halves

A bottom half is essentially a high-priority tasklet that cannot be executed concurrently with any other bottom half, even if it is of a different type and on another CPU. The global_bh_lock spin lock is used to ensure that at most one bottom half is running.

Linux uses an array called the bh_base table to group all bottom halves together. It is an array of pointers to bottom halves and can include up to 32 entries, one for each type of bottom half. In practice, Linux uses about half of them; the types are listed in Table 4-9. As you can see from the table, some of the bottom halves are associated with hardware devices that are not necessarily installed in the system or that are specific to platforms other than the IBM PC compatible. But timer_bh, tqueue_bh, serial_bh, and immediate_bh still see widespread use. We describe the tqueue_bh and immediate_bh bottom halves later in this chapter and the timer_bh bottom half in Chapter 6.

Table 4-9. The Linux bottom halves

Bottom half      Peripheral device
TIMER_BH         Timer
TQUEUE_BH        Periodic task queue
DIGI_BH          DigiBoard PC/Xe
SERIAL_BH        Serial port
RISCOM8_BH       RISCom/8
SPECIALIX_BH     Specialix IO8+
AURORA_BH        Aurora multiport card (SPARC)
ESP_BH           Hayes ESP serial card
SCSI_BH          SCSI interface
IMMEDIATE_BH     Immediate task queue
CYCLADES_BH      Cyclades Cyclom-Y serial multiport
CM206_BH         CD-ROM Philips/LMS cm206 disk
MACSERIAL_BH     Power Macintosh's serial port
ISICOM_BH        MultiTech's ISI cards

The bh_task_vec array stores 32 tasklet descriptors, one for each bottom half. During kernel initialization, these tasklet descriptors are initialized in the following way:

tasklet_init(bh_task_vec+i, bh_action, i);

As usual, before a bottom half is invoked for the first time, it must be initialized. This is done by invoking init_bh(n, routine), which inserts the routine address as the nth entry of bh_base. Conversely, remove_bh(n) removes the nth bottom half from the table.

Bottom-half activation is done by invoking mark_bh( ). Since bottom halves are high-priority tasklets, mark_bh(n) just reduces to tasklet_hi_schedule(bh_task_vec + n).
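The relationship between bh_base, bh_task_vec, and mark_bh( ) can be sketched in user space as follows; tasklet_hi_schedule( ) is reduced to recording that the per-bottom-half tasklet is pending, and the function bodies are stand-ins rather than the kernel's exact code.

```c
#include <assert.h>
#include <stddef.h>

#define NR_BHS 32

/* The tasklet side is reduced to a "scheduled" flag per descriptor. */
struct tasklet_struct { int scheduled; };

static void (*bh_base[NR_BHS])(void);             /* bottom-half routines  */
static struct tasklet_struct bh_task_vec[NR_BHS]; /* one tasklet per bh    */

/* Stand-in for the real tasklet_hi_schedule(). */
static void tasklet_hi_schedule(struct tasklet_struct *t) { t->scheduled = 1; }

/* Install/remove the nth bottom-half routine in bh_base. */
static void init_bh(int n, void (*routine)(void)) { bh_base[n] = routine; }
static void remove_bh(int n) { bh_base[n] = NULL; }

/* mark_bh(n) just reduces to tasklet_hi_schedule(bh_task_vec + n). */
static void mark_bh(int n) { tasklet_hi_schedule(bh_task_vec + n); }
```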

The bh_action( ) function is the tasklet function common to all bottom halves. It receives as a parameter the index of the bottom half and performs the following steps:

1. Gets the logical number of the CPU executing the tasklet function.

2. Checks whether the global_bh_lock spin lock has already been acquired. If so, another CPU is running a bottom half: the function invokes mark_bh( ) to reactivate the bottom half and returns.

3. Otherwise, the function acquires the global_bh_lock spin lock so that no other bottom half can be executed in the system.

4. Checks that the local_irq_count field is set to zero (bottom halves are supposed to be run outside interrupt service routines), and that global interrupts are enabled (see Chapter 5). If either condition doesn't hold, the function releases the global_bh_lock spin lock and terminates.

5. Invokes the bottom half function stored in the proper entry of the bh_base array.

6. Releases the global_bh_lock spin lock and returns.

4.7.3.1 Extending a bottom half

The motivation for introducing deferrable functions is to allow a limited number of functions related to interrupt handling to be executed in a deferred manner. This approach has been stretched in two directions:

• To allow not only a function that services an interrupt, but also a generic kernel function to be executed as a bottom half

• To allow several kernel functions, instead of a single one, to be associated with a bottom half

Groups of functions are represented by task queues, which are lists of tq_struct structures whose fields are shown in Table 4-10.

Table 4-10. The fields of the tq_struct structure

Field name   Description
list         Links for doubly linked list
sync         Used to prevent multiple activations
routine      Function to call

As we shall see in Chapter 13, I/O device drivers use task queues to request the execution of several related functions when a specific interrupt occurs.

The DECLARE_TASK_QUEUE macro allocates a new task queue, while queue_task( ) inserts a new function in a task queue. The run_task_queue( ) function executes all the functions included in a given task queue.
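The semantics of these primitives can be sketched in user space as follows. This is a simplified model, not the kernel's code: the queue is a singly linked list instead of a list_head, the sync check is a plain flag rather than test_and_set_bit( ), and a data pointer is included so each routine receives an argument, as the kernel's tq_struct also provides.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the task-queue machinery described in the text. */
struct tq_struct {
    struct tq_struct *next;        /* link (kernel: struct list_head list) */
    unsigned long sync;            /* nonzero while queued                 */
    void (*routine)(void *);       /* function to call                     */
    void *data;                    /* argument passed to routine           */
};

typedef struct tq_struct *task_queue;
#define DECLARE_TASK_QUEUE(q) task_queue q = NULL

/* Insert tq in the queue unless it is already there; returns 1 if inserted.
 * The sync field is what prevents multiple activations (Table 4-10). */
static int queue_task(struct tq_struct *tq, task_queue *q)
{
    if (tq->sync)                  /* kernel: test_and_set_bit(0, &tq->sync) */
        return 0;
    tq->sync = 1;
    tq->next = *q;
    *q = tq;
    return 1;
}

/* Detach the whole queue, then call every queued routine exactly once. */
static void run_task_queue(task_queue *q)
{
    struct tq_struct *list = *q;
    *q = NULL;
    while (list) {
        struct tq_struct *tq = list;
        list = list->next;
        tq->sync = 0;              /* may be queued again from here on */
        tq->routine(tq->data);
    }
}
```

Queuing the same tq_struct twice before the queue runs therefore executes its routine only once; after the queue has run, the structure may be queued again.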

It's worth mentioning three particular task queues:

• The tq_immediate task queue, run by the immediate_bh bottom half, includes kernel functions to be executed together with the standard bottom halves. The kernel invokes mark_bh( ) to activate the immediate_bh bottom half whenever a function is added to the tq_immediate task queue. The bottom half is executed as soon as do_softirq( ) is invoked.

• The tq_timer task queue is run by the tqueue_bh bottom half, which is activated at every timer interrupt. As we'll see in Chapter 6, that means it runs about every 10 ms.

• The tq_context task queue is not associated with a bottom half, but it is run by the keventd kernel thread. The schedule_task( ) function adds a function to the task queue; its execution is deferred until the scheduler selects the keventd kernel thread as the next process to run.

The main advantage of tq_context over the other task queues, which are based on deferrable functions, is that its functions can freely perform blocking operations. Softirqs (and therefore tasklets and bottom halves) are similar to interrupt handlers in that kernel developers cannot make any assumptions about the process that will execute the deferrable functions. From a practical point of view, this means that softirqs cannot perform blocking operations such as accessing a file, acquiring a semaphore, or sleeping in a wait queue.

The price to pay is that, once scheduled for execution in tq_context, a function might be delayed for quite a long time interval.
