
I/O Systems

STREAMS
Lecture 3

Subject: Operating System

Ibrar Afzal

Lecturer, IT Department
STREAMS

 STREAMS enables an application to assemble pipelines of driver code dynamically.

 A stream is a full-duplex connection between a device driver and a user-level process.

 A stream consists of:

 A stream head, which interfaces with the user process
 A driver end, which controls the device
 Zero or more stream modules between the stream head and the driver end

 Each of these components contains a pair of queues:

 A read queue
 A write queue

 Message passing is used to transfer data between queues.
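The layered structure described above can be sketched as a toy Python model (purely illustrative: the class names and the transform hook are assumptions of this sketch, not the real kernel framework):

```python
from collections import deque

class Component:
    """One stream component: a pair of queues plus a processing hook."""
    def __init__(self, name, transform=None):
        self.name = name
        self.read_q = deque()   # messages flowing up toward the user process
        self.write_q = deque()  # messages flowing down toward the device
        self.transform = transform or (lambda msg: msg)

class Stream:
    """Stream head, zero or more stream modules, and a driver end."""
    def __init__(self, modules=()):
        self.head = Component("stream head")
        self.modules = list(modules)
        self.driver = Component("driver end")

    def write(self, data):
        """Pass a message down the write side, queue to queue."""
        msg = data
        for comp in [self.head, *self.modules, self.driver]:
            comp.write_q.append(msg)                      # message arrives on this write queue
            msg = comp.transform(comp.write_q.popleft())  # processed, then passed on
        return msg  # what the device would finally receive

# A module that, say, normalizes line endings on the way down.
crlf = Component("crlf module", transform=lambda m: m.replace("\n", "\r\n"))
s = Stream(modules=[crlf])
print(repr(s.write("hello\n")))  # 'hello\r\n'
```

Each message visits every component's write queue in turn, which is the queue-to-queue message passing the slide describes.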
STREAMS

 Modules provide the functionality of STREAMS processing.

 They are pushed onto a stream by use of the ioctl() system call.

 For example, a process can open a serial-port device via a stream and can push on a module to handle input editing.

 Because messages are exchanged between queues in adjacent modules, a queue in one module may overflow an adjacent queue.
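The push semantics can be sketched as follows (a minimal illustration: the real operation is the ioctl() I_PUSH command, and the Python names here are invented for the sketch). A pushed module is inserted directly beneath the stream head, so the most recently pushed module sees data from the user process first:

```python
class Stream:
    """Toy model of a stream's module pipeline, from head to driver."""
    def __init__(self):
        self.pipeline = ["stream head", "driver end"]

    def push(self, module):
        # I_PUSH inserts the new module directly beneath the stream head.
        self.pipeline.insert(1, module)

s = Stream()
s.push("line-editing module")   # e.g. input editing for a serial port
s.push("compression module")
print(s.pipeline)
# ['stream head', 'compression module', 'line-editing module', 'driver end']
```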
STREAMS

 To prevent this from occurring, a queue may support flow control.

 Without flow control, a queue accepts all messages and immediately sends them on to the queue in the adjacent module without buffering them.

 A queue that supports flow control buffers messages and does not accept messages without sufficient buffer space.

 This process involves exchanges of control messages between queues in adjacent modules.
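Flow control between adjacent queues can be sketched as a toy model (illustrative only; real STREAMS queues exchange control messages and use high and low water marks, which are reduced here to a simple capacity check):

```python
class FlowControlledQueue:
    """Refuses messages when its buffer is full; upstream must hold them."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []

    def can_put(self):
        # Upstream checks this before sending (stand-in for control messages).
        return len(self.buffer) < self.capacity

    def put(self, msg):
        if not self.can_put():
            return False  # back-pressure: the sender must buffer the message
        self.buffer.append(msg)
        return True

    def drain(self):
        # Stand-in for the downstream module consuming its buffered messages.
        self.buffer.clear()

q = FlowControlledQueue(capacity=2)
accepted = [q.put(m) for m in ("a", "b", "c")]
print(accepted)    # [True, True, False]: the third message is refused
q.drain()
print(q.put("c"))  # True: accepted once buffer space is available again
```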
STREAMS

[Figure: structure of a STREAMS stream, showing the stream head, stream modules, and driver end, each with a read/write queue pair]
STREAMS

 A user process writes data to a device using either the write() or the putmsg() system call.

 write() writes raw data to the stream, whereas putmsg() allows the user process to specify a message.

 In either case, the stream head copies the data into a message and delivers it to the queue for the next module in line.

 A user process reads data from the stream head using either the read() or the getmsg() system call.

 If read() is used, the stream head gets a message from its adjacent queue and returns ordinary data to the process.

 If getmsg() is used, a message is returned to the process.
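The two write paths can be sketched as follows (a toy model: the real calls are write() and putmsg(), and the dictionary-based "message" below is an invented stand-in for a STREAMS message with control and data parts):

```python
from collections import deque

head_queue = deque()  # stand-in for the stream head's write queue

def stream_write(data: bytes):
    # write() path: raw bytes; the head wraps them in a data-only message.
    head_queue.append({"ctl": None, "data": data})

def stream_putmsg(ctl: bytes, data: bytes):
    # putmsg() path: the caller supplies the message parts explicitly.
    head_queue.append({"ctl": ctl, "data": data})

stream_write(b"raw bytes")
stream_putmsg(b"protocol header", b"payload")
print(len(head_queue), head_queue[0]["ctl"], head_queue[1]["ctl"])
# 2 None b'protocol header'
```

Both paths end with a message on the head's queue, which is why the slide says the stream head copies the data into a message either way.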
STREAMS

 STREAMS I/O is asynchronous (or nonblocking), except when the user process communicates with the stream head.

 When writing to the stream, the user process will block until there is room to copy the message (assuming the next queue uses flow control).

 When reading from the stream, the user process will block until data are available.
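The blocking behavior at the stream head can be illustrated with a bounded queue, a rough stand-in for a flow-controlled write queue (the queue size and sleep duration are arbitrary choices for the sketch):

```python
import queue
import threading
import time

write_q = queue.Queue(maxsize=2)  # bounded, like a flow-controlled queue
written = []

def writer():
    for msg in ("m1", "m2", "m3"):
        write_q.put(msg)          # blocks on m3 until the reader makes room
        written.append(msg)

t = threading.Thread(target=writer)
t.start()
time.sleep(0.2)
print(written)  # ['m1', 'm2']: the writer is blocked trying to send m3
write_q.get()   # the reader drains one message, freeing buffer space
t.join()
print(written)  # ['m1', 'm2', 'm3']: the writer has unblocked
```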
STREAMS

 The driver end also has a read and a write queue.

 The driver end must respond to interrupts, such as one triggered when a frame is ready to be read from a network.

 Unlike the stream head, which may block if it is unable to copy a message to the next queue in line, the driver end must handle all incoming data.

 Drivers must support flow control as well; if the device's buffer is full, a driver typically must resort to dropping incoming messages.
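The constraint on the driver end can be sketched as a toy model (illustrative only: real drivers vary in how they shed load, but dropping data when the buffer is full is the typical last resort, since interrupt handlers cannot block):

```python
class DriverEnd:
    """Toy driver end: must accept every interrupt, never block."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.read_q = []
        self.dropped = 0

    def on_interrupt(self, frame):
        # Called from "interrupt context": must return quickly, never block.
        if len(self.read_q) < self.capacity:
            self.read_q.append(frame)
        else:
            self.dropped += 1  # nowhere to put it, so drop rather than block

d = DriverEnd(capacity=2)
for frame in ("f1", "f2", "f3"):
    d.on_interrupt(frame)
print(d.read_q, d.dropped)  # ['f1', 'f2'] 1
```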
STREAMS

Benefit of using STREAMS

 It provides a framework for a modular and incremental approach to writing device drivers and network protocols.

 Modules may be used by different streams and hence by different devices.

 This contrasts with the traditional UNIX approach of treating character-device I/O as an unstructured byte stream.
STREAMS

Benefit of using STREAMS

 STREAMS allows support for message boundaries and control information when communicating between modules.

 Most UNIX variants support STREAMS, and it is the preferred method for writing protocols and device drivers.

 For example, System V UNIX and Solaris implement the socket mechanism using STREAMS.
I/O Performance

 I/O places heavy demands on the CPU to execute device-driver code and to schedule processes fairly and efficiently as they block and unblock.

 The resulting context switches stress the CPU and its hardware caches.

 I/O also exposes any inefficiencies in the interrupt-handling mechanisms in the kernel.

 I/O loads down the memory bus during data copies between controllers and physical memory, and during copies between kernel buffers and application data space.

 Coping gracefully with all these demands is one of the major concerns of a computer architect.
I/O Performance

Interrupt handling

 Although a busy system can handle many thousands of interrupts per second, interrupt handling is a relatively expensive task.

 Each interrupt causes the system to perform a state change, execute the interrupt handler, and then restore state.

 Programmed I/O can be more efficient than interrupt-driven I/O if the number of cycles spent in busy waiting is not excessive.

 An I/O completion typically unblocks a process, leading to the full overhead of a context switch.
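The trade-off between polling and interrupts can be put in rough numbers (the cycle counts below are hypothetical, chosen only to illustrate the comparison the slide describes):

```python
# Hypothetical costs, for illustration only:
INTERRUPT_OVERHEAD = 2000  # cycles: state save, handler dispatch, state restore
POLL_COST = 50             # cycles per poll of a device status register

def cheaper_to_poll(expected_polls: int) -> bool:
    """Polling wins when the total busy-wait cost stays below the
    fixed overhead of taking an interrupt."""
    return expected_polls * POLL_COST < INTERRUPT_OVERHEAD

print(cheaper_to_poll(10))   # True: fast device, 500 < 2000 cycles
print(cheaper_to_poll(100))  # False: slow device, 5000 > 2000 cycles
```

With these assumed numbers, programmed I/O is worthwhile only when the device completes within roughly 40 polls; beyond that, the busy-wait cycles exceed the interrupt overhead.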
I/O Performance

Network traffic

 Network traffic can also cause a high context-switch rate. Consider, for example, a remote login from one machine to another.

 Each character typed on the local machine must be transported to the remote machine.

 On the local machine:

 The character is typed, and a keyboard interrupt is generated.

 The character is passed through the interrupt handler to the device driver, to the kernel, and then to the user process.

 The user process issues a network I/O system call to send the character to the remote machine.
I/O Performance

Network traffic

 The character then flows into the local kernel, through the network layers that construct a network packet, and into the network device driver.

 The network device driver transfers the packet to the network controller, which sends the character and generates an interrupt.

 The interrupt is passed back up through the kernel to cause the network I/O system call to complete.
IO Performance
 Now, the remote system’s network hardware receives the packet

 an interrupt is generated.

 The character is unpacked from the network protocols

 given to the appropriate network daemon.

 The network daemon

 identifies which remote login session is involved

 passes the packet to the appropriate sub daemon for that session.

 Throughout this flow, there are context switches and state switches

 To eliminate the context switches involved in moving each character between daemons and the kernel,

 the Solaris developers re implemented the telnet daemon using in-kernel threads.

 Improvement increased the maximum number of network logins from a few hundred to a few thousand on a

large server.

15
IO Performance
 Other systems use separate front-end processors for terminal I/O to reduce the interrupt burden

on the main CPU.


 For instance, a terminal concentrator can multiplex the traffic from hundreds of remote terminals

into one port on a large computer.


 An I/O channel is a dedicated, special-purpose CPU found in mainframes and in other high-end

systems.
 The job of a channel is to offload I/O work from the main CPU.

 The idea is that the channels keep the data flowing smoothly, while the main CPU remains free to

process the data. Like the device controllers and DMA controllers found in smaller computers,
 a channel can process more general and sophisticated programs,

 so channels can be tuned for particular workloads..

16
IO Performance
 I/O devices vary greatly in complexity.

 For instance, a mouse is simple.


 The mouse movements and button clicks are converted into numeric values that are passed from

hardware, through the mouse device driver, to the application.

 By contrast, the functionality provided by the Windows disk device driver is complex.
 It not only manages individual disks but also implements RAID arrays

 To do so, it converts an application’s read or write request into a coordinated set of disk I/O

operations. It implements sophisticated error-handling and data-recovery algorithms and takes


many steps to optimize disk performance.

17
IO Performance
 Where should the I/O functionality be implemented

 In the device hardware,

 In the device driver,

 Or in application software?

18
IO Performance
 Initially, we implement experimental I/O algorithms at the application level, because

 Application code is flexible

 Application bugs are unlikely to cause system crashes

 By developing code at the application level, we avoid the need to reboot or reload device drivers after every change to the

code.
 An application-level implementation can be inefficient, because

 of the overhead of context switches

 application cannot take advantage of internal kernel data structures and kernel functionality (such as efficient in-kernel

messaging, threading, and locking)


 •we may Re implement it in the kernel.

 This can improve performance

 but the development effort is more challenging


 operating-system kernel is a large, complex software system
 an in-kernel implementation must be thoroughly debugged to avoid data corruption and system Crashes

19
I/O Performance

 Initially, we implement experimental I/O algorithms at the application level, because:

 Application code is flexible.

 Application bugs are unlikely to cause system crashes.

 By developing code at the application level, we avoid the need to reboot or reload device drivers after every change to the code.

 An application-level implementation can be inefficient, however, because:

 It incurs the overhead of context switches.

 An application cannot take advantage of internal kernel data structures and kernel functionality (such as efficient in-kernel messaging, threading, and locking).
IO Performance
 We may Re implement it in the kernel.

 This can improve performance

 But the development effort is more challenging


 Operating-system kernel is a large, complex software system
 An in-kernel implementation must be thoroughly debugged to avoid data corruption
and system crashes

21
Performance
 The highest performance may be obtained through a specialized implementation in hardware,

 Either in the device or in the controller.

 The disadvantages of a hardware implementation include the

 Difficulty and expense of making further improvements or of fixing bugs,

 The increased development time (months rather than days),

 the decreased flexibility.

 For instance, a hardware RAID controller may not provide any means for the kernel to

influence the order or location of individual block reads and writes,


 Even if the kernel has special information about the workload that would enable it to improve

the I/O performance.

22
Improving the Efficiency of I/O

 We can apply several principles to improve the efficiency of I/O:

 Reduce the number of context switches.

 Reduce the number of times that data must be copied in memory while passing between device and application.

 Reduce the frequency of interrupts by using large transfers, smart controllers, and polling (if busy waiting can be minimized).
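The "large transfers" principle can be illustrated with a quick count of how many transfer operations a workload needs at different chunk sizes (the 1 MiB workload and the chunk sizes are arbitrary choices for the sketch; each read here stands in for one transfer and its associated interrupt or system call):

```python
import io

def copy_count(total_bytes: int, chunk_size: int) -> int:
    """Count how many read operations are needed to move total_bytes."""
    src, ops = io.BytesIO(b"\0" * total_bytes), 0
    while src.read(chunk_size):
        ops += 1
    return ops

MiB = 1 << 20
print(copy_count(MiB, 512))        # 2048 small transfers
print(copy_count(MiB, 64 * 1024))  # 16 large transfers
```

The work moved is identical; only the per-transfer overhead (interrupt handling, call dispatch) is paid 128 times less often with the larger chunks.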
Summary
 STREAMS

 Performance

 Efficiency

THANKS
