CS 303 Chapter 1, Lecture 3


1
Storage Structure
• The CPU can load instructions only from main memory, so any program to be run must first be loaded there.
• Main memory is organized as an array of bytes; each byte has its own address.
• Memory → CPU: a load instruction moves a byte or word from main memory to a CPU register.
• CPU → memory: a store instruction moves the content of a register to main memory.
• Main memory is too small to hold all programs and data permanently.
• Hence, we also have secondary storage, such as disks.
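The load/store interaction above can be sketched in Python, modeling main memory as an addressable array of bytes. The `Memory` class and its method names are illustrative only, not real hardware interfaces:

```python
# Minimal sketch: main memory modeled as an addressable array of bytes.
# Names (Memory, load, store) are illustrative, not real hardware APIs.

class Memory:
    def __init__(self, size: int):
        self.cells = bytearray(size)   # one byte per address

    def load(self, address: int) -> int:
        """Memory -> CPU: read the byte stored at an address."""
        return self.cells[address]

    def store(self, address: int, value: int) -> None:
        """CPU -> memory: write a byte to an address."""
        self.cells[address] = value

mem = Memory(256)
mem.store(0x10, 42)
print(mem.load(0x10))  # -> 42
```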

2
Storage Structure
• Main memory – the only large storage that the CPU can access directly.
• Random access (any memory cell is accessed in the same amount of time).
• Typically volatile (contents are lost on power down).
• Secondary storage – extension of main memory that provides large nonvolatile storage capacity. This consists of:
• Hard disks (HD) – rigid metal or glass platters covered with magnetic recording material.
• The disk surface is logically divided into tracks, which are subdivided into sectors.
• The disk controller handles the interaction between the device and the computer.
• Solid-state disks (SSD) – faster than hard disks, nonvolatile.
• Various technologies.
• Becoming more popular.
• More on this later in the chapters on memory management.

3
Storage Definitions and Notation Review
The basic unit of computer storage is the bit. A bit can contain one of two
values, 0 and 1. All other storage in a computer is based on collections of bits.
Given enough bits, it is amazing how many things a computer can represent:
numbers, letters, images, movies, sounds, documents, and programs, to name
a few. A byte is 8 bits, and on most computers it is the smallest convenient
chunk of storage. For example, most computers don’t have an instruction to
move a bit but do have one to move a byte. A less common term is word,
which is a given computer architecture’s native unit of data. A word is made
up of one or more bytes. For example, a computer that has 64-bit registers and
64-bit memory addressing typically has 64-bit (8-byte) words. A computer
executes many operations in its native word size rather than a byte at a time.

Computer storage, along with most computer throughput, is generally
measured and manipulated in bytes and collections of bytes. A kilobyte, or
KB, is 1,024 bytes; a megabyte, or MB, is 1,024² bytes; a gigabyte, or GB, is
1,024³ bytes; a terabyte, or TB, is 1,024⁴ bytes; and a petabyte, or PB, is 1,024⁵
bytes. Computer manufacturers often round off these numbers and say that
a megabyte is 1 million bytes and a gigabyte is 1 billion bytes. Networking
measurements are an exception to this general rule; they are given in bits
(because networks move data a bit at a time).
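The binary units above, and the gap left by the manufacturers' decimal rounding, can be checked directly:

```python
# Binary storage units: each step up multiplies by 1,024 (2**10).
KB = 1024
MB = 1024 ** 2
GB = 1024 ** 3
TB = 1024 ** 4
PB = 1024 ** 5

print(f"{MB:,}")            # -> 1,048,576 bytes in a megabyte
# Manufacturers often round: a "1 GB" (decimal) drive holds 10**9 bytes,
# noticeably less than a binary gigabyte.
print(f"{GB - 10**9:,}")    # -> 73,741,824 bytes of difference
```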

4
Storage-Device Hierarchy

5
Caching
• You would have studied this in CSL 211.
• What is caching?
• Important principle, performed at many levels in a
computer (in hardware, operating system, other
software).
• Data in use copied from slower to faster storage
temporarily.
• Faster storage (cache) checked first to determine if data
is there.
• If it is, data used directly from the cache (fast).
• If not, data may be copied to cache and used.
• Challenge: cache smaller than data being cached.
• Cache management important design problem.
• Cache size and replacement policy.
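The check-cache-first pattern and a replacement policy can be sketched together. This is a minimal illustration, not a real hardware cache: `slow_storage` stands in for the slower level, and a small LRU (least recently used) policy handles the "cache smaller than data" challenge:

```python
from collections import OrderedDict

# Sketch of caching: check fast storage first; on a miss, copy the data
# in from slow storage and evict the least recently used entry if full.

slow_storage = {addr: addr * 2 for addr in range(100)}  # backing store

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity          # cache is smaller than the data
        self.entries = OrderedDict()
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.entries:           # fast path: data already cached
            self.hits += 1
            self.entries.move_to_end(key) # mark as recently used
            return self.entries[key]
        self.misses += 1                  # slow path: copy into the cache
        value = slow_storage[key]
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        return value

cache = LRUCache(capacity=2)
cache.read(1); cache.read(2); cache.read(1)   # third read is a hit
print(cache.hits, cache.misses)               # -> 1 2
```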

6
Operating-System Operations
• After review of architecture, let’s focus back on operating
systems.
• Bootstrap program – simple code to initialize the system,
load the kernel (discussed earlier).
• With no user programs to run and no I/O to process, the kernel simply
waits for an event, signaled by an interrupt.
• Interrupt driven (hardware and software)
• Hardware interrupt by one of the devices.
• Software interrupt (exception or trap):
• Software error (e.g., division by zero).
• Request for operating system service.
• Other process problems → infinite loop, processes modifying each other or
the operating system.
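The wait-for-an-event loop above can be sketched as a dispatch table mapping interrupt types to handlers. All names here are made up for illustration; real kernels use hardware interrupt vectors, not Python dictionaries:

```python
# Sketch of an interrupt-driven kernel loop: the kernel idles until an
# event arrives, then dispatches to the matching handler.

def handle_device(event):
    return f"I/O complete on {event['device']}"

def handle_trap(event):
    return f"trap: {event['reason']}"

handlers = {               # software analogue of an interrupt vector
    "hardware": handle_device,
    "trap": handle_trap,
}

pending = [                # events that "interrupt" the waiting kernel
    {"type": "hardware", "device": "disk0"},
    {"type": "trap", "reason": "division by zero"},
]

for event in pending:      # the wait-then-dispatch loop
    print(handlers[event["type"]](event))
```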
7
Multiprogramming and Multitasking
• Multiprogramming.
• Single user cannot keep CPU and I/O devices busy at all times.
• Multiprogramming organizes jobs (code and data) so CPU always has one to execute.
• A subset of total jobs in system is kept in memory.
• One job selected and run via job scheduling.
• When it has to wait (for I/O for example), OS switches to another job.
• Advantage: increased CPU utilization, keeps users satisfied.

• Timesharing (multitasking) is a logical extension in which the CPU switches jobs so frequently
that users can interact with each job while it is running, creating interactive computing.
• The illusion is that the system executes multiple programs “simultaneously”.
• Response time should be short, say, < 1 second.
• Each user has at least one program executing in memory ⇒ a process.
• If several jobs are ready to run at the same time ⇒ CPU scheduling.
• If processes don’t fit in memory, swapping moves them in and out to run.
• Virtual memory allows execution of processes that are not completely in memory.
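The job-switching idea above can be sketched as a round-robin simulation: when a job's time slice ends (or it would block), the OS moves on to the next ready job. Burst lengths and the quantum are made-up numbers for illustration:

```python
from collections import deque

# Sketch of CPU switching among jobs: each ready job runs for one time
# slice (quantum); unfinished jobs rejoin the back of the ready queue.

def run_round_robin(burst_times, quantum):
    ready = deque(enumerate(burst_times))   # (job id, remaining work)
    schedule = []
    while ready:
        job, remaining = ready.popleft()
        schedule.append(job)                # job runs for one slice
        remaining -= quantum
        if remaining > 0:
            ready.append((job, remaining))  # not done: back of the queue
    return schedule

print(run_round_robin([3, 1, 2], quantum=1))  # -> [0, 1, 2, 0, 2, 0]
```

With a small enough quantum, every user sees their job make progress, which is the "simultaneous execution" illusion.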

8
Memory Layout for Multiprogrammed System

9
Dual Mode
• A software interrupt from a user program requests that an OS service be provided.
This interrupt is called a system call.
• The OS and user programs share computer hardware.
• Incorrect or malicious program should not cause other programs, or the OS
to operate incorrectly.
• E.g., a process hogging an I/O device or the CPU.
• Need a security mechanism that distinguishes between user code and OS
code.
• Such a mechanism is called dual-mode operation.
• Dual-mode operation allows OS to protect itself and other programs from
each other.
• Two modes: User mode and kernel mode.
• In kernel mode, CPU can execute all instructions & access all areas of memory
(unrestricted access).
• In user mode, the CPU is restricted to a subset of instructions & can access only certain
areas of memory.
10
Dual Mode Operation
• Mode bit provided by hardware.
• Provides ability to distinguish when system is running user code or kernel code.
• Some instructions designated as privileged, only executable in kernel mode.
• A system call changes the mode from user → kernel; return from the call resets it back to
user.
• What if user code tries to execute a privileged instruction?
• Hardware traps to the OS, which may terminate the process and dump its
memory to a file for examination (core dump).
• Example of privileged instructions:
• Switch from user → kernel mode.
• Interrupt management.
• I/O device management.
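The mode bit, the trap on a privileged instruction, and the user → kernel switch on a system call can be simulated in a few lines. This is purely illustrative; on real hardware the mode bit and the trap live in the CPU, not in software:

```python
# Sketch of dual-mode operation: a mode bit distinguishes user from
# kernel code, and privileged instructions trap in user mode.

USER, KERNEL = 0, 1

class CPU:
    def __init__(self):
        self.mode = USER

    def execute(self, instruction, privileged=False):
        if privileged and self.mode != KERNEL:
            raise PermissionError(f"trap: {instruction} needs kernel mode")
        return f"ran {instruction}"

    def system_call(self, service):
        self.mode = KERNEL                 # switch user -> kernel
        result = self.execute(service, privileged=True)
        self.mode = USER                   # return resets mode to user
        return result

cpu = CPU()
print(cpu.system_call("read_disk"))        # OK: goes through the kernel
try:
    cpu.execute("read_disk", privileged=True)  # direct attempt from user mode
except PermissionError as trap:
    print(trap)                            # hardware would trap to the OS
```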

11
Transition from User to Kernel Mode

12
Source: Julia Evans, Twitter.

13
UNIX time utility
• Try this command: time find /usr.
• Three kinds of time shown in the output:
• real 0m0.007s
• user 0m0.001s
• sys 0m0.004s
• User time → time spent executing user code.
• Sys time → time spent executing kernel code.
• Real time → total elapsed time, including wait times.
• User + Sys = CPU time used by process, across all CPUs.
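The same user/sys split can be read from inside a program: `os.times()` returns the process's accumulated user and system CPU time in seconds (a rough Python analogue of what `time` reports):

```python
import os

# Measure user vs. system CPU time around some CPU-bound work.
before = os.times()
total = sum(range(20_000_000))   # busy loop: mostly user time
after = os.times()

user = after.user - before.user
system = after.system - before.system
print(f"user {user:.3f}s  sys {system:.3f}s  cpu {user + system:.3f}s")
```

A CPU-bound loop like this accumulates almost all of its time as user time; a program doing heavy I/O would shift the balance toward sys time.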
14
How a Modern Computer Works

A von Neumann architecture

15
1. Single CPU systems
• We now briefly discuss some popular computer architectures.
• When hardware was expensive, most systems used to have a single
general-purpose processor (CPU).
• As opposed to general-purpose CPUs, we now also have special-purpose
CPUs.
• For example, for graphics (GPUs) or for capturing motion (the motion co-processor)
on the iPhone.
• The motion co-processor collects data from motion sensors (accelerometer,
compass), used by various motion-based apps.
• In a single-CPU machine, at any point in time, only one program may
run (instructions of only one program may be executed).
• However, we can still simultaneously support multiple programs, even with a
single CPU. How?
• Using multiprogramming.

16
2. Multiprocessor Systems
• Multiprocessor (parallel) systems are growing in use and
importance.
• Have multiple CPUs on the same machine.
• Advantage of this?
• Can support more programs at a time.
• However, the CPUs do share the memory & I/O devices (contention
reduces expected gain).
• Multiprocessors first appeared in servers, but are now available on basic
desktops, laptops, & even mobile phones.
• Advantages of multiprocessors:
1. Increased throughput (more work done/unit time).
2. Economy of scale (multiple CPUs sharing memory & IO devices vs. single CPU
machines, each with their own memory & IO devices, good for programs that
share data).
3. Increased reliability – graceful degradation or fault tolerance. If one of the CPUs
breaks down, load of that CPU can be re-distributed to remaining CPUs.
17
Symmetric Multiprocessing Architecture

18
