01 Introduction v1.2

The document outlines an introductory course on computer organization and architecture, taught by Aatka Ali at Air University Multan. It covers various topics including basic concepts, x86 architecture, assembly language fundamentals, and programming in MS Windows, with a focus on both theoretical and practical lab sessions. Grading is based on quizzes, assignments, class participation, and exams, while communication and resources are managed through Google Classroom.


Module: Introduction

Instructor: Aatka Ali


Air University Multan
Campus
About Me
• Faculty member, Department of Computer Science and Engineering, Air University Multan Campus
• MS(CS) in AI from UET Lahore
• Worked at UET Lahore
• Research Interests: Machine Learning, AI, ERP Design and Development
What we will cover
• Introduction
• Basic Concepts
• x86 Architecture
• Assembly Language Fundamentals
• Data Transfer, Addressing, and Arithmetic
• Procedures
• Conditional Processing
• Strings and Arrays
• Floating-Point Processing
• MS Windows Programming
Course Format
• Lectures TF: 10:40 to 11:00 am
– Room G105; online meetings scheduled using Zoom
• Lab: Thursday, 9:00 am to 11:00 am
• Home assignments and quizzes will be organized using online resources or in person
Communications
• Google Classroom
• https://classroom.google.com/u/2/c/MzgyMjg3MzIwMjY4
• The syllabus is posted there
• Lectures will be available there
• Assignments and quizzes will be organized there
Text Books
• Assembly Language for Intel-Based Computers, 4th Edition, by Kip R. Irvine
• Essentials of Computer Organization and Architecture, 4th Edition, by Linda Null and Julia Lobur
• Computer Organization and Architecture, 6th Edition, by William Stallings, Prentice Hall
• Computer Organization and Design, 4th Edition, by David A. Patterson and John L. Hennessy
Grading
• Quizzes + Assignments + Class Participation + Attendance: 30%
• Final Term: 45%
• Mid Term: 25%
Lab Practices
• Lab01: Introduction to Visual Studio and MASM Configuration
• Lab02: x86 Registers
• Lab03: Introduction to x86 Assembly
• Lab04: Data Types and Data Transfers
• Lab05: Integer Arithmetic and Data Addressing
• Lab06: Stacks and Procedures
• Lab07: Conditional Statements
• Lab08: Loop Instructions
• Lab09: Logical Instructions
• Lab10: Finite State Machine
Content of Today's Lecture
• What is Computer Organization
• Instruction Cycle
CS223 COA

INTRODUCTION TO COMPUTER SYSTEMS
Computing Machines
Ubiquitous ( = everywhere)
• General purpose: servers, desktops, laptops, PDAs, etc.
• Special purpose: cash registers, ATMs, games, Mobile
Phones, etc.
• Embedded: cars, door locks, printers, digital players,
industrial machinery, medical equipment, etc.
Distinguishing Characteristics
• Speed
• Cost
• Ease of use, software support & interface
• Scalability
Computer
• Hardware: electronic circuit boards that provide the functionality of the system
• Software: programs consisting of sets of instructions that control the system
Inside the Computer

• Application software
• Written in high-level language
• System software
• Compiler: translates HLL code to machine
code
• Operating System: service code
• Handling input/output
• Managing memory and storage
• Scheduling tasks & sharing resources
• Hardware
• Processor, memory, I/O controllers
Abstraction Layers in Modern Systems
• Application – applications software
• Algorithm
• Programming Language – systems software
• Operating System / Virtual Machines
• Instruction Set Architecture (ISA) – assembly language, machine language; architectural approaches: caches, virtual memory, pipelining
• Microarchitecture – sequential logic, finite state machines
• Gates / Register-Transfer Level (RTL) – combinational logic, arithmetic circuits; Boolean logic, 1s and 0s
• Circuits – transistors used to build logic gates (e.g., CMOS)
• Devices – semiconductors/silicon used to build transistors
• Physics – properties of atoms, electrons, and quantum dynamics
Functions of a Computer
The functions of all computers are:
• Data processing
• Data storage
• Data movement
• Control
Function Units in a Computer

Irvine, Kip R. Assembly Language for Intel-Based Computers, 2003.
A Programmer’s View of a Computer
• Application Programs
• Machine-independent: High-Level Languages
• Machine-specific: Low-Level Languages
• Assembly Language
• Machine Language
• Microprogram Control
• Hardware
Levels of Program Code
• High-level language
– Level of abstraction closer to problem domain
– Provides productivity and portability
• Assembly language
– Textual representation of instructions
• Hardware representation
– Binary digits (bits)
– Encoded instructions and data
Below the Program
• Applications software
• Systems software
• Hardware
• System software
– Operating system – supervising program that interfaces
the user’s program with the hardware (e.g., Linux,
MacOS, Windows)
• Handles basic input and output operations
• Allocates storage and memory
• Provides for protected sharing among multiple
applications
– Compiler – translates programs written in a high-level language (e.g., C, Java) into instructions that the hardware can execute
Below the Program
• High-level language program in C:

    void swap(int v[], int k)
    {
        int temp;
        temp = v[k];
        v[k] = v[k+1];
        v[k+1] = temp;
    }

• Assembly language program for MIPS:

    swap: sll  $2, $5, 2
          add  $2, $4, $2
          lw   $15, 0($2)
          lw   $16, 4($2)
          sw   $16, 0($2)
          sw   $15, 4($2)
          jr   $31

• Machine language program (binary encodings, truncated):

    0000000010100001 0…
    0000000000011000 0…
    1000110001100010 0…
    1000110011110010 0…
    1010110011110010 0…
Advantages of HLLs
• Higher-level languages (HLLs):
– Allow the programmer to think in a more natural language, tailored for the intended use (Fortran for scientific computation, Cobol for business programming, Lisp for symbol manipulation, Java for web programming, …)
– Improve programmer productivity: more understandable code that is easier to debug and validate
– Improve program maintainability
– Allow programs to be independent of the computer on which they are developed (compilers and assemblers can translate high-level language programs to the binary instructions of any machine)
• Optimizing compilers now produce very efficient assembly code tuned for the target machine
• As a result, very little programming is done today at the assembly level.
Compiler Basics
• High-level languages
– Programmers do not think in 0s and 1s
– Languages can also be specific to target applications, such as Cobol (business) or Fortran (scientific)
– Applications are more concise → fewer bugs
– Programs can be independent of the system on which they are developed
• Compilers convert source code to object code
• Libraries simplify common tasks
Levels of Representation
High-Level Language Program:
    temp = v[k];
    v[k] = v[k+1];
    v[k+1] = temp;
        ↓ Compiler
Assembly Language Program:
    lw  $15, 0($2)
    lw  $16, 4($2)
    sw  $16, 0($2)
    sw  $15, 4($2)
        ↓ Assembler
Machine Language Program:
    0000 1001 1100 0110 1010 1111 0101 1000
    1010 1111 0101 1000 0000 1001 1100 0110
    1100 0110 1010 1111 0101 1000 0000 1001
    0101 1000 0000 1001 1100 0110 1010 1111
        ↓ Machine Interpretation
Control Signal Specification:
    ALUOP[0:3] <= InstReg[9:11] & MASK  [i.e., high/low on control lines]
Execution Cycle
• Instruction Fetch: obtain the instruction from program storage
• Instruction Decode: determine the required actions and instruction size
• Operand Fetch: locate and obtain the operand data
• Execute: compute the result value or status
• Result Store: deposit results in storage for later use
• Next Instruction: determine the successor instruction
Program Performance
• Program performance is measured in terms of time!
• Program execution time depends on:
– The number of instructions executed to complete a job
– How many clock cycles are needed to execute a single instruction
– The length of the clock cycle (clock cycle time)
Clock, Clock Cycle Time
• Circuits in computers are “clocked”
• At each rising (or falling) clock edge, some specified actions are performed, usually completing by the next rising (or falling) edge
• Instructions typically require more than one cycle to execute
Program Performance
• time = (# of clock cycles) × (clock cycle time)
• # of clock cycles = (# of instructions executed) × (average cycles per instruction)
Moore’s Law
• In 1965, Intel founder Gordon Moore stated: “The density of transistors in an integrated circuit will double every year”
• The current version of Moore’s Law predicts a doubling of the density of silicon chips every 18 months
• Moore originally thought this postulate would hold for 10 years; advances in chip manufacturing processes have allowed the law to hold for 40 years, and it is expected to last for perhaps another 10
COMPUTER ORGANIZATION AND ARCHITECTURE

Computer Organization
• How components fit together to create a working computer system
• Includes physical aspects of computer systems
• Concerned with how computer hardware works

Computer Architecture
• Structure and behavior of the computer system
• Logical aspects of system implementation as seen by the programmer
• Concerned with how the computer is designed
• Combination of hardware components with the Instruction Set Architecture (ISA): the ISA is the interface between the software that runs on a machine and the hardware that executes it
Why Learn This Stuff?
• You want to call yourself a “computer engineer”
• You want to build software people use (need performance)
• You need to make a purchasing decision or offer “expert” advice
• Both hardware and software affect performance:
– Algorithm determines the number of source-level statements
– Language/compiler/architecture determine the number of machine instructions
– Processor/memory determine how fast instructions are executed
– I/O and number of cores determine overall system performance
Classes of Computers
• Desktop computers: designed to deliver good performance to a single user at low cost, usually executing third-party software and usually incorporating a graphics display, a keyboard, and a mouse
• Servers: used to run larger programs for multiple simultaneous users, typically accessed only via a network, with a greater emphasis on dependability and (often) security
• Supercomputers: a high-performance, high-cost class of servers with hundreds to thousands of processors, terabytes of memory, and petabytes of storage, used for high-end scientific and engineering applications
• Embedded computers (processors): a computer inside another device, used for running one predetermined application
Computer Organization: Logic Designer's View
• Capabilities and performance characteristics of the principal functional units (FUs) and the interconnect (e.g., registers, ALU, shifters, logic units, …)
• Ways in which these components are interconnected
• Information flows between components
• Logic and means by which such information flow is controlled
• Choreography of the FUs to realize the ISA
• Register Transfer Level (RTL) description
Organization of a Computer
• Five classic components of a computer – input, output, memory, datapath, and control

Computer Organization
• Components:
– input (mouse, keyboard, camera, microphone...)
– output (display, printer, speakers....)
– memory (caches, DRAM, SRAM, hard disk drives, Flash....)
– network (both input and output)
• Our primary focus: the processor (datapath and control)
– implemented using billions of transistors
– Impossible to understand by looking at each transistor
– We need...abstraction!

An abstraction omits unneeded detail,


helps us cope with complexity.
THE VON NEUMANN MODEL
• John W. Mauchly and J. Presper Eckert were the inventors of ENIAC (1946), the first all-electronic, general-purpose digital computer
• Mauchly and Eckert came up with the idea of storing program instructions in memory
• A mathematician named John von Neumann, after reading Mauchly and Eckert's proposal for the EDVAC, published and publicized the idea
• All stored-program computers have come to be known as von Neumann systems using the von Neumann architecture
THE VON NEUMANN MODEL
Today's version of the stored-program machine architecture satisfies at least the following characteristics:
➢ Consists of three hardware systems: a central processing unit (CPU) with a control unit, an arithmetic logic unit (ALU), registers (small storage areas), and a program counter; a main-memory system, which holds programs that control the computer's operation; and an I/O system
➢ Capacity to carry out sequential instruction processing
➢ Contains a single path, either physical or logical, between the main-memory system and the control unit of the CPU, forcing alternation of instruction and execution cycles. This single path is often referred to as the von Neumann bottleneck.
System Bus Architecture
Assembly Language
• How does assembly language (AL) relate to machine language? One-to-one
• How do C++ and Java relate to AL? One-to-many
• Is AL portable? No
• Why learn AL?
Assembly Language Applications
• Some representative types of applications:
– Business application for a single platform
– Hardware device driver
– Business application for multiple platforms
– Embedded systems and computer games (see next panel)
Comparing ASM to High-Level Languages
Virtual Machine Concept
• Virtual machines
• Specific machine levels
Virtual Machines
• Tanenbaum: virtual machine concept
• Programming language analogy:
– Each computer has a native machine language (language L0) that runs directly on its hardware
– A more human-friendly language is usually constructed above machine language, called language L1
• Programs written in L1 can run in two different ways:
– Interpretation: an L0 program interprets and executes L1 instructions one by one
– Translation: the L1 program is completely translated into an L0 program, which then runs on the computer hardware
Translating Languages
English: Display the sum of A times B plus C.

C++: cout << (A * B + C);

Assembly Language:        Intel Machine Language:
    mov  eax,A                A1 00000000
    mul  B                    F7 25 00000004
    add  eax,C                03 05 00000008
    call WriteInt             E8 00500000
Specific Machine Levels
• Level 5: High-Level Language
• Level 4: Assembly Language
• Level 3: Operating System
• Level 2: Instruction Set Architecture
• Level 1: Microarchitecture
• Level 0: Digital Logic

(descriptions of individual levels follow . . . )
High-Level Language
• Level 5
• Application-oriented languages
• C++, Java, Pascal, Visual Basic . . .
• Programs compile into assembly
language (Level 4)

Assembly Language
• Level 4
• Instruction mnemonics that have a one-to-one correspondence to machine language
• Calls functions written at the operating system level (Level 3)
• Programs are translated into machine language (Level 2)
Operating System
• Level 3
• Provides services to Level 4 programs
• Translated and run at the instruction set architecture level (Level 2)
Instruction Set Architecture
• Level 2
• Also known as conventional machine language
• Executed by the Level 1 (microarchitecture) program
The Instruction Set: a Critical Interface

software

instruction set architecture

hardware
ISA and Computer Architecture
• Architecture (above the ISA): application, operating system, compiler, firmware
• Instruction Set Architecture
• Implementation (below the ISA): instruction set processor, I/O system, logic design, circuit design, layout
Instruction Set Architecture
• The ISA, or simply architecture, is the abstract interface between the hardware and the lowest-level software; it encompasses all the information necessary to write a machine language program, including instructions, registers, memory access, and I/O
– Enables implementations of varying cost and performance to run identical software
• The combination of the instruction set architecture and the operating system interface is called the Application Binary Interface (ABI)
– ABI: the user portion of the instruction set plus the operating system interfaces used by programmers; defines a standard for binary portability across computers
Instruction Set Architecture (ISA)
• “Attributes of the computer system seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the dataflow and controls, the logic design, and the physical implementation”
– Amdahl, Blaauw, and Brooks, 1964

• The ISA includes:
– Organization of storage
– Data types
– Encoding and representing instructions
– Instruction set (i.e., opcodes)
– Modes of addressing data items/instructions
– Program-visible exception handling

• The ISA together with the OS interface specifies the requirements for binary compatibility across implementations (ABI: application binary interface)
Instruction Set Architecture
• A very important abstraction
– Interface between hardware and low-level software
– Standardizes instructions, machine language bit patterns, etc.
– Advantage: different implementations of the same architecture
– Disadvantage: sometimes prevents using new innovations
• Common instruction set architectures:
– IA-64, IA-32, PowerPC, MIPS, SPARC, ARM, and others
– All are multi-sourced, with different implementations for the same ISA
Case Study: x86 ISA

• Instruction Categories
– Load/Store
– Computational
– Jump and Branch
– Floating Point
– Memory Management
– Special
Microarchitecture
• Level 1
• Interprets conventional machine instructions (Level 2)
• Executed by digital hardware (Level 0)
Digital Logic
• Level 0
• CPU, constructed from digital logic gates
• System bus
• Memory
• Implemented using bipolar transistors

next: Data Representation
Java Virtual Machine
DATA REPRESENTATIONS
Data Representation
• Binary Numbers
• Translating between binary and decimal
• Binary Addition
• Integer Storage Sizes
• Hexadecimal Integers
• Translating between decimal and
hexadecimal
• Hexadecimal subtraction
• Signed Integers
• Binary subtraction
• Character Storage

Binary Numbers
• Digits are 1 and 0
– 1 = true
– 0 = false
• MSB – most significant bit
• LSB – least significant bit
• Bit numbering (16-bit example):

  MSB                           LSB
   1 0 1 1 0 0 1 0 1 0 0 1 1 1 0 0
  15                              0
Binary Numbers
• Each digit (bit) is either 1 or 0
• Each bit represents a power of 2:

  bit:     1  1  1  1  1  1  1  1
  weight: 2⁷ 2⁶ 2⁵ 2⁴ 2³ 2² 2¹ 2⁰

• Every binary number is a sum of powers of 2
Translating Binary to Decimal
Weighted positional notation shows how to calculate the decimal value of each binary bit:

  dec = (Dₙ₋₁ × 2ⁿ⁻¹) + (Dₙ₋₂ × 2ⁿ⁻²) + … + (D₁ × 2¹) + (D₀ × 2⁰)
  D = binary digit

Example: binary 00001001 = decimal 9:
  (1 × 2³) + (1 × 2⁰) = 9
Translating Unsigned Decimal to Binary
• Repeatedly divide the decimal integer by 2. Each remainder is a binary digit in the translated value.
• For the fractional part, repeatedly multiply the fraction by 2; each integer carry is the next binary digit.

Example: 37.6875 = 100101.1011
Exercises
Binary Addition
• Starting with the LSB, add each pair of digits, including the carry if present.

  carry:             1
         0 0 0 0 0 1 0 0   (4)
       + 0 0 0 0 0 1 1 1   (7)
         ---------------
         0 0 0 0 1 0 1 1   (11)

  bit position: 7 6 5 4 3 2 1 0
Integer Storage Sizes
Standard sizes:
• byte – 8 bits
• word – 16 bits
• doubleword – 32 bits
• quadword – 64 bits

What is the largest unsigned integer that may be stored in 20 bits?
Hexadecimal Integers
Binary values are represented in hexadecimal.
Translating Binary to Hexadecimal
• Each hexadecimal digit corresponds to 4 binary bits.
• Example: translate the binary integer 000101101010011110010100 to hexadecimal:

  0001 0110 1010 0111 1001 0100 = 16A794
Converting Hexadecimal to Decimal
• Multiply each digit by its corresponding power of 16:

  dec = (D₃ × 16³) + (D₂ × 16²) + (D₁ × 16¹) + (D₀ × 16⁰)

• Hex 1234 equals (1 × 16³) + (2 × 16²) + (3 × 16¹) + (4 × 16⁰), or decimal 4,660.
• Hex 3BA4 equals (3 × 16³) + (11 × 16²) + (10 × 16¹) + (4 × 16⁰), or decimal 15,268.
Powers of 16
Used when calculating hexadecimal values up to 8 digits long:
16⁰ = 1, 16¹ = 16, 16² = 256, 16³ = 4,096, 16⁴ = 65,536, 16⁵ = 1,048,576, 16⁶ = 16,777,216, 16⁷ = 268,435,456
Converting Decimal to Hexadecimal
• Repeatedly divide the decimal integer by 16; each remainder is a hex digit in the translated value.
• Example: decimal 422 = 1A6 hexadecimal
Convert Decimal Fraction to Octal Fraction
Hexadecimal Addition
• Divide the sum of two digits by the number base (16). The quotient becomes the carry value, and the remainder is the sum digit.

  carry:           1     1
         36    28    28    6A
       + 42    45    58    4B
         78    6D    80    B5

  e.g., A + B = 21 decimal; 21 / 16 = 1, remainder 5 → digit 5, carry 1

• Important skill: programmers frequently add and subtract the addresses of variables and instructions.
Hexadecimal Subtraction
• When a borrow is required from the digit to the left, add 16 to the current digit's value:

  16 + 5 = 21;  21 − 7 = 14 = E

        −1 (borrow)
      C675
    − A247
      242E

Practice: The address of var1 is 00400020. The address of the next variable after var1 is 0040006A. How many bytes are used by var1?
Addition and Multiplication Examples

Hexadecimal Complement
Signed Integers
• The highest bit indicates the sign: 1 = negative, 0 = positive

  sign bit
  1 1 1 1 0 1 1 0   negative
  0 0 0 0 1 0 1 0   positive

• If the highest digit of a hexadecimal integer is > 7, the value is negative. Examples: 8A, C5, A2, 9D
Ranges of Signed Integers
The highest bit is reserved for the sign. This limits the range: an n-bit signed integer can hold −2ⁿ⁻¹ through +2ⁿ⁻¹ − 1.

Practice: What is the largest positive value that may be stored in 20 bits?

1s, 2s, 9s and 10s Complement
Forming the Two's Complement
• Negative numbers are stored in two's complement notation
• Represents the additive inverse
• To form the two's complement: invert all the bits, then add 1
• Note that 00000001 + 11111111 = 00000000
Binary Subtraction
• When subtracting A − B, convert B to its two's complement
• Add A to (−B)

    0 0 0 0 1 1 0 0        0 0 0 0 1 1 0 0
  − 0 0 0 0 0 0 1 1   →  + 1 1 1 1 1 1 0 1
                           0 0 0 0 1 0 0 1

Practice: Subtract 0101 from 1001.
Subtraction using Complements
Learn How To Do the Following:

• Form the two's complement of a hexadecimal


integer
• Convert signed binary to decimal
• Convert signed decimal to binary
• Convert signed decimal to hexadecimal
• Convert signed hexadecimal to decimal

BCD

BCD Addition Example

Other Decimal Codes
Character Storage
• Character sets
– Standard ASCII (0–127)
– Extended ASCII (0–255)
– ANSI (0–255)
– Unicode (0–65,535)
• Null-terminated string
– Array of characters followed by a null byte
• Using the ASCII table
– back inside cover of book
Numeric Data Representation
• Pure binary
– can be calculated directly
• ASCII binary
– string of digits: "01010101"
• ASCII decimal
– string of digits: "65"
• ASCII hexadecimal
– string of digits: "9C"

next: Boolean Operations
ASCII
FLOATING POINT REPRESENTATIONS
Floating Point
• Representation for non-integer numbers
❑ Including very small and very large numbers
• Like scientific notation
❑ −2.34 × 10⁵⁶ (normalized)
❑ +0.002 × 10⁻⁴ (not normalized)
❑ +987.02 × 10⁹ (not normalized)
• In binary
❑ ±1.xxxxxxx₂ × 2^yyyy
• Types float and double in C
Floating Point Standard
• Defined by IEEE Std 754-1985
• Developed in response to divergence of
representations
❑ Portability issues for scientific code
• Now almost universally adopted
• Two representations
❑ Single precision (32-bit)
❑ Double precision (64-bit)
IEEE Floating-Point Format

  fields:  S | Exponent | Fraction
  single:  8-bit exponent, 23-bit fraction
  double: 11-bit exponent, 52-bit fraction

  x = (−1)^S × (1 + Fraction) × 2^(Exponent − Bias)

• S: sign bit (0 ⇒ non-negative, 1 ⇒ negative)
• Normalized significand: 1.0 ≤ |significand| < 2.0
❑ Always has a leading pre-binary-point 1 bit, so no need to represent it explicitly (hidden bit)
❑ Significand is the Fraction with the “1.” restored
• Exponent: excess representation: actual exponent + Bias
❑ Ensures the exponent is unsigned
❑ Single: Bias = 127; Double: Bias = 1023
Single-Precision Range
• Exponents 00000000 and 11111111 are reserved
• Smallest value
❑ Exponent: 00000001 ⇒ actual exponent = 1 − 127 = −126
❑ Fraction: 000…00 ⇒ significand = 1.0
❑ ±1.0 × 2⁻¹²⁶ ≈ ±1.2 × 10⁻³⁸
• Largest value
❑ Exponent: 11111110 ⇒ actual exponent = 254 − 127 = +127
❑ Fraction: 111…11 ⇒ significand ≈ 2.0
❑ ±2.0 × 2⁺¹²⁷ ≈ ±3.4 × 10⁺³⁸
IEEE 754 Double Precision
• A double-precision number is represented in 64 bits
• MIPS format: sign S (1 bit) | exponent E (11 bits) | significand F (20 bits + 32 bits in the second word)
❑ Exponent: bias-1023 binary integer, 0 < E < 2047
❑ Significand: magnitude, normalized binary significand with hidden bit (1): 1.F

  x = (−1)^S × (1 + Fraction) × 2^(Exponent − Bias)
Double-Precision Range
• Exponents 0000…00 and 1111…11 are reserved
• Smallest value
❑ Exponent: 00000000001 ⇒ actual exponent = 1 − 1023 = −1022
❑ Fraction: 000…00 ⇒ significand = 1.0
❑ ±1.0 × 2⁻¹⁰²² ≈ ±2.2 × 10⁻³⁰⁸
• Largest value
❑ Exponent: 11111111110 ⇒ actual exponent = 2046 − 1023 = +1023
❑ Fraction: 111…11 ⇒ significand ≈ 2.0
❑ ±2.0 × 2⁺¹⁰²³ ≈ ±1.8 × 10⁺³⁰⁸
IEEE 754 FP Standard Encoding
• Special encodings are used to represent unusual events
– ± infinity for division by zero
– NaN (not a number) for the results of invalid operations such as 0/0
– True zero is the bit string of all zeros

  Single Precision            Double Precision            Object Represented
  E (8)         F (23)        E (11)           F (52)
  0000 0000     0             0000…0000        0          true zero (0)
  0000 0000     nonzero       0000…0000        nonzero    ± denormalized number
  0000 0001 to  anything      0000…0001 to     anything   ± floating-point number
  1111 1110                   1111…1110
  1111 1111     0             1111…1111        0          ± infinity
  1111 1111     nonzero       1111…1111        nonzero    NaN (not a number)
Floating-Point Precision
• Relative precision
❑ all fraction bits are significant
❑ Single: approx 2⁻²³; equivalent to 23 × log₁₀2 ≈ 23 × 0.3 ≈ 6 decimal digits of precision
❑ Double: approx 2⁻⁵²; equivalent to 52 × log₁₀2 ≈ 52 × 0.3 ≈ 16 decimal digits of precision
Floating-Point Example
• What number is represented by the single-precision float 1 10000001 01000…00?
❑ S = 1
❑ Fraction = 01000…00₂
❑ Exponent = 10000001₂ = 129
• x = (−1)¹ × (1 + 0.01₂) × 2^(129 − 127)
    = (−1) × 1.25 × 2²
    = −5.0
Floating-Point Addition
• Consider a 4-digit decimal example
➢ 9.999 × 10¹ + 1.610 × 10⁻¹
• 1. Align decimal points
➢ Shift the number with the smaller exponent
➢ 9.999 × 10¹ + 0.016 × 10¹
• 2. Add significands
➢ 9.999 × 10¹ + 0.016 × 10¹ = 10.015 × 10¹
• 3. Normalize the result & check for over/underflow
➢ 1.0015 × 10²
• 4. Round and renormalize if necessary
➢ 1.002 × 10²
Floating-Point Addition
• Now consider a 4-digit binary example
➢ 1.000₂ × 2⁻¹ + (−1.110₂ × 2⁻²)   (0.5 + −0.4375)
• 1. Align binary points
➢ Shift the number with the smaller exponent
➢ 1.000₂ × 2⁻¹ + (−0.111₂ × 2⁻¹)
• 2. Add significands
➢ 1.000₂ × 2⁻¹ + (−0.111₂ × 2⁻¹) = 0.001₂ × 2⁻¹
• 3. Normalize the result & check for over/underflow
➢ 1.000₂ × 2⁻⁴, with no over/underflow
• 4. Round and renormalize if necessary
➢ 1.000₂ × 2⁻⁴ (no change) = 0.0625
FP Adder Hardware
• Much more complex than integer adder
• Doing it in one clock cycle would take too long
– Much longer than integer operations
– Slower clock would penalize all instructions
• FP adder usually takes several cycles
– Can be pipelined
FP Adder Hardware
(diagram: steps 1–4 of the addition pipeline)
Floating-Point Multiplication
• Now consider a 4-digit binary example
➢ 1.000₂ × 2⁻¹ × (−1.110₂ × 2⁻²)   (0.5 × −0.4375)
• 1. Add exponents
➢ Unbiased: −1 + −2 = −3
➢ Biased: (−1 + 127) + (−2 + 127) − 127 = −3 + 127
• 2. Multiply significands
➢ 1.000₂ × 1.110₂ = 1.110₂ → 1.110₂ × 2⁻³
• 3. Normalize the result & check for over/underflow
➢ 1.110₂ × 2⁻³ (no change), with no over/underflow
• 4. Round and renormalize if necessary
➢ 1.110₂ × 2⁻³ (no change)
• 5. Determine the sign: if the signs are the same, +; else, −
➢ −1.110₂ × 2⁻³ = −0.21875
Floating-Point Multiplication
• Consider a 4-digit decimal example
➢ 1.110 × 10¹⁰ × 9.200 × 10⁻⁵
• 1. Add exponents
➢ For biased exponents, subtract the bias from the sum
➢ New exponent = 10 + −5 = 5
• 2. Multiply significands
➢ 1.110 × 9.200 = 10.212 → 10.212 × 10⁵
• 3. Normalize the result & check for over/underflow
➢ 1.0212 × 10⁶
• 4. Round and renormalize if necessary
➢ 1.021 × 10⁶
• 5. Determine the sign of the result from the signs of the operands
➢ +1.021 × 10⁶