
UNIT 3| Basic Computer Concepts


A. Learning Outcome:

At the end of the topic, the students must have:


• Familiarized themselves with the basic components of computers and computer systems.
• Understood how the computer processes data.
• Learned how data is represented in a computer.
• Performed conversions between the different number systems.

B. Pre-Test

A. Directions: Tell whether the following is an example of computer hardware or software. Write the letter
A if it is hardware and the letter B if it is software.

________ 1 CPU
________ 2 MICROSOFT WORD
________ 3 KEYBOARD
________ 4 LINUX
________ 5 YOUTUBE
________ 6 HARD DISK
________ 7 MOTHERBOARD
________ 8 CHROME
________ 9 WINDOWS 10.2
________ 10 MICROSOFT DISK OPERATING SYSTEM

B. Directions: Using the table below, group the given devices into their components.

INPUT          OUTPUT          PROCESSING          STORAGE

1 Digital Camera       6 Keyboard                       11 Light Pen
2 Flash Drive          7 Monitor                        12 Motherboard
3 CPU                  8 Network card                   13 Projector
4 RAM                  9 Sound Card                     14 Scanner
5 Printer              10 Solid-State Drives (SSD)      15 Headphones

C. Content

3.1 COMPUTER ORGANIZATION

A Computer is a device or set of devices that works under the control of a stored program, and
automatically accepts and processes data to provide information.

Computer basic characteristics:


• Automatic: it carries out instructions with minimum human intervention
• Re-programmable: it stores instructions (the program)
• A data processor: it carries out operations on data (numbers or words) made up of a combination of
digits to produce information.

ELEMENTS OF THE COMPUTER SYSTEM

• Hardware- The physical components of a computer constitute its Hardware. These include a keyboard,
mouse, monitor, and processor. The hardware consists of input devices and output devices that make
a complete computer system.


• Software- A set of programs that form an interface between the hardware and the user of a computer
system is referred to as Software. The major types of software are:
o System software- A set of programs to control the internal operations such as reading data
from input devices, giving results to output devices, and ensuring proper functioning of
components is called system software. System software includes the following:
▪ Operating System
▪ Development Tools, Programming Language software, and Language processors
▪ Device Drivers
▪ Firmware
o Application software- Programs designed to perform a specific function for the user, such as
accounting software, payroll software, etc. Application software includes the following:
▪ Utility software
▪ General Business Productivity Application
▪ Home Use Applications
▪ Cross-industry application software
▪ Vertical Market Application Software
• People- The most important element of a computer system is its users. They are also called liveware of
the computer system.
• Data and Information- Data is the name given to facts while information is processed and useful data
that is relevant, accurate, up to date, and can be used to make decisions.

COMPONENTS OF COMPUTER

A computer accepts and then processes input data according to the instructions it is given. The
components of any sort of processing are INPUT, PROCESSING, STORAGE, and OUTPUT which can be
depicted as shown in the following diagram.

[Figure 3.1 (diagram): the Input unit feeds the CPU, which contains the Control Unit, Registers, and the ALU; the CPU exchanges data and instructions with Storage (the Memory Unit) and sends results to the Output unit.]

Figure 3.1 Basic Organization of Computer

• Input- This is the process of entering data and programs into the computer system. This is possible
through what we call input devices like a keyboard, mouse, joystick, etc.
• Processing- The task of performing operations like arithmetic and logical operations is called
processing. The Central Processing Unit (CPU) takes data and instructions from the storage unit and
makes all sorts of calculations based on the instructions given and the type of data provided. The result
is then sent back to the storage unit.
• Storage- The process of saving data and instructions permanently is known as storage. Data must be
fed into the system before the actual processing starts. Because the CPU processes data very quickly,
the data has to be supplied to it at a comparable speed. Therefore, the data is first stored in the storage
unit for faster access and processing; this kind of storage is called primary storage. It provides space
for storing data and instructions.
The storage unit performs the following major functions:
▪ All data and instructions are stored here before and after processing
▪ Intermediate processing is also stored here.
• Output- This is the process of producing results from the data to obtain useful information. The
output produced by the computer after processing must also be kept somewhere inside the computer
before being presented in a human-readable format. Output can also be stored inside the computer for
further processing.


The Central Processing Unit

The CPU is commonly called the brain of the computer. It is called the brain because it takes all major
decisions, makes all sorts of calculations, and controls the different parts of the computer. The CPU has four
key parts:
▪ Control Unit- All operations like input, processing, and output are directed by the control unit. It takes
care of the step-by-step processing of all operations inside the computer. It extracts instructions from
memory, decodes them, and finally executes them, calling on the ALU when necessary.
o The basic function of a computer system is program execution, which is managed by the
Control Unit. Program execution consists of retrieving instructions and data from memory and
carrying out the required operations. An instruction cycle typically involves the following steps
(a small sketch follows the list):
1. Fetch the Instruction from the Main Memory
2. Decode the Instruction
3. Fetch Data from Main Memory
4. Execute the Instruction
5. Store Result
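The five steps above can be traced with a toy program. The following is only an illustrative sketch in Python of a made-up accumulator machine; the opcodes, memory layout, and register names are assumptions for this example, not those of any real CPU.

```python
# Minimal fetch-decode-execute sketch for a made-up accumulator machine.
# Memory holds (opcode, operand) pairs followed by plain data values.
memory = [
    ("LOAD", 4),   # 0: fetch the data at address 4 into the accumulator
    ("ADD", 5),    # 1: add the data at address 5 to the accumulator
    ("STORE", 6),  # 2: store the accumulator result at address 6
    ("HALT", 0),   # 3: stop execution
    7, 5, 0,       # 4-6: data area
]

pc = 0    # Program Counter: address of the next instruction
acc = 0   # Accumulator register

while True:
    ir = memory[pc]            # 1. Fetch the instruction into the Instruction Register
    pc += 1                    #    the PC now points to the next instruction
    opcode, operand = ir       # 2. Decode the instruction
    if opcode == "LOAD":       # 3-4. Fetch data from main memory and execute
        acc = memory[operand]
    elif opcode == "ADD":
        acc = acc + memory[operand]   # the ALU performs the addition
    elif opcode == "STORE":
        memory[operand] = acc         # 5. Store the result back in memory
    elif opcode == "HALT":
        break

print(memory[6])   # prints 12 (7 + 5)
```

Running the sketch leaves 12 at address 6, showing one full pass of fetch, decode, data fetch, execute, and store for each instruction.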
▪ Arithmetic Logic Unit (ALU)- as its name implies, it is the portion of the CPU which performs arithmetic
and logical operations on the binary data. The ALU contains an Adder which can combine the contents
of two registers in accordance with the logic of binary arithmetic.
▪ Registers- A register is a small, high-speed memory inside the CPU. It is used to store temporary results.
Registers are designed to be accessed at a much higher speed than conventional memory. Some
registers are general purpose while others are classified according to the function they perform. Here is
the classification of CPU registers:
o Accumulator- This is the most frequently used register, used to store data taken from memory.
Different microprocessors have different numbers of accumulators.
o Memory Address Registers (MAR)- These hold the address of the location to be accessed
from memory. MAR and MDR (Memory Data Register) together facilitate the communication of
the CPU and the main memory.
o Memory Data Registers (MDR)- These contain data to be written into or to be read out from
the addressed location.
o General Purpose Registers- These are numbered R0, R1, R2, …, Rn-1 and are used to store
temporary data during any ongoing operation. Their contents can be accessed through assembly
programming. Modern CPU architectures tend to provide more GPRs so that register-to-register
addressing can be used more often, which is comparatively faster than other addressing modes.
o Program Counter (PC)- It is used to keep track of the execution of the program. It contains the
memory address of the next instruction to be fetched. The PC points to the address of the next
instruction to be fetched from the main memory once the previous instruction has been
successfully completed. The Program Counter also serves to count the number of instructions
executed. The increment applied to the PC depends on the type of architecture being used.
o Instruction Register (IR)- The IR holds the instruction which is just about to be executed. The
instruction from the PC is fetched and stored in IR. As soon as the instruction is placed in IR,
the CPU starts executing the instruction and the PC points to the next instruction to be
executed.
▪ Clock- A circuit in a processor that generates a regular sequence of electronic pulses used to
synchronize the operations of the processor’s components. The time between pulses is the cycle time,
and the number of pulses per second is the clock rate (or frequency). The execution times of instructions
on a computer are usually measured in clock cycles rather than in seconds. The higher the clock rate,
the faster instructions are processed.

The Memory

Memory refers to computer components, devices, and recording media that retain digital data used for
computing for some interval of time. Computer memory includes internal and external memory.

WHAT DO YOU THINK? What is the difference between memory and storage?

▪ Internal Memory- The internal memory is accessible by the processor without the use of the
input-output channels. It usually includes several types of storage, such as main storage, cache memory, and
special registers, all of which can be accessed directly by the processor.
o Cache Memory- acts as a buffer. Smaller and faster than main storage, cache memory is used
to hold a copy of instructions and data in main storage that are likely to be needed next by the
processor and that have been obtained automatically from main storage.


o Main Memory (Main Storage)- Main memory is addressable storage from which instructions
and other data may be loaded directly into registers for subsequent execution or processing.
The storage capacity of the main memory is the total amount of stored information that the
memory can hold. It is expressed as several bits or bytes. Main memory consists of the
following:
▪ Random Access Memory (RAM): The primary storage is referred to as random
access memory (RAM) because it is possible to randomly select any location
of the memory and directly store and retrieve data there. Accessing any address
of the memory takes the same time as accessing the first address. It is also called read/write memory.
Data and instructions held in primary storage are temporary; they disappear from RAM as
soon as the power to the computer is switched off.
▪ Read Only Memory (ROM): There is another memory in computers which is called
Read Only Memory (ROM). Again, it is the ICs inside the PC that form the ROM.
The storage of programs and data in the ROM is permanent. The ROM stores
some standard processing programs supplied by the manufacturer to operate the
personal computer. The ROM can only be read by the CPU; it cannot be changed.
The basic input/output program stored in the ROM examines and initializes the
various equipment attached to the PC when the computer is switched on.
o External Memory-The external memory holds information too large for storage in main
memory. Information on external memory can only be accessed by the CPU if it is first
transferred to the main memory. External memory is slow and virtually unlimited in capacity. It
retains information when the computer is switched off and is used to keep a permanent copy
of programs and data. The following are examples of external memory:
▪ Hard disk
▪ Solid-State Drive
▪ Floppy Disk
▪ Optical Disk
▪ Memory Stick (Flash Drive)
▪ Memory Cards

Input-Output Devices

A computer is only useful when it can communicate with the external environment. This is possible
through what we call input and output devices.
▪ Input Devices are necessary to convert the data or information into a form that can be understood by
the computer. A good input device should provide timely, accurate, and useful data to the main memory
for processing. The following are the most common input devices:
o Keyboard
o Mouse
o Scanner
o Camera
o Microphone
▪ Output Devices- devices that serve as a medium to show the output after processing data or
information. The following are the most common output devices:
o Monitor
o Printer
o Speaker
In some cases, there are input-output devices that are used to perform specific tasks but are not
strictly needed for basic computer operation. For example, your computer can work properly even without a
joystick, but when playing role-playing games (RPGs), it is easier to control the character with a joystick than
with a keyboard. This kind of device is what we call a peripheral device. Even without these devices, the
computer will work properly. Some examples of peripheral devices are:
o Speaker
o Scanner
o Projector
o Printer
o Camera
o Microphone
o Joystick


The Computer Motherboard

The motherboard serves as a single platform to connect all the parts of a computer together. It connects
the CPU, memory, hard drives, optical drives, video card, sound cards, and other ports and expansion cards
directly or via cables. It can be considered the backbone of a computer.

Features of Motherboard
• Motherboards vary greatly in the types of components they support.
• A motherboard supports a single type of CPU and a few types of memory.
• Video cards, hard disks, and sound cards must be compatible with the motherboard to function properly.
• Motherboards, cases, and power supplies must be compatible to work properly together.

Figure 3.2 Parts of Motherboard

3.2 DATA REPRESENTATION IN A COMPUTER

Computers must not only be able to carry out computations, but they must also be able to do them
quickly and efficiently. There are several data representations, typically for integers, real numbers, and
characters.

NUMBER REPRESENTATIONS IN VARIOUS NUMERAL SYSTEMS

A numeral system is a collection of symbols used to represent small numbers, together with a system of
rules for representing larger numbers. Each numeral system uses a set of digits. The number of various unique
digits, including zero, that a numeral system uses to represent numbers is called base or radix.

Human beings use decimal (base 10) and duodecimal (base 12) number systems for counting and
measurements (probably because we have 10 fingers and two big toes). Computers use binary (base 2) number
systems, as they are made from binary digital components (known as transistors) operating in two states - on and
off. In computing, we also use hexadecimal (base 16) or octal (base 8) number systems, as a compact form for
representing binary numbers.

Decimal (Base 10) Number System

The decimal number system (also known as decimal notation) has ten symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9,
called digits. It uses positional notation. That is, the least-significant digit (right-most digit) is of the order of
10^0 (units or ones), the second right-most digit is of the order of 10^1 (tens), the third right-most digit is of the
order of 10^2 (hundreds), and so on, where ^ denotes exponent. For example,

735 = 700 + 30 + 5 = 7×10^2 + 3×10^1 + 5×10^0

Note: We shall denote a decimal number with an optional suffix D if ambiguity arises.


No. 1 Try It!
Write the number into decimal notation.

1. 635= ____+____+____=____________+______________+__________
2. 1255= ____+____+____+____=____________+______________+__________+__________
Note: Remember that you can only use the digits 0-9 in decimal notation, meaning a term like 10 × 100 is wrong.

Binary (Base 2) Number System


The binary number system has two symbols: 0 and 1, called bits. Eight bits is called a byte. It is also a
positional notation, for example,

10110B = 10000B + 0000B + 100B + 10B + 0B = 1×2^4 + 0×2^3 + 1×2^2 + 1×2^1 + 0×2^0

Note: We shall denote a binary number with a suffix B.

Binary numbers are convertible to decimal numbers. Here’s an example of a binary number, 11101.11(2),
and its representation in decimal notation.

Binary Number    1      1      1      0      1    .   1      1
Position         4      3      2      1      0        -1     -2
Place Value      2^4    2^3    2^2    2^1    2^0      2^-1   2^-2
Decimal number   16     8      4      2      1        0.5    0.25

So,
11101.11(2) = (1×16) + (1×8) + (1×4) + (0×2) + (1×1) + (1×0.5) + (1×0.25)
            = 16 + 8 + 4 + 0 + 1 + 0.5 + 0.25 = 29.75(10)

11101.11(2) = 29.75(10)

Another example:

110101(2)= (1x32) + (1x16) + (0x8) + (1x4) + (0x2) + (1x1)


= 32 + 16 + 0 + 4 + 0 + 1 = 53(10)

10110001(2)
Binary        1      0     1     1     0    0    0    1
Place value   128    64    32    16    8    4    2    1
Product       128    0     32    16    0    0    0    1
= 177(10)

11.1011(2)
Binary        1      1   .  1      0      1       1
Place value   2^1    2^0    2^-1   2^-2   2^-3    2^-4
              2      1      0.5    0.25   0.125   0.0625
Product       2      1      0.5    0      0.125   0.0625
= 3.6875(10)

1101.01(2)
Binary        1    1    0    1  .  0     1
Place value   8    4    2    1     0.5   0.25
Product       8    4    0    1     0     0.25
= 13.25(10)


No. 2 Try It!


Convert the following binary to decimal

1. 111(2)=_______+________+_______=________
2. 1110111(2)=______+______+______+______+______+______+______+______=______
3. 10101.011(2)=______+______+______+______+______+______+______+______=______

If you still don’t get the conversion, here are simple steps to make the conversion easy.

1. Draw a table with two rows, namely binary and decimal, as shown below. The decimal row starts with
   2^0, 2^1, 2^2, 2^3, … (equivalent to 1, 2, 4, 8, …) placed from right to left.

   Binary
   Decimal   128   64   32   16   8   4   2   1

2. Place the given binary number on the binary row as in the table shown below. For example, the given
   binary is 1101.

   Binary                            1   1   0   1
   Decimal   128   64   32   16     8   4   2   1

3. Multiply each binary digit by the corresponding decimal value.

   Binary    1   1   0   1
   Decimal   8   4   2   1
   Product   8   4   0   1

   Note: If there is a binary point, as in the first example, you can add columns 1/2, 1/4, 1/8, … after the
   place value '1'.

4. Then add all the products.
   8 + 4 + 0 + 1 = 13
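The table method above can also be written as a short script. This is only an illustrative sketch (Python assumed); for whole binary numbers, Python's built-in int(s, 2) gives the integer part directly.

```python
def binary_to_decimal(bits: str) -> float:
    """Convert a binary string such as '1101' or '11101.11' to its decimal value."""
    if "." in bits:
        whole, frac = bits.split(".")
    else:
        whole, frac = bits, ""
    value = 0.0
    # Integer part: place values 1, 2, 4, 8, ... from right to left
    for position, bit in enumerate(reversed(whole)):
        value += int(bit) * 2 ** position
    # Fractional part: place values 1/2, 1/4, 1/8, ... from left to right
    for position, bit in enumerate(frac, start=1):
        value += int(bit) * 2 ** -position
    return value

print(binary_to_decimal("1101"))      # 13.0
print(binary_to_decimal("11101.11"))  # 29.75
```

Each loop does exactly what steps 2-4 describe: multiply every bit by its place value, then add the products.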

How about converting the decimal to binary?

Figure 3.3 Two methods of decimal to binary conversion

You can use whichever of the two methods you are comfortable with. For uniformity, the following
examples use the first method.

Example No. 1
Convert 75(10) to binary.

We need to decompose the decimal value into powers of 2, from highest to lowest:

75(10) = 64 + 11
75(10) = 64 + 8 + 3
75(10) = 64 + 8 + 2 + 1
Then, each value is multiplied by 1 as shown below:
= 1×64 + 1×8 + 1×2 + 1×1
Convert each decimal value into exponential form:
= 1×2^6 + 1×2^3 + 1×2^1 + 1×2^0


The exponential form must be sequential, but as you can see above there are missing powers: 2^5, 2^4, and 2^2.
Each missing power is inserted in its proper place in the equation, multiplied by zero, as shown below.
= 1×2^6 + 0×2^5 + 0×2^4 + 1×2^3 + 0×2^2 + 1×2^1 + 1×2^0
Then, we remove the exponential form to get the final binary value, which is 1001011(2).

Example No. 2

890 = 512 + 378


= 512 + 256 + 122
= 512 + 256 + 64 + 58
= 512 + 256 + 64 + 32 + 26
= 512 + 256 + 64 + 32 + 16 + 10
= 512 + 256 + 64 + 32 + 16 + 8 + 2
= 1×2^9 + 1×2^8 + 1×2^6 + 1×2^5 + 1×2^4 + 1×2^3 + 1×2^1
= 1×2^9 + 1×2^8 + 0×2^7 + 1×2^6 + 1×2^5 + 1×2^4 + 1×2^3 + 0×2^2 + 1×2^1 + 0×2^0
= 1101111010(2)

Example No. 3 (Shortcut)

110(10) = 64 + 32 + 8 + 4 + 2
= 1×2^6 + 1×2^5 + 1×2^3 + 1×2^2 + 1×2^1
= 1×2^6 + 1×2^5 + 0×2^4 + 1×2^3 + 1×2^2 + 1×2^1 + 0×2^0
= 1101110(2)

57.25 = 32 + 25.25


= 32 + 16 + 9.25
= 32 + 16 + 8 + 1.25
= 32 + 16 + 8 + 1 + 0.25
= 2^5 + 2^4 + 2^3 + 2^0 + 2^-2
= 1×2^5 + 1×2^4 + 1×2^3 + 1×2^0 + 1×2^-2
= 111001.01(2)
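The "largest power of two" method used in these examples can be sketched as a short function. This is only an illustrative sketch (Python assumed); for whole numbers the built-ins bin(n) or format(n, "b") give the same answer directly.

```python
def decimal_to_binary(n: int) -> str:
    """Convert a non-negative whole number to a binary string by repeatedly
    taking out the largest power of two, as in the worked examples."""
    if n == 0:
        return "0"
    # Find the largest power of two that fits into n
    power = 1
    while power * 2 <= n:
        power *= 2
    bits = ""
    while power >= 1:
        if n >= power:      # this power of two is present: write 1
            bits += "1"
            n -= power
        else:               # this power of two is missing: write 0
            bits += "0"
        power //= 2
    return bits

print(decimal_to_binary(75))    # 1001011
print(decimal_to_binary(110))   # 1101110
```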

No. 3 Try It!


Convert the following decimal to binary

1. 275(10) =_____________________________
=_____________________________
=_____________________________
=_____________________________
2. 111(10) =_____________________________
=_____________________________
=_____________________________
=_____________________________

Hexadecimal (Base 16) Number System

Hexadecimal number system uses 16 symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F, called hex digits.


It is a positional notation, for example,

A3EH = A00H + 30H + EH = 10×16^2 + 3×16^1 + 14×16^0


We shall denote a hexadecimal number (in short, hex) with a suffix H. Some programming languages
denote hex numbers with the prefix 0x or 0X (e.g., 0x1A3C5F), or the prefix x with the hex digits quoted (e.g.,
x'C3A4D98B').

Each hexadecimal digit is also called a hex digit. Most programming languages accept lowercase 'a' to
'f' as well as uppercase 'A' to 'F'.

Computers use binary systems in their internal operations, as they are built from binary digital electronic
components with 2 states - on and off. However, writing or reading a long sequence of binary bits is cumbersome
and error-prone (try to read this binary string: 1011 0011 0100 0011 0001 1101 0001 1000B, which is the same
as hexadecimal B343 1D18H). The hexadecimal system is used as a compact form or shorthand for binary bits.
Each hex digit is equivalent to 4 binary bits, i.e., shorthand for 4 bits, as follows:


Table 3.1 Hex and its corresponding values in binary and decimal

Hexadecimal Binary Decimal


0 0000 0
1 0001 1
2 0010 2
3 0011 3
4 0100 4
5 0101 5
6 0110 6
7 0111 7
8 1000 8
9 1001 9
A 1010 10
B 1011 11
C 1100 12
D 1101 13
E 1110 14
F 1111 15

Conversion from Hexadecimal to Binary

Replace each hex digit by the 4 equivalent bits (as listed in the above table), for example,

A3C5H = 1010 0011 1100 0101B = 1010001111000101B


102AH = 0001 0000 0010 1010B = 0001000000101010B

The conversion is quite easy because of the guide table; for this reason, further examples will not
be provided.

Conversion from Binary to Hexadecimal

Starting from the right-most bit (least-significant bit), replace each group of 4 bits by the equivalent hex
digit (pad the left-most bits with zero if necessary), for example,

1001001010B = 0010 0100 1010B = 24AH


10 0010 1100 1011B = 0010 0010 1100 1011B = 22CBH

0001 1110B = 1EH

0001 0101 0000B = 150H

0011 0000 1111B = 30FH

It is important to note that hexadecimal number provides a compact form or shorthand for representing
binary bits.
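Because each hex digit stands for exactly 4 bits, both directions of the conversion are simple table lookups. The following is only an illustrative sketch (Python assumed; the built-ins int(s, 16) and format(n, "x") do the same work):

```python
# Lookup table: each hex digit mapped to its 4-bit group, as in Table 3.1
HEX_TO_BITS = {d: format(i, "04b") for i, d in enumerate("0123456789ABCDEF")}

def hex_to_binary(hex_string: str) -> str:
    """Replace every hex digit by its 4-bit group."""
    return " ".join(HEX_TO_BITS[d] for d in hex_string.upper())

def binary_to_hex(bit_string: str) -> str:
    """Pad to a multiple of 4 bits, then replace each 4-bit group by a hex digit."""
    bits = bit_string.replace(" ", "")
    bits = bits.zfill((len(bits) + 3) // 4 * 4)          # pad the left-most bits with zeros
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join("0123456789ABCDEF"[int(g, 2)] for g in groups)

print(hex_to_binary("A3C5"))        # 1010 0011 1100 0101
print(binary_to_hex("1001001010"))  # 24A
```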
No. 4 Try It!
Convert the following binary to hexadecimal.
1. 110=_________________
2. 10101011=_____________
3. 111001=_______________


Converting Hexadecimal to Decimal

There are various indirect or direct methods to convert a hexadecimal number into a decimal number.
In an indirect method, you need to convert a hexadecimal number into a binary or octal number, then you can
convert it into a decimal number.

For example, convert F1H to a decimal number. First, convert it into a binary or octal number (octal will be
discussed in the next section); for now, we will use binary.
= F1H
= (1111 0001)2, or grouped in threes, (011 110 001)2

because in binary, the values of F and 1 are 1111 and 0001 respectively. Then convert it into a decimal
number by multiplying each bit by the power of 2 of its position.

= (1×2^7 + 1×2^6 + 1×2^5 + 1×2^4 + 0×2^3 + 0×2^2 + 0×2^1 + 1×2^0)10
= (128 + 64 + 32 + 16 + 0 + 0 + 0 + 1)10 = 241(10)

However, there is a simple direct method to convert a hexadecimal number to a decimal number. Since there are
only 16 digits (from 0 to 9 and A to F) in the hexadecimal number system, we can represent any digit of the
hexadecimal number system using only 4 bits, as Table 3.1 shows.

The hexadecimal number system provides a convenient way of converting large binary numbers into
more compact and smaller groups. These are weights of hexadecimal of respective position of hexadecimal
(value of base is 16).

Most Significant Digit (MSD)        Hexadecimal Point        Least Significant Digit (LSD)

16^2      16^1      16^0      .      16^-1      16^-2      16^-3

256       16        1                1/16       1/256      1/4096

Since hexadecimal numbers are a positional number system, the weights of the positions from right to left are
16^0, 16^1, 16^2, 16^3, and so on for the integer part, and the weights of the positions from left to right after
the hexadecimal point are 16^-1, 16^-2, 16^-3, and so on for the fractional part.

You can directly convert a hexadecimal number into decimal number using the reverse method of
decimal to hexadecimal number.

Example-1 − Convert the hexadecimal number ABCDEF into a decimal number.


Since the values of the symbols A, B, C, D, E, F are 10, 11, 12, 13, 14, 15 respectively, the equivalent
decimal number is

= (ABCDEF)16
= (10×16^5 + 11×16^4 + 12×16^3 + 13×16^2 + 14×16^1 + 15×16^0)10
= (10×1,048,576) + (11×65,536) + (12×4,096) + (13×256) + (14×16) + (15×1)
= (10,485,760 + 720,896 + 49,152 + 3,328 + 224 + 15)10
= 11,259,375(10)

Example-2 − Convert the hexadecimal number 1F.01B into a decimal number.

Since the values of the symbols B and F are 11 and 15 respectively, the equivalent decimal number is

= (1F.01B)16
= (1×16^1 + 15×16^0 + 0×16^-1 + 1×16^-2 + 11×16^-3)10
= 31.0065918(10)

FF2H = 15×16^2 + 15×16^1 + 2×16^0


= 3,840 + 240 + 2
= 4,082
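The direct positional method is easy to express as a short function. This is only an illustrative sketch (Python assumed; for whole hex numbers the built-in int("FF2", 16) gives the answer directly):

```python
HEX_DIGITS = "0123456789ABCDEF"

def hex_to_decimal(hex_string: str) -> float:
    """Sum digit x 16^position, handling an optional fractional part."""
    whole, _, frac = hex_string.upper().partition(".")
    value = 0.0
    for position, digit in enumerate(reversed(whole)):
        value += HEX_DIGITS.index(digit) * 16 ** position    # weights 1, 16, 256, ...
    for position, digit in enumerate(frac, start=1):
        value += HEX_DIGITS.index(digit) * 16 ** -position   # weights 1/16, 1/256, ...
    return value

print(hex_to_decimal("FF2"))     # 4082.0
print(hex_to_decimal("1F.01B"))  # 31.006591796875
```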


No. 5 Try It!


Convert the following hexadecimal to decimal.

1. FFF =________________________________
=________________________________
=________________________________
2. F001 =_______________________________
=________________________________
=________________________________

Octal (base 8) Number System

The Octal Numbering System is very similar in principle to the hexadecimal numbering system, except that in
octal a binary number is divided into groups of only 3 bits, with each group or set of bits having a distinct value
between 000 (0) and 111 (4+2+1 = 7).

Octal numbers therefore have a range of just 8 digits (0, 1, 2, 3, 4, 5, 6, 7), making them a base-8
numbering system; the base q is therefore equal to 8.

The main characteristic of an octal numbering system is that there are only 8 distinct counting
digits, from 0 to 7, with each position having a weight that is a power of 8, starting from the least significant
digit (LSD). In the earlier days of computing, octal numbers and the octal numbering system were very popular
for counting inputs and outputs because, working in counts of eight, inputs and outputs could be handled a byte
at a time.

MSD                         Octal Number                         LSD

8^8      8^7      8^6      8^5      8^4      8^3      8^2      8^1      8^0

16M      2M       262k     32k      4k       512      64       8        1

As the base of an octal number system is 8 (base 8), which also represents the number of individual
digits used in the system, the subscript 8 is used to identify a number expressed in octal. For example, an
octal number is expressed as 237(8).

Just like the hexadecimal system, the “octal number system” provides a convenient way of converting
large binary numbers into more compact and smaller groups. However, these days the octal numbering system
is used less frequently than the more popular hexadecimal numbering system and has almost disappeared as a
digital base number system.


Table 3.2 Octal and its corresponding values in binary and decimal

Decimal Number 3-bit Binary Number Octal Number

0 000 0

1 001 1

2 010 2

3 011 3

4 100 4

5 101 5

6 110 6

7 111 7

8 001 000 10 (1+0)

9 001 001 11 (1+1)

Continuing upwards in groups of three

Then we can see that 1 octal number or digit is equivalent to 3 bits; with two octal digits, 77(8), we
can count up to 63 in decimal; with three octal digits, 777(8), up to 511 in decimal; and with four octal digits,
7777(8), up to 4095 in decimal, and so on.

Conversion of Binary to Octal

Like hexadecimal conversion, there are some easy steps to convert binary to octal. For example, convert the
binary number 1101010111001111(2) to its octal equivalent:
1. Write down the binary number: 1101010111001111
2. Group the bits into threes starting from the right-hand side: 001 101 010 111 001 111
3. Replace each group with the corresponding octal digit from the table: 1 5 2 7 1 7
Final Answer: 152717(8)

Another example: 101 010(2) = 52(8)

Conversion of Octal to Decimal


Convert the octal number 2322(8) to its decimal number equivalent.

Octal Digit Value      2      3      2      2


In polynomial form = (2×8^3) + (3×8^2) + (2×8^1) + (2×8^0)
Add the results = (1024) + (192) + (16) + (2)
Decimal number form equals: 1234(10)

547(8) = (5×8^2) + (4×8^1) + (7×8^0)


= (320 + 32 + 7)
= 359
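The same grouping and positional-weight ideas can be sketched in a few lines. This is only an illustrative sketch (Python assumed; int("2322", 8) and format(n, "o") are the built-in equivalents):

```python
def binary_to_octal(bit_string: str) -> str:
    """Group bits in threes from the right and replace each group by an octal digit."""
    bits = bit_string.replace(" ", "")
    bits = bits.zfill((len(bits) + 2) // 3 * 3)     # pad the left-most bits with zeros
    groups = [bits[i:i + 3] for i in range(0, len(bits), 3)]
    return "".join(str(int(g, 2)) for g in groups)

def octal_to_decimal(octal_string: str) -> int:
    """Sum digit x 8^position from right to left (weights 1, 8, 64, 512, ...)."""
    return sum(int(d) * 8 ** i for i, d in enumerate(reversed(octal_string)))

print(binary_to_octal("1101010111001111"))  # 152717
print(octal_to_decimal("2322"))             # 1234
print(octal_to_decimal("547"))              # 359
```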


DATA REPRESENTATION: UNITS OF INFORMATION

Data representation refers to the methods used internally to represent information stored in the
computer. Computers store lots of different types of information such as:
• Numbers
• Text
• Graphics of many varieties (Still, videos, animations)
• Sound
Though the types mentioned above seem different, all of them are stored in the computer in the same format: a
sequence of binary digits (1s and 0s), that is, machine language. How can this binary data represent things as
diverse as your selfies, your favorite song, a movie, or even your document file?

Depending on the nature of its internal representation, data items are divided into:
• Basic Types (simple or type primitive): the standard scalar predefined types that one
would expect to find ready for immediate use in any programming language.
• Structured Types (Higher Level Type): it is made up of such basic types or other existing
level types.

Basic Unit of Information

The basic unit of information in digital computers is called a bit, which is a contraction of binary digit.
In the concrete sense, a bit is nothing more than a state of “on” or “off” (high or low) within a computer circuit.
In 1964, the designers of the IBM System/360 mainframe computer established a convention of using groups of
8 bits as the basic unit of addressable computer storage, which is the byte.

Computer words consist of two or more adjacent bytes that are sometimes addressed and almost always
manipulated collectively. The word size represents the data size that is handled most efficiently by a particular
architecture. The following are other units of information and their descriptions.
• Nibble- A nibble is a unit of memory made up of 4 bits. This means it can store 16 possible binary values,
from 0000 to 1111. Numbers encoded using the binary coded decimal (BCD) system use 1 nibble to
encode each digit of the number (rather than converting the whole number into binary). For example, to
encode the denary number 75 using the BCD system, the 7 would be encoded as 0111 and
the 5 as 0101, using 2 nibbles of memory (see the short sketch after this list).
• Kilobyte- 1024 bytes are called a kilobyte (kB). When talking about computer storage rather than
computer memory, a kilobyte is often referred to as 1000 (10^3) bytes. 1 kB of memory could store roughly
one full A4 page of text.
• Megabyte- 1024 kilobytes are called a megabyte (MB). When talking about computer storage rather
than computer memory, a megabyte is often referred to as 1000 kilobytes (10^6 bytes). A typical MP3
music file is around 4 MB.
• Gigabyte- 1024 megabytes are called a gigabyte (GB). When talking about computer storage rather
than computer memory, a gigabyte is often referred to as 1000 megabytes (10^9 bytes). A typical DVD
can store around 4.7 GB of data.
• Terabyte- 1024 gigabytes are called a terabyte (TB). When talking about computer storage rather than
computer memory, a terabyte is often referred to as 1000 gigabytes (10^12 bytes). The first 1 TB hard
drive was produced in 2007.
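As mentioned in the nibble item above, BCD stores each decimal digit in its own 4-bit nibble, and the 1024-versus-1000 convention explains why memory and storage sizes are quoted differently. A minimal illustrative sketch (Python assumed; to_bcd is a made-up helper name for this example):

```python
def to_bcd(number: int) -> str:
    """Encode each decimal digit of the number in its own 4-bit nibble (BCD)."""
    return " ".join(format(int(digit), "04b") for digit in str(number))

print(to_bcd(75))   # 0111 0101  (7 -> 0111, 5 -> 0101), two nibbles = one byte
print(1024 ** 2)    # 1048576 bytes in a megabyte when counting memory in powers of two
print(10 ** 6)      # 1000000 bytes in a megabyte when counting storage in powers of ten
```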

REPRESENTATION OF INTEGERS

Integers are whole numbers or fixed-point numbers with the radix point fixed after the least-significant
bit. They contrast with real numbers or floating-point numbers, where the position of the radix point varies. It is
important to take note that integers and floating-point numbers are treated differently in computers. They have
different representations and are processed differently (e.g., floating-point numbers are processed in a so-called
floating-point processor). Floating-point numbers will be discussed later.

Computers use a fixed number of bits to represent an integer. The commonly used bit-lengths for
integers are 8-bit, 16-bit, 32-bit, or 64-bit. Besides bit-lengths, there are two representation schemes for integers:

1. Unsigned Integers: can represent zero and positive integers, but not negative integers. The value of
an unsigned integer is interpreted as "the magnitude of its underlying binary pattern".

Example 1: Suppose that n=8 and the binary pattern is 0100 0001B; the value of this unsigned integer is 1×2^0 +
1×2^6 = 65D.


Example 2: Suppose that n=16 and the binary pattern is 0001 0000 0000 1000B; the value of this unsigned integer
is 1×2^3 + 1×2^12 = 4104D.

Example 3: Suppose that n=16 and the binary pattern is 0000 0000 0000 0000B, the value of this unsigned integer
is 0.

An n-bit pattern can represent 2^n distinct integers. An n-bit unsigned integer can represent integers from
0 to (2^n)-1, as tabulated below:
n      Minimum      Maximum
8      0            (2^8)-1  (=255)
16     0            (2^16)-1 (=65,535)
32     0            (2^32)-1 (=4,294,967,295) (9+ digits)
64     0            (2^64)-1 (=18,446,744,073,709,551,615) (19+ digits)

2. Signed Integers- Signed integers can represent zero, positive integers, as well as negative integers.
Three representation schemes are available for signed integers:
a. Sign-Magnitude representation
The most-significant bit (msb) is the sign bit, with the value 0 representing a positive integer
and 1 representing a negative integer. The remaining n-1 bits represent the magnitude
(absolute value) of the integer. The absolute value of the integer is interpreted as "the
magnitude of the (n-1)-bit binary pattern".

Example 1: Suppose that n=8 and the binary representation is 0 100 0001B.
Sign bit is 0 ⇒ positive
Absolute value is 100 0001B = 65D
Hence, the integer is +65D

Example 2: Suppose that n=8 and the binary representation is 1 000 0001B.
Sign bit is 1 ⇒ negative
Absolute value is 000 0001B = 1D
Hence, the integer is -1D

Example 3: Suppose that n=8 and the binary representation is 0 000 0000B.
Sign bit is 0 ⇒ positive
Absolute value is 000 0000B = 0D
Hence, the integer is +0D

Example 4: Suppose that n=8 and the binary representation is 1 000 0000B.
Sign bit is 1 ⇒ negative
Absolute value is 000 0000B = 0D
Hence, the integer is -0D

Figure 3.4 Signed Magnitude Representation

The drawbacks of sign-magnitude representation are:


• There are two representations (0000 0000B and 1000 0000B) for the number zero,
which could lead to inefficiency and confusion.
• Positive and negative integers need to be processed separately.


b. 1's Complement representation


In 1's complement representation:
• Again, the most significant bit (msb) is the sign bit, with value of 0 representing positive
integers and 1 representing negative integers.
• The remaining n-1 bits represents the magnitude of the integer, as follows:
o for positive integers, the absolute value of the integer is equal to "the
magnitude of the (n-1)-bit binary pattern".
o for negative integers, the absolute value of the integer is equal to "the
magnitude of the complement (inverse) of the (n-1)-bit binary pattern" (hence
called 1's complement).

Example 1: Suppose that n=8 and the binary representation 0 100 0001B.
Sign bit is 0 ⇒ positive
Absolute value is 100 0001B = 65D
Hence, the integer is +65D

Example 2: Suppose that n=8 and the binary representation 1 000 0001B.
Sign bit is 1 ⇒ negative
Absolute value is the complement of 000 0001B, i.e., 111 1110B = 126D
Hence, the integer is -126D

Example 3: Suppose that n=8 and the binary representation 0 000 0000B.
Sign bit is 0 ⇒ positive
Absolute value is 000 0000B = 0D
Hence, the integer is +0D

Example 4: Suppose that n=8 and the binary representation 1 111 1111B.
Sign bit is 1 ⇒ negative
Absolute value is the complement of 111 1111B, i.e., 000 0000B = 0D
Hence, the integer is -0D

Figure 3.5 1’s Complements Representation

The major drawback is similar to sign-magnitude representation.

c. 2's Complement representation


In 2's complement representation:
• Again, the most significant bit (msb) is the sign bit, with the value of 0 representing
positive integers and 1 representing negative integers.
• The remaining n-1 bits represent the magnitude of the integer, as follows:
o for positive integers, the absolute value of the integer is equal to "the
magnitude of the (n-1)-bit binary pattern".
o for negative integers, the absolute value of the integer is equal to "the
magnitude of the complement of the (n-1)-bit binary pattern plus one" (hence
called 2's complement).
Example 1: Suppose that n=8 and the binary representation 0 100 0001B.
Sign bit is 0 ⇒ positive
Absolute value is 100 0001B = 65D
Hence, the integer is +65D


Example 2: Suppose that n=8 and the binary representation 1 000 0001B.
Sign bit is 1 ⇒ negative
Absolute value is the complement of 000 0001B plus 1, i.e., 111 1110B + 1B = 127D
Hence, the integer is -127D

Example 3: Suppose that n=8 and the binary representation 0 000 0000B.
Sign bit is 0 ⇒ positive
Absolute value is 000 0000B = 0D
Hence, the integer is +0D

Example 4: Suppose that n=8 and the binary representation 1 111 1111B.
Sign bit is 1 ⇒ negative
Absolute value is the complement of 111 1111B plus 1, i.e., 000 0000B + 1B = 1D
Hence, the integer is -1D

Figure 3.6 2’s Complements Representation
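To compare the three schemes side by side, here is a small illustrative sketch that interprets the same 8-bit pattern under each rule (Python assumed; the interpret function and scheme names are made up for this example):

```python
def interpret(bits: str, scheme: str) -> int:
    """Interpret an n-bit pattern as sign-magnitude, 1's complement, or 2's complement."""
    n = len(bits)
    magnitude = int(bits, 2)
    if bits[0] == "0":                             # sign bit 0: all three schemes agree
        return magnitude
    if scheme == "sign-magnitude":
        return -(magnitude - 2 ** (n - 1))         # drop the sign bit, negate the rest
    if scheme == "1s-complement":
        return -((2 ** n - 1) - magnitude)         # negate the bitwise complement
    if scheme == "2s-complement":
        return magnitude - 2 ** n                  # complement plus one, negated
    raise ValueError("unknown scheme")

for scheme in ("sign-magnitude", "1s-complement", "2s-complement"):
    print(scheme, interpret("10000001", scheme))
# sign-magnitude -1, 1s-complement -126, 2s-complement -127 (matching Examples 2 above)
```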

ARITHMETIC OPERATIONS ON INTEGERS

Binary arithmetic is an essential part of all digital computers and many other digital systems. We are all
familiar with the basic arithmetic operations of addition, subtraction, multiplication, and division. Let’s see how
these operations are performed on binary numbers.


Binary Addition- It is the key to binary subtraction, multiplication, and division. There are four rules of binary
addition: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 10.

In the fourth case, the addition creates a sum of 10 (1 + 1 = 10), i.e., 0 is written in the given column and a
carry of 1 goes over to the next column.

Addition Example
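To see the carry rule at work, here is a small illustrative sketch that adds two bit strings column by column, exactly as done by hand (Python assumed; int(a, 2) + int(b, 2) would give the same result directly):

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary strings column by column, propagating the carry."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result, carry = "", 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):   # right-most column first
        column_sum = int(bit_a) + int(bit_b) + carry
        result = str(column_sum % 2) + result            # digit written in this column
        carry = column_sum // 2                          # 1 + 1 = 10: carry the 1
    if carry:
        result = "1" + result
    return result

print(add_binary("101", "11"))     # 1000   (5 + 3 = 8)
print(add_binary("1101", "1011"))  # 11000  (13 + 11 = 24)
```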

No. 6 Try It!


Add the following binary.

1. 0001 + 1000        2. 1110 + 1100        3. 111101 + 111111

Binary Subtraction- Subtraction and borrow are two words that will be used very frequently in binary
subtraction. There are four rules of binary subtraction: 0 - 0 = 0, 1 - 0 = 1, 1 - 1 = 0, and 0 - 1 = 1 with a
borrow of 1 from the next column.

Example − Subtraction

No. 7 Try It!


Subtract the following binary.

1. 0001 - 1000        2. 1110 - 1100        3. 111101 - 111111

Binary Multiplication- It is similar to decimal multiplication but simpler, because only 0s and 1s are involved.
There are four rules of binary multiplication: 0 × 0 = 0, 0 × 1 = 0, 1 × 0 = 0, and 1 × 1 = 1.


No. 8 Try It!


Get the product of the following binary.

1. 0001 × 1000        2. 1110 × 1100        3. 111101 × 000111

Binary Division- It is similar to decimal division and uses the same long division procedure.
Example − Division

No. 9 Try It!


Get the quotient of the following binary.

1. 1001 ÷ 0001        2. 111 ÷ 101        3. 1100 ÷ 0011

LOGICAL OPERATIONS ON BINARY NUMBERS

The logical functions work on single-bit operands. Because most programming languages manipulate
groups of 8, 16, or 32 bits, we need to extend the definition of these logical operations beyond single-bit operands.
We can easily extend logical functions to operate on a bit-by-bit (or bitwise) basis. Given two values, a bitwise
logical function operates on bit zero of both operands producing bit zero of the result; it operates on bit one of
both operands producing bit one of the result, and so on. The basic logical operations are AND,
OR, XOR, and NOT.


Table 3.3 Truth Table of Different Logical Operators


x    y    x AND y    x OR y    x XOR y    NOT x
1    1    1          1         0          0
1    0    0          1         1          0
0    1    0          1         1          1
0    0    0          0         0          1

Example - AND
    0101
AND 1111
  = 0101

Example - OR
    0101
OR  1111
  = 1111

Example - XOR
    0101
XOR 1111
  = 1010

Example - NOT
NOT 0101
  = 1010
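Most programming languages expose these operations directly as bitwise operators. A minimal sketch of the four examples above (Python assumed; NOT is masked to 4 bits because Python integers are not fixed-width):

```python
x = 0b0101
y = 0b1111
mask = 0b1111                      # limit NOT to a 4-bit result

print(format(x & y, "04b"))        # 0101  (AND)
print(format(x | y, "04b"))        # 1111  (OR)
print(format(x ^ y, "04b"))        # 1010  (XOR)
print(format(~x & mask, "04b"))    # 1010  (NOT, within 4 bits)
```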

No. 10 Try It!


Answer the following.

1. 10101010 AND 11110001        2. 101010 OR 111110        3. 10111 XOR 11000        4. NOT 1000

FLOATING-POINT NUMBER REPRESENTATION

A floating-point number (or real number) can represent a very large value (1.23×10^88) or a very small
value (1.23×10^-88). It can also represent a very large negative number (-1.23×10^88) and a very small negative
number (-1.23×10^-88), as well as zero.

A floating-point number is typically expressed in scientific notation, with a fraction (F), and
an exponent (E) of a certain radix (r), in the form of F×r^E. Decimal numbers use radix of 10 (F×10^E); while
binary numbers use a radix of 2 (F×2^E).

The representation of the floating-point number is not unique. For example, the number 55.66 can be
represented as 5.566×10^1, 0.5566×10^2, 0.05566×10^3, and so on. The fractional part can be normalized. In
the normalized form, there is only a single non-zero digit before the radix point. For example, decimal
number 123.4567 can be normalized as 1.234567×10^2; binary number 1010.1011B can be normalized
as 1.0101011B×2^3.


It is important to note that floating-point numbers suffer from loss of precision when represented with a
fixed number of bits (e.g., 32-bit or 64-bit). This is because there are an infinite number of real numbers (even
within a small range of, say, 0.0 to 0.1). On the other hand, an n-bit binary pattern can represent only finitely many (2^n) distinct
numbers. Hence, not all the real numbers can be represented. The nearest approximation will be used instead,
resulting in a loss of accuracy.

It is also important to note that floating-point arithmetic is very much less efficient than integer
arithmetic. It could be sped up with a so-called dedicated floating-point co-processor. Hence, use integers if your
application does not require floating-point numbers.

In computers, floating-point numbers are represented in the scientific notation of fraction (F)
and exponent (E) with a radix of 2, in the form of F×2^E. Both E and F can be positive as well as negative. Modern
computers adopt IEEE 754 standard for representing floating-point numbers. There are two representation
schemes: 32-bit single-precision and 64-bit double-precision.

IEEE-754 32-bit Single-Precision Floating-Point Numbers


In 32-bit single-precision floating-point representation:
• The most significant bit is the sign bit (S), with 0 for positive numbers and 1 for negative numbers.
• The following 8 bits represent exponent (E).
• The remaining 23 bits represent fraction (F).

Normalized Form
Let's illustrate with an example, suppose that the 32-bit pattern is 1 1000 0001 011 0000 0000 0000 0000 0000,
with:
• S=1
• E = 1000 0001
• F = 011 0000 0000 0000 0000 0000
In the normalized form, the actual fraction is normalized with an implicit leading 1 in the form of 1.F. In this
example, the actual fraction is 1.011 0000 0000 0000 0000 0000 = 1 + 1×2^-2 + 1×2^-3 = 1.375D.
The sign bit represents the sign of the number, with S=0 for a positive and S=1 for a negative number. In
this example, with S=1, this is a negative number, i.e., -1.375D.
In normalized form, the actual exponent is E-127 (so-called excess-127 or bias-127). This is because we need to
represent both positive and negative exponents. With an 8-bit E, ranging from 0 to 255, the excess-127 scheme
could provide an actual exponent of -127 to 128. In this example, E-127=129-127=2D.
Hence, the number represented is -1.375×2^2=-5.5D.
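The worked example can be checked with a short sketch that rebuilds the value from the three fields and then reinterprets the same 32 bits with the standard library's struct module (Python assumed; this is only an illustrative check):

```python
import struct

pattern = "1" + "10000001" + "011" + "0" * 20     # S=1, E=1000 0001, F=011 0...0 (32 bits)
s = int(pattern[0], 2)
e = int(pattern[1:9], 2)
f = int(pattern[9:], 2)

# Normalized form: (-1)^S x 1.F x 2^(E-127)
value = (-1) ** s * (1 + f / 2 ** 23) * 2 ** (e - 127)
print(value)                                      # -5.5

# Cross-check: reinterpret the same 32 bits as an IEEE-754 single-precision float
raw = int(pattern, 2).to_bytes(4, byteorder="big")
print(struct.unpack(">f", raw)[0])                # -5.5
```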

De-Normalized Form

The normalized form has a serious problem, with an implicit leading 1 for the fraction, it cannot represent the
number zero! Convince yourself of this!

The de-normalized form was devised to represent zero and other numbers.

For E=0, the numbers are in the de-normalized form. An implicit leading 0 (instead of 1) is used for the fraction,
and the actual exponent is always -126. Hence, the number zero can be represented
with E=0 and F=0 (because 0.0×2^-126=0).

We can also represent very small positive and negative numbers in de-normalized form with E=0. For example,
if S=1, E=0, and F=011 0000 0000 0000 0000 0000. The actual fraction is 0.011=1×2^-2+1×2^-3=0.375D.
Since S=1, it is a negative number. With E=0, the actual exponent is -126. Hence the number is -0.375×2^-126 =
-4.4×10^-39, which is an extremely small negative number (close to zero).


In summary, the value (N) is calculated as follows:


• For 1 ≤ E ≤ 254, N = (-1)^S × 1.F × 2^(E-127). These numbers are in the so-called normalized form.
The sign-bit represents the sign of the number. The fractional part (1. F) is normalized with an implicit
leading 1. The exponent is biased (or in excess) of 127, so as to represent both positive and negative
exponent. The range of exponent is -126 to +127.
• For E = 0, N = (-1)^S × 0.F × 2^(-126). These numbers are in the so-called denormalized form. The
exponent of 2^-126 evaluates to a very small number. A denormalized form is needed to represent zero
(with F=0 and E=0). It can also represent very small positive and negative numbers close to zero.
• For E = 255, it represents special values, such as ±INF (positive and negative infinity) and NaN (not a
number). This is beyond the scope of this article.

Example 1: Suppose that IEEE-754 32-bit floating-point representation pattern is 0 10000000 110 0000 0000
0000 0000 0000.
Sign bit S = 0 ⇒ positive number
E = 1000 0000B = 128D (in normalized form)
Fraction is 1.11B (with an implicit leading 1) = 1 + 1×2^-1 + 1×2^-2 = 1.75D
The number is +1.75 × 2^(128-127) = +3.5D

Example 2: Suppose that IEEE-754 32-bit floating-point representation pattern is 1 01111110 100 0000 0000
0000 0000 0000.
Sign bit S = 1 ⇒ negative number
E = 0111 1110B = 126D (in normalized form)
Fraction is 1.1B (with an implicit leading 1) = 1 + 2^-1 = 1.5D
The number is -1.5 × 2^(126-127) = -0.75D

Example 3: Suppose that IEEE-754 32-bit floating-point representation pattern is 1 01111110 000 0000 0000
0000 0000 0001.
Sign bit S = 1 ⇒ negative number
E = 0111 1110B = 126D (in normalized form)
Fraction is 1.000 0000 0000 0000 0000 0001B (with an implicit leading 1) = 1 + 2^-23
The number is -(1 + 2^-23) × 2^(126-127) = -0.500000059604644775390625 (may not be exact in decimal!)

Example 4 (De-Normalized Form): Suppose that IEEE-754 32-bit floating-point representation pattern
is 1 00000000 000 0000 0000 0000 0000 0001.
Sign bit S = 1 ⇒ negative number
E = 0 (in de-normalized form)
The fraction is 0.000 0000 0000 0000 0000 0001B (with an implicit leading 0) = 1×2^-23
The number is -2^-23 × 2^(-126) = -2^(-149) ≈ -1.4×10^-45

Exercises (Floating-point Numbers)


1. Compute the largest and smallest positive numbers that can be represented in the 32-bit normalized
form.
2. Compute the largest and smallest negative numbers can be represented in the 32-bit normalized form.
3. Repeat (1) for the 32-bit denormalized form.
4. Repeat (2) for the 32-bit denormalized form.

Hints:
1. Largest positive number: S=0, E=1111 1110 (254), F=111 1111 1111 1111 1111 1111.
Smallest positive number: S=0, E=0000 0001 (1), F=000 0000 0000 0000 0000 0000.
2. Same as above, but S=1.
3. Largest positive number: S=0, E=0, F=111 1111 1111 1111 1111 1111.
Smallest positive number: S=0, E=0, F=000 0000 0000 0000 0000 0001.
4. Same as above, but S=1.


CHARACTER ENCODING (Symbol Representation)

It is important to handle character data. Character data is not just alphabetic characters, but also numeric
characters, punctuation, spaces, etc. They need to be represented in binary. There are no mathematical
properties inherent in character data, so binary codes simply have to be assigned to characters. This process is
called character encoding. Character encoding is a way to convert text data into binary numbers. In a nutshell, we
assign unique numeric values to specific characters and convert those numbers into binary. These
binary numbers can later be converted back to the original characters based on their values. This is done
through what we call a charset, or character set. A charset is a table of unique numbers assigned to different
characters such as letters, numbers, and other symbols. There are many character sets, such as the following:
• American Standard Code for Information Interchange (ASCII) is a character-encoding scheme, and
it was the first character encoding standard. It is a code for representing English characters as numbers,
with each letter assigned a number from 0 to 127. Most modern character-encoding schemes are based
on ASCII, though they support many additional characters. It is a single-byte encoding that only uses
the bottom 7 bits. In an ASCII file, each alphabetic, numeric, or special character is represented with a
7-bit binary number.
• ANSI (American National Standards Institute) codes are standardized numeric or alphabetic codes
issued by the American National Standards Institute to ensure uniform identification of geographic
entities through all federal government agencies. It has served as coordinator of the U.S. private sector,
voluntary standardization system for more than 90 years. This is essentially an extension of the ASCII
character set in that it includes all the ASCII characters with an additional 128-character code. ASCII
just defines a 7-bit code page with 128 symbols. ANSI extends this to 8 bits and there are several
different code pages for the symbols 128 to 255.
• Unicode is a universal character set, i.e., a standard that defines, in one place, all the characters needed
for writing most living languages in use on computers. It aims to be, and to a large extent already is, a
superset of all other character sets that have been encoded.
Text in a computer or on the Web is composed of characters. Characters represent letters of the
alphabet, punctuation, or other symbols.
In the past, different organizations have assembled different sets of characters and created encodings
for them – one set may cover just Latin-based Western European languages (excluding EU countries
such as Bulgaria or Greece), another may cover a particular Far Eastern language (such as Japanese),
others may be one of many sets devised in a rather ad hoc way for representing another language
somewhere in the world.

Unicode assigns each character a unique number, or code point. It defines two mapping methods, the
UTF (Unicode Transformation Format) encodings, and the UCS (Universal Character Set)
encodings. Unicode-based encodings implement the Unicode standard and include UTF-8, UTF-16, and
UTF-32/UCS-4. They go beyond 8 bits and support almost every language in the world. UTF-8 is gaining
traction as the dominant international encoding of the web. UTF-8, UTF-16, and UTF-32 are probably
the most used encodings (a short sketch comparing them follows this list).
o UTF-8 - uses 1 byte to represent characters in the ASCII set, two bytes for characters in several
more alphabetic blocks, and three bytes for the rest of the BMP (Basic Multilingual Plane).
Supplementary characters use 4 bytes.
o UTF-16 - uses 2 bytes for any character in the BMP, and 4 bytes for supplementary characters.
o UTF-32 - uses 4 bytes for all characters.
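A quick way to see these encodings side by side is to encode the same short text and inspect the bytes produced. This is only an illustrative sketch (Python assumed; Python 3 strings are Unicode, and str.encode supports the encodings named above):

```python
text = "Aé€"   # one ASCII letter, one accented letter, one symbol outside ASCII

print(ord("A"), format(ord("A"), "07b"))   # 65 1000001  (the ASCII code point in 7 bits)

for encoding in ("utf-8", "utf-16-be", "utf-32-be"):
    data = text.encode(encoding)
    print(encoding, len(data), "bytes:", data.hex(" "))

# utf-8      6 bytes: 41 c3 a9 e2 82 ac                     (1, 2, and 3 bytes per character)
# utf-16-be  6 bytes: 00 41 00 e9 20 ac                     (2 bytes per BMP character)
# utf-32-be 12 bytes: 00 00 00 41 00 00 00 e9 00 00 20 ac   (4 bytes per character)
```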

D. Learning Activities

Directions: Create a 3–5-minute video on how computers process data. You can refer to the following
videos to get an idea. Rubrics will be used to grade your output.

https://youtu.be/8xoOLerFOwg

https://youtu.be/IqzbiPuMYFQ


RUBRICS

Content
5 points: There is a very clear main idea that is well-developed with lots of detail throughout the presentation.
4 points: The main idea is clear and its development can be seen throughout the presentation.
3 points: The main idea is somewhat clear but needs more development throughout the presentation.
2 points: The main idea is not clear or well-developed.

Organization
5 points: The information is very organized, with well-constructed video continuity.
4 points: The information is organized, with well-constructed video continuity.
3 points: The information is organized but the video is not well constructed.
2 points: The information appears to be disorganized.

Creativity
5 points: The student clearly expressed and explored multiple ideas in a unique way.
4 points: The student project is explored and expressed in a fairly original way.
3 points: The student project is original but mostly based on an existing idea.
2 points: The student followed a set of directions to complete the project but did not explore new ways to alter the idea.

Total points – 50
