Computer Architecture

The document provides an overview of data representation in computers, focusing on number systems such as binary, denary, and hexadecimal, including their conversions and applications. It also discusses data transmission methods, computer architecture, types of software, and the distinction between the Internet and the World Wide Web. Key concepts include the role of the CPU, operating systems, interrupts, and error detection methods.

Data Representation in Computers

Data representation is a fundamental concept in computer science, as it
explains how computers store and process various forms of data. Below is a
detailed exploration of the number systems, including binary, denary
(decimal), and hexadecimal, as well as their conversions and applications.
1. Understanding Binary Representation
1.1 Why Computers Use Binary
Computers utilize the binary number system (base-2) because it aligns with
the two states of electronic circuits: on (1) and off (0). This simplicity allows
for reliable data processing and storage using transistors, which can easily
represent these two states.

1.2 Number Systems


 Denary (Decimal): A base-10 system using digits from 0 to 9.
 Binary: A base-2 system using only 0 and 1.
 Hexadecimal: A base-16 system using digits from 0 to 9 and letters A
to F.
1.3 Conversions Between Number Systems
(a) Positive Denary to Positive Binary
To convert a positive denary number to binary, repeatedly divide the number
by 2 and record the remainders. For example, converting denary 10:
1. 10 ÷ 2 = 5 remainder 0
2. 5 ÷ 2 = 2 remainder 1
3. 2 ÷ 2 = 1 remainder 0
4. 1 ÷ 2 = 0 remainder 1
Reading the remainders from bottom to top gives 1010 in binary.
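The repeated-division method above can be sketched in Python (the function name is illustrative, not from the source):

```python
def denary_to_binary(n):
    """Convert a positive denary integer to binary by repeated division by 2."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # record each remainder
        n //= 2
    # remainders were collected bottom-up, so reverse them
    return "".join(reversed(remainders))

print(denary_to_binary(10))  # 1010
```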
(b) Positive Denary to Positive Hexadecimal
To convert denary to hexadecimal, divide by 16 and record the remainders:
For example, converting denary 255:
1. 255 ÷ 16 = 15 remainder 15 (F)
2. 15 ÷ 16 = 0 remainder 15 (F)
Thus, denary 255 is represented as FF in hexadecimal.
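The same divide-by-16 approach can be sketched in Python (the function name is illustrative):

```python
HEX_DIGITS = "0123456789ABCDEF"

def denary_to_hex(n):
    """Convert a positive denary integer to hexadecimal by repeated division by 16."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(HEX_DIGITS[n % 16])  # remainder selects a hex digit
        n //= 16
    return "".join(reversed(digits))

print(denary_to_hex(255))  # FF
```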
(c) Positive Hexadecimal to Positive Binary
Each hexadecimal digit can be converted directly to a four-bit binary
equivalent:
 F → 1111
 A → 1010
For example, hexadecimal FA converts to binary as 11111010.
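This digit-by-digit mapping can be sketched in Python, since each hex digit expands to exactly four bits:

```python
def hex_to_binary(hex_str):
    """Convert a hexadecimal string to binary, four bits per hex digit."""
    return "".join(format(int(digit, 16), "04b") for digit in hex_str)

print(hex_to_binary("FA"))  # 11111010
```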
2. Hexadecimal Representation
Hexadecimal is beneficial for humans because it provides a more compact
representation of binary data: one hexadecimal digit corresponds to four
binary digits (bits), making it easier to read large binary numbers. It's
commonly used in programming and memory addressing.
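A quick Python illustration of this compactness (the 16-bit value here is chosen purely for illustration):

```python
n = 0b1101011111101011        # a 16-bit binary value
print(format(n, "X"))         # D7EB — sixteen bits become just four hex digits
print(format(n, "016b"))      # the same value written out in full binary
```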


3. Binary Addition and Overflow
(a) Adding Two Positive 8-bit Binary Integers
To add binary numbers, align them like decimal addition:

  11001010
+ 00101101
----------
  11110111

The result is 11110111, which is equivalent to denary 247.
(b) Understanding Overflow
Overflow occurs when the result of an addition exceeds the maximum value
that can be represented within the available bits. In an 8-bit system, the
maximum value is 255. If an operation results in a number greater than
this, it wraps around, leading to incorrect results.
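An 8-bit addition with wrap-around can be sketched in Python (the function name is illustrative):

```python
def add_8bit(a, b):
    """Add two 8-bit values; return the wrapped 8-bit result and an overflow flag."""
    total = a + b
    overflow = total > 255      # result no longer fits in 8 bits
    return total & 0xFF, overflow  # mask keeps only the low 8 bits

result, overflow = add_8bit(0b11001010, 0b00101101)
print(format(result, "08b"), overflow)   # 11110111 False — fits in 8 bits
print(add_8bit(200, 100))                # (44, True) — 300 wraps around
```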
4. Logical Shifts
Performing logical shifts on binary integers can manipulate their values:
 Left Shift: Multiplies the number by 2^n, where n is the number of
positions shifted.
 Right Shift: Divides the number by 2^n (using integer division).
For example:
 Left shifting 00001010 by one position results in 00010100 (20 in
denary).
 Right shifting 00001010 by one position results in 00000101 (5 in
denary).
5. Two’s Complement Representation
Two’s complement is used for representing negative integers in binary:
 To find the two's complement of a positive number:
 Invert all bits.
 Add one to the least significant bit.
For example, for +5 (00000101):
1. Invert bits: 11111010
2. Add one: 11111011, which represents -5 in an 8-bit two's complement
system.
This method simplifies arithmetic operations involving both positive and
negative numbers.
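The invert-and-add-one procedure can be sketched in Python (the function name is illustrative):

```python
def twos_complement_8bit(n):
    """Return the 8-bit two's complement bit pattern representing -n, for positive n."""
    inverted = n ^ 0xFF            # step 1: invert all 8 bits
    return (inverted + 1) & 0xFF   # step 2: add one, keeping 8 bits

print(format(twos_complement_8bit(5), "08b"))  # 11111011, i.e. -5
```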

Data Transmission: Types and Methods


Data transmission is a crucial aspect of computer networks, enabling devices
to communicate effectively. This section covers the fundamental concepts of
data packets, methods of transmission, and error detection.
1. Data Packets
1.1 Structure of a Packet
Data sent over a network is broken down into smaller units called packets.
Each packet consists of:
 Packet Header: Contains essential information for routing, including:
 Destination Address: Where the packet is headed.
 Packet Number: Indicates the order of the packet in the
sequence.
 Originator’s Address: The address of the sender.
 Payload: The actual data being transmitted.
 Trailer: Often includes error-checking information.
1.2 Packet Switching Process
Packet switching is a method where data packets are sent independently
across the network. Each packet can take different routes to reach the
destination, allowing for efficient use of network resources. The process
involves:
1. Breaking down data into packets.
2. Each packet being routed through various nodes (routers) based on its
destination address.
3. Packets may arrive out of order; they are reassembled at the
destination using the packet numbers.
This method contrasts with circuit switching, where a dedicated path is
established for the entire communication duration, making it less flexible
and less efficient for data-heavy applications such as the internet.
2. Methods of Data Transmission
2.1 Types of Transmission
Data can be transmitted using various methods, each with its advantages
and disadvantages:
 Serial Transmission: Data is sent one bit at a time over a single
channel.
 Advantages: Simplicity and reduced cost.
 Disadvantages: Slower than parallel transmission.
 Parallel Transmission: Multiple bits are sent simultaneously over
multiple channels.
 Advantages: Faster than serial transmission.
 Disadvantages: More complex and susceptible to interference.
 Simplex: Data can only flow in one direction (e.g., keyboard to
computer).
 Half-Duplex: Data can flow in both directions but not simultaneously
(e.g., walkie-talkies).
 Full-Duplex: Data can flow in both directions simultaneously (e.g.,
telephone calls).
2.2 Suitability of Transmission Methods
The choice of transmission method depends on specific scenarios:
 For high-speed connections over short distances, parallel
transmission may be suitable (e.g., internal computer buses).
 For long-distance communication, serial transmission is often
preferred due to its simplicity and reliability.
3. Universal Serial Bus (USB) Interface
The USB interface is a widely used standard for connecting devices and
transmitting data:
 It allows for both data transfer and power supply to connected devices.
 USB supports multiple data transfer modes, including low-speed (1.5
Mbps), full-speed (12 Mbps), and high-speed (480 Mbps).
 It simplifies connections by allowing plug-and-play functionality and
supports multiple devices through hubs.
4. Error Detection Methods
4.1 Need for Error Checking
Errors can occur during data transmission due to various factors like
interference or signal degradation. To ensure data integrity, error detection
methods are employed.
4.2 Common Error Detection Techniques
1. Parity Check:
 Uses an additional bit (parity bit) to ensure that the total number
of bits with value one is even (even parity) or odd (odd parity).
 Simple but limited in detecting multiple errors.
2. Checksum:
 A value calculated from a data set that is sent along with the
data; the receiver recalculates it to check for errors.
3. Echo Check:
 The sender receives back a copy of the transmitted data to verify
correctness.
4. Check Digit:
 A digit added to a number (like ISBN or barcodes) used to
validate that the number has been entered correctly by
performing calculations based on its digits.
5. Automatic Repeat Request (ARQ):
 A method where acknowledgments are used to confirm receipt of
packets; if an acknowledgment isn’t received within a specified
time, the packet is retransmitted.
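The parity and checksum ideas above can be sketched in Python (the function names and the modulo-256 checksum scheme are illustrative; real protocols use various schemes):

```python
def even_parity_bit(bits):
    """Return the parity bit that makes the total count of 1s even."""
    return bits.count("1") % 2

def checksum(data):
    """Simple checksum over a list of byte values: their sum modulo 256."""
    return sum(data) % 256

print(even_parity_bit("1011001"))  # 0 — four 1s already, count is even
print(even_parity_bit("1011011"))  # 1 — five 1s, parity bit makes six
print(checksum([10, 250]))         # 4 — (10 + 250) mod 256
```

The receiver recomputes the same value from the received data; a mismatch signals that an error occurred in transit.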

Computer Architecture
Computer architecture refers to the design and organization of a computer's
components, particularly the central processing unit (CPU). This section
covers the roles of the CPU, its components, and how they interact within a
Von Neumann architecture.

1. The Role of the Central Processing Unit (CPU)


1.1 Understanding the CPU
The CPU is often referred to as the "brain" of the computer. It processes
instructions and data input into the system, executing commands that allow
programs to function. The CPU performs arithmetic calculations, logical
operations, and controls input/output (I/O) operations.

1.2 What is a Microprocessor?


A microprocessor is a type of CPU that integrates all essential functions onto
a single semiconductor chip. It contains the arithmetic logic unit (ALU),
control unit (CU), and various registers, making it compact and efficient for
processing tasks.

2. Components of a CPU in Von Neumann Architecture


2.1 Purpose of CPU Components
In a computer following Von Neumann architecture, the CPU consists of
several key components:
 Arithmetic Logic Unit (ALU): Performs mathematical calculations
and logical operations.
 Control Unit (CU): Directs operations within the CPU by fetching
instructions from memory and decoding them.
 Registers: Small, fast storage locations within the CPU used to hold
temporary data and instructions. Key registers include:
 Program Counter (PC): Holds the address of the next
instruction.
 Memory Address Register (MAR): Stores the address in
memory where data will be read from or written to.
 Memory Data Register (MDR): Holds the data being
transferred to or from memory.
 Current Instruction Register (CIR): Contains the currently
executing instruction.
 Accumulator (ACC): Stores intermediate results of calculations.
2.2 Fetch-Decode-Execute Cycle
The fetch-decode-execute cycle is fundamental to how CPUs operate:
1. Fetch: The CU retrieves an instruction from memory using the address
in the PC.
2. Decode: The instruction is interpreted to determine which operation is
required.
3. Execute: The ALU performs the operation, and results are stored in
registers or sent back to memory.
This cycle repeats continuously as long as the computer is running.
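The cycle can be illustrated with a toy machine in Python (the instruction set, memory layout, and register names here are invented purely for illustration):

```python
# Each memory cell holds an (opcode, operand) pair for simplicity.
memory = [("LOAD", 5), ("ADD", 3), ("STORE", None), ("HALT", None)]
pc, acc = 0, 0   # program counter and accumulator
stored = None

while True:
    opcode, operand = memory[pc]  # FETCH: read the instruction at the PC
    pc += 1                       # PC now holds the address of the next instruction
    if opcode == "LOAD":          # DECODE + EXECUTE
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "STORE":
        stored = acc
    elif opcode == "HALT":
        break

print(stored)  # 8 — the result of 5 + 3
```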
3. Performance Factors in CPUs
3.1 Core, Cache, and Clock
 Core: Refers to individual processing units within a CPU. More cores
allow for parallel processing, improving performance for multitasking.
 Cache: A small amount of high-speed memory located within or close
to the CPU that stores frequently accessed data and instructions,
reducing latency. Levels include L1 (fastest), L2, and L3 caches.
 Clock Speed: Measured in hertz (Hz), it indicates how many cycles per
second a CPU can perform. Higher clock speeds generally lead to
better performance but also increase power consumption.
Together, these factors significantly influence a CPU's overall performance.

4. Instruction Set for a CPU


An instruction set is a collection of commands that a CPU can execute
directly. It defines how software communicates with hardware by specifying
operations such as arithmetic calculations, data movement, and control
commands. Each instruction corresponds to machine code that the processor
understands.
5. Embedded Systems
5.1 Purpose and Characteristics
Embedded systems are specialized computing systems designed to perform
dedicated functions within larger systems. They are typically found in
devices such as:
 Domestic appliances (e.g., microwaves)
 Automotive systems (e.g., engine control units)
 Security systems
 Industrial machines
Unlike general-purpose computers, embedded systems are optimized for
specific tasks, often with constraints on power consumption and processing
capabilities.

Types of Software and Interrupts


Understanding the different types of software and their roles is essential in
computer science. This section focuses on the distinctions between system
software and application software, the functions of operating systems, and
the concept of interrupts.
1. Difference Between System Software and Application Software
1.1 System Software
System software is designed to manage and control computer hardware and
resources. It acts as an interface between the hardware and application
software, enabling the latter to function effectively. Key characteristics
include:
 Purpose: Manages hardware resources and provides a platform for
running application software.
 Examples: Operating systems (e.g., Windows, macOS), device drivers,
and utility programs.
 Programming Language: Typically written in low-level languages to
allow direct interaction with hardware.
1.2 Application Software
Application software is developed to perform specific tasks for users. It
operates on top of system software and is more user-focused. Key
characteristics include:
 Purpose: Performs specific user tasks such as word processing,
gaming, or web browsing.
 Examples: Microsoft Office, Adobe Photoshop, web browsers.
 Programming Language: Usually written in high-level languages for
easier development and usability.
Summary of Differences
 Purpose: System software manages hardware resources; application
software performs specific tasks for users.
 Interaction: System software operates in the background; application
software interacts directly with users.
 Installation: System software is installed during system setup;
application software can be installed or removed as needed.
 Examples: Operating systems and device drivers versus word processors
and media players.
 Complexity: System software is more complex, with deep hardware
integration; application software offers simpler, user-focused functionality.
2. Role and Basic Functions of an Operating System
The operating system (OS) is a crucial component of system software that
manages computer hardware and software resources. Its basic functions
include:
 Managing Files: Organizes data storage and retrieval.
 Handling Interrupts: Responds to events like user inputs or hardware
signals.
 Providing an Interface: Offers a user interface for interaction (e.g.,
GUI).
 Managing Peripherals: Controls devices like printers and scanners
through drivers.
 Memory Management: Allocates memory space for applications and
processes.
 Multitasking Management: Allows multiple applications to run
simultaneously.
 System Security: Protects against unauthorized access.
 User Account Management: Handles user profiles and permissions.
3. Hardware, Firmware, and Operating Systems
To run application software effectively:
 Hardware: The physical components of a computer (CPU, memory,
etc.).
 Firmware: Low-level software embedded in hardware (e.g., BIOS) that
initializes hardware during boot-up.
 Operating System: Provides the necessary environment for
application software to run by managing hardware resources.
4. Role and Operation of Interrupts
4.1 What are Interrupts?
Interrupts are signals that inform the CPU about events requiring immediate
attention. They can be classified into two main types:
 Hardware Interrupts: Generated by external devices (e.g., keyboard
presses, mouse movements).
 Software Interrupts: Generated by programs (e.g., division by zero
errors).
4.2 Handling Interrupts
When an interrupt occurs:
1. The CPU pauses its current operations.
2. It saves the state of the current process.
3. An Interrupt Service Routine (ISR) is executed to handle the
interrupt.
4. After handling the interrupt, the CPU resumes its previous operations.
This mechanism ensures that critical tasks are addressed promptly without
disrupting overall system performance.

The Internet and Its Uses


Understanding the distinctions between the Internet and the World Wide Web
(WWW) is fundamental in grasping how digital communication and
information sharing occur. This section explores these concepts, as well as
related topics such as URLs, web browsers, cookies, digital currency, and
cybersecurity.
1. Difference Between the Internet and the World Wide Web
1.1 The Internet
The Internet is a vast infrastructure that connects millions of computers
worldwide. It is a global network of networks that facilitates data transfer and
communication between devices using various protocols (e.g., TCP/IP). Key
characteristics include:
 Infrastructure: Comprises hardware components like servers, routers,
and cables.
 Functionality: Supports various services such as email, file sharing,
online gaming, and web browsing.
1.2 The World Wide Web (WWW)
The World Wide Web is a system of interlinked hypertext documents
accessed via the Internet. It allows users to navigate and share information
through web pages using web browsers. Key characteristics include:
 Application Layer: Operates on top of the Internet infrastructure.
 Protocols: Primarily uses Hypertext Transfer Protocol (HTTP) for data
transmission.
Summary of Differences
 Definition: The Internet is a global network of interconnected devices;
the World Wide Web is a collection of interlinked documents.
 Nature: The Internet is hardware-based infrastructure; the WWW is a
software-oriented application.
 Functionality: The Internet supports various forms of digital
communication; the WWW focuses on accessing information via web pages.
 Protocols: The Internet uses multiple protocols (TCP/IP, FTP, etc.);
the WWW primarily uses HTTP/HTTPS.
2. Understanding Uniform Resource Locator (URL)
A Uniform Resource Locator (URL) is a text-based address used to access
resources on the web. It typically contains:
 Protocol: Indicates how data will be transferred (e.g., HTTP or HTTPS).
 Domain Name: Identifies the server hosting the resource.
 Path/File Name: Specifies the location of the resource on the server.
For example, in the URL https://www.example.com/page, https is the
protocol, www.example.com is the domain name, and /page is the specific
resource path.
3. Hypertext Transfer Protocol (HTTP) and HTTPS
3.1 HTTP
HTTP is a protocol used for transmitting hypertext over the web. It defines
how messages are formatted and transmitted between clients (e.g., web
browsers) and servers.
3.2 HTTPS
HTTPS (HTTP Secure) is an extension of HTTP that adds a layer of security by
encrypting data exchanged between clients and servers using SSL/TLS
protocols. This ensures secure transactions, especially for sensitive
information like credit card details.
4. Purpose and Functions of a Web Browser
A web browser is software that allows users to access and interact with
content on the World Wide Web. Its primary functions include:
 Rendering HTML: Displays web pages formatted in HTML.
 Storing Bookmarks/Favorites: Allows users to save links to
frequently visited pages.
 Recording User History: Keeps track of previously visited sites.
 Multiple Tabs: Enables users to open several web pages
simultaneously.
 Cookies Management: Stores small pieces of data to enhance user
experience.
 Navigation Tools: Provides tools like back/forward buttons and an
address bar for easy navigation.
5. Locating, Retrieving, and Displaying Web Pages
When a user enters a URL:
1. The browser sends a request to a Domain Name Server (DNS) to
resolve the domain name into an IP address.
2. The browser then sends an HTTP request to the corresponding web
server.
3. The server processes the request and sends back the requested HTML
page.
4. The browser renders the HTML content for display.
6. Understanding Cookies
Cookies are small pieces of data stored by a web browser that help improve
user experience by remembering information about users' preferences or
sessions. There are two main types:
 Session Cookies: Temporary cookies that expire once the user closes
their browser.
 Persistent Cookies: Remain on the user's device for a specified
period or until manually deleted; used for remembering login details or
preferences.
7. Digital Currency
7.1 Concept of Digital Currency
Digital currency exists only electronically and can be used for online
transactions without needing physical forms like cash or coins.
7.2 Blockchain Technology
Blockchain is a decentralized digital ledger that records all transactions
across a network securely and transparently. Each transaction is time-
stamped and linked to previous transactions, making it nearly impossible to
alter past records.
8. Cybersecurity
8.1 Cybersecurity Threats
Cybersecurity threats include various attacks aimed at compromising data
integrity or availability:
 Brute-force Attack: Attempting to guess passwords through trial and
error.
 Data Interception: Unauthorized access to data during transmission.
 DDoS Attack: Overwhelming a server with traffic to disrupt services.
 Malware: Malicious software designed to damage or gain
unauthorized access to systems.
8.2 Solutions for Data Protection
To safeguard against cybersecurity threats:
 Access Levels: Limit permissions based on user roles.
 Anti-malware Software: Protects against viruses, spyware, etc.
 Authentication Methods: Use strong passwords, biometrics, or two-
step verification.
 Firewalls: Monitor incoming/outgoing traffic based on security rules.
 SSL Security Protocols: Encrypt data transmitted over networks.

Automated and Emerging Technologies


This section explores automated systems, robotics, and artificial intelligence
(AI), focusing on their components, advantages, disadvantages, and
applications.
6.1 Automated Systems
1. Collaboration of Sensors, Microprocessors, and Actuators
Automated systems rely on the integration of sensors, microprocessors,
and actuators to perform tasks without human intervention:
 Sensors: Devices that detect changes in the environment and convert
physical phenomena (like temperature, light, or motion) into signals.
For example, a temperature sensor can monitor conditions in a
greenhouse.
 Microprocessors: These act as the brain of the automated system.
They process data from sensors and make decisions based on
programmed instructions. For instance, a microprocessor can analyze
data from a weather sensor to determine if irrigation is needed.
 Actuators: These are devices that carry out actions based on the
microprocessor's commands. For example, in an automated irrigation
system, actuators can open or close valves to control water flow.
2. Advantages and Disadvantages of Automated Systems
Advantages:
 Increased Efficiency: Automated systems can operate continuously
without breaks, leading to higher productivity (e.g., in manufacturing).
 Improved Accuracy: Automation reduces human error, ensuring more
consistent output quality (e.g., in assembly lines).
 Enhanced Safety: Automation can take over dangerous tasks in
industries like mining or chemical processing, reducing workplace
injuries.
 Cost Savings: Over time, automation can lower labor costs and
increase throughput (e.g., automated warehouses).
Disadvantages:
 Job Displacement: Automation can lead to unemployment as
machines replace human workers in repetitive tasks.
 High Initial Costs: Implementing automated systems can require
significant upfront investment for equipment and training.
 Technical Limitations: Automated systems may struggle with
complex tasks requiring human judgment or creativity.
 Reduced Human Interaction: Automation can diminish personal
customer service experiences (e.g., in retail).
6.2 Robotics
1. Understanding Robotics
Robotics is a branch of computer science focused on designing, constructing,
and operating robots. Robots are programmable machines capable of
carrying out tasks autonomously or semi-autonomously.
2. Characteristics of a Robot
Key characteristics include:
 Mechanical Structure: The physical framework that allows
movement.
 Electrical Components: Includes sensors (for perception),
microprocessors (for processing), and actuators (for action).
 Programmability: Robots can be programmed to perform specific
tasks or adapt to new ones.
3. Roles and Applications of Robots
Robots are used across various sectors:
 Industry: In manufacturing for assembly lines or quality control.
 Transport: Autonomous vehicles for logistics and delivery.
 Agriculture: Drones for crop monitoring and automated harvesters.
 Medicine: Surgical robots assisting in precise operations.
 Domestic Use: Robotic vacuum cleaners or lawn mowers.
Advantages:
 Increased Efficiency: Robots can work faster and longer than
humans.
 Precision: High accuracy in repetitive tasks reduces waste.
Disadvantages:
 High Initial Investment: Costly to design and implement robotic
systems.
 Limited Flexibility: Robots may not adapt easily to new tasks without
reprogramming.
6.3 Artificial Intelligence
1. Understanding Artificial Intelligence (AI)
AI refers to the simulation of human intelligence processes by machines,
particularly computer systems. This includes learning, reasoning, problem-
solving, perception, and language understanding.
2. Main Characteristics of AI
Key characteristics include:
 Data Collection & Rules Application: AI systems gather data and
apply predefined rules to make decisions.
 Reasoning Ability: AI can analyze information to draw conclusions or
make predictions.
 Learning & Adaptation: Through machine learning algorithms, AI
systems improve their performance over time based on new data.
3. Basic Operation and Components of AI Systems
AI systems typically consist of:
 Expert Systems: These have a knowledge base (information) and an
inference engine (rules for applying that knowledge). They are used for
decision-making in fields like medical diagnosis.
 Machine Learning: A subset of AI where algorithms learn from data
patterns without explicit programming. For example, recommendation
systems use machine learning to suggest products based on user
behavior.
