The document discusses various algorithms and models in computing, including PageRank for web pages, software development models like spiral, RAD, and waterfall, and CPU performance factors. It also covers storage types, cloud storage operations, BIOS startup steps, fragmentation issues, and hashing algorithms. Each section highlights key concepts and their implications in technology.


(a) The PageRank algorithm assigns a numerical weight to web pages based on the
number and quality of inbound links, iteratively distributing rank from linked pages.

(b) A web page’s PageRank score increases with more high-quality inbound links, a
well-connected link structure, and lower outbound link dilution.
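The iterative rank distribution described in (a) can be sketched in Python. The damping factor, graph, and function names below are illustrative assumptions, not part of the original answer:

```python
# Minimal PageRank sketch over a hypothetical link graph, using the
# standard damped formulation: PR(p) = (1-d)/N + d * sum(PR(q)/L(q))
def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - d) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:                    # distribute rank along outbound links
                share = rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += d * share
            else:                           # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += d * rank[page] / n
        rank = new_rank
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
# C ends up with the highest rank, since it receives the most inbound rank
```

This also shows the dilution point from (b): each outbound link on "A" halves the share it passes on, because its rank is split across two targets.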

A web crawler systematically scans the internet by following links, retrieving web
page content, and sending it to a search engine for indexing.

It helps search engines build an index by analysing page metadata, keywords, and
structure to determine relevance for search queries.
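The crawl-and-index loop described above can be sketched with Python's standard library. The fetch function is injected so the sketch runs against an in-memory stand-in for the web; `LinkExtractor`, `crawl`, and the example URLs are illustrative, not from the original:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags while parsing HTML."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, fetch, limit=10):
    """Breadth-first crawl; returns {url: html} for the pages visited."""
    seen, queue, index = {seed}, deque([seed]), {}
    while queue and len(index) < limit:
        url = queue.popleft()
        html = fetch(url)          # in practice: an HTTP GET for the page
        index[url] = html          # a search engine would analyse and index this
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:  # follow links to discover new pages
            absolute = urljoin(url, href)
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return index

# Tiny in-memory "web" standing in for real HTTP responses
site = {
    "http://example.test/": '<a href="/a">A</a>',
    "http://example.test/a": '<a href="/">home</a>',
}
pages = crawl("http://example.test/", fetch=lambda u: site.get(u, ""))
```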

(a) The spiral model is an iterative software development process that incorporates
risk assessment, repeated prototyping, and gradual refinement. It is suited for
complex projects with evolving requirements and high-risk factors.

(b) Rapid application development (RAD) focuses on fast prototyping, iterative
feedback, and minimal planning, making it ideal for projects requiring quick delivery,
user involvement, and adaptability to changing requirements.

(c) The waterfall model follows a linear, sequential approach with distinct phases
such as analysis, design, implementation, and testing. It is suitable for well-defined
projects with stable requirements and minimal expected changes.

(a) (i) Addition of A to B: 11011101


(ii) Bitwise AND operation: 01100000
(iii) Bitwise XOR operation: 00011101

(b) Hexadecimal equivalents:


A = 69
B = 74
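Since (b) gives A = 69 and B = 74 in hexadecimal, the three results in (a) can be checked directly:

```python
# A = 0x69 = 01101001, B = 0x74 = 01110100 (from part (b))
A, B = 0b01101001, 0b01110100

print(format(A + B, "08b"))   # 11011101  -> addition of A and B
print(format(A & B, "08b"))   # 01100000  -> bitwise AND
print(format(A ^ B, "08b"))   # 00011101  -> bitwise XOR
print(format(A, "X"), format(B, "X"))   # 69 74 -> hexadecimal equivalents
```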

(c) 27.75 in normalised floating point form (12-bit mantissa, 4-bit exponent),
using two's complement for both parts:
011011110000 0101
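The working for 27.75 can be checked numerically, assuming the usual convention of a two's-complement fractional mantissa (sign bit first) and a two's-complement exponent:

```python
# 27.75 = 11011.11 in binary; normalising gives 0.1101111 x 2^5
mantissa = "011011110000"   # sign bit 0, then 1101111, padded to 12 bits
exponent = "0101"           # +5 as a 4-bit two's-complement value

# Bits after the sign carry weights 1/2, 1/4, 1/8, ...
frac = sum(int(bit) / 2**i for i, bit in enumerate(mantissa[1:], start=1))
value = frac * 2**int(exponent, 2)
print(value)                # 27.75
```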

(d) Sum of given floating point numbers in normalised form (6-bit mantissa, 4-bit
exponent):
1 0111

(a) Fetch phase component functions:


- Program counter: Holds the memory address of the next instruction and
increments after fetching.
- Memory address register: Stores the address of the instruction being fetched
from memory.
- Memory data register: Temporarily holds the fetched instruction before passing
it to the CPU.
- Current instruction register: Stores the current instruction being executed after
fetching.
- Data bus: Transfers the fetched instruction from memory to the CPU.
- Address bus: Sends the memory address from the MAR to main memory to
locate the instruction.

(b) Three features of the von Neumann architecture:


1. Uses a single memory for both data and instructions.
2. Executes instructions sequentially using a fetch-decode-execute cycle.
3. Uses a single set of buses for data and instructions.

(c) Three factors affecting CPU performance:


1. Clock speed: Higher clock speeds allow the CPU to execute more instructions
per second.
2. Number of cores: More cores enable parallel processing, improving
multitasking and computational performance.
3. Cache size: Larger cache reduces memory access times, speeding up data
retrieval and execution.

(d) Two differences between a CPU and a GPU:


1. A CPU is optimised for general-purpose processing with fewer, powerful cores,
whereas a GPU has many smaller cores designed for parallel tasks like graphics
rendering.
2. A CPU excels at sequential tasks and complex calculations, while a GPU is
optimised for handling massive parallel computations, such as machine learning and
image processing.

(a) Two types of local secondary storage and their uses:


1. Hard Disk Drive (HDD) – Used for storing large amounts of data such as
documents, software, and multimedia files.
2. Solid-State Drive (SSD) – Used for fast access to operating systems,
applications, and frequently accessed data.

(b) (i) Operation of cloud-based storage:


- Cloud storage allows users to store data on remote servers managed by a
third-party provider.
- Data is uploaded via the internet, stored across multiple servers, and can be
accessed from any device with an internet connection.

(ii) Two advantages and two disadvantages of cloud-based storage:


Advantages:
1. Accessible from anywhere with an internet connection.
2. Automatic backups reduce the risk of data loss.

Disadvantages:
1. Requires a stable internet connection for access.
2. Potential security risks due to data being stored on external servers.

4. Steps carried out by BIOS when a device is powered up:


1. Performs the Power-On Self-Test (POST) to check hardware components like
RAM and storage.
2. Initialises system hardware, including input/output devices and peripherals.
3. Locates and loads the bootloader from the primary storage device.
4. Transfers control to the operating system to complete the startup process.

5. (a) (i) Explanation of fragmentation and why it is undesirable:


- Fragmentation occurs when files are broken into non-contiguous blocks across
a storage device due to repeated file creation, deletion, and modification.
- It is undesirable because it slows down read/write operations as the system
must retrieve scattered file fragments, reducing overall performance.

(b) (i) Explanation of hashing algorithm and overflow area operation:


- A hashing algorithm generates a unique address (hash value) for a record
based on its key, determining its storage location.
- If a collision occurs (two records hashing to the same address), an overflow
area is used to store the extra record.
- When retrieving a record, the system applies the same hash function and
checks both the primary storage location and the overflow area if necessary.
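The hash-then-overflow behaviour described above can be sketched as follows; the table size, division-remainder hash, and record values are illustrative choices:

```python
TABLE_SIZE = 7
table = [None] * TABLE_SIZE     # primary storage area
overflow = []                   # records that collided at their home address

def address(key):
    return key % TABLE_SIZE     # simple division-remainder hashing

def insert(key, record):
    slot = address(key)
    if table[slot] is None:
        table[slot] = (key, record)
    else:                       # collision: place the record in the overflow area
        overflow.append((key, record))

def lookup(key):
    slot = address(key)         # same hash function applied on retrieval
    if table[slot] and table[slot][0] == key:
        return table[slot][1]
    for k, record in overflow:  # primary slot missed: search the overflow area
        if k == key:
            return record
    return None

insert(10, "Alice")
insert(17, "Bob")               # 17 % 7 == 10 % 7 == 3, so Bob overflows
print(lookup(17))               # Bob, found via the overflow area
```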

(ii) Two criteria for comparing hashing algorithms:


1. Collision handling efficiency – How effectively the algorithm minimises and
resolves hash collisions.
2. Speed of hashing – The time taken to generate and retrieve hash values
impacts performance.
