Computer Architecture
SC/COM/0054/23
COMP 216
ASSIGNMENT TWO:
i. Syntax rules
-Defines the structure and format of the data being transmitted.
-Ensures that both sender and receiver understand how the data is organized, for
example into packet headers and a data payload.
-Example: HTTP request headers follow a specific structure (see the HTTP request
sketch after this list).
ii. Semantics rules
-Specify the meaning of the data and how it should be interpreted.
-Ensure that the actions performed after receiving data are consistent.
-Example: Interpreting an HTTP GET request to retrieve a resource.
iii. Timing rules
-Control the synchronization and timing of data exchange.
-Ensure that devices communicate at the appropriate time and avoid conflicts or
delays.
-Example: TCP uses acknowledgments and retransmissions to handle timing issues.
iv. Error detection and correction rules
-Establish methods to identify and correct errors in data transmission.
-Ensure data integrity and reliability over potentially unreliable networks.
-Example: Cyclic Redundancy Check (CRC) values appended to data frames (see the CRC
sketch after this list).
v. Flow control rules
-Manage the rate of data transmission to prevent overwhelming the receiver.
-Example: TCP’s window size adjusts the data flow between sender and receiver.
vi. Addressing rules
-Define how devices identify and locate each other within the network.
-Example: IP addressing and DNS for mapping domain names to IP addresses.
vii. Session management rules
-Maintain and manage communication sessions, ensuring consistent connections.
-Example: Establishing and terminating sessions in protocols like HTTP or
WebSocket.
viii. Security rules
-Specify how to secure data during transmission to protect against unauthorized
access or tampering.
-Example: Encryption rules in HTTPS.
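To make the syntax and semantics rules (items i and ii) concrete, the following minimal Python sketch builds a raw HTTP/1.1 request and parses it back. The fixed layout of request line, headers, and blank line is the syntax; acting on the method (GET meaning "retrieve this resource") is the semantics. The host name and the small parsing helper are illustrative only, not part of any real library:

    # Minimal sketch: HTTP request syntax (structure) vs. semantics (meaning).
    # The request text below follows the fixed HTTP/1.1 layout: a request line,
    # header lines, then a blank line separating headers from the (empty) body.

    raw_request = (
        "GET /index.html HTTP/1.1\r\n"   # request line: method, path, version
        "Host: example.com\r\n"          # header line: name, colon, value
        "Accept: text/html\r\n"
        "\r\n"                           # blank line ends the header section
    )

    def parse_request(text):
        """Split a raw HTTP request into its structural parts (syntax)."""
        head, _, body = text.partition("\r\n\r\n")
        request_line, *header_lines = head.split("\r\n")
        method, path, version = request_line.split(" ")
        headers = dict(line.split(": ", 1) for line in header_lines)
        return method, path, version, headers, body

    method, path, version, headers, body = parse_request(raw_request)

    # Semantics: the receiver must interpret the method consistently.
    if method == "GET":
        print(f"Client wants to retrieve {path} from host {headers['Host']}")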
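Similarly, for the error-detection rules (item iv), the sketch below appends a CRC-32 value to a payload and shows the receiver detecting a corrupted bit. Python's zlib.crc32 is used here as a convenient stand-in for the CRC a real link-layer protocol would define:

    import zlib

    def add_crc(payload: bytes) -> bytes:
        """Append a 4-byte CRC-32 checksum to the payload (sender side)."""
        crc = zlib.crc32(payload)
        return payload + crc.to_bytes(4, "big")

    def check_crc(frame: bytes) -> bool:
        """Recompute the CRC over the payload and compare (receiver side)."""
        payload, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
        return zlib.crc32(payload) == received_crc

    frame = add_crc(b"hello, network")
    print(check_crc(frame))                            # True: frame arrived intact

    corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip one bit in transit
    print(check_crc(corrupted))                        # False: error detected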
Protocols are essential in network communication because they establish the rules and
guidelines for data exchange between devices, ensuring effective, reliable, and secure
communication.
i. Standardization
Protocols provide a common framework for communication, allowing devices from
different manufacturers and platforms to interact seamlessly, thus promoting
interoperability across diverse systems.
ii. Data Integrity
Protocols include mechanisms for error detection and correction, ensuring that data is
transmitted accurately without corruption or loss.
This reliability is critical for applications like file transfers or financial transactions.
iii. Efficiency
They regulate data flow, prevent congestion, and optimize resource usage, ensuring
that communication is fast and efficient even on busy networks.
For example, TCP manages data flow to avoid overwhelming slower devices.
iv. Synchronization
Timing rules in protocols help synchronize the sender and receiver, ensuring that data
is sent and processed in the correct order.
A protocol suite is a collection of related protocols that work together to facilitate data
transmission across networks.
It serves as the foundation for effective and reliable network communication in the following
ways:
i. Interoperability
Protocol suites ensure that devices from different manufacturers and systems can
communicate seamlessly. By following a common set of protocols, diverse hardware and
software platforms can work together.
ii. Standardization
Adherence to a protocol suite promotes consistency across networks, simplifying the
development, deployment, and maintenance of communication systems.
iii. Modularity
Protocol suites are often designed in layers, each serving a specific purpose (e.g., data
transmission, addressing, encryption). Adhering to a suite ensures that these layers function
cohesively, enabling the reuse and substitution of individual components without affecting
the entire system.
iv. Efficiency
By adhering to a suite, communication processes like addressing, routing, and data flow
control are streamlined, reducing overhead and optimizing network performance.
v. Security
Protocol suites often include encryption and authentication protocols to protect data during
transmission, ensuring confidentiality, integrity, and authenticity.
5. Explain how the TCP/IP model and the OSI model are used to facilitate
standardization in the communication process.
The TCP/IP and OSI models standardize network communication by providing structured
frameworks for data transmission.
The TCP/IP model defines protocols for reliable communication across networks, while the
OSI model divides the process into seven layers, ensuring clear and consistent data handling.
Both models use a layered approach to ensure interoperability between devices, allowing
communication across different systems.
They also allow protocol flexibility, making it easier to adapt to new technologies without
disrupting communication.
The separation of layers simplifies troubleshooting and supports scalability, helping networks
grow while maintaining standardized procedures.
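To make the correspondence between the two models concrete, the short sketch below prints a common textbook mapping of the seven OSI layers onto the four TCP/IP layers; the TCP/IP layer names vary slightly between textbooks, so treat the mapping as indicative:

    # Common textbook mapping of the OSI layers (top to bottom) onto TCP/IP layers.
    OSI_TO_TCPIP = [
        ("Application",  "Application"),
        ("Presentation", "Application"),
        ("Session",      "Application"),
        ("Transport",    "Transport"),
        ("Network",      "Internet"),
        ("Data Link",    "Network Access"),
        ("Physical",     "Network Access"),
    ]

    for osi_layer, tcpip_layer in OSI_TO_TCPIP:
        print(f"{osi_layer:<13} -> {tcpip_layer}")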
6. Explain how data encapsulation allows data to be transported across the network.
Data encapsulation is a process used to prepare data for transmission across a network.
It involves wrapping data in a specific format at each layer of the network protocol stack,
ensuring that it can be sent securely and efficiently from the sender to the receiver.
i. Application Layer
The process starts at the Application Layer, where data is generated by the user or
application.
ii. Transport Layer
At the Transport Layer, the data is divided into segments and a transport header (for
example a TCP or UDP header with port numbers and sequencing information) is added.
iii. Network Layer
The Network Layer adds an IP header containing the source and destination IP addresses,
turning each segment into a packet that can be routed between networks.
iv. Data Link Layer
Each packet is then encapsulated into a frame, with physical (MAC) addresses and
error-checking information added.
The Data Link Layer ensures that the data can reach the correct device on the local network.
v. Physical Layer
Finally, at the Physical Layer, the frames are converted into electrical or optical signals,
depending on the transmission medium, and transmitted over the network.
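The toy Python sketch below mirrors this top-down process: each layer wraps the unit handed down by the layer above with its own header (and, at the Data Link Layer, a trailer). The addresses, port numbers, and header formats are made up for illustration and are far simpler than real TCP, IP, and Ethernet headers:

    # Toy encapsulation: each layer wraps the unit it receives from the layer above.
    app_data = b"GET /index.html HTTP/1.1\r\n\r\n"          # Application layer data

    # Transport layer: add a simplified TCP-style header (ports, sequence number).
    segment = b"TCP|src=49152|dst=80|seq=1|" + app_data

    # Network layer: add a simplified IP-style header (source/destination addresses).
    packet = b"IP|src=192.168.1.10|dst=203.0.113.5|" + segment

    # Data link layer: add a frame header (MAC addresses) and a trailer (checksum).
    frame = b"ETH|dst=aa:bb:cc:dd:ee:ff|src=11:22:33:44:55:66|" + packet + b"|FCS"

    # Physical layer: the frame is finally serialised to raw bits for transmission.
    bits = "".join(f"{byte:08b}" for byte in frame)

    print(frame.decode())
    print(f"{len(bits)} bits placed on the wire")

At the receiving end, de-encapsulation reverses these steps, with each layer stripping its own header before passing the remaining data up to the layer above.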
7. Describe the purpose and functions of the physical layer in the network.
The Physical Layer is the lowest layer of the network model and is responsible for
transmitting raw bits over the physical connection between devices:
i. Data Transmission
It turns digital data into signals (electrical, optical, or radio) so that it can travel over cables or
wireless networks.
ii. Transmission Media
Physical layer standards define the transmission medium, deciding whether the signal will
be electrical (in copper cables), optical (in fiber optics), or radio waves (in wireless
systems).
The choice of medium affects the speed, distance, and reliability of the data transmission.
iii. Bandwidth and Data Rate
They specify the bandwidth of the physical medium, ensuring data is sent at an efficient rate.
iv. Signal Encoding and Modulation
They also define how bits are represented on the medium. This could involve techniques
like amplitude modulation (AM), frequency modulation (FM), or digital encoding methods
(see the encoding sketch at the end of this section).
v. Error Detection
While error correction is not the Physical Layer’s main task, standards may define ways to
detect transmission issues, such as signal loss or distortion, to help ensure reliable data
transfer.
vi. Network Topology
They also relate to the physical topology of the network, such as bus, star, or ring layouts.
This affects how data flows and how devices are physically arranged.
vii. Distance Limitations
These standards also set limits on how far signals can travel over the medium without losing
quality or integrity.
For example, copper twisted-pair cables are limited to about 100 metres for standard
Ethernet connections.
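As a small illustration of the digital encoding mentioned under item iv, the sketch below converts a bit string into signal levels using two simple schemes: NRZ-L, where each bit is held at a high or low level, and Manchester encoding in the IEEE 802.3 convention, where a 1 is a low-to-high transition in the middle of the bit period and a 0 is high-to-low:

    def nrz_l(bits: str) -> list:
        """NRZ-L: a 1 is a high level (+1), a 0 is a low level (-1) for the whole bit."""
        return [+1 if b == "1" else -1 for b in bits]

    def manchester(bits: str) -> list:
        """Manchester (IEEE 802.3): 1 = low-to-high mid-bit transition, 0 = high-to-low."""
        levels = []
        for b in bits:
            levels += [-1, +1] if b == "1" else [+1, -1]
        return levels

    data = "1011"
    print("NRZ-L:     ", nrz_l(data))        # [1, -1, 1, 1]
    print("Manchester:", manchester(data))   # [-1, 1, 1, -1, -1, 1, -1, 1]

Manchester encoding guarantees a transition in every bit period, which helps the receiver stay synchronized with the sender, at the cost of doubling the signalling rate compared with NRZ-L.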
9. Describe fiber optic cabling and its main advantages over other media.
Fiber optic cabling is a type of network cabling that uses light signals to transmit data. It
consists of thin strands of glass or plastic fibers, surrounded by a protective layer.
The light travels through these fibers by reflecting off the walls of the fiber, which allows
data to be transmitted over long distances with high speed and minimal loss.
Here are the main advantages of fiber optic cabling over other media:
i. High Speed
Fiber optic cables can transmit data at much higher speeds than copper cables.
They support high-bandwidth applications, making them ideal for internet connections, video
streaming, and other data-heavy tasks.
ii. Longer Transmission Distances
Light signals travelling through fiber lose far less strength over distance than electrical
signals in copper.
This makes them suitable for connecting distant locations, such as between buildings or
across cities, without the need for signal boosters (a small worked example follows this
list).
iii. Immunity to Electromagnetic Interference
Because the cables carry light rather than electrical current, they are unaffected by
electromagnetic interference (EMI).
This makes them more reliable in environments with heavy electrical equipment or other
sources of interference.
iv. Greater Security
Fiber optic cables are more secure than copper cables because they are difficult to tap into
without being detected.
The light signals inside the cable are harder to intercept than electrical signals, making fiber a
preferred choice for sensitive data transmission.
v. Thinner and Lighter Cables
Fiber strands are much thinner and lighter than copper conductors of comparable capacity.
This also saves space in cable trays and reduces the overall weight of the cabling system.
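As a rough worked example of the distance advantage mentioned under item ii, the sketch below compares signal loss over increasing distances using typical textbook figures of about 0.2 dB per km for single-mode fiber at 1550 nm and roughly 20 dB per 100 m for copper twisted pair at high signalling frequencies; the exact values depend on the cable and frequency, so the numbers are illustrative only:

    # Illustrative attenuation comparison (typical textbook values, not measurements):
    # single-mode fiber ~0.2 dB/km at 1550 nm versus copper twisted pair at roughly
    # 20 dB per 100 m at high signalling frequencies.

    FIBER_LOSS_DB_PER_KM = 0.2
    COPPER_LOSS_DB_PER_KM = 200.0   # ~20 dB per 100 m expressed per kilometre

    def remaining_power_dbm(launch_dbm: float, loss_db_per_km: float, km: float) -> float:
        """Received power after a run of the given length, ignoring connectors and splices."""
        return launch_dbm - loss_db_per_km * km

    launch = 0.0  # 0 dBm = 1 mW launch power
    for km in (0.1, 1, 10, 40):
        fiber = remaining_power_dbm(launch, FIBER_LOSS_DB_PER_KM, km)
        copper = remaining_power_dbm(launch, COPPER_LOSS_DB_PER_KM, km)
        print(f"{km:>5} km: fiber {fiber:7.1f} dBm, copper {copper:9.1f} dBm")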
10. Describe the purpose and function of the data link layer in preparing
communication for transmission on specific media.
The Data Link Layer is the second layer in the OSI model and plays a crucial role in
preparing communication for transmission over specific media, such as cables or wireless
connections. Its purpose is to ensure that data can be transferred reliably across the physical
medium.
i. Framing
The Data Link Layer breaks up large blocks of data from the Network Layer into smaller,
manageable units called frames.
Each frame contains not just the data but also control information, such as addresses and
error-checking data, to ensure successful communication (see the frame sketch at the end
of this section).
ii. Physical Addressing
The Data Link Layer adds physical (MAC) addresses to each frame.
These addresses identify the source and destination devices on the local network, ensuring
that data reaches the correct device.
iii. Error Detection and Correction
The Data Link Layer ensures data integrity by detecting and correcting errors that may occur
during transmission.
It uses techniques such as checksums or cyclic redundancy checks (CRC) to check for errors
in the frames.
iv. Flow Control
This layer controls the flow of data between devices to prevent congestion.
If a sender is transmitting data too quickly for the receiver to process, the Data Link Layer
can manage this by regulating the rate of transmission.
v. Media Access Control
It manages access to the shared transmission medium, especially in networks where multiple
devices are trying to send data at the same time.
The Data Link Layer uses protocols (e.g., CSMA/CD for Ethernet) to decide when a device
can send its data to avoid collisions.
vi. Link Establishment and Termination
Before data transmission begins, the Data Link Layer establishes a link between devices,
ensuring they are ready to communicate.
After the data is sent, the link is properly terminated to signal the end of the communication
session.
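To tie the framing, addressing, and error-checking functions together (as referenced under item i), here is a simplified sketch of building and checking an Ethernet-like frame. The MAC addresses are made up, and zlib.crc32 stands in for the real frame check sequence; actual Ethernet defines the exact header fields and CRC bit ordering:

    import zlib

    def build_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
        """Simplified frame: destination MAC + source MAC + payload + 4-byte FCS."""
        header = dst_mac + src_mac
        fcs = zlib.crc32(header + payload).to_bytes(4, "big")
        return header + payload + fcs

    def frame_is_valid(frame: bytes) -> bool:
        """Receiver side: recompute the CRC over header + payload and compare."""
        body, fcs = frame[:-4], int.from_bytes(frame[-4:], "big")
        return zlib.crc32(body) == fcs

    dst = bytes.fromhex("aabbccddeeff")   # made-up destination MAC address
    src = bytes.fromhex("112233445566")   # made-up source MAC address

    frame = build_frame(dst, src, b"IP packet goes here")
    print(frame_is_valid(frame))                          # True: frame intact

    damaged = frame[:-1] + bytes([frame[-1] ^ 0xFF])      # corrupt the last byte
    print(frame_is_valid(damaged))                        # False: error detected

If the check fails, the receiver simply discards the frame, and higher layers such as TCP arrange any necessary retransmission.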