Advanced Computer Network Assignment

The document outlines key concepts in advanced computer networking, including the OSI and TCP/IP models, LAN technologies like Ethernet and Token Ring, multiplexing techniques, network topologies, and protocols such as IP, TCP, BGP, and SNMP. It discusses the differences between circuit switching and packet switching, as well as ISDN implementations and ATM technology. The content serves as a comprehensive overview for students in a Master of Computer Applications program, focusing on both theoretical frameworks and practical applications in networking.


INTERNAL ASSIGNMENT

SESSION MARCH 2024

PROGRAM MASTER OF COMPUTER APPLICATIONS (MCA)

SEMESTER II

COURSE CODE & NAME DCA6204 – ADVANCED COMPUTER NETWORK

NAME DARSHAN SHEKARA POOJARY

ROLL NO 2314515632

1) a) The OSI (Open Systems Interconnection) model and the TCP/IP (Transmission
Control Protocol/Internet Protocol) reference model are two fundamental frameworks
for understanding network communication.

OSI Model:
 Layers: Consists of seven layers: Physical, Data Link, Network, Transport,
Session, Presentation, and Application.
 Functionality: Provides a detailed and granular approach to network
interactions. Each layer has specific functions, such as error handling (Data
Link), path determination (Network), and session management (Session).
 Development: Conceptualized by the International Organization for
Standardization (ISO) as a theoretical framework.
 Flexibility: Protocol-independent; serves as a universal set of standards.
 Usage: More of a teaching and theoretical model, rarely used directly in network
implementations.

TCP/IP Model:
 Layers: Consists of four layers: Link, Internet, Transport, and Application.
 Functionality: More pragmatic and directly aligned with the protocols used in
the internet. For instance, the Internet layer corresponds to the Network layer in
OSI, using protocols like IP for routing.
 Development: Developed by the U.S. Department of Defense to create a robust,
fault-tolerant network communication framework.
 Flexibility: Protocol-specific; built around the suite of TCP/IP protocols.
 Usage: Widely used in practical implementations and forms the foundation of
the modern internet.

Similarities:
 Layering Concept: Both models use a layered approach to simplify the complex
process of networking by breaking it down into manageable pieces.
 Modularization: Both models facilitate troubleshooting, protocol design, and
understanding by compartmentalizing functions.

Differences:
 Granularity: OSI has seven layers, offering more granularity, while TCP/IP has
four layers.
 Development Purpose: OSI was developed as a theoretical guide, whereas
TCP/IP was designed for practical implementation.
 Protocol Dependency: OSI is protocol-agnostic, while TCP/IP is closely tied to
the protocols within its suite.
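
To make the shared layering idea concrete, here is a minimal Python sketch (an illustration only, not a real protocol stack) in which each layer simply wraps the data handed down from the layer above with its own header, mimicking encapsulation on the sending side:

# Toy illustration of layered encapsulation (not a real protocol stack).
def encapsulate(payload: bytes) -> bytes:
    app = b"APP|" + payload            # Application layer adds its header
    seg = b"TCP|" + app                # Transport layer wraps the application data
    pkt = b"IP|" + seg                 # Internet/Network layer wraps the segment
    frame = b"ETH|" + pkt + b"|FCS"    # Link layer adds a frame header and trailer
    return frame

print(encapsulate(b"hello"))           # b'ETH|IP|TCP|APP|hello|FCS'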

1) b) Ethernet and Token Ring are two distinct LAN technologies that differ significantly
in their architectures and access methods.
Ethernet uses a bus or star topology and employs Carrier Sense Multiple Access with
Collision Detection (CSMA/CD) as its access method. In Ethernet, devices listen for
network traffic before transmitting. If the channel is clear, they send data. If a collision
occurs, devices stop transmitting, wait for a random time, and retry. This method is
decentralized and allows for flexible network expansion, but can lead to increased
collisions and reduced efficiency in high-traffic situations.
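
The collision-handling rule can be sketched as a truncated binary exponential backoff, shown below in Python with the classic 10 Mbps slot time; this is a simplification, since real adapters work in bit times and abort after 16 failed attempts:

import random

SLOT_TIME_US = 51.2   # slot time of classic 10 Mbps Ethernet, in microseconds

def backoff_delay(collision_count: int) -> float:
    """Random backoff delay after the given number of consecutive collisions."""
    k = min(collision_count, 10)            # the exponent is capped at 10
    slots = random.randint(0, 2**k - 1)     # wait a random number of slot times
    return slots * SLOT_TIME_US

for attempt in (1, 2, 3):                   # delays chosen after successive collisions
    print(attempt, backoff_delay(attempt))
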
Token Ring, developed by IBM, uses a ring topology where data circulates in one
direction. It employs a token-passing access method. A special packet called a token
circulates the network, and only the device holding the token can transmit data. Once
transmission is complete, the token is released for the next device. This method
provides deterministic access times and efficient performance under heavy loads, but
can be less efficient with light traffic due to token overhead.
Ethernet has become the dominant LAN technology due to its simplicity, cost-
effectiveness, and ability to evolve to higher speeds. Token Ring, while offering
advantages in certain scenarios, has largely fallen out of use due to higher costs and
limited scalability. The choice between these technologies historically depended on
factors such as network size, traffic patterns, and specific application requirements.

2) a) Multiplexing is a technique used in telecommunications and computer networks to combine multiple signals or data streams into a single transmission medium. This
allows for more efficient use of bandwidth and resources, enabling multiple users or
devices to share the same communication channel simultaneously.
Wavelength Division Multiplexing (WDM) and Frequency Division Multiplexing
(FDM) are two specific types of multiplexing techniques, but they differ in their
approach and application:
WDM is primarily used in optical fiber communications. It works by transmitting
multiple optical signals on a single fiber, with each signal carried on a different
wavelength (color) of light. These wavelengths are separated by narrow gaps to prevent
interference. At the receiving end, a demultiplexer separates the individual
wavelengths, allowing each signal to be processed independently.
FDM, on the other hand, is typically used in radio and telephone communications. It
divides the total bandwidth of a transmission medium into separate frequency bands.
Each band is then used to carry a separate signal. The signals are modulated onto
different carrier frequencies, allowing them to be transmitted simultaneously without
interfering with each other.
The key difference lies in the domain of operation: WDM operates in the optical
domain, dealing with light wavelengths, while FDM operates in the electrical domain,
dealing with frequencies. WDM is more scalable and offers higher capacity, making it
ideal for long-distance, high-bandwidth fiber optic networks. FDM is better suited for
lower-bandwidth applications and is commonly used in radio broadcasting and older
telephone systems.
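
As a rough illustration of FDM, the sketch below (all figures are made up) splits a band into equal channels separated by guard bands and prints the carrier frequency of each channel:

# Hypothetical FDM channel plan; every number here is illustrative only.
base_hz = 1_000_000        # start of the usable band
channel_bw = 100_000       # bandwidth of each channel, in Hz
guard_bw = 10_000          # guard band between adjacent channels, in Hz
num_channels = 5

for i in range(num_channels):
    low = base_hz + i * (channel_bw + guard_bw)
    carrier = low + channel_bw / 2          # carrier sits in the middle of the band
    print(f"channel {i}: {low}-{low + channel_bw} Hz, carrier {carrier:.0f} Hz")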

2) b) Network topology refers to the physical or logical arrangement of devices in a computer network. The main types of network topologies are:

 Bus Topology: All devices connect to a single cable, called the bus. Data travels
along this cable, and each device checks if the data is intended for it. While
simple and cost-effective, it's vulnerable to single points of failure.
 Star Topology: Devices connect to a central hub or switch. All communication
passes through this central point, which can be both an advantage (easy
management) and a disadvantage (single point of failure).
 Ring Topology: Devices are connected in a circular pattern, with each device
linked to two others. Data travels in one direction around the ring. It's efficient
but can be disrupted if one connection fails.
 Mesh Topology: Every device is connected to every other device. This provides
redundancy and fault tolerance but is complex and expensive to implement.
Partial mesh topologies, where only some devices have multiple connections,
are more common.
 Tree Topology: A hierarchical structure where a root node connects to lower-
level nodes, which in turn connect to even lower levels. It's scalable and
manageable but can suffer if the root node fails.
 Hybrid Topology: Combines two or more different topologies to meet specific
network requirements. For example, a star-bus topology combines elements of
both star and bus topologies.

Each topology has its advantages and disadvantages in terms of cost, reliability,
scalability, and performance. The choice depends on factors like network size, purpose,
budget, and required reliability. Modern networks often use combinations of these
topologies to leverage their respective strengths.
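
The cost trade-off between topologies is easy to see by counting links for n devices: a star needs n-1 links, a ring needs n, and a full mesh needs n(n-1)/2. A quick sketch:

def link_counts(n: int) -> dict:
    """Number of links needed to connect n devices in each topology."""
    return {
        "star": n - 1,                   # every device connects to the central hub
        "ring": n,                       # each device links to its two neighbours
        "full mesh": n * (n - 1) // 2,   # every pair of devices is directly connected
    }

print(link_counts(10))    # {'star': 9, 'ring': 10, 'full mesh': 45}
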
3) a) The Internet Protocol (IP) and Transmission Control Protocol (TCP) are fundamental
components of the TCP/IP suite, which forms the basis of internet communication.
IP operates at the network layer of the OSI model. Its primary function is to handle
addressing and routing of data packets across networks. IP assigns unique addresses (IP
addresses) to devices on a network, allowing them to be identified and located. It also
determines the best path for data packets to travel from source to destination, potentially
across multiple networks. IP is connectionless and doesn't guarantee delivery or proper
sequencing of packets.
TCP, on the other hand, operates at the transport layer. It provides reliable, ordered, and
error-checked delivery of data between applications running on hosts communicating
over an IP network. TCP achieves this through several mechanisms:

 Connection-oriented communication: It establishes a connection before data transfer begins.
 Sequencing: It numbers packets to ensure proper ordering at the destination.
 Error checking: It uses checksums to detect corrupted data (see the sketch after this list).
 Flow control: It manages the rate of data transmission to prevent overwhelming
the receiver.
 Retransmission: It resends lost or corrupted packets.
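
A minimal sketch of the 16-bit ones' complement checksum that TCP uses for error checking (computed here over a bare byte string; real TCP also covers a pseudo-header containing the IP addresses):

def internet_checksum(data: bytes) -> int:
    """16-bit ones' complement checksum, as used by TCP, UDP and the IP header."""
    if len(data) % 2:
        data += b"\x00"                            # pad to an even number of bytes
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add each 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF

print(hex(internet_checksum(b"example TCP payload")))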

Together, TCP and IP form a powerful combination. IP handles the routing of data
across networks, while TCP ensures that the data arrives completely and in the correct
order. This abstraction allows application developers to focus on their specific tasks
without worrying about the complexities of network communication.
While alternatives exist (like UDP for faster, less reliable communication), TCP/IP
remains the backbone of most internet communications due to its reliability and
widespread adoption.

3) b) Circuit switching and packet switching are two fundamental methods of data
transmission in telecommunications networks.
Circuit switching establishes a dedicated physical path between the sender and receiver
for the duration of the communication. This path remains exclusive to that connection
until it's terminated. Traditional telephone networks primarily use this method.
Advantages of circuit switching include:

 Guaranteed bandwidth and quality of service
 Low and constant latency
 Simplicity in implementation

However, circuit switching has disadvantages:

 Inefficient use of network resources, as the circuit remains reserved even when
idle
 Limited flexibility in handling different types of data
 Higher cost for long-distance communications

Packet switching, in contrast, breaks data into smaller packets, each routed
independently through the network. The Internet primarily uses this method.
Advantages of packet switching include:

 Efficient use of network resources, as multiple communications can share the same paths
 Greater flexibility in handling various data types and volumes
 Better fault tolerance, as packets can be rerouted if a path fails
 More cost-effective for data transmission

Disadvantages of packet switching include:

 Potential for variable latency and jitter
 Possibility of packet loss or out-of-order delivery
 More complex implementation requiring additional protocols for reliability

In modern networks, packet switching dominates due to its efficiency and flexibility,
especially for data communications. However, circuit switching still finds use in
scenarios requiring guaranteed quality of service, like some real-time applications.
Hybrid approaches also exist, combining elements of both to leverage their respective
strengths for specific use cases.
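
A back-of-the-envelope comparison (all numbers hypothetical) shows why packet switching uses a link more efficiently when traffic is bursty: circuit switching must reserve a full channel per user, while packet switching only has to cope with the users who are active at the same moment:

from math import comb

link_kbps = 1_000       # hypothetical link capacity
user_kbps = 100         # rate each user needs while active
p_active = 0.1          # fraction of time a user is actually sending

circuit_users = link_kbps // user_kbps       # 10 users, one reserved channel each

def overload_prob(n_users: int) -> float:
    """Probability that more than link_kbps/user_kbps users transmit at once."""
    k_max = link_kbps // user_kbps
    return sum(comb(n_users, k) * p_active**k * (1 - p_active)**(n_users - k)
               for k in range(k_max + 1, n_users + 1))

print("circuit switching supports", circuit_users, "users")
print("packet switching, 35 users, overload probability %.4f" % overload_prob(35))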

4) a) Border Gateway Protocol (BGP) is the primary protocol for inter-domain routing on
the Internet, facilitating communication between different autonomous systems (AS).
BGP operates as a path-vector protocol, exchanging routing information between BGP
speakers (routers) in different AS. Unlike intra-domain protocols that focus on finding
the shortest path within a network, BGP's primary concern is policy-based routing
between networks.

Key aspects of BGP operation include:

 Establishing TCP-based connections between BGP peers
 Exchanging network reachability information, including AS path and other
attributes
 Making routing decisions based on path attributes and local policies
 Maintaining routing tables and propagating updates as network conditions
change

BGP differs from intra-domain protocols like OSPF (Open Shortest Path First) and RIP
(Routing Information Protocol) in several ways:

 Scope: BGP operates between AS, while OSPF and RIP work within a single
AS.
 Scalability: BGP is designed to handle the massive scale of the Internet, unlike
intra-domain protocols.
 Metrics: BGP uses multiple attributes for path selection, whereas OSPF uses
cost and RIP uses hop count.
 Convergence: BGP converges slower but is more stable, while OSPF and RIP
converge faster but may be less stable in large networks.
 Policy control: BGP allows extensive policy-based routing decisions, which is
limited in OSPF and RIP.
 Protocol type: BGP is path-vector, OSPF is link-state, and RIP is distance-
vector.

These differences reflect the distinct requirements of inter-domain and intra-domain routing, with BGP prioritizing policy and scalability over finding the absolute shortest path.
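
As a simplified illustration of attribute-based selection (only two of BGP's attributes are used here; the real decision process also considers origin, MED, eBGP versus iBGP, and more), a route with a higher local preference wins and ties are broken by the shorter AS path:

# Candidate routes to the same prefix; next-hop addresses and AS numbers are made up.
routes = [
    {"next_hop": "10.0.0.1", "local_pref": 100, "as_path": [65001, 65010, 65020]},
    {"next_hop": "10.0.0.2", "local_pref": 200, "as_path": [65002, 65020]},
    {"next_hop": "10.0.0.3", "local_pref": 200, "as_path": [65003, 65015, 65020]},
]

# Prefer the highest local_pref, then the shortest AS path.
best = max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))
print("best route via", best["next_hop"])      # best route via 10.0.0.2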

4) b) Integrated Services Digital Network (ISDN) is a set of communication standards for digital transmission over ordinary telephone copper wire as well as over other media.
ISDN was designed to provide a digital alternative to analog telephone systems,
offering faster data transmission speeds and the ability to carry voice and data
simultaneously.
ISDN provides end-to-end digital connectivity for delivering a wide range of services,
including voice and non-voice services. It offers circuit-switched connections for voice
and data services and packet-switched connections for data communications.
There are two main ISDN implementations:

Basic Rate Interface (BRI):

 Consists of two 64 Kbps B (Bearer) channels and one 16 Kbps D (Delta)
channel
 Total bandwidth: 144 Kbps
 Typically used for residential and small business applications
 Can support two simultaneous connections (voice or data)

Primary Rate Interface (PRI):

 In North America and Japan: 23 B channels and 1 D channel (23B+D)
 In Europe and Australia: 30 B channels and 1 D channel (30B+D)
 Each B channel: 64 Kbps; D channel: 64 Kbps
 Total bandwidth: 1.544 Mbps (T1) or 2.048 Mbps (E1); the arithmetic is sketched after this list
 Used by larger organizations with higher capacity needs
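
A quick check of the channel arithmetic (assuming the usual framing overhead: 8 Kbps of framing bits on a T1 and one 64 Kbps framing/synchronization slot on an E1):

B, D_BRI, D_PRI = 64, 16, 64        # channel rates in Kbps

bri = 2 * B + D_BRI                 # 2B+D                  -> 144 Kbps
pri_t1 = 23 * B + D_PRI + 8         # 23B+D plus T1 framing -> 1544 Kbps (1.544 Mbps)
pri_e1 = 30 * B + D_PRI + 64        # 30B+D plus E1 slot 0  -> 2048 Kbps (2.048 Mbps)

print(bri, pri_t1, pri_e1)          # 144 1544 2048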

The main differences between BRI and PRI are:

 Capacity: PRI offers significantly more bandwidth and channels than BRI
 Application: BRI is for smaller-scale use, while PRI is for larger organizations
 Cost: PRI is more expensive due to its higher capacity
 Hardware: PRI requires more complex equipment to manage the additional
channels

While ISDN technology has been largely superseded by newer broadband technologies
in many areas, it still finds use in specific applications, particularly in telephony and as
a backup for other communication systems.

5) a) Simple Network Management Protocol (SNMP) is a widely used protocol for monitoring and managing network devices. It operates on the application layer and
follows a manager-agent model.
Key components of SNMP operation include:

 Managers: Network management stations that collect and process information from agents.
 Agents: Software on managed devices that collect and store management
information.
 Management Information Base (MIB): A hierarchical database of information
about managed devices.
 Protocol Data Units (PDUs): Messages exchanged between managers and
agents.

SNMP operations primarily involve the following; a toy sketch of a GET/SET exchange appears after the list:

 GET: Managers request information from agents.
 SET: Managers modify values on agents.
 TRAP: Agents send unsolicited notifications to managers.
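
The manager-agent exchange can be sketched with a toy in-memory model (no real ASN.1/BER encoding, UDP transport, or community/user handling, just the idea of an agent answering GET and SET requests against its MIB; the OIDs shown are the standard sysDescr.0 and sysUpTime.0):

# Toy SNMP-like agent: the MIB is modelled as a flat dict of OID -> value.
class Agent:
    def __init__(self):
        self.mib = {
            "1.3.6.1.2.1.1.1.0": "Linux router",   # sysDescr.0
            "1.3.6.1.2.1.1.3.0": 123456,           # sysUpTime.0 (hypothetical value)
        }

    def get(self, oid):
        return self.mib.get(oid)                   # GET: manager reads a value

    def set(self, oid, value):
        self.mib[oid] = value                      # SET: manager modifies a value

agent = Agent()
print(agent.get("1.3.6.1.2.1.1.1.0"))              # manager issues a GET
agent.set("1.3.6.1.2.1.1.1.0", "Edge router 1")    # manager issues a SET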

SNMPv3 significantly enhances security compared to SNMPv2 in several ways:

 Authentication: SNMPv3 implements user-based security, requiring username and password authentication. This prevents unauthorized access to device
information and controls.
 Encryption: It provides privacy through encryption of SNMP messages,
protecting sensitive data from eavesdropping.
 Integrity: SNMPv3 ensures that messages haven't been tampered with during
transmission.
 Access Control: It offers fine-grained access control, allowing administrators to
define what operations and what information each user can access.
 Engine ID: SNMPv3 uses a unique engine ID for each SNMP entity, enhancing
security and preventing message replay attacks.

These security features address major vulnerabilities in SNMPv2, which relied on community strings (essentially cleartext passwords) for authentication and lacked
encryption.
While SNMPv3's enhanced security comes at the cost of increased complexity and
computational overhead, it's crucial for protecting sensitive network management
operations, especially in environments where security is a priority.

5) b) Asynchronous Transfer Mode (ATM) is a high-speed, connection-oriented switching and multiplexing technology designed to support various types of traffic including
voice, video, and data.
The ATM protocol architecture consists of three main layers:

 Physical Layer: Handles the transmission of bits over the physical medium.
 ATM Layer: Responsible for cell switching and multiplexing. It manages cell
header generation/extraction, cell routing, and traffic management.
 ATM Adaptation Layer (AAL): Adapts higher-layer protocols to the ATM layer,
segmenting data into cells and reassembling them at the destination.

ATM uses fixed-size 53-byte cells (48 bytes payload, 5 bytes header) for all
communications, allowing for efficient switching and multiplexing.
Virtual Channel Connections (VCCs) in ATM networks are established through a
process involving several steps:

 Signaling: The source node sends a setup message specifying the destination
and required Quality of Service (QoS) parameters.
 Route Selection: The network determines the best path based on available
resources and QoS requirements.
 Resource Allocation: Network switches along the chosen path allocate
necessary resources (bandwidth, buffers) for the connection.
 Virtual Path Identifier (VPI) and Virtual Channel Identifier (VCI) Assignment:
Each switch assigns unique VPI/VCI values for the connection on its input and
output ports.
 Connection Table Update: Switches update their connection tables with the new
VPI/VCI mappings.
 Confirmation: The destination node sends a confirmation message back to the
source.
 Data Transfer: Once established, data can flow through the VCC.
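
The connection-table step can be illustrated with a toy switching table (port numbers and VPI/VCI values are made up): each incoming cell is looked up by (input port, VPI, VCI) and forwarded with rewritten identifiers, which is how the per-connection state set up during signalling is used during data transfer:

# Toy ATM switch table: (in_port, vpi, vci) -> (out_port, new_vpi, new_vci)
table = {
    (1, 5, 32): (3, 7, 44),
    (2, 5, 33): (3, 7, 45),
}

def forward(cell):
    """Look up an incoming cell and rewrite its VPI/VCI for the next hop."""
    out_port, vpi, vci = table[(cell["in_port"], cell["vpi"], cell["vci"])]
    return {"out_port": out_port, "vpi": vpi, "vci": vci, "payload": cell["payload"]}

cell = {"in_port": 1, "vpi": 5, "vci": 32, "payload": b"x" * 48}   # 48-byte payload
print(forward(cell))
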
6) a) Web security is crucial for protecting sensitive information transmitted over the
Internet. Key requirements include:

 Confidentiality: Ensuring that data remains private and inaccessible to unauthorized parties.
 Integrity: Guaranteeing that data isn't altered during transmission.
 Authentication: Verifying the identity of communicating parties.
 Non-repudiation: Preventing denial of sent messages or transactions.
 Availability: Ensuring that web services remain accessible to legitimate users.

Secure Sockets Layer (SSL) and its successor, Transport Layer Security (TLS), play a
vital role in meeting these requirements. SSL/TLS operates between the application and
transport layers, providing a secure channel for communication.

Key aspects of SSL/TLS include the following (a short usage example follows the list):

 Encryption: SSL uses symmetric encryption for data transfer, protecting
confidentiality.
 Digital Certificates: These authenticate the identity of the server (and optionally
the client), addressing the authentication requirement.
 Public Key Cryptography: Used for secure key exchange and digital signatures,
supporting both confidentiality and integrity.
 Message Authentication Codes (MACs): Ensure data integrity by detecting any
alterations during transmission.
 Handshake Protocol: Negotiates encryption algorithms and keys, and
authenticates the server before data transfer begins.
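
A short example using Python's standard ssl module (it assumes outbound network access; example.com stands in for whatever server you want to reach) that performs the handshake, verifies the server certificate against the system trust store, and reports what was negotiated:

import socket
import ssl

host = "example.com"                        # stand-in host name
context = ssl.create_default_context()      # verifies the certificate and host name

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("protocol:", tls.version())   # e.g. TLSv1.3
        print("cipher:", tls.cipher())      # negotiated cipher suite
        print("issuer:", tls.getpeercert()["issuer"])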

SSL/TLS provides a standardized, widely-supported method for secure communication, crucial for e-commerce, online banking, and any application requiring
secure data transfer. It protects against various attacks like eavesdropping, tampering,
and impersonation.
However, SSL/TLS is not a complete security solution. It must be complemented by
other measures such as secure coding practices, regular updates, and proper server
configuration to provide comprehensive web security.

6) b) Cryptography is the practice of secure communication in the presence of adversaries. Its core principles include confidentiality, integrity, authentication, and non-
repudiation. The goal is to protect information from unauthorized access or
modification.
Symmetric Key Encryption and Public Key Encryption are two fundamental
approaches in cryptography:
Symmetric Key Encryption uses a single secret key for both encryption and decryption.
It's fast and efficient but requires secure key exchange between parties. Examples
include AES and DES.
Public Key Encryption, also known as asymmetric encryption, uses a pair of keys: a
public key for encryption and a private key for decryption. It solves the key distribution
problem of symmetric encryption but is computationally more intensive. Examples
include RSA and ECC.
RSA (Rivest-Shamir-Adleman) is a widely used public key algorithm. It ensures secure
communication through:

 Key generation: Creating large prime numbers to derive public and private keys.
 Encryption: Using the recipient's public key to encrypt messages.
 Decryption: Using the recipient's private key to decrypt messages.

RSA's security relies on the difficulty of factoring large numbers. The public key can
be freely distributed, while the private key remains secret. This allows for secure
communication without prior key exchange, making it suitable for various applications,
including digital signatures and secure data transmission.
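
A toy walkthrough with deliberately small primes (far too small for real use, but enough to show key generation, encryption with the public key, and decryption with the private key):

# Toy RSA with tiny primes -- illustrative only; real keys use 2048-bit or larger moduli.
p, q = 61, 53
n = p * q                       # 3233: the modulus, part of both keys
phi = (p - 1) * (q - 1)         # 3120
e = 17                          # public exponent, chosen coprime with phi
d = pow(e, -1, phi)             # private exponent: modular inverse of e (2753 here)

message = 65                                  # plaintext encoded as an integer < n
ciphertext = pow(message, e, n)               # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)             # decrypt with the private key (d, n)
print(ciphertext, recovered)                  # recovered == 65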
