
RAVI LEARNING CENTER (HYDERABAD)

LICE SDE (T) TO AGM (T) COACHING STUDY MATERIAL


TOPIC: DATA COMMUNICATION

OSI AND TCP/IP MODEL / NETWORK ELEMENTS


1. ISO means International Organization for Standardization

2. OSI means Open Systems Interconnection

3. Protocol means a set of rules that governs data communication

4. A protocol contains syntax, semantics and timing

5. A de facto standard is not approved by any standards body; it becomes a standard through widespread use and may be controlled by a single company.

6. De jure standards refer to standards that are established by law or by an officially recognized standards body

7. The OSI model layers can be remembered as APSTNDP (Application, Presentation, Session, Transport, Network, Data Link, Physical, from top to bottom)

8. Data Encapsulation is the process in which extra control information (headers, and sometimes trailers) is added to a data item as it passes down the layers, to add features to it.

9. Data De-encapsulation is the reverse process of data encapsulation. The


encapsulated information is removed from the received data to obtain the
original data.

10. Data Encapsulation and De-Encapsulation (see the sketch below).
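
A minimal Python sketch of the idea (the layer names and header strings here are purely illustrative, not real protocol headers):

# Illustrative only: each layer prepends its own header on the way down
# (encapsulation); the receiver strips them on the way up (de-encapsulation).
layers = ["Transport", "Network", "Data Link"]

def encapsulate(data):
    pdu = data
    for layer in layers:                 # data moves down the stack
        pdu = f"[{layer}-header]" + pdu  # each layer adds its own header
    return pdu

def de_encapsulate(pdu):
    for layer in reversed(layers):       # receiver works back up the stack
        pdu = pdu.removeprefix(f"[{layer}-header]")
    return pdu

frame = encapsulate("HELLO")
# '[Data Link-header][Network-header][Transport-header]HELLO'
assert de_encapsulate(frame) == "HELLO"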

12. The Physical Layer is the lowest layer in the OSI (Open Systems Interconnection)
model. Its primary function is to define the hardware elements and physical
medium used for communication

Physical Medium Specification: The Physical Layer specifies the


characteristics of the hardware used for transmitting data. This includes details
such as the type of cables, connectors, voltages, and other physical attributes
required for communication.

Data Encoding and Signaling: The Physical Layer is responsible for encoding
the binary data into electrical, optical, or radio wave signals for transmission
over the physical medium.

Data Transmission Rate (Bit Rate): The Physical Layer defines the transmission
rate at which bits are sent over the network. This is often referred to as the bit
rate or data rate and is measured in bits per second (bps).

Synchronization of Bits: Ensures that the sender and receiver are synchronized
in terms of when bits are transmitted and when they are received.

Physical Topology: Specifies the arrangement of devices on the network and


how they are physically connected. This includes concepts like bus, ring, star,
and mesh topologies.

Transmission Mode: The Physical Layer defines whether communication is


simplex (one-way), half-duplex (two-way, but only one direction at a time), or full-
duplex (two-way, simultaneous communication).

Bit Order: Determines the order in which bits are transmitted over the medium,
whether it is transmitted least significant bit (LSB) first or most significant bit
(MSB) first.

Error Detection and Correction: In some cases, the Physical Layer may include
mechanisms for detecting and correcting errors that can occur during data
transmission

Physical Addressing: Assigns physical addresses (like MAC addresses in


Ethernet) to devices on the network to facilitate the delivery of data to the correct
destination.

Topology Changes: The Physical Layer deals with the physical connection and
disconnection of devices on the network, managing changes in the network
topology.

13. The Data Link Layer essentially acts as a bridge between the Physical Layer and
the Network Layer, providing a reliable link for the transmission of data between
directly connected nodes in a network. Popular protocols operating at this layer
include Ethernet, Wi-Fi, and PPP (Point-to-Point Protocol).

Frame Synchronization: Ensures proper alignment and synchronization of


frames (blocks of data) for accurate transmission and reception.

Addressing (MAC Addressing): Assigns unique Media Access Control (MAC)


addresses to devices on the network to facilitate the identification of source and
destination devices.

Error Detection and Correction: Implements error detection mechanisms to


identify and, in some cases, correct errors that may occur during data
transmission.

Flow Control: Manages the flow of data between devices to prevent congestion
and ensure efficient communication.

Access Control: Controls access to the physical medium to avoid data


collisions in shared network segments. It employs protocols like Carrier Sense
Multiple Access with Collision Detection (CSMA/CD) in Ethernet networks.

Logical Link Control (LLC): Responsible for the establishment, maintenance,


and termination of logical links between devices. It also handles error recovery
and flow control.

Error Handling: Detects errors and takes appropriate action, such as requesting
the retransmission of frames that were not received correctly.

Address Resolution Protocol (ARP): Maps IP addresses to MAC addresses,


enabling devices to communicate on a local network.

14. The Network Layer, which is the third layer in the OSI model, provides logical
addressing, routing, and forwarding of data between devices across different
networks. Here are the key functions of the Network Layer:

Logical Addressing: Assigns logical addresses (such as IP addresses in the case


of the Internet) to devices for network identification.

Routing: Determines the best path for data to travel from the source to the
destination across multiple interconnected networks. Routing is based on logical
addresses.

Path Determination: Selects the most efficient route for data transmission,
considering factors like network topology, traffic load, and available resources.

Packet Forwarding: Forwards data packets from the source to the destination
through intermediate routers based on the destination address.

Logical-Physical Address Mapping: Translates logical addresses into physical


addresses (like MAC addresses) to facilitate data link layer operations within a
local network.

Fragmentation and Reassembly: Divides large packets into smaller fragments


for transmission over networks with smaller Maximum Transmission Unit (MTU)
sizes. The Network Layer at the destination then reassembles these fragments
into the original packets.

Congestion Control: Monitors and manages network congestion to ensure


efficient data flow and prevent network saturation.

Subnetting: Divides large networks into smaller subnets, allowing for better
organization, management, and routing efficiency.

Inter-Network Communication: Enables communication between devices on


different networks. The Network Layer is responsible for creating a seamless
end-to-end communication path.

15. The Transport Layer is the fourth layer in the OSI model and is responsible for
ensuring reliable end-to-end communication and data transfer between devices
on different hosts. Here are the key functions of the Transport Layer:

Segmentation and Reassembly: Divides data from the upper layers into smaller
segments for efficient transmission. At the receiving end, these segments are
reassembled into the original data.

End-to-End Communication: Provides end-to-end communication services


between applications on different devices. It abstracts the complexities of the
underlying network and ensures that data is delivered reliably and in the correct
order.

Flow Control: Manages the rate at which data is sent between devices to
prevent overwhelming the receiver. Flow control mechanisms prevent
congestion and ensure that the sender does not flood the receiver with more
data than it can handle.

Error Detection and Correction: Implements error detection mechanisms to


identify any errors that may occur during data transmission. Depending on the
protocol used (such as TCP), the Transport Layer may also include error
correction through retransmission of lost or corrupted data.

Reliability: Ensures reliable data delivery by acknowledging the receipt of data


segments and requesting retransmission of any lost or corrupted segments.

Multiplexing and Demultiplexing: Multiplexing allows multiple communication


streams to be combined into a single connection, and demultiplexing separates
incoming data streams to deliver them to the correct higher-layer protocol or
application.

Connection Establishment and Termination: Establishes, maintains, and


terminates connections between applications on different devices. This can be
connection-oriented (as in TCP) or connectionless (as in UDP).

Port Addressing: Uses port numbers to distinguish between different services or


applications on the same device. This allows multiple services to run
concurrently on the same device, with each service being identified by its unique
port number.

Quality of Service (QoS) Management: Manages the quality of service provided


to applications by controlling factors such as bandwidth, latency, and reliability.

Congestion Control: Monitors and manages network congestion to ensure
optimal performance and prevent degradation of service.

16. The Session Layer is the fifth layer in the OSI model, and its primary function is to
establish, manage, and terminate communication sessions between two
devices. A communication session can be thought of as a dialog between two
processes, applications, or systems.

Session Establishment, Maintenance, and Termination: Manages the process


of setting up, maintaining, and tearing down sessions between applications. This
involves establishing a connection, maintaining it during data transfer, and
closing it when the communication is complete.

Dialog Control: Coordinates and manages the orderly exchange of data


between two devices or applications. It ensures that communication occurs in
an organized and synchronized manner.

Synchronization: Synchronizes the data exchange between devices, ensuring


that data is delivered in a coherent and orderly fashion, even if there are
interruptions or delays.

Token Management: Manages the use of tokens, which control access to the
communication channel. A token is a unique identifier that allows a device or
application to control the right to send data.

Full-Duplex and Half-Duplex Communication: Supports both full-duplex


(simultaneous two-way communication) and half-duplex (one-way
communication at a time) modes of communication between devices.

Session Recovery: Implements mechanisms for recovering from


communication failures, ensuring that a session can be reestablished or
continued after an interruption.

Checkpointing: Provides the ability to periodically save the current state of a


session. In case of a failure, the session can be restored from the latest
checkpoint rather than starting from the beginning.

Negotiation of Communication Parameters: Facilitates the negotiation of


communication parameters between devices, such as data transfer rates, data
formats, and error-checking methods.

Data Flow Control: Regulates the flow of data between devices to prevent
overwhelming the receiver and ensure that communication occurs at a pace that
both devices can handle.

Session Termination and Closure: Coordinates the orderly termination of a


session, ensuring that both devices are aware that the session is ending and that
any necessary cleanup tasks are performed.

17. The Presentation Layer is the sixth layer in the OSI model, and its primary
function is to translate data between the Application Layer and the lower layers
of the OSI model, ensuring that data is presented in a readable and usable
format

Data Translation: Converts data from the format used by the application layer
into a common format that can be understood by both the sender and receiver.
This involves character set translation, data encoding, and format conversion.

Character Code Translation: Handles the translation of character sets between


devices with different encoding schemes. This is crucial for ensuring that data is
interpreted correctly across different systems.

Data Compression: Compresses data to reduce the amount of bandwidth


required for transmission. This can improve network efficiency and speed up
data transfer.

Encryption and Decryption: Provides encryption of data for secure


transmission over the network. It also handles decryption of received data to
make it readable for the application layer.

Data Syntax Control: Manages the syntax of data, ensuring that the structure
and format are compatible between the communicating devices.

Data Formatting and Parsing: Prepares data for presentation to the application
layer by formatting it according to the specified syntax. This includes parsing
received data to extract the relevant information.

Graphics and Multimedia Handling: Manages the representation of graphics,


images, audio, and video. It may involve data compression, decompression, and
conversion to ensure compatibility between different systems.

Protocol Conversion: Converts data between different communication
protocols to facilitate communication between systems that may use different
standards.

Data Translation for Heterogeneous Systems: Addresses the differences in


data formats and representations between heterogeneous systems, allowing
them to communicate effectively.

ASCII to EBCDIC Conversion: In some legacy systems, the Presentation Layer


may be involved in converting data between ASCII (American Standard Code for
Information Interchange) and EBCDIC (Extended Binary Coded Decimal
Interchange Code) formats.

18. The Application Layer is the topmost layer in the OSI model, and it represents the
interface between the network and the software applications that communicate
over the network. This layer provides network services directly to end-users or
application processes.

Network Virtual Terminal: Provides a virtual terminal abstraction, allowing a


user to log in and interact with a remote system as if they were directly
connected.

File Transfer and Management: Facilitates the transfer and management of


files between devices. Protocols like FTP (File Transfer Protocol) operate at this
layer.

Email Services: Supports email communication between clients and servers.


Protocols like SMTP (Simple Mail Transfer Protocol) and POP3 (Post Office
Protocol version 3) operate at the Application Layer.

Remote Login: Allows users to log into remote systems and execute commands.
Protocols like Telnet operate at this layer.

Directory Services: Provides directory and naming services, allowing users to


find resources on the network. LDAP (Lightweight Directory Access Protocol) is
an example.

Web Browsing: Enables users to access and interact with websites using
protocols like HTTP (Hypertext Transfer Protocol).

Distributed Data Processing: Supports distributed computing and data


processing across multiple devices on the network.

Presentation of Data: Translates and presents data in a format that is
understandable to the user or application. This includes tasks such as data
formatting, encryption, and compression.

Network Management: Includes network management protocols (SNMP -


Simple Network Management Protocol) that allow for monitoring and
management of network devices.

Collaboration Services: Provides services for collaborative applications, such


as shared whiteboards, video conferencing, and collaborative document editing.

Virtual Private Network (VPN) Services: Supports secure communication over


public networks by providing VPN services.

Support for User Authentication: Facilitates user authentication and


authorization services, ensuring secure access to network resources.

Support for Network Security: Implements security mechanisms and protocols


to protect data during transmission and ensure the integrity and confidentiality
of communication.

19. The hardware address, also known as the Media Access Control (MAC) address,
is a unique identifier assigned to each network interface card (NIC) in a device.
MAC addresses are typically represented as a series of six pairs of hexadecimal
digits separated by colons or hyphens (e.g., 00:1A:2B:3C:4D:5E).

20. MAC addresses operate at the Data Link Layer (Layer 2) of the OSI model and are
used for local network communication within the same physical network
segment.

21. Port addresses are used in the context of transport layer protocols (such as TCP
and UDP) to distinguish between different services or applications on a single
device.

22. Port addresses are 16-bit numbers, and they are used in combination with IP
addresses to uniquely identify a specific process or service on a device.

23. Port addresses operate at the Transport Layer (Layer 4) of the OSI model and are
used to direct data to the correct application or service running on a device.
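
A small Python sketch of port-based delivery (the port-to-process mapping is illustrative; socket.getservbyport is a standard-library lookup of well-known service names):

import socket

# The transport layer uses the destination port number to hand incoming
# data to the correct application ("demultiplexing").
listening = {80: "web server process", 25: "mail server process"}  # illustrative

def deliver(dest_port, payload):
    service = listening.get(dest_port, "no listening process")
    return f"port {dest_port} -> {service}: {payload!r}"

print(deliver(80, b"GET / HTTP/1.1"))
print(socket.getservbyport(80))   # 'http'  (well-known service name for port 80)
print(socket.getservbyport(25))   # 'smtp'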

24. Logical addressing is used for end-to-end communication across different


networks. The most common form of logical addressing is the Internet Protocol
(IP) address.
25. IP addresses can be IPv4 (32-bit) or IPv6 (128-bit). In IPv4, addresses are typically
represented as four decimal numbers separated by dots (e.g., 192.168.1.1).

26. Logical addressing operates at the Network Layer (Layer 3) of the OSI model and
is used for global communication between devices on different networks.

27. End-to-End Delivery (Network Layer): End-to-end delivery refers to the


successful and reliable delivery of data from the source host to the destination
host across a network, regardless of the number of intermediate devices or
networks involved.

28. Process-to-Process Delivery (Transport Layer): Process-to-process delivery


focuses on the delivery of data between specific processes or applications
running on the source and destination hosts.

29. Layer 1, Layer 2 and Layer 3 in the OSI model are the network support layers

30. Layer 5, Layer 6 and Layer 7 in the OSI model are the user support layers

32. The TCP/IP model, also known as the Internet protocol suite, is a conceptual framework
used for understanding and designing network protocols.

33. It stands for Transmission Control Protocol/Internet Protocol.

34. The TCP/IP model is a set of protocols and standards that form the foundation of the
modern internet. It consists of four layers: the Link layer (or Network Access layer), the
Internet layer, the Transport layer and the Application layer.

35. Link Layer (or Network Access Layer): This layer deals with the physical connection to
the network and the transmission of raw bits over a physical medium. It includes
protocols for Ethernet, Wi-Fi, and other technologies. Network interface cards,
switches, and bridges operate at this layer.

36. Internet Layer: This layer is responsible for logical addressing, routing, and
fragmentation and reassembly of packets. The Internet Protocol (IP) operates at this
layer. Routers operate at the Internet Layer.

37. Transport Layer: This layer is concerned with end-to-end communication, ensuring that
data is reliably and accurately delivered between devices. It includes protocols like
Transmission Control Protocol (TCP) for reliable, connection-oriented communication,
and User Datagram Protocol (UDP) for connectionless, lightweight communication.
Gateways and hosts operate at the Transport Layer.

38. Application Layer: This layer deals with high-level protocols, including those for email,
file transfer, and web browsing. It provides a platform for software applications to
communicate over a network. HTTP (Hypertext Transfer Protocol), FTP (File Transfer
Protocol), SMTP (Simple Mail Transfer Protocol), and others operate at the Application
Layer.

39. Common protocols and technologies associated with the Network Access Layer in
TCP/IP include Ethernet, Wi-Fi (802.11), PPP (Point-to-Point Protocol), and others.

40. The LLC ( Logical Link Control ) sublayer is responsible for providing a reliable link
between devices over a network. It deals with flow control, error checking, and
addressing at the Data Link Layer

41. In the TCP/IP model, the most commonly used LLC protocol is the IEEE 802.2 standard,
which defines the LLC sublayer for various network technologies.

42. LSAP (Logical Service Access Point) is a concept associated with the LLC sublayer. It is a
value used to identify the type of network layer protocol being used. LSAP values are
used in IEEE 802.2 frames to indicate the upper-layer protocol for which the frame is
intended.

43. LSAP values are 8 bits long and vary for different protocols. For example, an LSAP
value of 0x06 identifies IP (Internet Protocol) as the upper-layer protocol, while 0xAA indicates SNAP encapsulation.

44. LSAP values help the receiving device at the Data Link Layer determine which higher-
layer protocol should process the data contained in the frame.

45. Ethernet typically uses various types of cabling, such as twisted pair cables (like Cat5e
or Cat6) or fiber-optic cables, to transmit data.

46. Ethernet operates at the Data Link Layer of the OSI model. The most common Ethernet
protocol is IEEE 802.3.

47. Ethernet frames consist of a header, payload (data), and a trailer. The header includes
source and destination MAC addresses, among other control information.

48. Devices on an Ethernet network are identified by their unique Media Access Control
(MAC) addresses.

49. Ethernet uses the CSMA/CD access method, where devices listen to the network before
transmitting to avoid collisions.

50. In CSMA/CD, if a collision is detected (two devices transmitting at the same time), a
backoff algorithm is used to prevent both devices from retransmitting simultaneously (a sketch of the backoff follows below).
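
A minimal sketch of the truncated binary exponential backoff used by classic Ethernet, assuming the usual 10 Mbps slot time of 51.2 microseconds and a limit of 16 attempts (an illustration, not a full MAC implementation):

import random

SLOT_TIME_US = 51.2   # slot time for 10 Mbps Ethernet, in microseconds
MAX_ATTEMPTS = 16     # the frame is dropped after 16 collisions

def backoff_delay(collision_count):
    """Random wait (in microseconds) after the nth collision."""
    if collision_count >= MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions, frame dropped")
    k = min(collision_count, 10)            # exponent is capped at 10
    slots = random.randint(0, 2 ** k - 1)   # pick a random number of slot times
    return slots * SLOT_TIME_US

# After the 3rd collision a station waits between 0 and 7 slot times.
print(backoff_delay(3))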

51. Ethernet comes in various speeds, with common standards including 10 Mbps
(Ethernet), 100 Mbps (Fast Ethernet), 1 Gbps (Gigabit Ethernet), 10 Gbps (10 Gigabit
Ethernet), and higher.

52. Ethernet variants are designated as 10BASE-T, 10BASE2, 10BASE5, 100BASE-X,
1000BASE-X, etc.

53. ICMP (Internet Control Message Protocol) is primarily used for sending error
messages and operational information about network conditions. It provides feedback
about the status of network communication.

54. ICMP includes the Echo Request and Echo Reply messages, commonly used for
network troubleshooting. The "ping" command sends Echo Request messages to a
destination host, and the host responds with Echo Reply messages.

55. IGMP (Internet Group Management Protocol) is used by hosts to report their multicast
group memberships to any neighboring multicast routers. It enables hosts to join or
leave multicast groups on a network.

56. There are multiple versions of IGMP. IGMPv2 and IGMPv3 are commonly used. IGMPv2
adds a Leave Group message to improve group management, while IGMPv3 introduces
source-specific multicast, allowing hosts to express interest in traffic from specific
sources.

57. ARP (Address Resolution Protocol):ARP is used to map an IP address to a


corresponding MAC address on a local network. When a device wants to communicate
with another device on the same network, it needs to know the MAC address of the
destination device. ARP helps in resolving this mapping.

58. RARP (Reverse Address Resolution Protocol): RARP is the reverse of ARP. It is used to
map a MAC address to an IP address. RARP is less commonly used than ARP.

59. TCP (Transmission Control Protocol) is a connection-oriented protocol. A reliable
connection is established between the sender and the receiver before data is exchanged.

60. UDP (User Datagram Protocol) is a connectionless protocol, meaning it does not
establish a dedicated connection before transmitting data. Each packet is independent.

61. TCP header details

Source Port (16 bits): Identifies the port number of the sender's application.

Destination Port (16 bits): Identifies the port number of the receiver's application.

Sequence Number (32 bits): It is used for ordering and reassembly of segments at the
receiver.

Acknowledgment Number (32 bits): If the ACK flag is set, this field contains the next
sequence number that the sender of the segment is expecting to receive. It
acknowledges receipt of all prior bytes (up to the acknowledged number).

Data Offset (4 bits): Specifies the length of the TCP header in 32-bit words. This field
indicates where the data begins. The minimum value for this field is 5, indicating a 20-
byte header.

Reserved (6 bits): Reserved for future use. Must be set to zero.

Control Flags (6 bits): Contains several control flags that control the behavior of the
TCP connection. Common flags include:
URG (Urgent): Indicates that the Urgent Pointer field is significant.
ACK (Acknowledgment): Indicates that the Acknowledgment Number field is significant.
PSH (Push): Pushes data to the receiving application without waiting for a full buffer.
RST (Reset): Resets the connection.
SYN (Synchronize): Initiates a connection.
FIN (Finish): Terminates a connection.

Window Size (16 bits): Specifies the size of the sender's receive window. It indicates
how much data the sender is willing to receive before requiring an acknowledgment.

Checksum (16 bits): Provides error-checking for the TCP header and data. It covers the
entire TCP segment.

Urgent Pointer (16 bits): If the URG flag is set, this 16-bit field is an offset from the
sequence number indicating the last urgent data byte.

Options (variable): May include optional parameters or padding. The length of the
options field is determined by the Data Offset field.

Padding (variable): Used to ensure that the header is a multiple of 32 bits.
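
The fixed 20-byte part of this header can be unpacked with a short Python sketch (the sample bytes below are fabricated for illustration):

import struct

# Fabricated header: src port 80, dst port 12345, seq 1, ack 0,
# data offset 5 (no options), SYN flag set, window 65535, checksum 0, urgent 0.
header = struct.pack("!HHIIBBHHH", 80, 12345, 1, 0, 5 << 4, 0x02, 65535, 0, 0)

src, dst, seq, ack, offset_byte, flags, window, checksum, urgent = \
    struct.unpack("!HHIIBBHHH", header)

data_offset = offset_byte >> 4      # header length in 32-bit words (here 5 = 20 bytes)
syn_set = bool(flags & 0x02)        # SYN is bit 1 of the flag bits
print(src, dst, seq, ack, data_offset * 4, syn_set, window)
# 80 12345 1 0 20 True 65535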

62. DHCP stands for Dynamic Host Configuration Protocol. It is a network protocol used to
automatically assign and manage IP addresses and configuration information to devices
on a network

63. By automating the assignment of IP addresses, DHCP helps in managing and organizing
network configurations more efficiently. It eliminates the need for manual IP address
assignments, reducing the likelihood of conflicts and making it easier to add or remove
devices from the network.

64. DHCP is commonly used in local area networks (LANs), home networks, and in larger
enterprise environments to streamline the network configuration process.

65. NTP is a protocol for time synchronization on computer networks, while FTTP is a type of
broadband internet service that uses fiber-optic cables to provide high-speed internet
access.

66. FTP (File Transfer Protocol) and TFTP (Trivial File Transfer Protocol) are both networking
protocols used for transferring files between systems.

67. FTP is a standard network protocol used to transfer files between a client and a server
on a computer network.

68. FTP operates on the traditional client-server model where the client initiates a
connection to the server and can upload or download files.

69. TFTP (Trivial File Transfer Protocol): TFTP is a simpler and less feature-rich file transfer
protocol compared to FTP.

70. TFTP is often used in scenarios where simplicity is more critical than advanced features.
It's commonly used in situations like bootstrapping devices, transferring firmware
images, or configurations within a local network.

71. Telnet has largely been replaced by more secure protocols such as SSH (Secure Shell).
Telnet is a text-based protocol, meaning that the communication is in the form of plain
text. This includes the transmission of commands and responses.

72. While Telnet was once widely used for remote access, its lack of encryption and
susceptibility to security threats have led to its decline in favor of more secure alternatives like SSH

73. SNMP (Simple Network Management Protocol) and SMTP (Simple Mail Transfer
Protocol) are two different network protocols serving distinct purposes in the realm of
networking and communication.

74. SNMP is designed to manage and monitor network devices and their functions. It allows
network administrators to monitor the performance, detect and resolve network issues,
and configure remote devices.

75. SMTP is a protocol used for the transmission of email messages between email servers.
It works to send outgoing mail from a client (email sender) to a server or between email
servers.

76. SNMP comprises a set of standards, including the SNMP manager, SNMP agents,
Management Information Base (MIB), and the SNMP protocol itself.

77. SMTP is part of the application layer of the Internet Protocol Suite and works in
conjunction with other protocols like POP3 (Post Office Protocol) and IMAP (Internet
Message Access Protocol) that are used by email clients to retrieve messages.

78. Port numbers are 16-bit unsigned integers, and they range from 0 to 65535.

79. Well-Known Ports (0-1023): Reserved for standard services like HTTP (80), HTTPS (443),
FTP (21) and SMTP (25).

80. Registered Ports (1024-49151): Assigned by the Internet Assigned Numbers Authority
(IANA) for specific applications.

81. In unicast communication, data is sent from one sender to one specific receiver. It is
One-to-one communication or Point-to-point communication.

82. In multicast communication, data is sent from one sender to multiple recipients. It is
One-to-many communication.

83. In broadcast communication, data is sent from one sender to all devices in the network.
It is one to all communication .

84. LANs are networks that are limited to a small geographic area, such as a single building,
a campus, or a group of nearby buildings.

85. MANs cover a larger geographic area than LANs but are smaller than WANs. They
typically span a city or a large campus.

86. WANs cover a wide geographic area and can connect networks across cities, countries,
or even continents.

87. In a bus topology, all devices share a common communication medium, often a single
cable. Nodes are connected to the bus through interfaces or taps.

88. In a star topology, each device is connected to a central hub or switch. All
communication between devices passes through the central hub.

89. In a ring topology, each device is connected to exactly two other devices, forming a
closed loop. Data travels in one direction around the ring.

90. In a mesh topology, every device is connected to every other device in the network.
There are two types: full mesh (every node connects to every other node) and partial
mesh (only some nodes connect to others).

91. A tree topology is a combination of star and bus topologies. It consists of groups of star-
configured networks connected to a linear bus backbone.

92. A repeater operates at the Physical Layer (Layer 1) of the OSI model. Its primary function
is to regenerate and retransmit signals to extend the reach of a network by boosting the
signal strength.

93. A hub operates at the Physical Layer (Layer 1) of the OSI model, similar to a repeater. It
connects multiple devices in a local network and regenerates signals, broadcasting
incoming data to all connected devices.

94. A hub is a multiport repeater and a non-intelligent device. There are two types of
hubs, namely the active hub (concentrator) and the passive hub. A hub works in
half-duplex mode.

95. Both bridges and switches operate at the Data Link Layer and use MAC addresses for
forwarding decisions.

96. Bridges connect and filter traffic between network segments, while switches are more
advanced, providing larger MAC address tables and often supporting higher port
densities and speeds.

97. Switches offer separate collision domains for each port, support full-duplex
communication, and handle broadcasts more efficiently than bridges.

98. Switches are intelligent devices; typical switches contain 8 / 16 / 24 / 48 ports and
support speeds of 10 Mbps / 100 Mbps / 1 Gbps.

99. Router: Primarily operates at Layer 3, makes decisions based on IP addresses, and
connects different networks. Often involved in routing, NAT, and packet filtering.

100.
Gateway: A more general term referring to any device that connects different networks.
Gateways can operate at various layers and may perform protocol translation or other
functions.

101.
Token Ring is a network topology and access method in which a token, a small data
packet, is passed along a physical or logical ring from one node to the next. This token
passing mechanism controls access to the network and helps regulate the flow of data.

102.
A token is a kind of permission slip. Token Ring uses 6-byte addresses. The speeds
supported by Token Ring are 4 Mbps and 16 Mbps. The Token Ring cable impedance is
150 ohms.

103.
Fiber Distributed Data Interface (FDDI) is a high-speed local area network (LAN)
technology that uses optical fiber for data transmission. It was designed to provide a
reliable and high-performance network infrastructure for both data and voice
communication. FDDI was standardized by the American National Standards Institute
(ANSI) and the International Organization for Standardization (ISO)

104. FDDI contains dual rings, and the access method used in FDDI is token passing.
It uses 6-byte addressing. The data rate supported by FDDI is 100 Mbps

105.
X.25 provided a standardized framework for wide-area networking and served as a
foundation for early computer networks.

106.
X.25 uses network addresses to identify devices on the network. Each device connected
to an X.25 network is assigned a unique address for communication.

107.
X.25 provides an interface at the Data Link Layer (Layer 2) and Network Layer (Layer 3) of
the OSI model. It specifies both the link layer protocol (LAP, Link Access Procedure) and
the network layer protocol (Packet Layer Protocol).

108.
It uses packet switching. X.25 defines the interface between DTE (Data Terminal Equipment)
and DCE (Data Circuit-terminating Equipment).

109.
Frame Relay is a high-performance packet-switched networking protocol used to
connect local area networks (LANs) and other network devices over wide-area networks
(WANs).

110.
Frame Relay uses the concept of virtual circuits to establish logical connections
between devices on the network. These virtual circuits are identified by Data Link
Connection Identifiers (DLCIs).

111.
Frame Relay supports permanent or switched virtual circuits.
112.
Asynchronous Transfer Mode (ATM) is a high-speed networking standard designed for
the simultaneous transmission of voice, video, and data over a single network. It was
developed to address the need for a flexible and efficient network technology capable of
handling diverse types of traffic. ATM operates at the data link layer (Layer 2) of the OSI
model.

113.
ATM uses a fixed-length cell format for data transmission. Each cell is 53 bytes in size,
consisting of a 5-byte header and a 48-byte payload. The fixed size of cells simplifies the
switching process and enables more predictable and efficient network performance.
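
A small Python sketch of the fixed-cell idea: the payload is cut into 48-byte pieces and each piece is given a 5-byte header (dummy header bytes here, not a real ATM header):

CELL_SIZE = 53
HEADER_SIZE = 5
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE   # 48 bytes of payload per cell

def to_cells(data):
    cells = []
    for i in range(0, len(data), PAYLOAD_SIZE):
        chunk = data[i:i + PAYLOAD_SIZE].ljust(PAYLOAD_SIZE, b"\x00")  # pad the last cell
        cells.append(b"\x00" * HEADER_SIZE + chunk)                    # dummy 5-byte header
    return cells

cells = to_cells(b"A" * 100)       # 100 bytes -> 3 cells (48 + 48 + 4 padded)
print(len(cells), len(cells[0]))   # 3 53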

114.
ATM is connection-oriented, meaning that virtual circuits must be established before
data can be transferred. There are two types of virtual circuits in ATM: Permanent Virtual
Circuits (PVCs) and Switched Virtual Circuits (SVCs).

115.
ATM contains three layers, namely the physical layer, the data link layer and the ATM network layer.

116.
Sublayers in the Data Link Layer:

ATM Adaptation Layer (AAL): AAL is a sublayer responsible for adapting higher-layer
protocols (such as IP, voice, or video) to the fixed-size ATM cells. It ensures that different
types of traffic can be transported over ATM networks.

ATM Layer (ATM Layer Management): This sublayer is responsible for managing the
ATM layer, including cell multiplexing and demultiplexing, cell switching, and flow
control.

IP ADDRESSING

1. IP addressing, or Internet Protocol addressing, is a fundamental concept in


computer networking that enables devices to be uniquely identified and
communicate with each other over a network, such as the Internet.

2. IP addresses are numerical labels assigned to each device (computers,


smartphones, servers, routers, etc.) on an IP-based network. They serve mainly
two purposes: host identification and routing

3. Two types of notation are used in IP addressing: dotted decimal notation (used for IPv4) and hexadecimal notation (used for IPv6).

4. There are two main versions of the Internet Protocol in common use and they
are IPv4 addressing and IPv6 addressing.

5. IPv4 version uses 32-bit addresses and IPv6 address uses 128 bits.

6. IPv4 addresses were initially divided into five classes, labeled A, B, C, D, and E.

7. In a Class A address, the first bit is always set to 0, indicating that it belongs to
Class A. The next 7 bits (in the first octet) are used for the network identifier, and
the remaining 24 bits are for host addresses.

8. Class A addresses have a range from 0.0.0.0 to 127.255.255.255.

9. There are 128 possible Class A networks (ranging from 0.0.0.0 to 127.0.0.0).
However, not all of them are available for general use, as some are reserved or
have special purposes.

10. The actual number of usable Class A networks is less than 128: 0.0.0.0 is
used for the default route and 127.0.0.0 is used for loopback. Hence the
available networks in Class A are 2^7 - 2 = 126 networks.

11. In Class A, the range 10.0.0.0 to 10.255.255.255 is used for private addresses.

12. Class A network can support up to 16,777,214 (2^24 - 2) host addresses.

13. Class A addresses are quite rare and are generally assigned to very large
organizations or entities that require a tremendous number of IP addresses for
their networks.

14. In a Class B address, the first two bits of the first octet are always set to 10,
which identifies it as a Class B address. The next 14 bits in the first two octets are
used for the network identifier, and the remaining 16 bits in the last two octets
are available for host addresses.

15. Class B addresses have a range from 128.0.0.0 to 191.255.255.255.

16. There are 16,384 possible Class B networks (2^14), which makes Class B
addresses suitable for medium-sized organizations or institutions.

17. Class B network can accommodate up to 65,534 (2^16 - 2) host addresses.

18. Class B addresses are commonly used in corporate networks, universities, and
mid-sized organizations.

19. Class B Private address range: 172.16.0.0 to 172.31.255.255

20. Class C addresses allocate the first 24 bits for the network identifier, leaving the
last 8 bits for host addresses.

21. In a Class C address, the first three bits of the first octet are always set to 110,
which identifies it as a Class C address. The next 21 bits in the first three octets
are used for the network identifier, and the remaining 8 bits in the last octet are
available for host addresses.

22. Class C addresses have a range from 192.0.0.0 to 223.255.255.255

23. There are 2,097,152 possible Class C networks (2^21), which makes Class C
addresses suitable for smaller organizations, local area networks (LANs), and
even some internet service providers.

24. Each Class C network can accommodate up to 254 (2^8 - 2) host addresses.

25. Class C addresses are commonly used in small to medium-sized organizations and
are often found in home networks and small business networks

26. Private Address range in Class C addressing : 192.168.0.0 to 192.168.255.255

27. Class D addresses are in the range of 224.0.0.0 to 239.255.255.255.

28. Class D addresses are used for multicast groups

29. Class E addresses are in the range of 240.0.0.0 to 255.255.255.255.

30. Class E addresses were originally reserved for experimental and research
purposes.

31. Classful IP addressing: It divided the available IP address space into five
classes: A, B, C, D, and E. Each class had a fixed range of IP addresses, and the
class determined the network and host portions of the address.

32. Classful addressing had limitations, as it did not efficiently allocate IP


addresses, leading to address space wastage.

33. Classless Inter-Domain Routing (CIDR) introduced classless addressing to


overcome the limitations of classful addressing. CIDR allows for a more flexible
allocation of IP addresses, making it possible to create subnets with variable-
sized address blocks.

34. In CIDR notation, an IP address is represented with a prefix length, which


specifies the number of bits used for the network portion. For example,
192.168.1.0/24 indicates that the first 24 bits are used for the network, leaving 8
bits for hosts.

35. The subnet mask is a 32-bit binary value used to separate the network and host
portions of an IP address.

36. It consists of a series of contiguous '1' bits followed by a series of contiguous '0'
bits.
The '1' bits in the subnet mask indicate the network portion, and the '0' bits
indicate the host portion.

37. Common subnet masks include 255.0.0.0 for Class A, 255.255.0.0 for Class B,
and 255.255.255.0 for Class C addresses.

38. Subnetting example

192.168.5.85 /24 Address


IP Address : 192.168.5.85
Subnet Mask : 255.255.255.0

IP Address : 11000000. 10101000.00000101.01010101


Subnet Mask : 11111111. 11111111. 11111111.00000000

So, here, the first 24 bits (First 3 octets) are network bits and the last 8 bits (Last
octet) are the host bits.

IP Add: 11000000. 10101000.00000101.01010101


SubM : 11111111. 11111111. 11111111.00000000
AND : 11000000. 10101000.00000101.00000000

So the result of this AND operation is the network address, 192.168.5.0.
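
The same result can be checked with Python's ipaddress module (a quick verification, nothing more):

import ipaddress

iface = ipaddress.ip_interface("192.168.5.85/24")
print(iface.network)    # 192.168.5.0/24
print(iface.netmask)    # 255.255.255.0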

39. Subnetting example

10.128.240.50/30.

IP Address : 10.128.240.50
Subnet Mask : 255.255.255.252

We have seen the /30 prefix, so we write the subnet mask 255.255.255.252.

/30 means that the subnet mask has 30 one bits and 2 zero bits.

Remember the total Subnet Mask is 32 bits. So in binary mode our Subnet Mask
is:

IP Add : 00001010.10000000.11110000.00110010
SubM : 11111111.11111111.11111111.11111100
AND : 00001010.10000000.11110000.00110000

The result of AND operation is the Network Address.

This is 00001010.10000000.11110000.00110000 in binary.
The decimal value of this is 10.128.240.48.

When we set all the host bits with 1s, we will find the Broadcast Address.
This is 00001010.10000000.11110000.00110011 in binary.
The decimal value is 10.128.240.51.

The middle addresses can be used for hosts. These addresses are 10.128.240.49 and 10.128.240.50.

Network Address : 10.128.240.48


Host Addresses : 10.128.240.49 and 10.128.240.50
Broadcast Address : 10.128.240.51

40. Find the first and last address

IP address 11001101 00010000 00100101 00100111


Mask 11111111 11111111 11111111 11110000
First address 11001101 00010000 00100101 00100000 ( And Operation )

Last Address

IP address 11001101 00010000 00100101 00100111


Mask Complement 00000000 00000000 00000000 00001111
Last address 11001101 00010000 00100101 00101111 (OR operation)

41. If the mask complement value is 15, then the number of addresses is equal to 15 + 1 = 16.
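
A short Python check of the /30 example above, and of the address-count rule in item 41 (the /28 network used in the last line corresponds to the binary example in item 40):

import ipaddress

net = ipaddress.ip_interface("10.128.240.50/30").network
print(net.network_address)      # 10.128.240.48
print(net.broadcast_address)    # 10.128.240.51
print(list(net.hosts()))        # [10.128.240.49, 10.128.240.50]
print(net.num_addresses)        # 4

# A mask complement of 15 (a /28 prefix) gives 15 + 1 = 16 addresses.
print(ipaddress.ip_network("205.16.37.32/28").num_addresses)   # 16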

42. Two-Level Internet Address Structure

43. Broadcast Address


In classes A, B and C, if the hostid is all 1s, the address is called a ‘Direct
Broadcast Address’.
• It is used by a router to send a packet to all hosts in a specific network.
• Eg. In Class A IP Address block ranging from 10.0.0.0 to 10.255.255.255, the
last address 10.255.255.255 is the broadcast address

44. Class-A Address


– Default Mask is /8
– 11111111.00000000.00000000.00000000
– 255.0.0.0

45. Class-B Address


– Default Mask is /16
– 11111111.11111111.00000000.00000000
– 255.255.0.0

46. Class-C Address


– Default Mask is /24
– 11111111.11111111.11111111.00000000
– 255.255.255.0

47. A Variable Length Subnet Mask (VLSM) is a means of allocating IP addressing


resources to subnets according to their individual need rather than some general
network-wide rule.

Each subnet may have a different subnet mask.

Efficient use of the organization's assigned IP address space.

48. Supernetting, also known as route aggregation, is an IP addressing technique
that involves combining multiple smaller address blocks or subnets into a
single, larger network.

49. This process results in the creation of a supernet, which simplifies routing and
reduces the size of routing tables, making network management more
efficient.

50. Several networks are combined to create a Supernetwork.


• In Supernetting, an organization can combine several Class ‘C’ addresses to
create a larger range of addresses.

51. Supernetting is the idea of combining two or more blocks of IP address that
together compose a continuous range of addresses.

192.60.128.0 11000000.00111100.10000000.00000000
192.60.129.0 11000000.00111100.10000001.00000000
192.60.130.0 11000000.00111100.10000010.00000000
192.60.131.0 11000000.00111100.10000011.00000000
Supernet Address is 192.60.128.0 /22
Subnet Mask 11111111.11111111.11111100.00000000
255 . 255 . 252 . 0
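
This aggregation can be verified with a short Python sketch using the standard ipaddress module:

import ipaddress

subnets = [ipaddress.ip_network(n) for n in
           ("192.60.128.0/24", "192.60.129.0/24",
            "192.60.130.0/24", "192.60.131.0/24")]

supernet = list(ipaddress.collapse_addresses(subnets))
print(supernet)               # [IPv4Network('192.60.128.0/22')]
print(supernet[0].netmask)    # 255.255.252.0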

52. IPv6 addressing


128-bit address space
– 2^128 possible addresses
– 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses (approximately 3.4 x 10^38)

53. IPv6 address in binary form:


0010000111011010000000001101001100000000000000000010111100111011
0000001010101010000000001111111111111110001010001001110001011010

Divided along 16-bit boundaries:


• 0010000111011010 0000000011010011 0000000000000000
0010111100111011
0000001010101010 0000000011111111 1111111000101000
1001110001011010

54. Each 16-bit block is converted to hexadecimal and delimited with colons:
21DA:00D3:0000:2F3B:02AA:00FF:FE28:9C5A

Suppress leading zeros within each 16-bit block:


21DA:D3:0:2F3B:2AA:FF:FE28:9C5A

55. Some IPv6 addresses contain long sequences of zeros


• A single contiguous sequence of 16-bit blocks set to 0 can be compressed to
“::” (double-colon)

56. Example:
– FE80:0:0:0:2AA:FF:FE9A:4CA2 becomes FE80::2AA:FF:FE9A:4CA2
– FF02:0:0:0:0:0:0:2 becomes FF02::2
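
Python's ipaddress module applies the same zero-suppression and "::" compression rules, so these examples can be checked directly:

import ipaddress

print(ipaddress.ip_address("21DA:00D3:0000:2F3B:02AA:00FF:FE28:9C5A"))
# 21da:d3:0:2f3b:2aa:ff:fe28:9c5a
print(ipaddress.ip_address("FE80:0:0:0:2AA:FF:FE9A:4CA2"))
# fe80::2aa:ff:fe9a:4ca2
print(ipaddress.ip_address("FF02:0:0:0:0:0:0:2"))
# ff02::2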

57. The types of IPv6 addresses are unicast addresses, multicast addresses and
anycast addresses.

58. Unicast Address


•Identifies a Single interface
•Packets addressed to unicast addresses are delivered to a single
interface only
•One to one delivery

59. Multicast IPv6 Addresses


•Identifies zero or more interfaces
•Packets addressed to multicast addresses are delivered to all the
interfaces identified by the address
•One to many delivery
•Nodes can join or leave a group at any time

60. Anycast IPv6 Addresses


•Identifies multiple interfaces
•Packets addressed to anycast addresses are delivered to a single interface only
– the NEAREST interface identified by the address
•One to one-of-many delivery

61. Link Local address

Used for local link only


– Single subnet, no router
– Address autoconfiguration
– Neighbor Discovery
– Begins with FE80
– Format prefix is FE80::/64

– IPv6 router never forwards the link-local traffic beyond the link

62. Site-Local Address

Used for local site only


– Replacement for IPv4 private addresses
– Intranets not connected to the Internet
– Routers do not forward site-local traffic outside the site
– Similar to private IP Addresses in IPv4
– Begins with FEC0 (1111 1110 11)
– Format prefix is FEC0::/48

63. Global Unicast Address


2000::/3 (first hextet in the range 2000 to 3FFF).
Globally unique and routable.
Similar to public IPv4 addresses.
2001:db8::/32 – RFC 3849 and RFC 6890 reserve this range of addresses for
documentation.

64. RFC means Request for Comments

65. IETF means Internet Engineering Task Force

66. In IPv4, source and destination addresses are 4 bytes each, whereas in IPv6 they are
16 bytes each

67. IPsec, or Internet Protocol Security, is a set of protocols and standards used to
secure Internet Protocol (IP) communications and data traffic.

68. The two modes in IPsec are Transport mode and Tunnel mode

69. Transport Mode:

In this mode, only the payload (the actual data being transmitted) is encrypted
and authenticated while the header information of the IP packet remains
intact.
This mode is often used for end-to-end communication.

70. Tunnel Mode:

In tunnel mode, the entire IP packet, including the original IP header, is


encrypted and encapsulated within a new IP packet.

This mode is often used to create secure connections between network
gateways, such as when setting up VPNs.

71. In IPv4, IPsec support is optional, whereas in IPv6, IPsec support is required.

72. In IPv6 ARP requests are replaced with Multicast Neighbour Solicitation
messages

73. In IPv6 IGMP is replaced with Multicast Listener Discovery ( MLD )


messages .

74. In IPv6, ICMP Router Discovery is replaced with ICMPv6 Router Solicitation messages.

75. There are no IPv6 broadcast addresses; instead, a link-local scope all-nodes
multicast address is used.

76. IPv4 requires hosts to accept packets of at least 576 bytes, whereas IPv6 requires a
minimum packet size (link MTU) of 1280 bytes.

77. The loopback address in IPv4 is 127.0.0.1, whereas in IPv6 it is 0:0:0:0:0:0:0:1, or ::1

ROUTING AND ROUTING PROTOCOLS

1. Routing in computer networks refers to the process of determining the optimal


path for data packets to travel from the source to the destination across a
network.

2. Routing involves making decisions about how to forward data packets based on
the network topology, available paths, and various routing algorithms.

3. Routing algorithms consider factors like network topology, link costs, traffic
load, and quality of service requirements to determine the best path for data to
take.

4. Routers in the network maintain routing tables, which contain information about
known destinations and the corresponding next hop or outgoing interface to
reach those destinations.

5. In static routing, administrators manually configure the routing tables on routers.
Static routes do not change unless explicitly modified by an administrator,
making them less adaptable to network changes.

6. In dynamic routing, routers exchange routing information with each other to


adapt to changes in the network, such as link failures or network congestion.

7. Protocols like OSPF (Open Shortest Path First) and BGP (Border Gateway
Protocol) are commonly used for dynamic routing.

8. In IP Routing routers use IP addresses to determine how to forward packets.


They examine the destination IP address in each packet and use the routing table
to determine the next hop.

9. In Destination based routing routing decisions are made based on the


destination address in the packet header. Routers compare the destination
address with their routing table entries to find the best path.

10. Routing can also take QoS requirements into account, ensuring that data packets with
different priorities (e.g., voice, video, data) are routed accordingly to meet specific
service level agreements.

11. Routing tables are data structures used by routers in computer networks to
determine how to forward data packets.

12. In an IPv4 routing table, entries typically include the following information:

Destination Network | Subnet Mask | Next Hop | Interface

The first entry specifies that any packet destined for the 192.168.1.0/24 subnet
should be sent to the next hop 192.168.1.1 via the LAN interface.

The second entry indicates that packets destined for the 10.0.0.0/8 network
should be sent to the next hop 10.0.0.1 via the WAN interface.

The third entry (default route, 0.0.0.0/0) is used for all other destinations and
sends packets to the next hop 203.0.113.1, which is the gateway to the Internet.

13. IPv6 Routing Table Example:

In an IPv6 routing table, entries follow a similar structure to IPv4 but deal with
IPv6 addresses and prefixes:

Destination Network | Prefix Length | Next Hop | Interface

The first entry routes packets for the 2001:db8:0:abc::/64 subnet to the next hop
2001:db8:0:abc::1 via the LAN interface.

The second entry routes packets for the 2001:db8:100:1::/48 network to the next
hop 2001:db8:100:1::2 via the WAN interface.

The third entry (::/0) is the default route for all other destinations and sends
packets to the next hop 2001:db8:0:1::1, which is the gateway to the Internet.

14. Dijkstra's Algorithm: This algorithm finds the shortest path between two nodes
in a weighted graph. It's used in link-state routing protocols like OSPF.
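
A compact Python sketch of Dijkstra's algorithm on a small link-cost graph (the routers and costs are made up for illustration):

import heapq

def dijkstra(graph, source):
    """Return the lowest path cost from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                              # stale queue entry
        for neighbour, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

# Four routers with illustrative link costs.
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}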

RIP is a distance vector routing algorithm that measures the distance to a


destination based on the number of hops. RIP is used in small to medium-sized
networks.

15. IGRP is another distance vector routing algorithm developed by Cisco for its
routers.

16. OSPF is a link-state routing protocol that calculates the shortest path to a
destination based on link costs. It is commonly used in large IP networks.

17. Intermediate System to Intermediate System (IS-IS): IS-IS is another link-state


routing protocol used in IP and OSI networks.

18. BGP is an exterior gateway protocol used to route traffic between different
autonomous systems on the internet. It makes routing decisions based on
multiple criteria, including policies and path attributes.

19. BGP makes routing decisions based on ASes (Autonomous Systems).

20. IANA (Internet Assigned Numbers Authority) assigns unique AS numbers to ISPs.

21. Enhanced Interior Gateway Routing Protocol (EIGRP): EIGRP is considered a


hybrid routing protocol because it incorporates features of both distance vector
and link-state routing algorithms. It's used in Cisco networks.

22. Static routing involves manually configuring routing tables on routers. Network
administrators specify the routes and next hops for each destination in the
routing table.

23. Static routes remain constant unless modified by an administrator. They do not
adapt to changes in the network, such as link failures or congestion

24. Static routing is straightforward to set up and is efficient for small networks with
simple topologies where network changes are infrequent

25. It requires less computational overhead because routers do not exchange


routing information with other routers

26. In some cases, static routing can enhance network security by minimizing the
potential for unauthorized route changes

27. Static Routing is not suitable for large, complex networks with changing
topologies because it lacks adaptability. Manual configuration can be error-
prone and time-consuming.

28. Dynamic routing uses routing protocols to automatically exchange routing


information and adapt to changes in the network. Routers dynamically learn and
update routing tables

29. Dynamic routing protocols can respond to network changes, such as link failures
or network additions, by recalculating routes based on the most current
information

30. Dynamic routing is well-suited for large and complex networks where the
topology can change frequently. It scales better than static routing.

31. It can optimize routes based on factors like link bandwidth, delay, and traffic
load, leading to more efficient use of network resources

32. Examples of dynamic routing protocols include OSPF, EIGRP, RIP, and BGP, each
with its own strengths and use cases.

33. RIP is a distance vector routing protocol, which means it determines the best
path to a destination based on the number of hops (routers) that must be
traversed. It periodically broadcasts routing tables to its neighboring routers

34. RIP uses hop count as the metric to measure the distance to a destination. A
route with fewer hops is considered better. The maximum hop count in RIP is 15,
with a hop count of 16 indicating an unreachable route.

35. RIP routers send routing updates (known as Routing Information Protocol
Packets) to their neighboring routers every 30 seconds.

36. When a route becomes unreachable, RIP uses a technique called "route
poisoning" to inform other routers of the change. The route is marked as
unreachable with a hop count of 16, and this information is quickly propagated
to other routers.

37. RIP may take time to converge (stabilize) after a network change. The 30-second
update interval can result in relatively slow adaptation to network changes.

38. There are two main versions of RIP: RIP version 1 (RIPv1) and RIP version 2
(RIPv2). RIPv2 adds support for classless inter-domain routing (CIDR), VLSM
(Variable Length Subnet Masking), and authentication.

39. OSPF, or Open Shortest Path First, is a link-state routing protocol used in
computer networks to determine the optimal paths for data packets to traverse
from a source to a destination. It is a widely used interior gateway protocol (IGP)
and is designed to work within an autonomous system (AS), such as a corporate
network or an Internet service provider's network.

40. OSPF uses Dijkstra's shortest path algorithm to calculate the shortest path to all
destinations. This algorithm takes into account the link costs and constructs a
routing table based on the information gathered from other routers in the OSPF
domain.
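
As a rough illustration of the shortest-path calculation OSPF performs, here is a small Dijkstra sketch in Python over an invented topology with made-up link costs (not OSPF's actual implementation):

import heapq

def dijkstra(graph, source):
    """Return the lowest total link cost from source to every reachable router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue                         # stale queue entry
        for neighbour, link_cost in graph.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                heapq.heappush(heap, (new_cost, neighbour))
    return dist

# Hypothetical OSPF area with interface costs
topology = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R4": 1},
    "R3": {"R2": 3, "R4": 9},
    "R4": {},
}
print(dijkstra(topology, "R1"))   # {'R1': 0, 'R2': 8, 'R3': 5, 'R4': 9}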

41. OSPF supports Variable Length Subnet Masking (VLSM) and Classless Inter-
Domain Routing (CIDR), allowing for efficient use of IP address space.

42. OSPF supports various authentication methods to secure the exchange of


routing information between routers.

43. OSPF routers can be classified into different types, including internal routers
(those within a single area), area border routers (connecting multiple areas), and

autonomous system boundary routers (connecting to routers outside the OSPF
domain).

44. OSPF can handle multiple IP networks simultaneously, and it is capable of


carrying IP version 4 and IP version 6 routing information

45. OSPF allows network administrators to configure metrics based on various


factors, such as bandwidth, delay, reliability, and cost

46. EIGRP, or Enhanced Interior Gateway Routing Protocol, is a Cisco-proprietary


hybrid routing protocol used in computer networks.

47. EIGRP is primarily designed for routing within an autonomous system (AS), such
as an enterprise network.

48. EIGRP combines features of both distance vector and link-state routing
protocols, making it a robust and efficient option for routing.

49. EIGRP is considered a hybrid routing protocol because it incorporates elements


of both distance vector and link-state protocols.

50. EIGRP uses the Diffusing Update Algorithm (DUAL) to achieve rapid convergence

51. EIGRP uses a metric called the composite metric, which considers various
factors like bandwidth, delay, reliability, and load when determining the best
path to a destination.

52. Dual stacking :- EIGRP can handle both IPv4 and IPv6 routing, making it suitable
for networks that are transitioning to IPv6.

53. EIGRP uses various mechanisms to prevent routing loops, including split
horizon, route poisoning, and hold-down timers

54. EIGRP supports route summarization, which allows routers to advertise a


summarized route instead of listing individual subnets, reducing the size of
routing tables.

55. EIGRP is a Cisco-proprietary protocol, meaning it is primarily used in Cisco


networking environments.

56. EIGRP supports VLSM and automatic network summarization

57. Route announcements in EIGRP are triggered and sent to neighbours using the 224.0.0.10 multicast address.

58. EIGRP maintains neighbour relationships by exchanging hello packets every 5 seconds (on most link types)

59. EIGRP update messages are sent using the Reliable Transport Protocol (RTP), and route announcements are acknowledged.

60. EIGRP supports a maximum of 255 hops (routers) along a path

61. EIGRP composite metric. With the default K values, only bandwidth and delay contribute:

EIGRP Metric = 256 * [ (10^7 / bandwidth) + (delay / 10) ]

K1, K2, K3, K4, and K5 are constants that can be configured on the EIGRP routers. By default, K1 and K3 are set to 1, while K2, K4, and K5 are set to 0.

Bandwidth: the minimum bandwidth of the slowest link in the path, expressed in kilobits per second (Kbps); the metric uses 10^7 divided by this value.

Delay: the cumulative delay of all the links in the path, measured in microseconds; the metric uses this value divided by 10 (i.e., delay in tens of microseconds).

62. EIGRP metric calculation example

Route A: minimum bandwidth = 1000 Kbps, cumulative delay = 10 microseconds

Route B: minimum bandwidth = 500 Kbps, cumulative delay = 20 microseconds

For Route A:

EIGRP Metric = 256 * [ (10^7 / 1000) + (10 / 10) ]
EIGRP Metric = 256 * (10000 + 1)
EIGRP Metric = 2,560,256

For Route B:
EIGRP Metric = 256 * [ (10^7 / 500) + (20 / 10) ]
EIGRP Metric = 256 * (20000 + 2)
EIGRP Metric = 5,120,512

Route A has the lower EIGRP metric (higher bandwidth and lower delay), making it the preferred route for EIGRP
routing.
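
A small Python sketch of the default-K metric calculation above (the function and variable names are illustrative, not a Cisco tool):

def eigrp_metric(min_bandwidth_kbps, total_delay_usec):
    """Default-K EIGRP composite metric: only bandwidth and delay count."""
    bw_term = 10**7 // min_bandwidth_kbps     # slowest link in the path, in Kbps
    delay_term = total_delay_usec // 10       # delay in tens of microseconds
    return 256 * (bw_term + delay_term)

print(eigrp_metric(1000, 10))   # Route A -> 2560256
print(eigrp_metric(500, 20))    # Route B -> 5120512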

63. EIGRP (Enhanced Interior Gateway Routing Protocol) maintains several tables to
manage routing information and make routing decisions.

1. Neighbor Table (also known as Neighbor Adjacency Table)

2. Topology Table (also known as the Diffusing Update Algorithm (DUAL)
Topology Table)

3. Routing Table

4. Feasibility Condition Table

5. Interface Table

6. Stub Routing Table (optional)

64. In EIGRP, you can configure stub routers and networks. When a router is
configured as a stub router, it will maintain a Stub Routing Table.

65. The Stub Routing Table contains summarized routes and default routes that can
be advertised to EIGRP neighbours.

66. Stub routing tables help to reduce the size of routing updates and control which routes are propagated outside the stub area.

67. BGP maintains several tables to manage routing information and make routing
decisions.

BGP Neighbor/Peer Table

BGP Routing Information Base (RIB)

BGP Local-RIB (Loc-RIB)

BGP Adj-RIB-In (Adjacency-RIB-In)

BGP Adj-RIB-Out (Adjacency-RIB-Out)

BGP RIB-Failure

68. The most commonly used BGP path attributes include

AS-Path (AS_PATH)

Next-Hop (NEXT_HOP)

Local Preference (LOCAL_PREF)

Multi-Exit Discriminator (MED)

Origin (ORIGIN)

Weight (Cisco-Specific) (A higher Weight value is preferred)

Communities( Communities are not used in BGP route selection but are used for
policy and filtering purposes.)

Atomic Aggregate:

Aggregator

Path Length ( BGP prefers shorter AS-Paths when selecting the best route )

PROXY AND DNS SERVICES

1. Proxy services in data communication play a crucial role in enhancing security,


privacy, and network performance.

2. A proxy server acts as an intermediary between a client (e.g., a user's device) and
a destination server (e.g., a website or another server).

3. Anonymity and Privacy:


Proxy servers can hide the client's IP address and location.
When a user accesses a website through a proxy, the website sees the proxy
server's IP address instead of the user's, helping to maintain anonymity and
privacy.
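
As a minimal illustration, a client can be pointed explicitly at a proxy; the sketch below uses the third-party requests library with a hypothetical proxy address, so the destination site sees the proxy's IP rather than the client's:

import requests

# Hypothetical proxy address; the destination server sees the proxy's IP, not the client's
proxies = {
    "http": "http://203.0.113.10:3128",
    "https": "http://203.0.113.10:3128",
}

response = requests.get("https://www.example.com", proxies=proxies, timeout=10)
print(response.status_code)   # 200 if the request succeeded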

4. Content Filtering:
Organizations often use proxy servers to filter and control the content accessed
by their network users.
This can be used to block access to certain websites or content types that
violate company policies or security protocols.

5. Caching:
Proxy servers can cache frequently accessed web content. This reduces the load
on the destination servers and speeds up content delivery to clients.

Cached content can be served to users without having to request it from the
original source repeatedly.

6. Security:
Proxies can provide security by acting as a firewall between a client and the
internet. They can inspect incoming and outgoing traffic for malicious content
and filter out harmful data.

7. Load Balancing:
In the context of data centers and web applications, proxy servers can distribute
incoming client requests among multiple servers to balance the load.
This ensures that no single server becomes overwhelmed and improves overall
system performance and availability.

8. Access Control:
Proxy servers can enforce access control policies, allowing or denying access to
specific resources or services based on user credentials, IP addresses, or other
criteria.

9. Monitoring and Logging:


Proxy servers can log all incoming and outgoing traffic, making it easier to
monitor network activity and investigate security incidents or anomalies.

10. Bypassing Geographical Restrictions:


Some users use proxy servers to bypass geographical restrictions on content
access. For example, they can appear to be in a different country to access
region-locked content.

11. Web Acceleration: Proxy servers, especially content delivery networks (CDNs),
can accelerate web content delivery by serving content from geographically
distributed servers, reducing latency and improving load times.

12. Protocol Conversion: Some proxy servers can convert between different
network protocols, allowing clients and servers that use incompatible protocols
to communicate.

13. Proxy services are versatile and can be configured and customized to meet
various network and security requirements. However, it's essential to use them
responsibly, as they can also be used for malicious purposes, such as
concealing cyberattacks or engaging in illegal activities.

14. DNS, or Domain Name System, is a critical component of the internet that
translates human-readable domain names (e.g., www.example.com) into IP
addresses (e.g., 192.0.2.1) that computers and servers use to locate each other
on the network.

15. DNS services play a vital role in ensuring the functionality and accessibility of the
internet.

16. Name Resolution:


DNS services provide the means to resolve domain names to IP addresses.
When you enter a URL in your web browser, the DNS system is responsible for
finding the corresponding IP address of the web server hosting that website.
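
A minimal Python sketch of name resolution using the standard socket module (the names and addresses are just examples, and the calls need network access):

import socket

# Forward lookup: domain name -> IPv4 address
print(socket.gethostbyname("www.example.com"))

# Reverse lookup: IP address -> host name (works where a PTR record exists)
print(socket.gethostbyaddr("8.8.8.8")[0])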

17. Hierarchy
The DNS hierarchy is organized into various levels or components, and the
complete domain name forms a structured hierarchy from right to left. Here's an
overview of the DNS hierarchy:

Root Level:
At the top of the hierarchy is the root level, denoted by a single dot (.). The root zone delegates authority to the top-level domain names in the DNS, such as .com, .org, .net, and more.

Top-Level Domains (TLDs):

Below the root level are the top-level domains (TLDs), which include generic
TLDs (gTLDs) like .com, .org, .net, and country code TLDs (ccTLDs) like .us
(United States), .uk (United Kingdom), and .ca (Canada).

Second-Level Domains (SLDs):

These are specific domain names within a TLD. For example, in the domain name
"example.com," "example" is the second-level domain.

Subdomains:

Subdomains are additional subdivisions of domain names that precede the


second-level domain. For example, in the domain name "blog.example.com,"
"blog" is a subdomain of "example.com."

Hostnames:
The hostname is the leftmost component of a fully qualified domain name
(FQDN). It represents a specific resource or server within a subdomain. For
example, in the FQDN "www.example.com," "www" is the hostname.

Resource Records:

Each level of the hierarchy can have associated resource records (RRs) in DNS
servers.
these records store information about the domain, such as IP addresses (A
records), mail exchange settings (MX records), and more.

18. DNS Servers: DNS services are provided by a network of DNS servers. These
servers can be categorized into various types, including:

Root Servers: These servers hold the information about the root domain and
provide referrals to the appropriate TLD servers.

Top-Level Domain (TLD) Servers: These servers maintain information about specific TLDs (e.g., .com, .org) and can direct queries to the authoritative name servers for second-level domains, including the reverse-lookup domains that convert IP addresses back into domain names.

Authoritative Name Servers: These servers have information about specific


domain names and their associated IP addresses. Each domain typically has one
or more authoritative name servers.

Recursive Resolvers: These are typically operated by internet service providers


(ISPs) or other network administrators. They handle client requests by recursively
querying the DNS hierarchy to find the IP address associated with a domain
name.

19. Caching: DNS servers often cache resolved domain name-to-IP address
mappings for a specified period. Caching helps reduce the load on the DNS
infrastructure and speeds up future lookups for frequently accessed domains.

20. Dynamic Updates: DNS services can support dynamic updates, allowing
changes to domain records, such as adding or modifying IP addresses or mail
server configurations, to be reflected in the DNS system.

21. DNS Records: DNS services use various types of DNS records to store
information associated with domain names. Common DNS record types include
A records (for IPv4 addresses), AAAA records (for IPv6 addresses), MX records
(for mail servers), CNAME records (for aliases), and TXT records (for textual
information).

22. Security: DNS services are vulnerable to various attacks, such as DNS spoofing
and cache poisoning. To enhance security, DNSSEC (DNS Security Extensions) is
used to provide data integrity and authentication for DNS responses.

23. Load Balancing: DNS can be used for load balancing by distributing client
requests across multiple IP addresses associated with a domain. This can
improve the performance and availability of web services.

24. Content Delivery: Content delivery networks (CDNs) leverage DNS to route
users to the nearest server or cache, ensuring faster content delivery and
reducing latency.

CYBER SECURITY

1. Cybersecurity basics are fundamental principles and practices that help individuals and organizations protect their digital assets and information from various cyber threats. The following steps can reduce cyber attacks.

2. Use strong, unique passwords for all your accounts.

Enable two-factor authentication (2FA) or multi-factor authentication (MFA).

In two-factor authentication, the first factor is typically a password and the second factor is a one-time password (OTP).

In MFA (multi-factor authentication), the factors combine something you know (a password), something you have (a mobile device or token), and something you are (biometrics such as a fingerprint or facial recognition).
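
As a small related illustration of password hygiene on the server side, the sketch below salts and hashes a password with Python's standard library rather than storing it in plain text (the password value is made up):

import hashlib, hmac, os

def hash_password(password):
    """Return (salt, derived key) using PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, key

def verify_password(password, salt, key):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, key)   # constant-time comparison

salt, key = hash_password("a strong, unique passphrase")
print(verify_password("a strong, unique passphrase", salt, key))   # True
print(verify_password("wrong guess", salt, key))                   # False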

3. Regularly updating the operating system, software, and applications with security patches can reduce cyber attacks, because cybercriminals often exploit outdated software.

4. Use a firewall to monitor and control incoming and outgoing network traffic.
This helps prevent unauthorized access to your network.

5. Install reputable antivirus and anti-malware software to detect and remove


malicious software, including viruses, trojans, and spyware.

6. Be cautious with email attachments and links. Verify the sender's authenticity
before clicking on links or downloading attachments.

Beware of phishing emails, which aim to trick you into revealing sensitive
information. Look for red flags like unusual email addresses or requests for
personal information.

Phishing is a type of cyberattack in which attackers attempt to trick individuals
into revealing sensitive information, such as usernames, passwords, credit card
numbers, or other personal or financial information, by posing as a trustworthy
entity. Phishing attacks are typically carried out through email, although they can
also occur via other communication channels like text messages (SMS), social
media, or instant messaging.

7. Regularly back up your important data to an external drive or a cloud service.


This ensures that you can recover your data in case of an attack or data loss.

8. Secure Wi-Fi network with a strong password and encryption. Change default
router login credentials.

9. Use HTTPS-enabled websites for secure data transmission. Look for the padlock
icon in the browser's address bar.

A padlock icon typically indicates that a website is secure and that data
transmitted between the user's web browser and the website is encrypted.

10. Beware of social engineering attacks, where attackers manipulate individuals into divulging sensitive information. Verify requests for personal or financial information, especially if received through phone calls or messages.

11. Regular training and awareness programs can help in recognizing and mitigating
threats.

12. Protect physical access to your devices and systems. Lock your computer when
not in use, and secure servers and networking equipment in a controlled
environment.

13. Secure your smartphones and tablets with passcodes or biometric


authentication. Install security apps and keep your mobile OS and apps
updated.

14. Adjust privacy settings on social media and other online accounts to limit data
exposure.

15. ICT means Information and Communication Technology. The main components of information security are Confidentiality, Integrity, and Availability (the CIA triad).

16. Types of attacks on Information system are malicious code attacks , Known
vulnerabilities , and configuration errors.

17. Vulnerabilities refer to weaknesses or flaws in a system, application, or network


that can be exploited by attackers to compromise the security, integrity, or
availability of the system.

18. Indications of infection include poor system performance, crashing of applications, abnormal system behaviour, unknown services running, changes in file extensions, automatic shutdown of the system, the system not shutting down, and the hard disk remaining busy.

19. System vulnerabilities that invite attack include use of default user accounts and passwords, remote access not disabled, logging and auditing disabled, no proper access control on files, lack of updated antivirus and firewall software, and unnecessary services running.

20. Security is implemented at several levels: operating system level, application level, RDBMS level, and network level.

21. Managing Information Security (diagram in the original study material)

22. Technology and Defence (diagram in the original study material)

23. (diagram in the original study material)
24. Hackers

A hacker is a person able to exploit a system or gain unauthorized access through skill and tactics. There are black hat hackers, white hats (ethical hackers), and grey hats, depending upon the capacity and goal of the hacking involved.

25. Black hat hackers use their technical skills to break into computer systems,
networks, or applications with the intent of exploiting vulnerabilities, stealing
data, spreading malware, or causing disruptions.

26. White hat hackers, also known as ethical hackers or security researchers, are
individuals who use their hacking skills for legitimate, lawful, and beneficial
purposes.

27. Gray hat hackers fall in a middle ground between white hat and black hat
hackers. Their actions can be ambiguous in terms of legality and ethics.

28. A computer virus is a type of malicious software (malware) that is designed to


infect and replicate itself on a computer or across a network.

29. Executable viruses are a common type of computer virus that infects executable
files. These viruses attach their malicious code to legitimate executable files
(e.g., .exe, .dll), making them carriers for the virus.

30. Boot sector viruses target the master boot record (MBR) of a computer's hard
drive or removable storage devices (e.g., USB drives).

31. Email viruses are delivered through email attachments or embedded links. They
often use social engineering tactics to trick users into opening the infected
attachments or clicking on malicious links.

32. Macro viruses are a specific type of computer virus that target macros within
documents, typically in the context of productivity software like Microsoft Office
applications (e.g., Word, Excel, PowerPoint).

33. Computer worms are a type of malicious software (malware) that can self-
replicate and spread independently from one computer to another over
networks.

Unlike viruses, worms do not require a host file to attach themselves to; they are
standalone programs capable of propagating and infecting other systems.

Computer worms are known for their ability to rapidly spread and can cause
disruptions on a large scale.

34. The life cycle of a worm consists of creation and development, propagation, payload activation, concealment, response and mitigation, adaptation and evolution, and end of life.

35. A Trojan horse, often referred to simply as a "Trojan," is a type of malicious


software (malware). Unlike viruses and worms, Trojans do not self-replicate.
Instead, they rely on social engineering techniques to deceive users and gain
access to their systems. Once executed, Trojans can perform various malicious
actions on the compromised system.

36. A Denial of Service (DoS) or Distributed Denial of Service (DDoS) attack is a


malicious attempt to disrupt the normal functioning of a network, service,
website, or online platform by overwhelming it with a flood of traffic, requests, or
malicious activities.

The goal of such attacks is to make the targeted service unavailable to its
intended users.

37. Spam refers to the indiscriminate and unsolicited transmission of electronic


messages, often in the form of emails, to a large number of recipients.

These messages are typically commercial or promotional in nature and are sent
without the explicit consent of the recipients.

Spam can also manifest in other digital communication channels, such as text
messages, instant messaging, and social media.

38. Spoofing refers to the act of falsifying, imitating, or masquerading as something


or someone else with the intent to deceive, manipulate, or gain unauthorized
access.

It is commonly used in the context of computer networks, communication


protocols, and digital identity theft.

39. Spyware is a type of malicious software (malware) that is designed to secretly


gather information from a computer or device and transmit it to an external entity
without the user's knowledge or consent.

Spyware is typically used for surveillance, data theft, or other malicious


purposes. It can be installed on a computer or device through various means,
often without the user's awareness.

40. A keylogger, short for "keystroke logger," is a type of surveillance software or
hardware device designed to record and capture every keystroke made on a
computer or mobile device.

Keyloggers can record not only the actual keystrokes (i.e., the characters or keys
pressed), but also other types of input, such as mouse clicks and touchpad
movements.

The primary purpose of keyloggers is to monitor and record user activities, which
can have legitimate uses in certain contexts, but they are often used maliciously
for purposes like identity theft and espionage.

41. Desktop security, also known as endpoint security, refers to the measures and
practices employed to protect individual computers (desktops, laptops,
workstations) from various threats, vulnerabilities, and security risks.

42. A "zombie computer" (or "zombie PC") refers to a computer that has been
compromised by malware, typically a type of malicious software called a "bot" or
"botnet," and is under the control of a remote attacker.

Zombie computers are also sometimes called "bots" or "zombies" because they
are essentially used as mindless, automated agents to carry out various
malicious activities on behalf of the attacker

43. A firewall is a network security device or software application designed to


monitor and control incoming and outgoing network traffic, based on
predetermined security rules.

Firewalls act as a barrier or filter between a trusted internal network (e.g., a


company's internal network) and an untrusted external network (e.g., the
internet) to protect the internal network from various types of threats.

44. Packet filtering :-

Firewalls examine data packets (the fundamental units of data transmission) and
decide whether to allow or block them based on predefined rules.

These rules are typically based on criteria such as source and destination IP
addresses, port numbers, and protocol type (e.g., TCP or UDP).
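
A toy Python sketch of the packet-filtering idea: rules are checked top-down against fields of the packet, and the first match decides; real firewalls also match on source and destination addresses (all values here are invented):

from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str   # "TCP" or "UDP"

# Hypothetical rule set, evaluated top-down; the last rule is the default deny
rules = [
    {"dst_port": 80,   "protocol": "TCP", "action": "allow"},   # web traffic
    {"dst_port": 53,   "protocol": "UDP", "action": "allow"},   # DNS queries
    {"dst_port": None, "protocol": None,  "action": "deny"},    # deny everything else
]

def filter_packet(pkt):
    for rule in rules:
        if rule["dst_port"] not in (None, pkt.dst_port):
            continue
        if rule["protocol"] not in (None, pkt.protocol):
            continue
        return rule["action"]
    return "deny"

print(filter_packet(Packet("10.0.0.5", "198.51.100.7", 80, "TCP")))   # allow
print(filter_packet(Packet("10.0.0.5", "198.51.100.7", 23, "TCP")))   # deny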

45. Stateful Inspection :-

Stateful firewalls keep track of the state of active connections and make
decisions based on the context of the traffic.

This helps prevent unauthorized access through established connections.

46. Access Control List :-

Firewalls use ACLs to specify which types of traffic are allowed and denied.

ACLs can be configured to control traffic at the network and transport layer (IP
and TCP/UDP).

47. Application Layer filtering :-

Some advanced firewalls can inspect and filter traffic at the application layer
(Layer 7 of the OSI model).

This allows them to make decisions based on specific applications, such as web
browsers or email clients.

48. Firewalls can act as proxy servers, intercepting traffic between clients and
servers. This can add an additional layer of security and anonymity for users.

49. Firewalls with NAT functionality can modify source and destination IP addresses
in network packets, which helps hide internal network structure from external
entities.

50. Many firewalls offer VPN functionality, allowing secure remote access and
encrypted communication over the internet.

51. Security policies define the behavior of the firewall. Policies specify what traffic
is allowed, blocked, or logged, and can be tailored to an organization's specific
security requirements.

52. Firewalls can be used to divide a network into segments with different security
requirements, enhancing overall security by limiting the attack surface.

BIG DATA AND CLOUD COMPUTING AND AI
1. Big Data refers to extremely large and complex datasets that cannot be easily
managed, processed, or analyzed using traditional data processing tools and
methods.

2. Big Data is associated with the "three Vs" concept: Volume, Velocity, and Variety.

3. Volume: Big Data involves vast amounts of data that can range from terabytes to
petabytes and beyond. This data is often generated at high velocity and
accumulates rapidly.

4. Traditional data is measured in megabytes and gigabytes, while big data is measured in petabytes and zettabytes. 1 petabyte = 10^15 bytes and 1 zettabyte = 10^21 bytes.

5. Velocity: Data in the Big Data context is generated and collected at high speeds.
This includes real-time data from sources like social media, sensors, IoT devices,
and more.

6. Variety: Big Data encompasses various types of data, including structured data
(e.g., databases), semi-structured data (e.g., XML and JSON), and unstructured
data (e.g., text, images, videos). It can also include data from diverse sources,
such as social media, logs, and sensor data.

7. The OSEMN model is a common framework used in data science projects to


guide the process of solving problems using data. OSEMN stands for Obtain,
Scrub, Explore, Model, and iNterpret.

8. Obtaining data in data science typically involves gathering relevant data


from various sources. Here are some common methods for obtaining data:

Publicly Available Datasets


APIs:
Web Scraping:
Surveys or Questionnaires:
Data Purchase or Subscription:
Data Collection

9. Data scrubbing, also known as data cleaning or data preprocessing, is the


process of cleaning and transforming raw data to ensure its quality and
consistency before analysis. Here are some examples of data scrubbing
techniques commonly used in data science:

Handling missing values, data standardization and normalization, handling inconsistent or erroneous values, removing duplicates, dealing with categorical variables, and feature engineering. The data processing is done by batch processing or stream processing.

10. Data Cleaning Example

Suppose the raw dataset has some blank fields, an average pulse of "9 000" (not possible, and treated as non-numeric because of the space separator), and one max-pulse observation recorded as "AF", which does not make sense.

We can use the dropna() function to remove the NaNs. axis=0 means that we want to remove all rows that have a NaN value:

health_data.dropna(axis=0, inplace=True)
print(health_data)

The result is a data set without NaN rows.
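
A self-contained version of the same cleaning step, with a small made-up health_data frame so the snippet runs on its own (requires the pandas library):

import pandas as pd
import numpy as np

# Tiny made-up dataset with missing values, like the problems described above
health_data = pd.DataFrame({
    "Duration":      [30, 45, np.nan, 60],
    "Average_Pulse": [80, 85, 90, np.nan],
    "Max_Pulse":     [120, 135, 128, 140],
})

health_data.dropna(axis=0, inplace=True)   # axis=0 drops rows containing NaN
print(health_data)                         # only the complete rows remain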

11. Explore the data

Exploring data in data science involves analyzing and visualizing the dataset to
gain insights and understand its characteristics. Here are some examples of
techniques commonly used in data exploration:
Descriptive statistics, data visualization, data profiling, exploratory data analysis (EDA), dimensionality reduction, time series analysis, data segmentation, and interactive exploration.

12. The Model phase


In this phase, you are trying to find the algorithm that best describes how known
input data can predict unknown output values.

13. Data interpretation:-

Refers to the process of using diverse analytical methods to review data and
arrive at relevant conclusions.
The interpretation of data helps researchers to categorize, manipulate, and
summarize the information in order to answer critical questions.

14. Managing and analyzing Big Data requires specialized technologies and
techniques. Some of the common tools and concepts associated with Big Data
are as follows.

15. Hadoop:
An open-source framework that allows for the distributed storage and processing
of large datasets across clusters of computers.
It consists of the Hadoop Distributed File System (HDFS) and the MapReduce
programming model.
It can handle structured and unstructured data.
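
To illustrate the MapReduce idea behind Hadoop (map every record to key/value pairs, then reduce by key), here is a tiny in-memory word-count sketch in Python; a real Hadoop job distributes the same two phases across a cluster:

from collections import defaultdict

documents = ["big data needs big tools", "hadoop processes big data"]

# Map phase: emit (word, 1) for every word in every document
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle/reduce phase: group by key and sum the counts
counts = defaultdict(int)
for word, n in mapped:
    counts[word] += n

print(dict(counts))   # e.g. {'big': 3, 'data': 2, 'needs': 1, 'tools': 1, ...}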

16. NoSQL databases:


No SQL stands for Not only SQL
These databases are designed to handle unstructured and semi-structured data,
making them suitable for Big Data applications. Examples include MongoDB,
Cassandra, and Redis.

17. YARN
stands for Yet Another Resource Negotiator, and it's a key component in Hadoop,
which is a framework for distributed storage and processing of large data sets.

YARN is responsible for managing and allocating resources (like CPU and
memory) across various applications running in a Hadoop cluster.

It allows multiple data processing engines, such as Apache MapReduce, Apache


Spark, and others, to coexist and share resources efficiently.

It supports both stream and batch processing.

18. SPARK
It's an open-source, distributed computing system that provides a fast and
general-purpose cluster-computing framework for big data processing.
Spark is designed for speed and ease of use, offering in-memory processing
capabilities that make it well-suited for iterative algorithms and interactive data
analysis.
It supports a variety of programming languages, including Scala, Java, Python,
and R, making it accessible to a wide range of developers.
Spark also includes libraries for data analysis (Spark SQL), machine learning
(MLlib), graph processing (GraphX), and stream processing (Spark Streaming),
making it a versatile tool for different big data use cases.
It can run on top of Hadoop Distributed File System (HDFS) and can also
integrate with other big data technologies.

19. Tableau
It is a data visualization and business intelligence software that allows users to
connect, visualize, and share data in a comprehensible and interactive way.
It's designed to help people see and understand their data by creating interactive
and shareable dashboards.

20. Power BI is another popular business intelligence tool that enables users to
visualize and analyze data, share insights across an organization, or embed them
in an app or website.

21. Key benefits of big data analytics are cost savings, faster product development, and market insights.

22. Key challenges of big data are making big data accessible, maintaining data quality, keeping data secure, and finding the right tools and platform.

23. Apache Storm is an open-source, distributed stream processing system used for
real-time big data processing and analytics. It is designed to process large
amounts of data in a distributed and fault-tolerant manner.

24. Cloudera is a company that provides a distribution of Apache Hadoop, an open-


source software framework for distributed storage and processing of large data
sets.

25. GridGain is indeed a distributed computing platform based on Java.


It provides in-memory computing solutions that can be used for real-time data
processing, analytics, and high-performance computing.

GridGain supports distributed data storage and processing across a cluster of
machines.

26. The technology that spacecurve is developing can discover patterns in


multidimensional geodata

27. Cloud computing is a technology that allows individuals and organizations to


access and use computing resources (such as servers, storage, databases,
networking, software, analytics, and more) over the internet, commonly referred
to as "the cloud."

Instead of owning and maintaining physical hardware and infrastructure, users


can leverage cloud services provided by third-party providers.

28. Key characteristics of cloud computing

On-Demand Self-Service:
Users can provision and manage computing resources as needed, without
requiring human intervention from the service provider.

Broad Network Access:


Cloud services are accessible over the internet from various devices, such as
laptops, smartphones, and tablets.

Resource Pooling:
Cloud providers pool computing resources to serve multiple customers.
Resources are dynamically assigned and reassigned based on demand.

Rapid Elasticity:
Resources can be rapidly and automatically scaled up or down to accommodate
changing workloads. Users pay only for the resources they consume.

Measured Service: Cloud computing resources are metered, and users are
billed based on their actual usage. This pay-as-you-go model offers cost
efficiency.

29. Service Models

Infrastructure as a Service (IaaS): Provides virtualized computing resources


over the internet. Users can rent virtual machines, storage, and networking
infrastructure.

Platform as a Service (PaaS): Offers a platform that includes the tools and
services needed to develop, test, and deploy applications without worrying
about underlying infrastructure.

Software as a Service (SaaS): Delivers software applications over the internet


on a subscription basis. Users access the software through a web browser
without needing to install or maintain it locally.

30. Deployment Models in Cloud computing

Public Cloud: Services are provided by third-party vendors over the internet and
are available to the general public.

Private Cloud: Cloud infrastructure is exclusively used by a single organization.


It can be managed by the organization itself or by a third party.

Hybrid Cloud: Combines public and private cloud models, allowing data and
applications to be shared between them. It provides greater flexibility and
optimization of existing infrastructure.

31. Leading cloud providers include Amazon Web Services (AWS), Microsoft Azure,
Google Cloud Platform (GCP), and others.

32. Key Components:

Virtualization: Enables the creation of virtual instances of computing resources,


allowing multiple virtual machines to run on a single physical machine.

Resource Pooling: Aggregates computing resources to serve multiple users.


Resources are dynamically assigned and reassigned based on demand.

Elasticity: Allows resources to be rapidly scaled up or down to accommodate


changing workloads.

Load Balancing: Distributes incoming network traffic across multiple servers to


optimize resource utilization and ensure high availability.

Fault Tolerance and Redundancy: Ensures system availability by duplicating


critical components and data across multiple locations.

33. Cloud Infrastructure Layers:

Physical Layer: Includes the actual hardware components such as servers,


storage devices, and networking equipment

Virtualization Layer: Enables the creation of virtual instances of computing


resources.

Management Layer: Provides tools for provisioning, monitoring, and managing


resources in the cloud environment.

34. Networking:
Internet: Cloud services are accessible over the internet.
Intranet: Allows communication between components within the cloud
infrastructure.
Load Balancers: Distribute incoming network traffic to optimize resource
utilization.

35. Security and Compliance:

Identity and Access Management (IAM): Manages user identities and their
access to resources.

Encryption: Protects data in transit and at rest.

Compliance Measures: Ensures adherence to regulatory requirements and


industry standards.

36. Service-Oriented Architecture (SOA):

Loose Coupling: Components are designed to operate independently,


promoting flexibility and scalability.

APIs (Application Programming Interfaces): Allow different services to


communicate with each other.

37. Monitoring and Management:

Logging and Auditing: Tracks activities and changes within the cloud
environment.

Resource Monitoring: Monitors performance and usage of computing


resources.

Automation: Enables automated provisioning, scaling, and management of
resources.

38. These components work together to provide the foundation for cloud computing
services, allowing users to access and utilize computing resources on-demand
with flexibility and scalability. The specific architecture may vary based on the
cloud service model, deployment model, and the provider's implementation.

39. Grid computing is often used interchangeably with the term "distributed
computing."
Both concepts involve the use of multiple computing resources to work on a
common task, and they share similarities in terms of resource sharing, parallel
processing, and collaboration across a network.

40. Grid computing mainly contains three nodes Control node , Provider and
user.

41. The control node in grid computing is a server that administers the network.

42. A provider in grid computing is a computer that contributes its resources to the network resource pool.

43. A user in grid computing is a computer that uses the resources of the network.

44. Grid computing is mainly used in ATM back-end infrastructure and in marketing research.

45. Utility computing is a trending IT service model in which infrastructure is charged on a pay-per-use basis. Large organizations like Google and Amazon have established their own utility services for computing, storage, and applications.

46. AI means Artificial Intelligence .

47. AI (Artificial Intelligence) is the ability of a machine to perform cognitive functions as humans do, such as perceiving, learning, reasoning, and solving problems. The benchmark for AI is the human level in terms of reasoning, speech, and vision.

48. Narrow AI: An artificial intelligence is said to be narrow when the machine can perform a specific task better than a human.

49. General AI: An artificial intelligence reaches the general state when it can
perform any intellectual task with the same accuracy level as a human.

50. Strong AI: An AI is strong when it can beat humans in many tasks

51. AI is used for medical image analysis, assisting in the diagnosis of diseases such
as cancer. It also helps in identifying treatment options and predicting patient
outcomes.

52. AI accelerates drug discovery by analyzing biological data and predicting the
effectiveness of potential drugs.

53. Language Translation: AI-powered translation services enable real-time


language translation.

54. Voice Assistants: NLP is used in voice-activated virtual assistants for tasks like
setting reminders, answering questions, and controlling smart devices.

55. Speech Recognition - It can handle different accents, slang words, background noise, changes in a human's voice due to a cold, etc.

56. Handwriting Recognition − The handwriting recognition software reads the text
written on paper by a pen or on screen by a stylus. It can recognize the shapes of
the letters and convert it into editable text

57. Following are the most common subsets of AI:


1. Machine Learning
2. Deep Learning
3. Natural Language processing
4. Expert System
5. Robotics
6. Machine Vision
7. Speech Recognition

58. Machine Learning (ML) is a subset of artificial intelligence (AI) that focuses on
developing algorithms and statistical models that enable computers to learn
from data and make predictions or decisions without explicit programming.

59. Supervised Learning: In supervised machine learning, the algorithm is trained


on a labeled dataset, where the input data is paired with corresponding output
labels. The goal is for the algorithm to learn the mapping between inputs and
outputs.
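
A minimal supervised-learning sketch using the scikit-learn library; the labelled data is invented, and the point is simply the fit/predict pattern:

from sklearn.linear_model import LogisticRegression

# Labelled training data: hours studied -> passed (1) or failed (0)
X_train = [[1], [2], [3], [8], [9], [10]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)        # learn the mapping from inputs to labels

print(model.predict([[2], [7]]))   # e.g. [0 1]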

60. Unsupervised Learning: Unsupervised learning involves training an algorithm
on an unlabeled dataset. The algorithm tries to find patterns, relationships, or
structures within the data without explicit guidance.

61. Reinforcement Learning: Reinforcement learning involves training an agent to


make sequential decisions in an environment to maximize a cumulative reward.
The agent learns from the consequences of its actions.

62. Natural language processing application enables a user to communicate with


the system in their own words directly. The Input and output of NLP applications
can be in two forms Speech and Text.

63. Deep learning is a subset of machine learning which provides the ability to
machine to perform human-like tasks without human involvement. It provides
the ability to an AI agent to mimic the human brain. DL can use both supervised
and unsupervised learning to train an AI agent

64. Neural Networks :- Deep learning models are typically based on artificial neural
networks, which are inspired by the structure and function of the human brain.
Neural networks consist of interconnected nodes organized into layers.

65. Deep learning involves training deep neural networks with multiple hidden
layers. These networks are capable of learning intricate patterns and
representations from data.

66. Feature learning :- Deep learning algorithms automatically learn hierarchical


representations of features from the raw input data.

67. CNNs ( Convolution Neural Networks ) are a type of deep neural network
designed for processing structured grid data, such as images.

68. RNNs ( Recurrent Neural Networks ) are designed to handle sequential data,
such as time-series or natural language. They have recurrent connections that
allow information to persist over time.

69. Deep Learning Algorithms work on deep neural networks, so it is called deep
learning. These deep neural networks are made of multiple layers.

The first layer is called an Input layer, the last layer is called an output layer, and
all layers between these two layers are called hidden layers.

In the deep neural network, there are multiple hidden layers, and each layer is
composed of neurons. These neurons are connected in each layer.

The input layer receives input data, and the neurons propagate the input signal to
its above layers.

The hidden layers perform mathematical operations on the inputs, and the processed data is forwarded to the output layer. The output layer returns the output to the user.
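
A minimal sketch of this layered structure using the Keras API bundled with TensorFlow; the layer sizes and activations are arbitrary choices for illustration:

from tensorflow import keras

# A tiny fully connected network: input layer, two hidden layers, output layer
model = keras.Sequential([
    keras.layers.Dense(8, activation="relu", input_shape=(4,)),   # hidden layer 1 (4 input features)
    keras.layers.Dense(8, activation="relu"),                     # hidden layer 2
    keras.layers.Dense(1, activation="sigmoid"),                  # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()   # prints the layer structure and parameter counts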

70. Robotics is a multidisciplinary field that involves the design, construction,


programming, and operation of robots.

71. Robots are automated machines that can perform tasks autonomously or under human control; hence they are classified as automatic or semi-automatic.

72. Machine vision refers to the technology and methods used to enable machines,
typically computers, to interpret and "see" the world through visual information.

73. Machine Vision involves the application of computer vision techniques to


extract meaningful insights and make decisions based on visual data.

74. Machine vision systems use cameras, sensors, and algorithms to analyze and
interpret images or video streams.
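
As a small illustration, the OpenCV library (listed later under computer vision tools) can load an image and extract edges in a few lines; the file names here are hypothetical:

import cv2

image = cv2.imread("machine_part.jpg")            # hypothetical input image (BGR)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)    # convert to grayscale
edges = cv2.Canny(gray, 100, 200)                 # detect edges with the Canny algorithm
cv2.imwrite("machine_part_edges.jpg", edges)      # save the edge map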

75. Speech recognition, also known as automatic speech recognition (ASR) or voice
recognition, is a technology that converts spoken language into written text.

76. ASR involves the use of algorithms and computational models to analyze and
interpret audio signals, identifying the words and phrases spoken by a user.

77. Speech recognition has applications in various domains, including


telecommunications, virtual assistants, transcription services, and accessibility
tools.

78. There are two types of speech recognition 1. Speaker Dependent 2. Speaker
Independent

79. TensorFlow: An open-source machine learning framework developed by Google


for building and training deep learning models.

80. PyTorch: An open-source deep learning framework developed by Facebook's AI


Research lab (FAIR) that is widely used for research and development.

81. Scikit-Learn: A simple and efficient tool for data analysis and machine learning
in Python, providing implementations of various algorithms.

82. Keras: An open-source high-level neural networks API written in Python that runs
on top of TensorFlow, Theano, or Microsoft Cognitive Toolkit.

83. MXNet: A flexible and efficient deep learning library that supports both symbolic
and imperative programming, developed by Apache Software Foundation.

84. NLP tools


NLTK (Natural Language Toolkit):
GPT (Generative Pre-trained Transformer) Models:

85. Computer Vision Tools


OpenCV (Open Source Computer Vision Library):
YOLO (You Only Look Once):
ImageAI:

86. Data science and analytics tools include Jupyter Notebook and Tableau.

LATEST TRENDS IN WEB TECHNOLOGIES

1. A Uniform Resource Locator, or URL, is a reference or address used to access


resources on the internet. It's a string of characters that provides the means to
locate and retrieve a particular resource.

2. Example URL:
https://www.example.com:8080/path/to/resource?param=value#section
Scheme: https
Domain/Host: www.example.com
Port: 8080
Path: /path/to/resource
Query Parameters: param=value
Fragment/Anchor: #section
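
The same example URL can be split into these components with Python's standard urllib.parse module:

from urllib.parse import urlparse, parse_qs

url = "https://www.example.com:8080/path/to/resource?param=value#section"
parts = urlparse(url)

print(parts.scheme)                 # https
print(parts.hostname, parts.port)   # www.example.com 8080
print(parts.path)                   # /path/to/resource
print(parse_qs(parts.query))        # {'param': ['value']}
print(parts.fragment)               # section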

3. URLs were first described in IETF RFC 1630; the current generic syntax is standardized in IETF RFC 3986.

4. HTTP is the communication protocol for the Web (see IETF RFC 2616). HTTP is a simple request-reply protocol.

5. Hypertext Markup Language, or HTML, is the standard markup language used to
create and design documents on the World Wide Web.

6. HTML is the foundation of web pages and provides a structure for the content on
the web. HTML uses a system of tags and attributes to define elements within a
document.

7. Important HTML tags include <html>, <head>, <title>, <body>, <h1> to <h6>, <p>, <a>, <img>, <div>, and <table>.

8. LAMP is an acronym that represents a popular and open-source software stack used for building and deploying dynamic web applications:
L - Linux, A - Apache, M - MySQL, P - PHP (PHP: Hypertext Preprocessor), Python, or Perl

9. Google Docs is a cloud-based document editing and collaboration platform


developed by Google. It is part of the larger suite of productivity tools known as
Google Workspace (formerly G Suite), which includes applications like Google
Sheets, Google Slides, Gmail, and others. Google Docs allows users to create,
edit, and store documents online and collaborate with others in real-time.

10. Del.icio.us (pronounced "delicious") was a social bookmarking service that


allowed users to save, organize, and share bookmarks to web pages.

11. Wikipedia has become one of the most popular sites on the Internet. It is used by many as an authoritative source of information, from finding definitions of technical terms to explanations of current events.

12. Flickr provides a platform for users to upload, store, and share their photos and
videos. It supports a wide range of media formats.

13. MySpace was one of the earliest and most influential social networking
platforms, allowing users to create personal profiles, connect with friends, and
share multimedia content.

14. A blog system, short for weblog system, refers to a platform or software that
facilitates the creation, publication, and management of blogs. A blog is a type of
website or online publication where individuals or groups share information,
opinions, and updates in a conversational and informal style.

15. According to Wikipedia, "A blog is a Web site where entries are made in journal style and displayed in a reverse chronological order."

16. Wiki systems refer to collaborative platforms that allow multiple users to create,
edit, and organize content collaboratively. The term "wiki" comes from the
Hawaiian word for "quick," reflecting the ease with which users can edit and
update content. Wikis are commonly used for knowledge-sharing,
documentation, and collaborative projects.

17. Key components of web applications: studying these emerging applications, some features stand out as key common principles.
• Search • Tagging • User participation • User interaction and collaboration

18. HTML stands for Hyper Text Markup Language. It is the standard markup
language for creating Web pages. It describes the structure of a Web page.

19. JavaScript makes HTML pages more dynamic and interactive. Common uses for
JavaScript are image manipulation, form validation, and dynamic changes of
content

20. CSS stands for Cascading Style Sheets. It is the language we use to style a Web
page. CSS describes how HTML elements are to be displayed on screen, paper,
or in other media

21. PHP is mainly focused on server-side scripting.

22. There are three main areas where PHP scripts are used 1) Server side scripting
2 ) Command Line scripting and 3) Writing Desktop applications

23. Java is a programming language and computing platform. Java is used to develop
mobile apps, web apps, desktop apps, games and much more. It was originally
designed for embedded network applications running on multiple platforms. It is
a portable, object-oriented, interpreted language

24. JDK is a software development environment used for making applets and Java
applications. The full form of JDK is Java Development Kit.

25. JRE ( Java Runtime Environment ) is a piece of software that is designed to run
other software

26. Java Virtual Machine (JVM) is an engine that provides a runtime environment to
drive the Java Code or applications. It converts Java bytecode into machine
language. JVM is a part of the Java Run Environment (JRE).

27. Python is a high-level, general-purpose language developed by Guido van Rossum, starting in 1989.

28. High-level languages require compilers or interpreters for translation into machine code.

29. Low-level languages require an assembler for direct translation into machine language.

30. Understanding, analyzing, and processing source code is termed translation.

31. An interpreter translates and executes the source code statement by statement.

32. Compiler checks the source code and if the source code is error free it
generates object code.

33. Python is an interpreted language. At runtime an interpreter processes the code


and executes it.

34. An identifier is a name used to identify a variable, function, class, module, or


other object.

35. An identifier starts with a letter (A to Z or a to z) or an underscore (_) followed by


zero or more letters, underscores, and digits (0 to 9). Case is significant in
Python: lowercase and uppercase letters are distinct. Python does not allow
punctuation characters such as @, $, and % within identifiers.

36. In Python there are 35 keywords (reserved words), each with a fixed meaning or functionality. Some of the keywords are True, False, None, break, continue, class, global, pass, return, and, not, or, is, in, if, elif, else, while, etc.

37. All keywords start with lowercase letters except True, False, and None (they start with a capital letter).
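
This can be checked from the interpreter itself; the exact count depends on the Python version:

import keyword

print(len(keyword.kwlist))   # number of reserved words in this Python version (35 in Python 3.8)
print(keyword.kwlist[:5])    # e.g. ['False', 'None', 'True', 'and', 'as']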

38. Main features in Python


Functional programming features ,
Object oriented programming features,
Scripting language features ,
Modular programming features.

39. Important applications of Python language are


Machine learning ,
Artificial Intelligence ,
Data Analysis applications ,
IoT ,
Networking applications ,
Desktop applications etc.

40. Advantages of Python


Python is easy to learn ,
it is freeware and opensource ,
platform independent ,
dynamically typed ,
interpreted language ,
both procedure and object oriented ,
embedded and
extensive library .

41. Flavours of Python are


Cpython ,
Jython ,
IronPython ,
PyPy ,
RubyPython ,
Anaconda

42. IDE means integrated development environment used for programmers to


develop software.

43. IDLE means integrated development and learning environment and it is 100
percent pure Python using tkinter GUI tool kit . The Python installer for
Windows contains the IDLE module by default.

44. Creation of a Python environment using a URL:

Open https://colab.research.google.com/
Sign in with your Gmail account.
Click on Notebook in the pop-up menu, and the Python environment is created.

45. Creation of a Python environment using Anaconda:

Download Anaconda.
Set the path.
Download Jupyter Notebook.

MULTIPLAY BROADBAND

1. Broad Band Definition :-

"An always-on" data connection that is able to support interactive services including Internet access and has the capability of a minimum download speed of 256 kilobits per second (kbps) to an individual subscriber from the Point of Presence (POP) of the service provider.

2. Professional activities with Broad Band

Telecommuting (access to corporate networks and systems to support


working at home on a regular basis)

Video conferencing (one-to-one or multi-person video telephone calls)

Home-based business (including web serving, e-commerce with customers,


and other financial functions)

Home office (access to corporate networks and e-mail to supplement work at


a primary office location)

3. Entertainment Activities:

Web surfing (as today, but at higher speeds with more video content)

Video-on-demand (movies and rerun or delayed television shows)

Video games (interactive multi-player games)

4. Consumer Activities:

Shopping (as today, but at higher speeds with more video content)

Telemedicine (including remote doctor visits and remote medical analyses


by medical specialists)

Distance learning (including live and pre-recorded educational


presentations)

Public services (including voting and electronic town hall meetings)

Information gathering (using the Web for non-entertainment purposes)

Photography (editing, distributing, and displaying of digital photographs)

Video conferencing among friends and family

5. Electronic commerce services are provided by BSNL by installing different


network elements in a phased manner under different projects of NIB .They
are ;

I) Project 1 – MPLS core network


II) Project 2 – Access network
III) 2.1 - Narrowband access
IV) 2.2 - Broadband access
V) Project 3 – Messaging, Storage, EMS etc.

6. Services of Project 2.2

• Primary source of Internet bandwidth for retail users for application such as
Web browsing, e-commerce etc

• Multicast video services, video on demand etc through Broadband Remote


Access Server (BRAS).

• Allow wholesale BRAS ports to be assigned to smaller ISPs through the


franchises model wherein the later has a separate network of DSLAMs, AAA,
LDAP through a revenue scheme of BSNL.

• Dialup VPN (VPDN) user connects to NIB-II through the Narrow band RAS
and connected to its private network through a secure L2TP tunnel
established between Narrowband RAS and Broadband RAS.

• Support for both prepaid and postpaid Broadband services.

7. Components of Broad Band Access Network

• Broad Band Remote Access Server (BBRAS)

• Gigabit and Fast Ethernet Aggregation Switches (LAN Switches)

• Digital Subscriber Line Access Multiplexers (DSLAMs)

• SSSS/SSSC (Subscriber Service Selection System/ Centre)

• Servers for AAA, LDAP at Pune

• Provisioning and configuration management at NOC

8. Broadband Network Architecture (NIB 2);


It is a layered architecture as Access , Distribution, Metro Core and Core.

Core Contains MPLS based IP infrastructure

Distribution + Metro Core From Tier 2 Switch onward (towards the network)

Access is from DSLAM to user

9. Multiplay expansion of DSL Broad band Network of BSNL

DSLAM continue to work in star topology

Uplink bandwidth of DSLAM is min. 1+1 GE

The Aggregation Network for Multiplay will be in Ring Topology based on RPR
instead of the existing tree structure of Project 2.2. (for second layer of
aggregation, RPR is used).

Connection Admission Control and hierarchical QoS are implemented. New applications like automated subscriber installation and ongoing support are introduced.

The Traffic aggregation to Core Backbone happens across various cities

10. Network Elements and servers of BB Multiplay Project

Hardware –

CPE ----- UTStarcom Contract Manufacturer SemIndia


DSLAM---UTStarcom
RPR-------UTStarcom
OCLAN--- ZTE
BNG------- Redback
Servers---- SUN

Miscellaneous Components
Converters, DSL Tester, Desktop/Laptop, UPS

Applications
PMS ---- Metasolve
Subscriber management --- Motive
Subscriber Self Service Centre--- Redback
Internet Policy Server – NetSweeper
AAA/SSSS -- Elitecore
DNS/DHCP -- ISC
eMS for above Hardware
Database – Oracle

11. Application / Server Infrastructure

1. NOC &DR -NOC: SUN HW EMS,PMS, SSSS,SSSC,AAA, Sub Automation ,


All Application S/W etc

2. Regional POP : SUN HW EMS , SSSS,SSSC,AAA, Application S/W etc

3. Aggregation Network : BNG, RPR T1, RPRT2, OC LAN switch

4. Access Network : DSLAM, CPEs

5. Other: DSL Tester, UPS, Laptop, Client PCs

12. Services on BB-Multiplay

TVoIP - Television over Internet Protocol (also called IPTV)

VOIP The technology used to transmit voice conversations over a data


network using the Internet Protocol.

13. NMS Based on the Five Layer Model of ITU.

NMS consist of following components:

F: Fault
C: Configuration
A: Accounting and Asset Management
P: Performance
S: Security.

END

