OSI is short for the Open Systems Interconnection reference model.
The OSI model was designed as an open network interconnection model to overcome
interconnection difficulties and improve efficiency.
The OSI model soon became the basic model for computer network communication. It
complies with the following design principles:
There is a clear edge between layers for easy understanding.
Each layer implements a specific function without affecting each other.
Each layer serves its upper layer and is served by its lower layer.
Layer division helps define the international standard protocol.
There should be enough layers that no two layers need to perform the same
function.
In the OSI model, the data exchanged at each peer layer is called a protocol data unit
(PDU). The data at the application layer is called an application protocol data unit
(APDU), the data at the presentation layer a presentation protocol data unit (PPDU), and
the data at the session layer a session protocol data unit (SPDU). Generally, the data at
the transport layer is called a segment, the data at the network layer a packet, the data at
the data link layer a frame, and the data at the physical layer bits.
Encapsulation means that a network node prepends a protocol-specific header to the data
to be transmitted; at some layers, a trailer is also appended to the end of the data for
processing. Each layer in the OSI model encapsulates data to ensure that the data
properly reaches the destination and is received and processed by the terminal host.
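The layer-by-layer encapsulation described above can be sketched as follows. The header and trailer strings are illustrative placeholders, not real protocol formats:

```python
# Minimal sketch of layered encapsulation: each layer prepends its own
# header to the PDU handed down from the layer above. Header and trailer
# contents here are illustrative placeholders, not real protocol formats.
def encapsulate(app_data: bytes) -> bytes:
    segment = b"TCPH" + app_data         # transport layer adds its header
    packet = b"IPH." + segment           # network layer adds its header
    frame = b"ETH." + packet + b"FCS"    # data link layer adds header and trailer
    return frame

frame = encapsulate(b"hello")  # b"ETH.IPH.TCPHhelloFCS"
```

The receiver reverses the process, stripping one header (and trailer) per layer.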
The physical layer handles the raw bit streams transmitted over channels. The physical
layer is the basis of the OSI model, providing the mechanical, electrical, and functional
characteristics required for data transmission. The physical layer does not care about the
meaning of each bit (0 or 1), but about how to transmit bit streams to the peer end over
different physical links. In other words, the physical layer cares about signals, for example,
amplifying signals so that they travel farther, but does not care whether a given bit
stream represents an address or a piece of application data. Typical physical-layer devices
are repeaters and hubs.
The data link layer sets up data links between adjacent nodes on the basis of the bit
stream service provided by the physical layer. The data link layer controls the physical
layer and detects and corrects possible errors to create an error-free link for the network
layer. In addition, the data link layer can monitor traffic. (This feature is optional; traffic
can be monitored by the data link layer or the transport layer.)
The network layer examines the network topology to determine the best route for packet
transmission and forwarding. The key task is selecting routes for packets from the source
to the destination. Devices at the network layer compute the best routes to destinations
using routing protocols and determine the next network device to which each packet
should be forwarded. They then use network-layer protocols to encapsulate packets and
send the data to the next device using the service provided by the lower layer.
Procedure for processing network data streams:
1. When an application on a network host needs to send a packet to a destination on
another network, an interface of the router on the same network as the host
receives the frame.
2. The data link layer of the router checks the frame, determines the carried data type
at the network layer, removes the frame head, and sends the data to the
corresponding network layer.
3. The network layer checks the packet header to determine the network segment of
the destination and obtains the next-hop interface by looking up the routing table.
4. The data link layer of the next-hop interface adds a frame header to the packet,
encapsulates the packet as a frame, and sends it to the next hop. Forwarding of
each packet follows this process.
5. After reaching the network of the destination host, the packet is encapsulated as
the frame at the data link layer of the destination network and sent to the target
host.
6. After the destination host receives the packet, the frame header is removed by the
data link layer and the packet header is removed by the network layer. Then, the
packet is sent to the corresponding protocol module.
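Step 3 above, the routing-table lookup, can be sketched as a longest-prefix match. The prefixes and interface names below are made-up examples:

```python
import ipaddress

# Hypothetical routing table: prefix -> outgoing interface. A router picks
# the most specific (longest) matching prefix for each destination.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth2",   # default route
}

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return ROUTES[best]

print(lookup("10.1.2.3"))  # eth1 (the /16 beats the /8)
print(lookup("8.8.8.8"))   # eth2 (only the default route matches)
```

Real routers use specialized data structures (such as tries) for this lookup, but the matching rule is the same.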
Due to its openness and ease of use, TCP/IP is widely used and has become the de facto
standard protocol suite.
The difference between the TCP/IP model and the OSI model is that the presentation and
session layers of the OSI model fall under the application layer in TCP/IP. The TCP/IP
model is therefore divided into four layers, from the bottom up: data link layer, network
layer, transport layer, and application layer. In some documents, the TCP/IP model is
divided into five layers, with the physical layer as an independent layer.
The sender submits data to the application to send to the destination. The data
encapsulation process is as follows:
1. The data is sent to the application layer first and added with application-layer
information.
2. After being processed by the application layer, the data is sent to the transport
layer and a transport-layer header is added (the transport-layer protocol is TCP or
UDP).
3. After being processed by the transport layer, the packet is sent to the network layer
and added with network-layer information (such as IP protocol).
4. After being processed by the network layer, the packet is sent to the data link layer
and added with data link-layer information (such as Ethernet, 802.3, PPP, and
HDLC). Then, the data is transmitted to the peer end in bit stream format. (In this
process, processing methods vary with device types. In general, switches process
data link-layer information, whereas routers process network-layer information. The
data is restored only when it reaches the destination.)
After reaching the destination, the packet is decapsulated. The procedure is as follows:
1. The packet is sent to the data link layer. After parsing, the data link-layer
information is removed, and the network-layer protocol is identified, such as the IP
protocol.
2. The network layer removes the IP header and identifies the transport-layer
protocol, such as TCP or UDP.
3. The transport layer removes its header and delivers the data to the corresponding
application based on the port number.
4. The application layer processes the restored data.
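The decapsulation sequence can be sketched as the mirror image of encapsulation. The header sizes here are toy values, not real protocol lengths:

```python
# Toy decapsulation mirroring the steps above: each layer strips its own
# header before handing the payload up. The 4-byte headers and 3-byte
# trailer are illustrative only, not real protocol sizes.
def decapsulate(frame: bytes) -> bytes:
    packet = frame[4:-3]   # data link layer strips its header and trailer
    segment = packet[4:]   # network layer strips its header
    data = segment[4:]     # transport layer strips its header
    return data            # restored application data

data = decapsulate(b"ETH.IPH.TCPHhelloFCS")  # b"hello"
```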
Each layer of the TCP/IP model has protocols for enabling network applications. Some of
the protocols do not have a clear-cut layer of their own. For example, ICMP, IGMP, ARP,
and RARP fall under the network layer at which the IP protocol runs. However, in some
descriptions, ICMP and IGMP are placed above the IP protocol, while ARP and RARP are
placed below it.
Application layer:
HTTP: used to access web pages.
FTP: used for file transfer, allowing data transmission from one host to another.
DNS: enables conversion from host domain names to IP addresses.
Transport layer:
TCP: provides reliable connection-oriented communication services to applications,
applying to the applications that require response. Currently, many popular
applications use TCP.
UDP: provides connectionless communication without guaranteeing transmission
reliability. It is suitable for transmitting small amounts of data; reliability is
guaranteed by the application layer.
A socket is identified by a quintuple: source IP address, destination IP address, protocol,
source port, and destination port. The protocol number for TCP is 6, and that for UDP is 17.
Destination port: In general, a commonly used application service has a standard
(well-known) port, for example, the HTTP, FTP, and Telnet services. Less popular
applications use ports that are generally defined by their developers; in this case,
the registered service ports on one server must be unique.
Source port: The source port is typically assigned in ascending order starting from
1024. Some operating systems use a greater initial port number and assign port
numbers in ascending order from it. Because the source port is unpredictable, it is
rarely referenced in ACL policies.
To provide services for external users, all application servers are required to register their
ports in TCP/UDP during startup to respond to service requests. Through the quintuple,
application servers can respond to any concurrent service requests and ensure that each
link is unique in the system.
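The protocol numbers mentioned above (6 for TCP, 17 for UDP) are exposed as constants by Python's standard socket module, and the quintuple of an established connection can be read back from the socket object itself. The helper name below is our own; getsockname() and getpeername() are standard socket calls:

```python
import socket

# IANA protocol numbers, as exposed by the standard socket module.
assert socket.IPPROTO_TCP == 6
assert socket.IPPROTO_UDP == 17

# Sketch: recover the quintuple of a connected TCP socket. quintuple() is
# a hypothetical helper; it only wraps two standard socket calls.
def quintuple(sock: socket.socket):
    src_ip, src_port = sock.getsockname()
    dst_ip, dst_port = sock.getpeername()
    return (socket.IPPROTO_TCP, src_ip, src_port, dst_ip, dst_port)
```

Because the source port differs for every client connection, each concurrent request maps to a distinct quintuple, which is how the server keeps the links apart.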
In the TCP/IP stack, data link-layer protocols sit at the lowest layer. Data link-layer
frames currently come in two formats, the Ethernet (Ethernet II) and 802.3 frame
formats, of which the Ethernet format is the more widely used. The 802.3 format is more
complex than the Ethernet format: it carries a length field in place of the Ethernet type
field, along with additional (LLC) fields. Both formats have the same minimum and
maximum frame lengths.
Data link-layer protocols are classified into LAN and WAN protocols. This document
describes only one LAN protocol; for WAN protocols, refer to other Internet
documentation. LAN protocols include the Ethernet and token ring protocols.
Data link-layer protocols implement the following functions:
1. Coordinate data link parameters, such as duplex and rate.
2. Encapsulate the frame header (and possibly a frame trailer) of the transmitted
packet, identify the frame header of the received packet, and decapsulate packets
destined for the node itself.
3. Most data link-layer protocols support error detection but do not support error
correction. Error correction is generally provided by the protocols at the transport
layer, such as TCP.
Version: This field contains 4 bits and indicates the IP version number. The current
protocol version is IPv4.
Header length: This field contains 4 bits and indicates the length of the IP packet
header in 32-bit words; multiplying the value by 4 gives the length in bytes.
Type of service: This field contains 8 bits. The first 3 bits define the packet priority, and
the last 5 bits respectively indicate delay (D), throughput (T), reliability (R),
transmission cost (M), and a reserved bit (0).
Total length: This field contains 16 bits. It indicates the length of the entire IP packet, in
bytes, including the header and data. Therefore, an IP packet can contain up to 65,535
bytes.
Identifier: This field contains 16 bits and functions with the flag and fragment offset fields
to fragment large upper-layer data packets.
Flag: This field contains 3 bits. The first bit is reserved. The second bit is DF (Don’t
Fragment). If it is set to 1, the data packet cannot be fragmented. If it is set to 0, the data
packet can be fragmented. The third bit is MF (More Fragments). If it is set to 0, it is the
last fragment. If it is set to 1, it indicates more fragments.
Fragment offset: This field contains 13 bits and indicates the position of the fragment in
the original datagram, in units of 8 bytes.
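The fields above can be pulled out of a raw IPv4 header with straightforward bit operations. A minimal sketch (parsing only the fields discussed, not the full header):

```python
import struct

# Sketch: extract the fields described above from the first 8 bytes of a
# raw IPv4 header. The fragment offset is the low 13 bits of the
# flags/offset word, in units of 8 bytes.
def parse_ipv4_header(hdr: bytes) -> dict:
    ver_ihl, tos, total_len, ident, flags_frag = struct.unpack("!BBHHH", hdr[:8])
    return {
        "version": ver_ihl >> 4,
        "header_length": (ver_ihl & 0x0F) * 4,       # IHL is in 32-bit words
        "total_length": total_len,
        "identifier": ident,
        "df": (flags_frag >> 14) & 1,                # Don't Fragment flag
        "mf": (flags_frag >> 13) & 1,                # More Fragments flag
        "frag_offset": (flags_frag & 0x1FFF) * 8,    # offset in bytes
    }

sample = bytes.fromhex("4500003c12344000")  # version 4, IHL 5, 60 bytes, DF set
fields = parse_ipv4_header(sample)
```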
The UDP packet format differs from the TCP packet format. A TCP header contains more
bytes than a UDP header and therefore supports more functions, such as reliability.
The TCP packet format is described as follows:
Sequence number (SN): The sender chooses an initial sequence number when
encapsulating a TCP packet; the sequence numbers of subsequent packets then increase
monotonically. The recipient can check whether all packets have been received based on
the sequence numbers.
Acknowledgement number: After receiving a TCP packet, the recipient verifies it and
returns an acknowledgement number, by which the sender knows that the packet has
been received.
Source port and destination port: identify and distinguish application processes on source
and destination devices.
Data offset: indicates the length of the TCP header. If the option field is not present, the
header length is 20 bytes.
Reserved: Reserved bits.
Control flag: includes six flags:
URG: if set to 1, the packet carries urgent data.
ACK: if set to 1, the acknowledgement number is valid.
PSH: if set to 1, the receiver should deliver the data to the application immediately.
RST: if set to 1, the connection is reset.
SYN: if set to 1, the packet requests connection establishment and synchronizes
sequence numbers.
FIN: if set to 1, the sender has finished sending data.
Establishing a TCP connection is a three-way handshake, in which both communicating
parties confirm the initial sequence numbers (SNs) for subsequent orderly
communication. The three-way handshake is as follows:
1. The client sends a SYN packet with initial SN a.
2. After receiving the SYN packet, the server returns a SYN packet that acknowledges
the client's SYN. The returned acknowledgement number is the SN of the packet
that the server expects to receive next, namely a+1. The returned SYN packet also
carries the server's initial SN b.
3. After receiving the returned SYN packet, the client responds with an ACK packet
containing the SN of the packet that it expects to receive next, namely b+1.
After the preceding process, a TCP connection is established, and the client and server can
communicate.
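The sequence-number arithmetic of the handshake can be modeled in a few lines. This is a toy model of the exchange, not a real TCP implementation; a and b stand for the client's and server's initial sequence numbers:

```python
# Toy model of the sequence-number exchange in the three-way handshake.
# Each "packet" is just a dict; a and b are the two initial SNs.
def three_way_handshake(a: int, b: int):
    syn = {"flags": "SYN", "seq": a}
    syn_ack = {"flags": "SYN/ACK", "seq": b, "ack": syn["seq"] + 1}
    ack = {"flags": "ACK", "seq": a + 1, "ack": syn_ack["seq"] + 1}
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake(a=100, b=300)
print(syn_ack["ack"])  # 101, i.e. a+1
print(ack["ack"])      # 301, i.e. b+1
```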
The four-way handshake process for terminating a TCP connection is as follows:
1. The host that sends the first FIN packet proactively closes the connection; the host
that receives this FIN packet passively closes the connection.
2. After receiving the FIN packet, the receiving host returns an ACK packet whose
acknowledgement number is the received SN plus 1. A FIN packet consumes one
SN, just as a SYN packet does.
3. The receiving TCP stack also delivers an end-of-file indication to its application.
The application then closes its side of the connection, causing this host to send a
FIN packet of its own.
4. The original host must return an acknowledgement and set the acknowledgement
number to the received SN plus 1.
With the rapid development of the Internet, the TCP/IP protocol suite has become the
most widely used network interconnection protocol. However, because security received
insufficient attention when the protocols were designed, the suite carries some security
risks. The Internet was first applied in research environments for a few trusted user
groups, so network security was not a major concern, and the vast majority of protocols
in the TCP/IP stack do not provide the necessary security mechanisms. For example, they
do not provide the following functions:
1. Authentication
2. Confidentiality protection
3. Data integrity protection
4. Protection against denial of service
5. QoS
In the TCP/IP protocol stack, each layer has its own protocols. These protocols were not
designed with security in mind and therefore lack the necessary security mechanisms. As
a result, more and more security threats and attacks target these protocols, and the
security problems of the TCP/IP stack have become increasingly obvious.
Equipment damage generally does not cause information leaks but usually interrupts
network communication. It is a crude, physical means of attack.
As the high reliability of network services is increasingly emphasized, equipment damage
attacks deserve more attention. Besides deliberate vandalism, physical damage to devices
caused by natural disasters, such as earthquakes and typhoons, also needs to be
considered.
Among common network devices, hubs and repeaters work similarly: all packets received
on one port are forwarded to all other ports. If an attacker's host can connect to the hub
or repeater, the attacker can use sniffing tools to capture all the traffic.
For wireless networks, because data is transmitted as radio signals, an eavesdropper can
easily capture the signals.
Taking advantage of the MAC address learning mechanism of switches, attackers send
packets with forged source MAC addresses to the switch, causing it to learn wrong
mappings between MAC addresses and ports. As a result, packets that should be
delivered to the correct destination are sent to the attacker's host instead. The attacker
can install sniffing software on the host to collect information for further attacks.
You can configure static entries on the switch to bind the MAC address to the correct port
to prevent MAC spoofing attacks.
MAC flooding attacks also exploit the MAC address learning mechanism of switches.
Attackers send packets with forged source MAC addresses to a switch so that it learns
bogus MAC entries. Because the switch's MAC address table has a limited capacity, a
large number of such attack packets exhausts the table. Normal packets then fail to
match any MAC entry and are flooded to all other ports in the same VLAN, allowing the
attacker to intercept them.
You can configure static MAC entries or limit the number of MAC entries to prevent MAC
flooding attacks.
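The effect of a full MAC table can be shown with a toy model. Capacity, port numbers, and MAC strings below are all made-up values:

```python
# Toy switch MAC table with a fixed capacity. Once forged source MACs
# fill the table, genuine addresses can no longer be learned, and frames
# addressed to them are flooded out of every other port.
class MacTable:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.table = {}  # MAC address -> port

    def learn(self, mac: str, port: int):
        if mac in self.table or len(self.table) < self.capacity:
            self.table[mac] = port  # learn a new entry or refresh an old one

    def forward(self, dst_mac: str):
        return self.table.get(dst_mac, "flood to all ports")

switch = MacTable(capacity=3)
for i in range(1000):                      # MAC flooding attack
    switch.learn(f"forged-{i}", port=5)
switch.learn("aa:bb:cc:dd:ee:ff", port=1)  # the victim's real MAC is never learned
print(switch.forward("aa:bb:cc:dd:ee:ff")) # flood to all ports
```

Limiting the number of entries learned per port, or pinning static entries, closes exactly this gap.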
The ARP implementation considers only normal service interaction and does not verify
improper or malicious behaviors. For example, after receiving an ARP reply, a host does
not verify whether it ever sent the corresponding ARP request; it directly updates its ARP
cache with the IP-to-MAC mapping carried in the reply.
ARP spoofing: Attackers send a great number of forged ARP request and reply packets to
attack network devices. ARP spoofing attacks are classified into ARP buffer overflow
attacks and ARP DoS attacks.
ARP flood (ARP scanning): When attackers use a tool to scan hosts in their own network
segment or across segments, the USG looks up its ARP entries before sending response
packets. If the MAC address of the destination does not exist, the ARP module of the
USG sends an ARP Miss message to the upper-layer software, requesting it to send an
ARP request to obtain the destination MAC address. A large number of scanning packets
produces a great number of ARP Miss messages; USG resources are then exhausted
processing them, affecting the handling of normal services.
Note: ARP spoofing can be implemented using ARP requests or replies.
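One defensive countermeasure is to watch observed IP-to-MAC bindings and raise an alert when a known IP suddenly claims a different MAC, a common symptom of ARP spoofing. A minimal sketch; the addresses below are made-up examples:

```python
# Defensive sketch: flag IP addresses whose claimed MAC address changes,
# which is a common symptom of ARP spoofing (not proof by itself, since
# NICs can legitimately be replaced).
def detect_spoofing(observations):
    bindings, alerts = {}, []
    for ip, mac in observations:
        if ip in bindings and bindings[ip] != mac:
            alerts.append((ip, bindings[ip], mac))  # (ip, old MAC, new MAC)
        bindings[ip] = mac
    return alerts

alerts = detect_spoofing([
    ("192.168.1.1", "aa:aa:aa:aa:aa:aa"),
    ("192.168.1.2", "bb:bb:bb:bb:bb:bb"),
    ("192.168.1.1", "cc:cc:cc:cc:cc:cc"),  # gateway IP suddenly claims a new MAC
])
```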
IP spoofing is implemented based on the trust relationship between hosts. The trusted
hosts can access destination hosts without authorization.
The entire IP spoofing procedure is summarized as follows:
1. Temporarily paralyze the trusted host to prevent it from interfering with the attack.
2. Connect to a port of the target host to guess the ISN base value and its increment
rule.
3. Forge the source address as the trusted host's address and send a segment that
carries the SYN flag to request a connection.
4. Wait for the target host to send a SYN+ACK packet to the paralyzed host.
5. Pretending to be the trusted host, send an ACK packet to the target host. The sent
segment carries the guessed SN of the target host, namely ISN+1.
6. Set up the connection and send a command request.
The attacker sends ICMP request packets (with the source IP addresses set to the victims'
IP addresses) to broadcast IP addresses, luring all hosts on the network into returning
ICMP response packets to the victims. As a result, the victims are overwhelmed, and the
links are congested.
ICMP Redirect Packet Attack
If a router detects that a host's route to a destination is not optimal, it sends an
ICMP redirect packet to the host, requesting that the host change its route. At the
same time, the router forwards the initial datagram toward the destination. ICMP is
not a routing protocol, but it can redirect data flows (to the correct gateway).
In ICMP redirect packet attacks, the attacker proactively sends ICMP redirect
packets to the victim host so that it can no longer send packets to the correct
gateway. This type of attack can be launched from both the LAN and the WAN.
To defend against ICMP redirect packet attacks, modify the registry to disable the
ICMP redirect packet processing capability.
ICMP Unreachable Packet Attack
After receiving an ICMP unreachable packet indicating that a network or host is
unreachable, certain systems conclude that subsequent packets cannot reach that
destination and therefore close the connection to the host or network. Knowing
this, attackers forge ICMP unreachable packets to break the connections between
victims and their destinations.
To defend against ICMP unreachable packet attacks, modify the registry to disable
the ICMP unreachable packet processing capability.
IP address sweeping usually serves as the prelude for other attacks. Attackers usually use IP
sweep to obtain the topology and live systems on the target network to prepare for further
attacks.
Most TCP spoofing attacks occur during the establishment of TCP connections. A false
TCP connection is set up using the trust relationship of a network service between hosts.
The attacker may act as a victim to obtain information from the server. The process is
similar to IP spoofing.
Example: A trusts B, and C is an attacker hoping to act as B to set up a connection with A.
1. C incapacitates B, for example, by flooding, redirection, or crashing.
2. C sends a TCP packet to A using B’s address as the source address.
3. A returns a TCP SYN/ACK packet to B, carrying sequence number (SN) S.
4. C does not receive sequence number S directly but must use S+1 as the
acknowledgement in its response to finish the three-way handshake. C can use
either of the following methods to obtain sequence number S:
C monitors the SYN/ACK packet and reads the SN from it.
C guesses the SN based on the operating system characteristics of A.
5. C uses the obtained sequence number S to respond to A. The handshake is
complete, and a false connection is established.
Features of SYN flood attacks:
The attacker starts a three-way handshake by sending a packet with the SYN flag
set.
The attacked host replies with a SYN-ACK packet.
The attacker does not respond.
The attacked host keeps retransmitting SYN-ACK packets because it receives no
ACK from the peer. However, the attacked host supports only a limited number of
half-open TCP connections; once this number is exceeded, new connections fail to
be established.
To resolve this problem, close half-open connections.
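The exhaustion of the half-open connection table can be shown with a toy model. The backlog size and addresses below are made-up values, and real stacks add timeouts and defenses such as SYN cookies:

```python
# Toy model of the half-open connection table described above. Each
# unanswered SYN occupies a slot; once the limit is reached, further
# connection attempts, including legitimate ones, are refused.
class Listener:
    def __init__(self, max_half_open: int):
        self.max_half_open = max_half_open
        self.half_open = set()  # (src_ip, src_port) pairs awaiting the final ACK

    def on_syn(self, src):
        if len(self.half_open) >= self.max_half_open:
            return "refused"    # backlog exhausted by the flood
        self.half_open.add(src)
        return "SYN-ACK sent"

    def on_ack(self, src):
        self.half_open.discard(src)  # handshake completed, slot released

srv = Listener(max_half_open=128)
for port in range(200):              # flood of SYNs that are never ACKed
    srv.on_syn(("6.6.6.6", port))
print(srv.on_syn(("10.0.0.1", 40000)))  # refused: a legitimate client is locked out
```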
UDP is connectionless, so stateful inspection cannot be applied to it. You can enable
proactive learning of and statistics collection on UDP packets and analyze the patterns of
the UDP packets that hosts send. If a host sends a large number of identical or similar
UDP packets, or UDP packets matching specific patterns, the host is considered an
attacker.
You can set a rate limit for UDP packets, so that packets exceeding the threshold are
discarded.
After the parameters of port scanning attack defense are set, the firewall inspects the
incoming TCP, UDP, and ICMP packets. In addition, the firewall checks whether the
destination port of a packet and the destination port of the previous packet from the same
source address are the same. If the destination ports are different, the number of
anomalies increases by one. When the number of anomalies exceeds the specified
threshold, the packets from the source IP address are regarded as port scanning attack
packets, and this source IP address is blacklisted.
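The anomaly counter described above can be sketched in a few lines. The threshold and addresses are made-up values, and a real firewall would also age counters out over time:

```python
# Sketch of the port-scan heuristic described above: for each source IP,
# count how often consecutive packets target a different destination port;
# past a threshold, treat the source as a scanner and blacklist it.
def detect_scanners(packets, threshold: int):
    last_port, anomalies, blacklist = {}, {}, set()
    for src, dst_port in packets:
        if src in last_port and dst_port != last_port[src]:
            anomalies[src] = anomalies.get(src, 0) + 1
            if anomalies[src] > threshold:
                blacklist.add(src)
        last_port[src] = dst_port
    return blacklist

packets = [("1.2.3.4", port) for port in range(20, 30)]  # scanner sweeps 10 ports
packets += [("5.6.7.8", 80)] * 10                        # normal host reuses one port
scanners = detect_scanners(packets, threshold=5)         # {"1.2.3.4"}
```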
A buffer is a region of memory used to store data. When a program attempts to put data
into a memory area that is too small for it, a buffer overflow occurs.
When an attacker writes a string whose length exceeds the buffer space and implants it
into the buffer, there are two possible results: the long string overwrites adjacent
memory cells, causing the program to fail or even the system to crash; or the attacker
exploits the vulnerability to execute arbitrary commands, possibly gaining root privileges
on the system.
A typical Web application consists of three layers:
Client - browser/JavaScript/Applet
Presentation layer - HTTP Server + Server Side script
Service logic and data storage layer – implementation of service logic and database
The defining feature of passive attacks is that the attacker monitors information in order
to steal confidential data. Data owners and legitimate users cannot detect such passive
attacks, so the focus is on prevention rather than detection.
In general, encryption is used to protect information confidentiality.
Active attacks refer to forging or falsifying packet headers or data payloads in service
data streams, imitating legitimate users to access service resources without authorization
or to destroy service resources. To defend against active attacks, analyze and inspect
data streams and apply technical measures, such as data source authentication, integrity
checking, and anti-DoS technology, to ensure proper service operation.
Man-in-the-middle attacks are a type of indirect attack. Depending on how they are
carried out (such as stealing or falsifying information), they can have the features of
either passive or active attacks.
Stealing information: When host A exchanges data with host B, the attacker's host
intercepts the information, keeps a copy, and forwards the data (or only monitors it
without forwarding). In this way, the attacker's host can easily obtain confidential
information from hosts A and B, while A and B are entirely unaware.
Falsifying information: The attacker's host acts as the intermediary in the data
exchange between hosts A and B. To A and B, they appear to communicate directly
with each other; in fact, there is a transit host between them, the attacker's host.
Generally, the attacker inserts information into, or modifies information in, the data
streams between hosts A and B to launch an attack.
Attackers may use various technologies to intercept information, such as DNS spoofing
and network stream monitoring.