CYBER SECURITY Unit-2
Message Authentication Code (MAC)
Essentially, a MAC is an encrypted checksum generated on the underlying message that is sent along
with the message to ensure message authentication.
The process of using MAC for authentication is depicted in the following illustration −
The sender uses some publicly known MAC algorithm, inputs the message and the secret key
K and produces a MAC value.
Similar to a hash, a MAC function also compresses an arbitrarily long input into a fixed-length
output. The major difference between a hash and a MAC is that the MAC uses a secret key during
the compression.
The sender forwards the message along with the MAC. Here, we assume that the message is
sent in the clear, as we are concerned with providing message origin authentication, not
confidentiality. If confidentiality is required, then the message needs encryption.
On receipt of the message and the MAC, the receiver feeds the received message and the
shared secret key K into the MAC algorithm and re-computes the MAC value.
The receiver now checks equality of the freshly computed MAC with the MAC received from the
sender. If they match, the receiver accepts the message and assures himself that it was sent by
the intended sender.
If the computed MAC does not match the MAC sent by the sender, the receiver cannot
determine whether it is the message that has been altered or the origin that has been
falsified. As a bottom line, the receiver can safely assume that the message is not genuine.
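To make this check concrete, here is a minimal sketch of the same flow using HMAC (a common MAC construction) with Python's standard hmac and hashlib modules; the key and message values are placeholders, not part of the original text.

    import hmac
    import hashlib

    # Shared secret key K, agreed between sender and receiver beforehand (placeholder value)
    key = b"pre-shared-secret-key"
    message = b"Transfer 100 USD to account 42"

    # Sender: compute the MAC over the message with the shared key and send both
    mac = hmac.new(key, message, hashlib.sha256).digest()

    # Receiver: recompute the MAC over the received message with the same key and compare
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if hmac.compare_digest(expected, mac):
        print("MAC matches: message accepted as authentic")
    else:
        print("MAC mismatch: message altered or origin falsified")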
Limitations of MAC
There are two major limitations of MAC, both due to its symmetric nature of operation −
o It can provide message authentication only among pre-decided legitimate users who have
a shared key.
o MAC technique does not provide a non-repudiation service. If the sender and
receiver get involved in a dispute over message origination, MACs cannot provide a
proof that a message was indeed sent by the sender.
o Though no third party can compute the MAC, the sender could still deny having sent the
message and claim that the receiver forged it, as it is impossible to determine which
of the two parties computed the MAC.
Both these limitations can be overcome by using the public key based digital signatures discussed in
the following section.
A digital signature is a technique that binds a person or entity to digital data. This binding
can be independently verified by the receiver as well as any third party.
A digital signature is a cryptographic value that is calculated from the data and a secret key known only
to the signer.
In the real world, the receiver of a message needs assurance that the message belongs to the sender,
and the sender should not be able to repudiate the origination of that message. This requirement is
very crucial in business applications, since the likelihood of a dispute over exchanged data is high.
As mentioned earlier, the digital signature scheme is based on public key cryptography. The model of
digital signature scheme is depicted in the following illustration −
The following points explain the entire process in detail −
Generally, the key pairs used for encryption/decryption and signing/verifying are different.
The private key used for signing is referred to as the signature key and the public key as the
verification key.
Signer feeds data to the hash function and generates hash of data.
Hash value and signature key are then fed to the signature algorithm which produces the
digital signature on given hash. Signature is appended to the data and then both are sent to
the verifier.
Verifier feeds the digital signature and the verification key into the verification algorithm. The
verification algorithm gives some value as output.
Verifier also runs same hash function on received data to generate hash value.
For verification, this hash value and output of verification algorithm are compared. Based on
the comparison result, verifier decides whether the digital signature is valid.
Since the digital signature is created with the signer's private key, which no one else possesses,
the signer cannot later repudiate signing the data.
It should be noticed that instead of signing data directly by signing algorithm, usually a hash of data
is created. Since the hash of data is a unique representation of data, it is sufficient to sign the hash in
place of data. The most important reason of using hash instead of data directly for signing is
efficiency of the scheme.
Let us assume RSA is used as the signing algorithm. As discussed in public key encryption chapter, the
encryption/signing process using RSA involves modular exponentiation.
Signing large data through modular exponentiation is computationally expensive and time
consuming. The hash of the data is a relatively small digest of the data, hence signing a hash is more
efficient than signing the entire data.
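As an illustration of this hash-then-sign flow, the following is a minimal sketch using the third-party Python cryptography package (an assumption; any RSA library offering PSS padding would serve); the key size and message are placeholders.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Signer's key pair: the private key is the signature key, the public key the verification key
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    data = b"Purchase order #1234"

    # Sign: the library hashes the data (SHA-256) and signs the digest, not the entire data
    signature = private_key.sign(
        data,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # Verify: raises InvalidSignature if the data or the signature was altered
    public_key.verify(
        signature,
        data,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("Signature verified")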
Apart from the ability to provide non-repudiation of the message, the digital signature also provides
message authentication and data integrity. Let us briefly see how this is achieved by the digital
signature −
Message authentication − When the verifier validates the digital signature using the public key
of the sender, he is assured that the signature has been created only by the sender, who possesses
the corresponding secret private key, and by no one else.
Data Integrity − In case an attacker has access to the data and modifies it, the digital
signature verification at the receiver end fails. The hash of the modified data and the output
provided by the verification algorithm will not match. Hence, the receiver can safely reject the
message, assuming that data integrity has been breached.
Non-repudiation − Since it is assumed that only the signer has knowledge of the
signature key, only the signer can create a unique signature on given data. Thus the receiver can
present the data and the digital signature to a third party as evidence if any dispute arises in the
future.
By adding public-key encryption to digital signature scheme, we can create a cryptosystem that can
provide the four essential elements of security namely − Privacy, Authentication, Integrity, and Non-
repudiation.
This makes it essential for users employing PKC for encryption to seek digital signatures along with
encrypted data to be assured of message authentication and non-repudiation.
This can be achieved by combining digital signatures with an encryption scheme. Let us briefly discuss how
to achieve this requirement. There are two possibilities: sign-then-encrypt and encrypt-then-sign.
However, a crypto system based on sign-then-encrypt can be exploited by the receiver to spoof the
identity of the sender and send that data to a third party. Hence, this method is not preferred. The process
of encrypt-then-sign is more reliable and widely adopted. This is depicted in the following illustration
−
The receiver after receiving the encrypted data and signature on it, first verifies the signature using
sender’s public key. After ensuring the validity of the signature, he then retrieves the data through
decryption using his private key.
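A minimal sketch of this encrypt-then-sign order, again using the cryptography package (an assumption); the Fernet symmetric key stands in for whatever data-encryption scheme is actually used, and both ends are shown in one script purely for illustration.

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Encrypt the message first (confidentiality) ...
    data_key = Fernet.generate_key()   # in practice this key is shared securely with the receiver
    ciphertext = Fernet(data_key).encrypt(b"order: 10 units to Bob")

    # ... then sign the ciphertext (authentication, integrity, non-repudiation)
    signature = sender_key.sign(
        ciphertext,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # Receiver: verify the signature on the ciphertext first, then decrypt
    sender_key.public_key().verify(
        signature,
        ciphertext,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    plaintext = Fernet(data_key).decrypt(ciphertext)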
Cryptography - Applications
In real life, cryptography plays an important role. Cryptography is all about keeping our data and
messages secure, so that only the sender and the intended recipient can understand them. It is mostly
about encryption, which means changing normal text into ciphertext (an encoded form) and then
changing it back to its normal form when received. Cryptography is also used to hide or bind
information in media, for example by embedding digital signatures or watermarks.
Cryptography is most commonly used to keep messages secret when we send them to somebody: the
message is encrypted when it is sent and decrypted when it is received, so that only the intended
person can read it. This is a very simple and basic example of an application of cryptography. In this
tutorial we will see various applications of cryptography and how cryptography is used in our daily lives.
Let us divide the applications as per the cryptography principles - Confidentiality, Integrity,
Authenticity and Non-Repudiation −
Confidentiality
Basically we need Confidentiality in secure messaging and data encryption. Let us see both the
categories one by one.
Secure Messaging/Transmission
Secure messaging means sending messages, emails and files in such a way that it will be received
safely without being hacked or modified by the hackers. This is very important as we do not want
anyone else to read the private information or see the sensitive information.
Before sending, the message will be encrypted (turned into an unreadable format). This
makes it difficult for any third party to read.
When the message reaches the authorized recipient, they will use their secret key to
decrypt the message (turn it back into its original form).
To make sure only the intended receiver can decrypt the message both the parties will
exchange secret keys before sending messages. These keys are like special codes which are
used to lock or unlock the messages.
There are two categories of secure messaging: end-to-end encryption and email
encryption.
Messaging apps like WhatsApp, Telegram, Instagram and others use end-to-end
encryption, which means only the intended recipient can read the messages.
Storage/Data Encryption
Data encryption is like keeping information in a locked box before sending it via the internet or
saving it on a device. It keeps our sensitive data secure from hackers.
First we need to apply an encryption algorithm which is a set of mathematical rules. The
algorithm then encodes the data into unrecognizable form.
Then we will use a key or password to encrypt and decrypt the data.
After that the data will be transformed into ciphertext with the help of encryption algorithm
and the encryption key.
Lastly, to read the data the ciphertext will be transformed back into plaintext with the help of
a decryption key.
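A minimal sketch of these steps with a symmetric cipher, using the Fernet recipe from the third-party cryptography package (an assumption; any symmetric encryption scheme illustrates the same idea):

    from cryptography.fernet import Fernet

    # The key used for both encryption and decryption
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # The encryption algorithm and key transform the plaintext into ciphertext
    ciphertext = cipher.encrypt(b"account number: 1234-5678")

    # The same key transforms the ciphertext back into plaintext
    plaintext = cipher.decrypt(ciphertext)
    print(plaintext)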
Examples
By using secure messaging and data encryption, we can communicate and store information online
with confidence and our private data will remain private.
Integrity
Now we will discuss Integrity in secure transmission/messaging and secure data storage.
Secure Transmission/Messaging
Some network users are not as concerned about privacy as they are about integrity. In electronic
funds transfers, the funds transferred from one account to another are usually in the public domain.
If an active wiretapper can produce fake transfers, money can be diverted illegally, and inaccuracies in
individual bits can result in millions of dollars in incorrect credits or debits. Cryptographic techniques are
therefore often used to ensure that intentional or accidental manipulation of transmitted data does not
go undetected.
Traditionally, the main means of assuring the integrity of stored data has been access control. Access
measures include lock and key systems, guards, and other physical or logical measures.
With the recent advent of computer viruses this has changed dramatically, and the use of cryptographic
checksums to ensure the integrity of stored data is becoming more widespread.
Example
Document Verification
Software Integrity
Password Storage
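A minimal sketch of such an integrity check using Python's standard hashlib module; the file name and the published reference digest are placeholders.

    import hashlib

    def sha256_of_file(path):
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Compare against the digest published by the vendor (placeholder value)
    published_digest = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
    if sha256_of_file("installer.bin") == published_digest:
        print("Integrity check passed")
    else:
        print("Integrity check failed: the file has been modified")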
Authenticity
Now we will see applications in which authenticity is a must. Let us see the applications below −
Document Authentication
Digital signatures are mostly used to authenticate online documents such as contracts, agreements and
certificates. When a person signs a document digitally, it means that the document is authorised
by them. This process prevents unauthorised access to and alteration of the document.
Email Authentication
When a person sends an email, the mail system can use a digital signature to verify the authenticity of
the sender. Using built-in functionality, the sender signs the email with their private key,
and the receiver uses the sender's public key to verify the signature. This process verifies
that the email came from the claimed sender and was not altered in transit.
Biometric Authentication
Biometric authentication methods like fingerprint scanners are used to authenticate users based on
their unique characteristics. When a user scans their fingerprint to unlock a device or to
access a system, the system verifies the user's identity against the saved biometric data.
Non-repudiation
Here are some examples of non-repudiation applications in cryptography −
Financial Transactions
In financial transactions, non-repudiation is essential: it means that once a party has sent some data,
they cannot later deny having sent it. Disputes of this kind are common in financial transactions, for
example when a payer denies having authorised a payment that the receiver claims was made.
Both the sender and the receiver can digitally sign transaction records, providing a non-repudiable
record of the transaction's authorisation.
Legal Contracts
Digital signatures are widely used to sign online legal contracts and agreements. By digitally
signing a document, the signer cannot later deny their involvement or the terms they agreed
upon. This provides assurance to all parties involved and prevents disputes over the
authenticity of signatures.
Secure Communication
In email encryption systems that support non-repudiation, the sender's identity is authenticated by a
digital signature, which also ensures the integrity of the message. The receiver can verify the digital
signature to confirm the sender's identity and prevent repudiation of the message.
Applications of Cryptography
Digital Currencies: To protect transactions and prevent fraud, digital currencies like Bitcoin
also use cryptography. Complex algorithms and cryptographic keys are used to safeguard
transactions, making it nearly impossible to tamper with or forge the transactions.
Secure web browsing: Cryptography provides security for online browsing, shielding users
from eavesdropping and man-in-the-middle attacks. The Secure Sockets Layer (SSL) and
Transport Layer Security (TLS) protocols use public key cryptography to encrypt data sent
between the web server and the client, establishing a secure channel for communication.
A firewall is a type of network security device that filters incoming and outgoing network traffic with
security policies that have previously been set up inside an organization. A firewall is essentially the
wall that separates a private internal network from the open Internet at its very basic level.
Types of Network Firewall
Network firewalls are devices that are used to protect private networks from unauthorized
access. A firewall is a security solution for the computers or devices that are connected to a
network; it can be implemented either as hardware or as software. It monitors and
controls the incoming and outgoing traffic (the amount of data moving across a computer network at
any given time).
The major purpose of the network firewall is to protect an inner network by separating it from the
outer network. An inner Network can be simply called a network created inside an organization and a
network that is not in the range of an inner network can be considered an Outer Network.
Packet Filters
It is a technique used to control network access by monitoring outgoing and incoming packets and
allowing them to pass or halt based on the source and destination Internet Protocol (IP) addresses,
protocols, and ports. This firewall is also known as a static firewall.
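A minimal, illustrative sketch of packet-filter logic in Python (real packet filters run in the kernel or in dedicated hardware; the rules below are placeholders):

    import ipaddress

    # Each rule matches on source network, protocol and destination port
    RULES = [
        {"action": "allow", "src": "10.0.0.0/8", "proto": "tcp", "dport": 443},  # internal HTTPS
        {"action": "deny",  "src": "0.0.0.0/0",  "proto": "tcp", "dport": 23},   # block telnet
    ]
    DEFAULT_ACTION = "deny"  # default-deny policy

    def filter_packet(src_ip, proto, dport):
        """Return the action of the first matching rule, otherwise the default policy."""
        for rule in RULES:
            if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
                    and proto == rule["proto"] and dport == rule["dport"]):
                return rule["action"]
        return DEFAULT_ACTION

    print(filter_packet("10.1.2.3", "tcp", 443))    # allow
    print(filter_packet("203.0.113.5", "tcp", 23))  # deny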
Stateful Inspection Firewalls
This is also a type of packet filtering that is used to control how data packets move through a firewall. It
is also called dynamic packet filtering. These firewalls can inspect whether a packet belongs to a
particular session or not. A firewall of this type permits communication only if the session is properly
established between the two endpoints; otherwise it blocks the communication.
Application Layer Firewalls
These firewalls can examine application layer (OSI model) information such as an HTTP request. If such a
firewall finds a suspicious application that could harm the network or is not safe for it, that traffic gets
blocked right away.
Next-generation Firewalls
These firewalls are also called intelligent firewalls. They can perform all the tasks that are
performed by the other types of firewalls discussed above, and on top of that they include
additional features like application awareness and control, integrated intrusion prevention, and
cloud-delivered threat intelligence.
Circuit-level Gateways
A circuit-level gateway is a firewall that provides security for User Datagram Protocol (UDP) and
Transmission Control Protocol (TCP) connections and works at the session layer of the Open Systems
Interconnection (OSI) model, between the transport and application layers.
Software Firewall
The software firewall is a type of computer software that runs on our computers. It protects our
system from any external attacks such as unauthorized access, malicious attacks, etc. by notifying us
about the danger that can occur if we open a particular mail or if we try to open a website that is not
secure.
Hardware Firewall
A hardware firewall is a physical appliance that is deployed to enforce a network boundary. All
network links crossing this boundary pass through this firewall, which enables it to inspect
both inbound and outbound network traffic and enforce access controls and other
security policies.
Cloud Firewall
These are software-based, cloud-deployed network devices. This cloud-based firewall protects a
private network from any unwanted access. Unlike traditional firewalls, a cloud firewall filters data at
the cloud level.
Requirement: A software firewall needs to be installed on every individual system on a network,
whereas a hardware firewall needs only one device to be installed for the whole network.
Advantages
Monitors Network Traffic : A network firewall monitors and analyzes traffic by inspecting
whether the traffic or packets passing through our network is safe for our network or not. By
doing so, it keeps our network away from any malicious content that can harm our network.
Stops Viruses: Viruses can come from anywhere, such as an insecure website or a
spam message, and a virus attack can easily shut down a whole network. It therefore becomes
important to have a strong defense system (a firewall, in this case); in such a
situation, a firewall plays a vital role.
Better Security: By monitoring and analyzing the network from time to time and
establishing a malware-free, virus-free, spam-free environment, a network firewall
provides better security for our network.
Increase Privacy: By protecting the network and providing better security, we get a network
that can be trusted.
Disadvantages
Cost: Depending on the type of firewall, it can be costly, usually, the hardware firewalls are
more costly than the software ones.
Restricts User: Restricting users can be a disadvantage for large organizations, because of its
tough security mechanism. A firewall can restrict the employees to do a certain operation
even though it’s a necessary operation.
Issues With The Speed of The Network: Since the firewalls have to monitor every packet
passing through the network, this can slow down operations needed to be performed, or it
can simply lead to slowing down the network.
Maintenance: Firewalls require continuous updates and maintenance with every change in
networking technology, as new viruses that can damage your system are being developed
continuously.
User Management in Cybersecurity
User management is a fundamental aspect of cybersecurity, serving as the first line of defense in
protecting sensitive information and maintaining system integrity. It involves the processes and
policies that govern user access to an organization’s IT resources. Effective user management ensures
that only authorized users can access specific data and systems, thereby reducing the risk of
unauthorized access, data breaches, and other security incidents.
1. Authentication: Verifying the identity of users before granting access to resources. This often
involves passwords, biometrics, multi-factor authentication (MFA), or other methods to
ensure that users are who they claim to be (a small password-hashing sketch is given after this list).
2. Authorization: Determining the level of access granted to authenticated users. This ensures
that users can only access the resources necessary for their roles.
3. Accountability: Tracking user activities within the system. This involves logging and
monitoring actions to ensure that users are held accountable for their behavior, which is
critical for detecting and responding to security incidents.
4. Provisioning and De-provisioning: Managing user accounts throughout their lifecycle. This
includes creating accounts when new employees join the organization and disabling accounts
when they leave or change roles.
5. Role Management: Assigning and managing user roles and permissions. This involves
defining roles based on job functions and assigning appropriate permissions to each role.
6. Compliance: Ensuring that user management practices comply with relevant laws,
regulations, and industry standards. This includes maintaining proper documentation and
conducting regular audits.
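As a small illustration of the authentication component, here is a password-hashing sketch using Python's standard hashlib module; the iteration count and passwords are only indicative, not prescribed by the original text.

    import hashlib
    import hmac
    import os

    ITERATIONS = 200_000  # illustrative work factor

    def hash_password(password, salt=None):
        """Derive a salted PBKDF2 hash that can be stored instead of the plain password."""
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password, salt, stored_digest):
        """Recompute the hash for a login attempt and compare in constant time."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, stored_digest)

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("wrong-password", salt, stored))                # False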
User management plays a crucial role in access control and overall security strategy. Here are some
key ways it contributes:
1. Least Privilege Principle: By implementing the principle of least privilege, user management
ensures that users only have the minimum level of access necessary to perform their job
functions. This minimizes the potential damage that could occur if an account is
compromised (a minimal role-based sketch of this idea is given after this list).
2. Password Management: Effective user management includes policies and tools for managing
passwords, such as enforcing strong password policies, regular password changes, and
secure password storage.
3. Incident Response: In the event of a security incident, user management enables quick
identification of compromised accounts and swift action to contain the breach. This includes
disabling affected accounts and analyzing user activity logs to understand the scope of the
incident.
4. Compliance and Auditing: User management supports compliance with regulations such as
GDPR, HIPAA, and SOX by ensuring that user access is properly controlled and documented.
Regular audits help identify and address potential security gaps.
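The role-based, least-privilege idea mentioned above can be sketched in a few lines of Python; the roles, users and permissions here are purely illustrative.

    # Illustrative role definitions: each role carries only the permissions its job function needs
    ROLE_PERMISSIONS = {
        "hr_staff": {"read_employee_records"},
        "payroll":  {"read_employee_records", "process_payroll"},
        "it_admin": {"create_account", "disable_account", "read_audit_logs"},
    }

    USER_ROLES = {
        "alice": {"hr_staff"},
        "bob":   {"payroll"},
    }

    def is_authorized(user, permission):
        """Grant access only if one of the user's roles carries the requested permission."""
        return any(permission in ROLE_PERMISSIONS.get(role, set())
                   for role in USER_ROLES.get(user, set()))

    print(is_authorized("alice", "read_employee_records"))  # True
    print(is_authorized("alice", "process_payroll"))        # False: least privilege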
Implementing effective user management policies requires the use of various tools to automate and
streamline processes. Here are some tools that can be used:
1. Identity and Access Management (IAM) Systems: Tools like Okta, Microsoft Azure Active
Directory (Azure AD), and IBM Security Identity Governance and Intelligence (IGI) help
manage user identities, authentication, and authorization. These tools centralize user
management, enforce policies, and provide robust access control mechanisms.
2. Password Management Tools: Tools such as LastPass, 1Password, and HashiCorp Vault help
enforce strong password policies, store passwords securely, and facilitate regular password
changes.
3. User Activity Monitoring and Logging: Solutions like Splunk, SolarWinds Log & Event
Manager, and ELK Stack (Elasticsearch, Logstash, Kibana) provide comprehensive logging and
monitoring of user activities, helping detect and respond to suspicious behavior.
4. Role-Based Access Control (RBAC) Tools: Tools like AWS Identity and Access Management
(IAM), Role-Based Access Control for MongoDB, and Azure RBAC help define and manage
roles and permissions based on job functions.
5. Provisioning and De-provisioning Tools: Tools like SailPoint, One Identity, and ManageEngine
ADManager Plus automate the process of creating, updating, and disabling user accounts,
ensuring timely management of user access throughout their lifecycle.
6. Compliance and Auditing Tools: Solutions like Varonis, Netwrix Auditor, and Compliance
Manager help ensure compliance with regulatory requirements by providing audit trails,
reporting capabilities, and automated compliance checks.
A well-defined user management policy is essential for implementing effective user management
practices. Below is an example of such a policy:
Scope:
This policy applies to all employees, contractors, vendors, and any other individuals who require
access to the organization’s IT resources.
Policy:
1. Account Creation:
User accounts will be created only upon receipt of a completed and authorized user access
request form.
The request form must specify the required access level and the user’s role within the
organization.
New accounts will be configured with the minimum necessary permissions based on the
principle of least privilege.
2. Authentication:
All user accounts must use strong passwords that meet the organization’s password
complexity requirements.
Multi-factor authentication (MFA) must be enabled for all accounts accessing sensitive
systems or data.
3. Authorization:
Access permissions will be granted based on the user’s role and responsibilities.
Regular reviews of user access permissions will be conducted to ensure compliance with the
principle of least privilege.
4. Accountability:
User accounts will be promptly disabled when an employee leaves the organization or
changes roles.
Regular audits of active accounts will be conducted to identify and remove inactive or
unnecessary accounts.
6. Role Management:
7. Compliance:
User management practices will comply with relevant laws, regulations, and industry
standards.
Regular audits will be conducted to ensure compliance and address any identified issues.
8. Incident Response:
In the event of a security incident involving user accounts, immediate steps will be taken to
contain and mitigate the impact.
Enforcement:
Violations of this policy may result in disciplinary action, up to and including termination of
employment or contract.
Review:
This policy will be reviewed annually and updated as necessary to ensure its continued effectiveness.
What is a VPN?
VPN stands for Virtual Private Network. It provides online privacy and anonymity by building a
private network from a public internet connection. A VPN masks your internet protocol (IP) address so
that your online actions are virtually untraceable. It transfers data through a virtual tunnel. A VPN
creates an encrypted tunnel to protect your personal data and communications, hide your IP
address, and let you safely use public Wi-Fi networks. Moreover, the privacy and security provided
by VPNs are far better than Wi-Fi hotspots.
Features of VPN
Applications of VPN
VPN can easily bypass geographic restrictions on websites or streaming audio and video.
Every device that has internet connectivity has its own IP address, which can be traced back to that
device. With the help of a VPN, it becomes easy to browse without leaving any traces
behind.
A VPN gives users an effective way to hide their personal and sensitive information, and it can be
set up easily to prevent such information from being exposed.
Working of VPNs
VPNs are becoming increasingly popular among users for anonymity and a reduced digital trace. These
VPNs work in the following way −
Connection Management − As a user, you can connect to a VPN network operated by a VPN
service provider. These create a virtual space for safe browsing.
Secure Channel − When a connection is established, the user is guided through a secure
channel or tunnel, and he/she can surf the internet without any digital imprints.
Data Hiding − All the data that is surfed or transmitted from your end is encrypted in the
channel and stays away from other users or your internet service provider (ISP).
Masking IP Address − Different servers exist in a VPN service. These servers exist at different
locations worldwide. VPN service providers assign encrypted IP addresses to these servers,
and your IP address is hidden from the outside world.
Types of VPNs
There are many types of VPNs available to users worldwide. These can be mainly categorized as the
following −
1. Personal VPN − This is the lowest level of VPN service. It is mainly used for small enterprises
or individuals. These are generally used for small applications only.
2. Enterprise VPN − This type of VPN is used by medium and large organizations for employee
utility as well as company applications like remote requirements or data security.
3. Smartphone VPN − This is the newest category of VPN in the market. These have found
popularity because of easy access and multiple options for mobile users.
Benefits of Using a VPN
There are many benefits of using a VPN service. Some of these benefits are given in the next section
−
Data Protection − You can use a VPN to secure your data from online tracking or leakage.
This is due to the tunnelling feature of VPN services.
Identity Hiding − When you use a VPN, your online trace is not tracked, and you can surf the
internet without any worries.
Unlimited Access − VPN services allow unrestricted access to material from all around the
world, including geo-restricted content.
Secure Connections − When you connect to Public networks, there is a high chance of data
leakage. But when you use a VPN service, this problem is eradicated.
Remote Access − VPN allows users remote access depending upon the number of available
locations given by the service provider.
In order to choose the perfect VPN, one must ask the right questions of their VPN provider.
1. Remote Access VPN
A Remote Access VPN permits a user to connect to a private network and access all its services and
resources remotely. The connection between the user and the private network occurs through the
Internet and the connection is secure and private. Remote Access VPN is useful for home users and
business users both. An employee of a company, while he/she is out of station, uses a VPN to
connect to his/her company’s private network and remotely access files and resources on the private
network. Private users or home users of VPN, primarily use VPN services to bypass regional
restrictions on the Internet and access blocked websites. Users aware of Internet security also use
VPN services to enhance their Internet security and privacy.
2. Site-to-Site VPN
A Site-to-Site VPN is also called a Router-to-Router VPN and is commonly used in large
companies. Companies or organizations, with branch offices in different locations, use Site-to-site
VPN to connect the network of one office location to the network at another office location.
Intranet based VPN: When several offices of the same company are connected using Site-to-
Site VPN type, it is called as Intranet based VPN.
Extranet based VPN: When companies use Site-to-site VPN type to connect to the office of
another company, it is called as Extranet based VPN.
3. Cloud VPN
A Cloud VPN is a virtual private network that allows users to securely connect to a cloud-based
infrastructure or service. It uses the internet as the primary transport medium to connect the remote
users to the cloud-based resources. Cloud VPNs are typically offered as a service by cloud providers
such as Amazon Web Services (AWS) and Microsoft Azure. It uses the same encryption and security
protocols as traditional VPNs, such as IPsec or SSL, to ensure that the data transmitted over the VPN
is secure. Cloud VPNs are often used by organizations to securely connect their on-premises
resources to cloud-based resources, such as cloud-based storage or software-as-a-service (SaaS)
applications.
4. Mobile VPN
Mobile VPN is a virtual private network that allows mobile users to securely connect to a private
network, typically through a cellular network. It creates a secure and encrypted connection between
the mobile device and the VPN server, protecting the data transmitted over the connection. Mobile
VPNs can be used to access corporate resources, such as email or internal websites, while the user is
away from the office. They can also be used to securely access public Wi-Fi networks, protecting the
user’s personal information from being intercepted. Mobile VPNs are available as standalone apps or
can be integrated into mobile device management (MDM) solutions. These solutions are commonly
used by organisations to secure their mobile workforce.
5. SSL VPN
SSL VPN (Secure Sockets Layer Virtual Private Network) is a type of VPN that uses the SSL protocol to
secure the connection between the user and the VPN server. It allows remote users to securely
access a private network by establishing an encrypted tunnel between the user’s device and the VPN
server. SSL VPNs are typically accessed through a web browser, rather than through a standalone
client. This makes them easier to use and deploy, as they don’t require additional software to be
installed on the user’s device. It can be used to access internal resources such as email, file servers,
or databases. SSL VPNs are considered more secure than traditional IPsec VPNs because they use the
same encryption protocols as HTTPS, the secure version of HTTP used for online transactions.
6. PPTP
PPTP (Point-to-Point Tunneling Protocol) is a type of VPN that uses a simple and fast method for
implementing VPNs. It creates a secure connection between two computers by encapsulating the
data packets being sent between them. PPTP is relatively easy to set up and doesn’t require any
additional software to be installed on the client’s device. It can be used to access internal resources
such as email, file servers, or databases. PPTP is one of the oldest VPN protocols and is supported on
a wide range of operating systems. However, it is considered less secure than other VPN protocols
such as L2TP or OpenVPN, as it uses a weaker encryption algorithm and has been known to have
security vulnerabilities.
7. L2TP
L2TP (Layer 2 Tunneling Protocol) is a type of VPN that creates a secure connection by encapsulating
data packets being sent between two computers. L2TP is an extension of PPTP, it adds more security
to the VPN connection by using a combination of PPTP and L2F (Layer 2 Forwarding Protocol) and it
uses stronger encryption algorithm than PPTP. L2TP is relatively easy to set up and doesn’t require
additional software to be installed on the client’s device. It can be used to access internal resources
such as email, file servers, or databases. It is supported on a wide range of operating systems, but it
is considered less secure than other VPN protocols such as OpenVPN, as it still has some
vulnerabilities that can be exploited.
8. OpenVPN
OpenVPN is an open-source software application that uses SSL and is highly configurable and secure.
It creates a secure and encrypted connection between two computers by encapsulating the data
packets being sent between them. OpenVPN can be used to access internal resources such as email,
file servers, or databases. It is supported on a wide range of operating systems and devices, and can
be easily configured to work with various network configurations and security settings. It is
considered one of the most secure VPN protocols as it uses the industry standard SSL/TLS encryption
protocols and it offers advanced features such as two-factor authentication and kill switch.
1. Internet Protocol Security (IPSec): Internet Protocol Security, known as IPSec, is used to
secure Internet communication across an IP network. IPSec secures Internet Protocol
communication by verifying the session and encrypts each data packet during the
connection. IPSec runs in two modes: Transport mode, which encrypts the message within the
data packet, and Tunnel mode, which encrypts the entire data packet.
2. Layer 2 Tunneling Protocol (L2TP): L2TP or Layer 2 Tunneling Protocol is a tunneling protocol
that is often combined with another VPN security protocol like IPSec to establish a highly
secure VPN connection. L2TP generates a tunnel between two L2TP connection points and
IPSec protocol encrypts the data and maintains secure communication within the tunnel.
3. Point–to–Point Tunneling Protocol (PPTP): PPTP or Point-to-Point Tunneling Protocol
generates a tunnel and confines the data packet. Point-to-Point Protocol (PPP) is used to
encrypt the data over the connection. PPTP is one of the most widely used VPN protocols
and has been in use since the early releases of Windows. PPTP is also used on Mac and Linux
apart from Windows.
4. SSL and TLS: SSL (Secure Sockets Layer) and TLS (Transport Layer Security) generate a VPN
connection where the web browser acts as the client and user access is restricted to specific
applications instead of the entire network. Online shopping websites commonly use the SSL and TLS
protocols. It is easy for web browsers to switch to SSL, with almost no action required from
the user, as web browsers come integrated with SSL and TLS. SSL connections have “https” at
the start of the URL instead of “http”.
5. Secure Shell (SSH): Secure Shell or SSH generates the VPN tunnel through which the data
transfer occurs and also ensures that the tunnel is encrypted. SSH connections are generated
by an SSH client, and data is transferred from a local port onto the remote server through the
encrypted tunnel.
6. SSTP (Secure Socket Tunneling Protocol): A VPN protocol developed by Microsoft that uses
SSL to secure the connection, but only available for Windows.
7. IKEv2 (Internet Key Exchange version 2): A VPN protocol that provides fast and secure
connections, but not widely supported by VPN providers.
8. OpenVPN: An open-source VPN protocol that is highly configurable and secure, widely
supported by VPN providers and considered one of the most secure VPN protocols.
9. WireGuard: A relatively new and lightweight VPN protocol that aims to be faster, simpler and
more secure than existing VPN protocols.
Pretty Good Privacy (PGP) is an e-mail security scheme. It uses public key cryptography, symmetric key
cryptography, hash functions, and digital signatures. It provides −
Privacy
Sender Authentication
Message Integrity
Non-repudiation
Along with these security services, it also provides data compression and key management support.
PGP uses existing cryptographic algorithms such as RSA, IDEA, MD5, etc., rather than inventing
new ones.
Working of PGP
The e-mail message is hashed (using MD5) to produce a 128-bit digest.
The resultant 128-bit hash is signed using the private key of the sender (RSA algorithm).
A 128-bit symmetric key, KS is generated and used to encrypt the compressed message with
IDEA.
KS is encrypted using the public key of the recipient using RSA algorithm and the result is
appended to the encrypted message.
The format of PGP message is shown in the following diagram. The IDs indicate which key is used to
encrypt KS and which key is to be used to verify the signature on the hash.
In the PGP scheme, a message is signed and encrypted, and then MIME encoded before transmission.
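A minimal sketch of this PGP-style hybrid flow using the third-party cryptography package (an assumption); Fernet and SHA-256 stand in for IDEA and MD5, and zlib for PGP's compression, purely for illustration.

    import zlib
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    message = b"Meeting moved to 10:00 tomorrow."

    # 1. Hash the message and sign the digest with the sender's private key
    signature = sender_key.sign(
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # 2. Compress the signed message, then encrypt it with a fresh symmetric key KS
    ks = Fernet.generate_key()
    encrypted_body = Fernet(ks).encrypt(zlib.compress(signature + message))

    # 3. Encrypt KS with the recipient's public key and prepend it to the message
    encrypted_ks = recipient_key.public_key().encrypt(
        ks,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
    )
    pgp_style_message = encrypted_ks + encrypted_body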
PGP Certificate
A PGP key certificate is normally established through a chain of trust. For example, A’s public key is
signed by B using B’s private key, and B’s public key is signed by C using C’s private key. As this process
goes on, it establishes a web of trust.
In a PGP environment, any user can act as a certifying authority. Any PGP user can certify another
PGP user's public key. However, such a certificate is only valid to another user if the user recognizes
the certifier as a trusted introducer.
Several issues exist with such a certification method. It may be difficult to find a chain leading from a
known and trusted public key to the desired key. Also, there might be multiple chains, which can lead to
different keys for the desired user.
PGP can also use the PKI infrastructure with certification authority and public keys can be certified by
CA (X.509 certificate).
S / MIME
S/MIME stands for Secure Multipurpose Internet Mail Extension. S/MIME is a secure e-mail standard.
It is based on an earlier non-secure e-mailing standard called MIME.
Working of S/MIME
S/MIME approach is similar to PGP. It also uses public key cryptography, symmetric key cryptography,
hash functions, and digital signatures. It provides similar security services as PGP for e-mail
communication.
The most common symmetric ciphers used in S/MIME are RC2 and TripleDES. The usual public key
method is RSA, and the hashing algorithm is SHA-1 or MD5.
S/MIME specifies the additional MIME type, such as “application/pkcs7-mime”, for data enveloping
after encrypting. The whole MIME entity is encrypted and packed into an object. S/MIME has
standardized cryptographic message formats (different from PGP). In fact, MIME is extended with
some keywords to identify the encrypted and/or signed parts in the message.
S/MIME relies on X.509 certificates for public key distribution. It needs top-down hierarchical PKI for
certification support.
Employability of S/MIME
Due to the requirement of a certificate from a certification authority for implementation, not all users
can take advantage of S/MIME; some may wish to encrypt a message with a public/private key
pair without the involvement or administrative overhead of certificates.
In practice, although most e-mailing applications implement S/MIME, the certificate enrollment
process is complex. Instead PGP support usually requires adding a plug-in and that plug-in comes
with all that is needed to manage keys. The Web of Trust is not really used. People exchange their
public keys over another medium. Once obtained, they keep a copy of public keys of those with
whom e-mails are usually exchanged.
The implementation layer in the network architecture for the PGP and S/MIME schemes is shown in the
following image. Both these schemes provide application-level security for e-mail communication.
One of the schemes, either PGP or S/MIME, is used depending on the environment. Secure e-mail
communication within a captive network can be provided by using PGP. For e-mail security over the
Internet, where mail is often exchanged with new, unknown users, S/MIME is considered a
good option.
The main differences between PGP and S/MIME can be summarized as follows −
PGP is comparatively less convenient, while S/MIME is more convenient.
PGP supports 4096-bit public keys, while S/MIME supports only 1024-bit public keys.
PGP is the standard for strong encryption, while S/MIME is also a standard for strong encryption but
has some drawbacks.
The administrative overhead of PGP is high, while that of S/MIME is low.
Bob visits Alice’s website, which sells goods. In a form on the website, Bob enters the type of goods and
quantity desired, his address and payment card details. Bob clicks Submit and waits for delivery of the
goods, with the price debited from his account. All this sounds good, but in the absence of network
security, Bob could be in for a few surprises.
If transactions did not use confidentiality (encryption), an attacker could obtain his payment
card information. The attacker can then make purchases at Bob's expense.
If no data integrity measure is used, an attacker could modify Bob's order in terms of type or
quantity of goods.
Lastly, if no server authentication is used, a server could display Alice's famous logo but the
site could be a malicious site maintained by an attacker, who is masquerading as Alice. After
receiving Bob's order, he could take Bob's money and flee. Or he could carry out an identity
theft by collecting Bob's name and credit card details.
Transport layer security schemes can address these problems by enhancing TCP/IP based network
communication with confidentiality, data integrity, server authentication, and client authentication.
The security at this layer is mostly used to secure HTTP based web transactions on a network.
However, it can be employed by any application running over TCP.
Transport Layer Security (TLS) protocols operate above the TCP layer. Design of these protocols use
popular Application Program Interfaces (API) to TCP, called “sockets" for interfacing with TCP layer.
Applications are now interfaced to Transport Security Layer instead of TCP directly. Transport Security
Layer provides a simple API with sockets, which is similar and analogous to TCP's API.
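For instance, a minimal TLS client sketch using Python's standard ssl and socket modules (the host name is a placeholder):

    import socket
    import ssl

    hostname = "example.com"  # placeholder host
    context = ssl.create_default_context()  # loads trusted CA certificates for server authentication

    # Wrap an ordinary TCP socket; the TLS handshake happens transparently
    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls_sock:
            print(tls_sock.version())   # e.g. TLSv1.3
            print(tls_sock.cipher())    # negotiated cipher suite
            tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            print(tls_sock.recv(200))   # first bytes of the response, protected in transit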
In the above diagram, although TLS technically resides between application and transport layer, from
the common perspective it is a transport protocol that acts as TCP layer enhanced with security
services.
TLS is designed to operate over TCP, the reliable layer 4 protocol (not over UDP), which makes the
design of TLS much simpler, because it does not have to worry about timing out and retransmitting
lost data. The TCP layer continues doing that as usual, which serves the needs of TLS.
The reason for the popularity of providing security at the transport layer is simplicity. Design and
deployment of security at this layer does not require any change in the TCP/IP protocols implemented in
an operating system. Only user processes and applications need to be designed or modified, which is less
complex.
In 1995, Netscape developed SSLv2 and used it in Netscape Navigator 1.1. SSL version 1 was
never published or used. Later, Microsoft improved upon SSLv2 and introduced another similar
protocol named Private Communications Technology (PCT).
Netscape substantially improved SSLv2 on various security issues and deployed SSLv3 in 1996. The
Internet Engineering Task Force (IETF) subsequently introduced a similar TLS (Transport Layer
Security) protocol as an open standard. The TLS protocol is not interoperable with SSLv3.
TLS modified the cryptographic algorithms for key expansion and authentication. Also, TLS suggested
use of open crypto Diffie-Hellman (DH) and Digital Signature Standard (DSS) in place of patented RSA
crypto used in SSL. But due to expiry of RSA patent in 2000, there existed no strong reasons for users
to shift away from the widely deployed SSLv3 to TLS.
SSL is specific to TCP and it does not work with UDP. SSL provides Application Programming Interface
(API) to applications. C and Java SSL libraries/classes are readily available.
SSL protocol is designed to interwork between application and transport layer as shown in the
following image −
SSL itself is not a single layer protocol as depicted in the image; in fact it is composed of two sub-
layers.
Lower sub-layer comprises the one component of the SSL protocol called the SSL Record
Protocol. This component provides integrity and confidentiality services.
Upper sub-layer comprises three components of the SSL protocol −
o Handshake Protocol.
o ChangeCipherSpec Protocol.
o Alert Protocol.
These three protocols manage all of the SSL message exchanges and are discussed later in this
section.
Functions of SSL Protocol Components
The four sub-components of the SSL protocol handle various tasks for secure communication
between the client machine and the server.
Record Protocol
o It fragments the data into manageable blocks (max length 16 KB). It optionally
compresses the data.
o Provides a header for each message and a hash (Message Authentication Code
(MAC)) at the end.
SSL Handshake Protocol
o It is the most complex part of SSL. It is invoked before any application data is
transmitted. It creates SSL sessions between the client and the server.
o Establishment of session involves Server authentication, Key and algorithm
negotiation, Establishing keys and Client authentication (optional).
o Multiple secure TCP connections between a client and a server can share the same
session.
o Handshake protocol actions through four phases. These are discussed in the next
section.
ChangeCipherSpec Protocol
o As each entity sends the ChangeCipherSpec message, it changes its side of the
connection into the secure state as agreed upon.
o The cipher parameters pending state is copied into the current state.
o Exchange of this Message indicates all future data exchanges are encrypted and
integrity is protected.
Alert Protocol
o This protocol is used to report errors – such as unexpected message, bad record
MAC, security parameters negotiation failed, etc.
o It is also used for other purposes – such as notify closure of the TCP connection,
notify receipt of bad or unknown certificate, etc.
As discussed above, there are four phases of SSL session establishment. These are mainly handled by
SSL Handshake protocol.
Phase 2 − Server authentication and key exchange.
The server sends its certificate. Client software comes configured with the public keys of various
“trusted” organizations (CAs) to check the certificate.
Phase 3 − Client authentication and key exchange.
The client sends the Pre-master Secret (PMS) encrypted with the server’s public key.
Client also sends Certificate_verify message if certificate is sent by him to prove he has the
private key associated with this certificate. Basically, the client signs a hash of the previous
messages.
Phase 4 − Finish.
Client and server send Change_cipher_spec messages to each other to cause the pending
cipher state to be copied into the current state.
Message “Finished” from each end verifies that the key exchange and authentication
processes were successful.
All four phases, discussed above, happen within the establishment of TCP session. SSL session
establishment starts after TCP SYN/ SYNACK and finishes before TCP Fin.
It is possible to resume a disconnected session (through Alert message), if the client sends
a hello_request to the server with the encrypted session_id information.
This avoids recalculating of session cipher parameters and saves computing at the server and
the client end.
We have seen that during Phase 3 of SSL session establishment, a pre-master secret is sent by the
client to the server encrypted using server’s public key. The master secret and various session keys
are generated as follows −
The master secret is generated (via pseudo random number generator) using −
o The pre-master secret.
o Two nonces (RA and RB) exchanged in the client_hello and server_hello messages.
Six secret values are then derived from this master secret −
o Two MAC secrets, one each for the client and the server (for message integrity).
o Two encryption keys, one each for the client and the server (for confidentiality).
o Two initialization vectors (IVs), one each for the client and the server (for block ciphers).
TLS Protocol
In order to provide an open Internet standard of SSL, IETF released The Transport Layer Security (TLS)
protocol in January 1999. TLS is defined as a proposed Internet Standard in RFC 5246.
Salient Features
TLS protocol sits above the reliable connection-oriented transport TCP layer in the
networking layers stack.
The architecture of TLS protocol is similar to SSLv3 protocol. It has two sub protocols: the TLS
Record protocol and the TLS Handshake protocol.
Though SSLv3 and TLS protocol have similar architecture, several changes were made in
architecture and functioning particularly for the handshake protocol.
Protocol Version − The header of a TLS protocol segment carries the version number 3.1 to
differentiate it from the number 3 carried in the SSL protocol segment header.
Session Key Generation − There are two differences between TLS and SSL protocol for
generation of key material.
o Method of computing pre-master and master secrets is similar. But in TLS protocol,
computation of master secret uses the HMAC standard and pseudorandom function
(PRF) output instead of ad-hoc MAC.
o The algorithm for computing session keys and initiation values (IV) is different in TLS
than SSL protocol.
Alert Protocol Message −
o The TLS protocol supports all the messages used by the Alert protocol of SSL, except the No
certificate alert message, which is made redundant. The client sends an empty certificate
in case client authentication is not required.
o Many additional Alert messages are included in TLS protocol for other error
conditions such as record_overflow, decode_error etc.
Supported Cipher Suites − SSL supports RSA, Diffie-Hellman and Fortezza cipher suites. The TLS
protocol supports all suites except Fortezza.
o In SSL, complex message procedure is used for the certificate_verify message. With
TLS, the verified information is contained in the handshake messages itself thus
avoiding this complex procedure.
Padding of Data − In SSL protocol, the padding added to user data before encryption is the
minimum amount required to make the total data-size equal to a multiple of the cipher’s
block length. In TLS, the padding can be any amount that results in data-size that is a
multiple of the cipher’s block length, up to a maximum of 255 bytes.
The above differences between TLS and SSLv3 protocols are summarized in the following table.
The popular framework developed for ensuring security at network layer is Internet Protocol Security
(IPsec).
Features of IPsec
IPsec is not designed to work only with TCP as a transport protocol. It works with UDP as well
as any other protocol above IP such as ICMP, OSPF etc.
IPsec protects the entire packet presented to IP layer including higher layer headers.
Since higher layer headers are hidden which carry port number, traffic analysis is more
difficult.
IPsec works from one network entity to another network entity, not from application process
to application process. Hence, security can be adopted without requiring changes to
individual user computers/applications.
Though widely used to provide secure communication between network entities, IPsec can
provide host-to-host security as well.
The most common use of IPsec is to provide a Virtual Private Network (VPN), either between
two locations (gateway-to-gateway) or between a remote user and an enterprise network
(host-to-gateway).
Security Functions
Confidentiality
o Enables communicating nodes to encrypt messages to prevent eavesdropping by third
parties.
Origin authentication and data integrity
o Provides assurance that a received packet was actually transmitted by the party
identified as the source in the packet header, and that the packet has not been altered
in transit.
Key management
o Enables secure exchange of keys.
IPsec provides an easy mechanism for implementing Virtual Private Network (VPN) for such
institutions. VPN technology allows institution’s inter-office traffic to be sent over public Internet by
encrypting traffic before entering the public Internet and logically separating it from other traffic. The
simplified working of VPN is shown in the following diagram −
Overview of IPsec
Origin
In early 1990s, Internet was used by few institutions, mostly for academic purposes. But in later
decades, the growth of Internet became exponential due to expansion of network and several
organizations using it for communication and other purposes.
With the massive growth of Internet, combined with the inherent security weaknesses of the TCP/IP
protocol, the need was felt for a technology that can provide network security on the Internet. A
report entitled "Security in the Internet Architecture” was issued by the Internet Architecture Board
(IAB) in 1994. It identified the key areas for security mechanisms.
The IAB included authentication and encryption as essential security features in the IPv6, the next-
generation IP. Fortunately, these security capabilities were defined such that they can be
implemented with both the current IPv4 and futuristic IPv6.
Security framework, IPsec has been defined in several ‘Requests for comments’ (RFCs). Some RFCs
specify some portions of the protocol, while others address the solution as a whole.
Operations Within IPsec
The IPsec suite can be considered to have two separate operations, when performed in unison,
providing a complete set of security services. These two operations are IPsec Communication and
Internet Key Exchange.
Internet Key Exchange (IKE)
o Technically, key management is not essential for IPsec communication and the keys
can be manually managed. However, manual key management is not desirable for
large networks.
o IKE is responsible for creation of keys for IPsec and providing authentication during
the key establishment process. Though IPsec can be used with any other key
management protocol, IKE is used by default.
o IKE defines two protocols (Oakley and SKEME) to be used with the already defined key
management framework Internet Security Association Key Management Protocol
(ISAKMP).
o ISAKMP is not IPsec specific, but provides the framework for creating SAs for any
protocol.
IPsec Communication has two modes of functioning; transport and tunnel modes. These modes can
be used in combination or used individually depending upon the type of communication desired.
Transport Mode
The original IP header is maintained and the data is forwarded based on the original
attributes set by the upper layer protocol.
The following diagram shows the data flow in the protocol stack.
The limitation of transport mode is that no gateway services can be provided. It is reserved
for point-to-point communications as depicted in the following image.
Tunnel Mode
This mode of IPsec provides encapsulation services along with other security services.
In tunnel mode operations, the entire packet from upper layer is encapsulated before
applying security protocol. New IP header is added.
The following diagram shows the data flow in the protocol stack.
Tunnel mode is typically associated with gateway activities. The encapsulation provides the
ability to send several sessions through a single gateway.
As far as the endpoints are concerned, they have a direct transport layer connection. The
datagram from one system forwarded to the gateway is encapsulated and then forwarded to
the remote gateway. The remote associated gateway de-encapsulates the data and forwards
it to the destination endpoint on the internal network.
Using IPsec, tunnel mode can also be established between a gateway and an individual end system.
IPsec Protocols
IPsec uses security protocols to provide the desired security services. These protocols are the heart of IPsec operations, and everything else in IPsec is designed to support these protocols.
Security associations between the communicating entities are established and maintained by the
security protocol used.
There are two security protocols defined by IPsec — Authentication Header (AH) and Encapsulating
Security Payload (ESP).
Authentication Header
The AH protocol provides the services of data integrity and origin authentication. It optionally caters for message replay resistance. However, it does not provide any form of confidentiality.
AH is a protocol that provides authentication of either all or part of the contents of a datagram by the addition of a header. The header is calculated based on the values in the datagram. Which parts of the datagram are used for the calculation, and where to place the header, depend on the mode of operation (tunnel or transport).
The operation of the AH protocol is surprisingly simple. It can be considered similar to the algorithms
used to calculate checksums or perform CRC checks for error detection.
The concept behind AH is the same, except that instead of using a simple algorithm, AH uses a special keyed hashing algorithm and a secret key known only to the communicating parties. A security association
between two devices is set up that specifies these particulars.
When an IP packet is received from the upper protocol stack, IPsec determines the associated Security Association (SA) from the information available in the packet, for example the IP addresses (source and destination).
From the SA, once it is identified that the security protocol is AH, the parameters of the AH header are calculated. The AH header consists of the following parameters −
The Next Header field specifies the protocol of the packet following the AH header. The Security Parameter Index (SPI) is obtained from the SA existing between the communicating parties.
The Sequence Number is calculated and inserted. These numbers give AH the optional capability to resist replay attacks.
In transport mode, the calculation of the authentication data and the assembly of the final IP packet for transmission are depicted in the following diagram. In the original IP header, the only change is that the protocol number is set to 51 to indicate the application of AH.
In Tunnel mode, the above process takes place as depicted in the following diagram.
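The integrity value carried in the AH header is essentially a keyed hash computed over (most of) the packet. As a rough conceptual illustration only, and not actual AH processing, a keyed digest over a captured packet saved in a file could be computed with the OpenSSL command line (the file name and key are hypothetical):
# Keyed hash (HMAC-SHA256) over a packet dump; packet.bin and the key are illustrative
openssl dgst -sha256 -hmac "sharedsecret" packet.bin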
Encapsulating Security Payload (ESP)
ESP provides security services such as confidentiality, integrity, origin authentication, and optional
replay resistance. The set of services provided depends on options selected at the time of Security
Association (SA) establishment.
In ESP, algorithms used for encryption and generating authenticator are determined by the attributes
used to create the SA.
The process of ESP is as follows. The first two steps are similar to the process of AH as stated above.
Once it is determined that ESP is involved, the fields of ESP packet are calculated. The ESP
field arrangement is depicted in the following diagram.
Although authentication and confidentiality are the primary services provided by ESP, both are optional. Technically, one can use NULL encryption without authentication. However, in practice, at least one of the two should be implemented to use ESP effectively.
The basic concept is to use ESP when one wants authentication and encryption, and to use AH when
one wants extended authentication without encryption.
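Conceptually, ESP encrypts the payload and can then compute an integrity check over the encrypted data. The following is only a loose command-line sketch of that encrypt-then-authenticate idea, not real ESP processing; the file names and keys are hypothetical:
# Encrypt the payload with a symmetric session key (payload.bin and the keys are illustrative)
openssl enc -aes-256-cbc -salt -in payload.bin -out payload.enc -pass pass:sessionkey
# Compute a keyed integrity check over the encrypted payload
openssl dgst -sha256 -hmac "authkey" payload.enc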
Security Association (SA) is the foundation of an IPsec communication. The features of SA are −
Before sending data, a virtual connection is established between the sending entity and the
receiving entity, called “Security Association (SA)”.
IPsec provides many options for performing network encryption and authentication. Each
IPsec connection can provide encryption, integrity, authenticity, or all three services. When
the security service is determined, the two IPsec peer entities must determine exactly which
algorithms to use (for example, DES or 3DES for encryption; MD5 or SHA-1 for integrity).
After deciding on the algorithms, the two devices must share session keys.
Each SA is simplex (one-way) in nature, and hence two SAs are required for bi-directional communication.
SAs are identified by a Security Parameter Index (SPI) number that exists in the security
protocol header.
Both sending and receiving entities maintain state information about the SA. This is similar to TCP endpoints, which also maintain state information. In this respect, IPsec behaves in a connection-oriented manner, much like TCP.
Parameters of SA
o Every packet of IPsec carries a header containing SPI field. The SPI is provided to map
the incoming packet to an SA.
o The SPI is a random number generated by the sender to identify the SA to the
recipient.
An example of an SA between two routers involved in IPsec communication is shown in the following diagram.
Security Administrative Databases
In IPsec, there are two databases that control the processing of IPsec datagrams. One is the Security Association Database (SAD) and the other is the Security Policy Database (SPD). Each communicating endpoint using IPsec should have a logically separate SAD and SPD.
In IPsec communication, each endpoint holds its SA state in the Security Association Database (SAD). Each SA entry in the SAD contains nine parameters, some of which are shown in the following table −
Sr. No. | Parameter | Description
4 | Lifetime of the SA | Time till the SA remains active
5 | Algorithm – AH | Algorithm used in the AH and the associated key
9 | Path MTU (PMTU) | Any observed path maximum transmission unit (to avoid fragmentation)
All SA entries in the SAD are indexed by the three SA parameters: Destination IP address, Security
Protocol Identifier, and SPI.
SPD is used for processing outgoing packets. It helps in deciding what SAD entries should be used. If
no SAD entry exists, SPD is used to create new ones.
Selector fields – Field in incoming packet from upper layer used to decide application of
IPsec. Selectors can include source and destination address, port numbers if relevant,
application IDs, protocols, etc.
Outgoing IP datagrams go from the SPD entry to the specific SA to get the encoding parameters. Incoming IPsec datagrams are mapped to the correct SA directly using the SPI / destination IP / protocol triple, and from there the associated SAD entry is extracted.
SPD can also specify traffic that should bypass IPsec. SPD can be considered as a packet filter where
the actions decided upon are the activation of SA processes.
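On a typical Linux host that uses the kernel IPsec stack, the SAD and SPD described above can usually be inspected with the iproute2 xfrm commands (shown here only as a pointer; the output depends entirely on the local configuration):
# List Security Association Database (SAD) entries
ip xfrm state
# List Security Policy Database (SPD) entries
ip xfrm policy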
Advantages of IPSec
Strong security: IPSec provides strong cryptographic security services that help protect
sensitive data and ensure network privacy and integrity.
Wide compatibility: IPSec is an open standard protocol that is widely supported by vendors
and can be used in heterogeneous environments.
Flexibility: IPSec can be configured to provide security for a wide range of network
topologies, including point-to-point, site-to-site, and remote access connections.
Scalability: IPSec can be used to secure large-scale networks and can be scaled up or down
as needed.
Improved network performance: IPSec can help improve network performance by reducing
network congestion and improving network efficiency.
Disadvantages of IPSec
Compatibility Issues: IPSec can have compatibility issues with some network devices and
applications, which can lead to interoperability problems.
Performance Impact: IPSec can impact network performance due to the overhead of
encryption and decryption of IP packets.
Key Management: IPSec requires effective key management to ensure the security of the
cryptographic keys used for encryption and authentication.
Limited Protection: IPSec only protects traffic that falls under an IPsec policy; traffic such as ICMP, DNS, and routing-protocol exchanges that is not covered by a Security Association may still be vulnerable to attacks.
Comparison of PGP and S/MIME
Feature | PGP (Pretty Good Privacy) | S/MIME (Secure/Multipurpose Internet Mail Extensions)
Key Management | Uses a Web of Trust (WoT) model, where users manually verify and trust each other's keys. | Uses a centralized Public Key Infrastructure (PKI) with trusted Certificate Authorities (CAs).
Public Key Distribution | Users exchange and verify public keys manually or via key servers. | Users obtain digital certificates from trusted CAs (Certificate Authorities).
Supported Email Clients | Requires third-party plugins (e.g., GnuPG, OpenPGP) for email encryption. | Built into major email clients like Microsoft Outlook and Apple Mail.
Digital Signatures | Uses PGP-signed keys for authentication. | Uses X.509 certificates issued by CAs for authentication.
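As a small illustration of the PGP model in the table above, GnuPG can be used to sign and encrypt a message for a recipient whose public key has already been imported (the recipient address and file names are hypothetical):
# Sign and encrypt a file for a specific recipient (address is illustrative)
gpg --sign --encrypt --recipient alice@example.com message.txt
# The recipient decrypts and verifies it with their private key
gpg --decrypt message.txt.gpg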
1. What is OpenSSL?
OpenSSL is an open-source cryptographic library that provides tools for implementing Secure
Sockets Layer (SSL) and Transport Layer Security (TLS) protocols.
It includes a robust toolkit for encryption, key generation, digital certificates, and secure
communications.
Ensures data integrity and confidentiality in web applications, VPNs, and software systems.
Random Number Generation: Provides secure random number generation for cryptographic
purposes.
# Check the installed OpenSSL version and build details
openssl version -a
# Create a self-signed certificate from an existing private key, valid for 365 days
openssl req -x509 -new -nodes -key private_key.pem -sha256 -days 365 -out certificate.pem
TLS 1.0 & 1.1 Deprecation: Older versions of TLS are considered insecure and should not be
used.
Keep OpenSSL Updated: Always use the latest version to protect against vulnerabilities.
Disable Weak Ciphers: Configure OpenSSL to use strong encryption algorithms only.
Use Strong Keys: Generate keys with at least 2048-bit RSA or ECC for better security (see the key generation sketch after this list).
Regularly Audit SSL/TLS Configurations: Use tools like SSL Labs SSL Test to check for security
weaknesses.
Enable Forward Secrecy: Use modern cipher suites that support Perfect Forward Secrecy
(PFS).
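A minimal sketch of key generation in line with the recommendation above (the output file names are illustrative):
# Generate a 2048-bit RSA private key
openssl genrsa -out rsa_private.pem 2048
# Generate an ECC private key on the NIST P-256 curve
openssl ecparam -name prime256v1 -genkey -noout -out ec_private.pem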
7. Alternatives to OpenSSL
LibreSSL: A fork of OpenSSL developed by the OpenBSD project with a focus on security.
OpenSSL in Cybersecurity
1. What is OpenSSL?
OpenSSL is an open-source library that provides tools for implementing Secure Sockets Layer (SSL)
and Transport Layer Security (TLS) protocols, as well as cryptographic functions used in securing
network communications. It includes a full suite of encryption, decryption, digital signatures, and
certificate management utilities.
Key Components
OpenSSL Library (libssl and libcrypto): Implements cryptographic functions and SSL/TLS
protocols.
OpenSSL Command-line Tool (openssl): A utility for performing cryptographic operations
and managing SSL/TLS certificates.
OpenSSL plays a vital role in cybersecurity by providing essential encryption and security functions,
including:
Securing Internet Communications: Used in HTTPS, SSH, VPNs, email encryption (SMTP,
IMAP, POP3).
Authentication and Data Integrity: Ensures identity verification via digital certificates and
prevents data tampering.
Implementing Strong Encryption: Provides encryption standards like AES, RSA, ECC, SHA-256
for secure data transmission.
Certificate Management: Generates, signs, and verifies SSL/TLS certificates used by websites
and applications.
OpenSSL is a versatile and widely used cryptographic library that provides a suite of functions for
secure communications. Below are its key features and their significance in cybersecurity.
OpenSSL supports various encryption and decryption algorithms used for data protection and
confidentiality.
Uses a public key for encryption and a private key for decryption.
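A minimal command-line sketch of this public-key encryption and decryption, assuming an existing RSA key pair; the file names are illustrative:
# Encrypt with the recipient's public key (public.pem is illustrative)
openssl pkeyutl -encrypt -pubin -inkey public.pem -in secret.txt -out secret.enc
# Decrypt with the matching private key
openssl pkeyutl -decrypt -inkey private.pem -in secret.enc -out secret.txt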
OpenSSL implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security), which are
essential for securing network traffic.
Older versions like SSL 2.0 and 3.0 are deprecated due to vulnerabilities.
For example, the following Apache (mod_ssl) directive restricts OpenSSL to strong cipher suites:
SSLCipherSuite HIGH:!aNULL:!MD5
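To exercise a server's TLS configuration, OpenSSL's built-in test client can open a connection (the host name is illustrative):
# Open a test TLS connection and print the negotiated protocol, cipher, and certificate chain
openssl s_client -connect example.com:443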
OpenSSL provides tools for managing digital certificates, which verify identity and enable secure
communication over the internet.
# Create a self-signed X.509 certificate valid for 365 days
openssl req -x509 -new -nodes -key private.key -sha256 -days 365 -out certificate.crt
Hashing ensures data integrity by converting input data into a fixed-size hash value.
Usage: digital signatures.
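A quick sketch of computing a message digest with the OpenSSL command line (the file name is illustrative):
# Compute the SHA-256 digest of a file
openssl dgst -sha256 file.txt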
OpenSSL supports Public Key Infrastructure (PKI), which is essential for managing encryption keys,
certificates, and digital identities.
🔹 Digital Signatures
🔹 Key Management
Usage: secure authentication.
🔹 Random Number Generation: generates cryptographically secure random numbers for encryption keys, tokens, and session IDs.
Used in: VPN security, code signing.
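Hedged command-line sketches of the random-number and digital-signature facilities mentioned above (the key and file names are illustrative):
# Generate 32 cryptographically secure random bytes, hex-encoded
openssl rand -hex 32
# Sign a file with an RSA private key, then verify the signature with the public key
openssl dgst -sha256 -sign private.pem -out file.sig file.txt
openssl dgst -sha256 -verify public.pem -signature file.sig file.txt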
What is Steganography?
Steganography is the practice of concealing secret information within an ordinary file or message. The word is derived from two Greek words: 'stegos', meaning 'to cover', and 'graphia', meaning 'writing', thus translating to 'covered writing' or 'hidden writing'. The hidden information is then extracted from the ordinary file or message at its destination. With the help of steganography, we can hide any digital object such as text, an image, a video, etc., behind a cover medium.
Text Steganography
Text steganography is a type of steganography that involves hiding messages or secret information within a text document or other textual data. In this method, secret data is hidden with the help of the individual letters of the words. It is challenging to detect, especially when the variations or changes made are subtle.
Image Steganography
Audio Steganography
Video Steganography
Advantages of Steganography
It is very difficult to detect; it can only be detected by the intended receiver.
It can be applied through various media such as images, audio, video, text, etc.
It offers a double layer of protection: first the carrier file itself, and second the data encoded within it.
With the help of steganography, intelligence agencies can communicate covertly.
Difference between Steganography and Cryptography
Steganography | Cryptography
The structure of the data is not modified. | The structure of the data is modified.
The use of a key is not obligatory, but if it is used it enhances security. | The use of a key is obligatory.
Steganography Tools
Steganography tools help the user to hide secret messages or information inside another file in various formats. There are various tools available which help to perform steganography. Some of the steganography tools are the following −
OpenStego
StegOnline
Steghide
OutGuess
Hide'N'Send
QuickStego
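Since Steghide appears in the list above, a typical embed-and-extract session might look like the following sketch (the file names and passphrase are illustrative, and options can vary by version):
# Embed secret.txt inside cover.jpg, protected by a passphrase
steghide embed -cf cover.jpg -ef secret.txt -sf stego.jpg -p "passphrase"
# Extract the hidden file from the stego image
steghide extract -sf stego.jpg -p "passphrase"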