Module:-11. Day56,57,58
Day:-56
Database forensics plays a crucial role in investigating data breaches, insider threats, and
unauthorized database access. It helps to determine the extent of the damage, how the
breach occurred, and what specific data was compromised. This is essential for identifying
vulnerabilities and preventing future attacks.
Ensuring data integrity is paramount for businesses that handle sensitive information,
especially those in industries like finance, healthcare, and government. Database forensics is
critical for organizations that must comply with regulatory standards like GDPR, HIPAA, PCI-
DSS, and SOX. Forensics ensures that sensitive data is handled properly and that
unauthorized changes are detected, enabling compliance with legal standards.
Database forensics is not only useful for external cyberattacks but also for detecting and
investigating insider threats. Employees with access to critical databases may misuse their
privileges, modify sensitive information, or steal confidential data. By tracing user activities,
database forensics helps mitigate internal security risks.
Database forensics provides irrefutable digital evidence that can be used in court
proceedings. The process of preserving the data with proper chain-of-custody
documentation ensures that the evidence remains credible and admissible in court. Database
forensics experts often testify in court as to their findings and the methods used in the
investigation.
Database forensics helps identify the source and methodology of data breaches. By
analyzing logs and tracing the activities that led to the compromise, forensics teams can
identify vulnerabilities in the database system and suggest remediation measures to prevent
further breaches.
Database forensics also plays a role in verifying the authenticity and accuracy of backup
systems. For instance, if a company experiences data loss or corruption, forensic analysis
can help determine whether the backup data is genuine and has not been tampered with,
ensuring accurate recovery.
Conclusion
Database forensics is vital in today’s digital landscape, where databases are the
backbone of most business operations. It ensures that data integrity is maintained,
helps detect and respond to security incidents, supports legal actions, and assists
organizations in complying with data protection regulations. As databases continue to
grow in complexity, the role of forensics in ensuring their security and integrity
becomes even more critical.
Day:-57
SQL Server is a relational database management system (RDBMS) that stores data in a
structured format. Its storage architecture is designed to efficiently manage, retrieve, and
maintain data. Below is a detailed explanation of how data is stored in SQL Server.
1. Database Files
• Primary Data File (.mdf): This is the main file where the data is stored. Every SQL
Server database has a primary data file, and it can also contain user objects (tables, indexes,
views, etc.).
• Secondary Data Files (.ndf): These are optional files used if the primary data file
becomes too large. Databases can be spread across multiple secondary files, especially in
cases where storage needs to be expanded or spread across multiple disks.
• Log File (.ldf): The log file stores all the transaction logs, which help in maintaining
data integrity and recovery processes. Every database must have at least one log file.
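As an illustrative check, the sys.database_files catalog view lists these files for the current database (a sketch; run against any connected database):
SELECT name, type_desc, physical_name, size * 8 / 1024 AS size_mb
FROM sys.database_files; -- type_desc is ROWS for .mdf/.ndf files and LOG for .ldf files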
2. Pages
• A page is the fundamental unit of data storage in SQL Server. Each page is 8 KB in
size, and a database is divided into pages.
• Pages store various types of information, depending on their type:
• Data Pages: Store the actual data in rows.
• Index Pages: Store index information for quicker searches.
• Text/Image Pages: Store large objects such as text, images, or large binary data.
• Global Allocation Map (GAM) Pages: Track which extents have been allocated.
• Page Free Space (PFS) Pages: Track the amount of free space available on each
page.
• Bulk Changed Map (BCM) Pages: Keep track of changes during bulk operations.
Each page consists of a 96-byte header that includes metadata like page number, type,
amount of free space, etc., followed by the actual data.
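For illustration, the well-known but undocumented DBCC PAGE command can dump a page header and its contents (a sketch; the database name and page number below are hypothetical):
DBCC TRACEON(3604); -- redirect DBCC output to the client
DBCC PAGE('MyDatabase', 1, 100, 3); -- file 1, page 100, print option 3 (full detail)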
Extents
• An extent consists of eight contiguous pages (8 KB each), making the extent size
64 KB.
• SQL Server allocates space to objects in extents, which can either be:
• Uniform Extents: Owned by a single object (all eight pages belong to the same
object).
• Mixed Extents: Shared among multiple objects (used when objects are small and
don’t require a full extent).
Initially, SQL Server uses mixed extents, but as tables grow, uniform extents are allocated to
improve performance.
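One way to observe mixed versus uniform allocation is the undocumented sys.dm_db_database_page_allocations function (available since SQL Server 2012; the table name below is hypothetical):
SELECT extent_page_id, page_type_desc, is_mixed_page_allocation
FROM sys.dm_db_database_page_allocations(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'DETAILED');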
3. Row Storage
SQL Server stores data in rows, and each row is stored on a page. The maximum size for a
row is 8,060 bytes, though certain types of data (e.g., varchar(MAX) and varbinary(MAX))
can exceed this limit by using a special storage mechanism for large data types (text/image
storage).
Rows are stored sequentially in a page, and a row offset table at the end of the page points
to the starting position of each row.
If a row contains large data such as varchar(MAX), varbinary(MAX), text, ntext, or image,
SQL Server stores part of the row on the page, but the large data is stored on separate
pages, known as LOB (Large Object) pages. The row contains a pointer to the LOB page.
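The documented sys.dm_db_index_physical_stats function reports page counts per allocation unit, which makes this separate LOB storage visible (the table name below is hypothetical):
SELECT index_id, alloc_unit_type_desc, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Documents'), NULL, NULL, 'DETAILED');
-- IN_ROW_DATA, LOB_DATA, and ROW_OVERFLOW_DATA appear as separate allocation units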
4. Indexes
Indexes help speed up data retrieval and are stored as part of the database structure. SQL
Server uses two primary types of indexes:
• Clustered Index: A clustered index determines the physical order of the data in the
table. There can only be one clustered index per table because the rows are stored physically
in this order. The data and the index are stored together.
• Non-Clustered Index: A non-clustered index stores the index separately from the
data. It contains a pointer to the data location, which can either be the row’s physical
address or the key value in a clustered index.
Both types of indexes are stored on index pages and organized similarly to data pages. Non-
clustered indexes allow faster searching but do not affect the physical order of the data in
the table.
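For example (hypothetical table and column names):
CREATE CLUSTERED INDEX IX_Orders_OrderID ON dbo.Orders (OrderID); -- defines the physical row order
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID); -- separate structure with pointers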
5. Transaction Logs
SQL Server uses transaction logs to maintain a record of all transactions and ensure ACID
properties (Atomicity, Consistency, Isolation, Durability). The .ldf file stores all the logs.
• Write-Ahead Logging (WAL): Before SQL Server makes any changes to the actual
data files, it first logs the change in the transaction log. A transaction is considered
committed only once its log records have been written to disk; the modified data pages are
flushed to the data files later.
• The transaction log ensures that in the event of a system crash or failure, the
database can be restored to a consistent state. It stores:
• The start of a transaction.
• All modifications (INSERTs, UPDATEs, DELETEs).
• Commit or rollback operations.
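For forensic inspection, the undocumented fn_dblog function exposes the active portion of the transaction log (a sketch; the column list is abbreviated):
SELECT [Current LSN], Operation, [Transaction ID], [Transaction Name]
FROM fn_dblog(NULL, NULL); -- NULL, NULL reads the entire active log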
6. Buffer Pool
SQL Server also uses in-memory storage to cache frequently accessed data pages, indexes,
and plans for faster retrieval. This is known as the buffer pool. When data is queried, SQL
Server first checks the buffer pool. If the data is not there, it reads it from disk and then
stores it in the buffer pool for future access.
The Lazy Writer process manages the buffer pool by writing dirty pages (modified pages)
back to disk and freeing up space for new pages when needed.
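The documented sys.dm_os_buffer_descriptors view shows which pages are currently cached, for example:
SELECT DB_NAME(database_id) AS database_name, COUNT(*) AS cached_pages, COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_pages DESC;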
7. Filegroups
Filegroups are a logical structure that groups database files (either primary or secondary)
together. Filegroups provide flexibility in managing large databases and can be used for:
• Spreading data across multiple disks to improve I/O performance.
• Backing up or restoring individual filegroups rather than the entire database.
• Placing specific tables or indexes on dedicated storage.
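A minimal sketch of creating a filegroup and adding a file to it (the names and path are hypothetical):
ALTER DATABASE MyDatabase ADD FILEGROUP ArchiveFG;
ALTER DATABASE MyDatabase
ADD FILE (NAME = ArchiveData, FILENAME = 'D:\Data\ArchiveData.ndf', SIZE = 100MB)
TO FILEGROUP ArchiveFG;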
8. Database Snapshots
A database snapshot is a read-only, static view of the database at a specific point in time.
SQL Server uses a copy-on-write mechanism for snapshots:
• Whenever a change is made to the database, the original data page is copied to the
snapshot before being overwritten in the database. The snapshot only stores the original
pages that have been modified, making it more storage-efficient.
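Creating a snapshot uses standard T-SQL (database and file names below are hypothetical; NAME must match the logical name of the source data file):
CREATE DATABASE MyDatabase_Snapshot
ON (NAME = MyDatabase_Data, FILENAME = 'D:\Snapshots\MyDatabase_Data.ss')
AS SNAPSHOT OF MyDatabase;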
9. Data Compression
SQL Server offers two types of data compression to optimize storage space and improve I/O
performance:
• Row Compression: Stores fixed-length data types in a variable-length format to reduce
row size.
• Page Compression: Builds on row compression and adds prefix and dictionary
compression within each page.
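Either type can be applied with a table rebuild, for example (hypothetical table name):
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = ROW);
-- or, for deeper compression at higher CPU cost:
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);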
10. TempDB
SQL Server uses a special database called TempDB for temporary storage of data such as:
• Temporary tables and table variables.
• Worktables for sorting, hashing, and spooling operations.
• Row versioning information used by snapshot isolation.
TempDB is recreated every time SQL Server is restarted, and it plays a critical role in
performance, as many operations rely on it.
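Current TempDB space usage can be checked with the documented sys.dm_db_file_space_usage view, for example:
SELECT SUM(user_object_reserved_page_count) * 8 AS user_objects_kb,
SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
SUM(version_store_reserved_page_count) * 8 AS version_store_kb
FROM tempdb.sys.dm_db_file_space_usage;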
Conclusion
Data storage in SQL Server is highly structured, with data stored in pages and extents
within primary, secondary, and log files. The use of indexes, transaction logs, and in-
memory buffer pools ensures efficient access, management, and integrity of the data.
Filegroups, compression, and TempDB further enhance flexibility and performance in
larger and more complex databases. SQL Server’s architecture is designed to provide
both reliability and scalability for enterprise-level database management.
Day:-58
Collecting evidence from a MySQL server involves extracting critical information that can
help investigate unauthorized access, data manipulation, or other security incidents. Proper
evidence collection is crucial for digital forensics, ensuring that all data is accurately
captured, preserved, and can be used in court or internal investigations. Below is a guide on
how to collect evidence from a MySQL server.
1. Secure the System
Before collecting any evidence, ensure that the system is secured to prevent any further
changes or tampering. This includes:
• Isolating the server from the network where feasible.
• Restricting physical and remote access to authorized investigators.
• Documenting the current state of the system before any interaction.
2. Create a Disk Image or Snapshot
Before interacting with the MySQL server, it’s crucial to create a full disk image or snapshot
of the entire server, including the database, logs, and system files. This ensures that you
have an unaltered copy of the server data to analyze if needed.
• Use disk imaging tools like dd, FTK Imager, or EnCase to capture a full image of the
disk or partition where the MySQL data resides.
• Document the process carefully, maintaining a proper chain of custody.
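A minimal imaging sketch with dd (the device and output paths below are hypothetical; hash the image immediately so its integrity can be demonstrated later):
dd if=/dev/sda1 of=/mnt/evidence/mysql_disk.img bs=4M conv=noerror,sync
sha256sum /mnt/evidence/mysql_disk.img > /mnt/evidence/mysql_disk.img.sha256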
3. Collect Server Version and Configuration Information
Gather information about the MySQL configuration and server version to understand the
environment and identify potential vulnerabilities.
Steps:
• Connect to the MySQL server and run the following command to capture the
version and environment information:
SELECT VERSION();
SHOW VARIABLES;
SHOW GLOBAL VARIABLES;
This will provide information about the MySQL version, configuration settings, and
environment variables.
• Export the configuration file my.cnf (Linux) or my.ini (Windows), which contains
server parameters and settings:
cp /path/to/my.cnf /path/to/evidence_folder/my.cnf
4. Create a Database Dump
Create a full backup (dump) of all the databases for further analysis. This includes both the
schema (structure) and data.
Steps:
• Use the mysqldump command to export all the databases to a SQL file:
mysqldump -u root -p --all-databases --single-transaction --routines --events --triggers --skip-lock-tables > /path/to/evidence_folder/all_databases_dump.sql
• Options used:
• --all-databases: Dumps all databases.
• --single-transaction: Creates a consistent snapshot for transactional databases like InnoDB.
• --routines: Includes stored procedures and functions.
• --events: Includes database events.
• --triggers: Includes triggers.
• --skip-lock-tables: Prevents table locking during the dump process.
Ensure the dump file is stored securely and make a note of the exact time and date of the
backup.
5. Collect Logs
Logs are critical in determining who accessed the system and what actions were performed.
MySQL generates several types of logs that you should collect:
a) General Query Log
The General Query Log records all SQL queries executed by the server, including login
attempts. This log is useful for understanding user activities.
cp /path/to/general_log_file /path/to/evidence_folder/general_query_log.sql
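The log’s status and location can be confirmed from within MySQL before copying:
SHOW VARIABLES LIKE 'general_log%'; -- returns general_log (ON/OFF) and general_log_file (path)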
b) Error Log
The Error Log records errors that occur during the operation of the MySQL server, including
startup, shutdown, and runtime errors. It may also contain information about potential
attacks or crashes.
cp /path/to/error_log_file /path/to/evidence_folder/error_log.sql
c) Binary Log
The Binary Log contains a record of all changes made to the database (e.g., INSERT, UPDATE,
DELETE statements), as well as the time and user associated with each change. This is
critical for tracking data manipulation.
cp /path/to/mysql_binlog.* /path/to/evidence_folder/
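Binary logs are not plain text; the standard mysqlbinlog utility converts them to readable SQL for analysis (the file name below is a placeholder):
mysqlbinlog /path/to/mysql_binlog.000001 > /path/to/evidence_folder/binlog_readable.sql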
d) Slow Query Log
The Slow Query Log records queries that take longer than a specified time to execute. It is
useful for detecting potential performance issues or malicious queries designed to overload
the server.
cp /path/to/slow_query_log_file /path/to/evidence_folder/slow_query_log.sql
e) Audit Log
If an audit plugin (such as MySQL Enterprise Audit) is enabled, the Audit Log records
connections and query activity and should also be collected.
cp /path/to/audit_log_file /path/to/evidence_folder/audit_log.sql
6. Collect User Account and Privilege Information
Extract information about the users, their privileges, and roles to identify suspicious or
unauthorized accounts or privileges.
Steps:
• List all accounts and review their privileges (the account name below is a placeholder):
SELECT user, host FROM mysql.user;
SHOW GRANTS FOR 'username'@'hostname';
8. Examine the Data Directory and File System
Investigate the MySQL data directory and file system for any signs of tampering or
suspicious files. The MySQL data directory stores the actual database files.
ls -la /path/to/mysql/data/
9. Network Information
Gather information about network connections and communication to the MySQL server, as
attackers often use network-based attacks.
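For example, active client connections can be listed from within MySQL, and established sockets from the operating system (3306 is the default MySQL port):
SHOW FULL PROCESSLIST; -- current connections, users, source hosts, and running statements
netstat -antp | grep 3306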
10. Maintain Chain of Custody
Maintain a chain of custody for all evidence collected. This ensures that the evidence is
admissible in court and that its integrity is maintained. Include the following details:
• Who collected the evidence, and when and where it was collected.
• How the evidence was stored and transferred between custodians.
• Hash values recorded at the time of collection.
11. Secure Evidence Storage
Once the evidence is collected, store it in a secure location to prevent unauthorized access
or tampering. Ensure that all the evidence is hashed using algorithms like MD5 or SHA-256
to verify its integrity over time.
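For example, hashes for every collected file can be generated once and re-verified later (paths are placeholders):
sha256sum /path/to/evidence_folder/* > /path/to/evidence_hashes.sha256
sha256sum -c /path/to/evidence_hashes.sha256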
Conclusion
Collecting evidence from a MySQL server involves gathering data from multiple
sources, including logs, database dumps, user accounts, and network information. The
integrity of the evidence must be maintained throughout the process by following a
structured and documented approach. This evidence will be crucial for identifying
malicious activities, investigating data breaches, or complying with legal requirements.