Set 2 CF
Explain the incident response process and the people who are involved in it?
Incident response is a planned way to handle an incident. The goal is to quickly fix the problem, reduce damage, and get things back to normal.
People involved in the incident response process:
1. Incident Response Manager: Leads the team, makes key decisions, and ensures serious security problems are handled properly.
2. Security Analysts: Investigate the incident and analyze the details.
3. Triage Analysts: Quickly assess the situation.
4. Forensic Analysts: Dig deeper into the evidence.
5. Threat Researchers: Search the internet and other sources to find possible threats before they become a problem.
6. Management: Provides money, resources, and support to ensure the team can do its job effectively.
7. Human Resources: Gets involved if an employee is part of the security issue.
8. Audit & Risk Specialists: Check for weaknesses in security and recommend best practices to improve it.
9. General Counsel: Makes sure any evidence is legally valid in case the company needs to take legal action.
Steps:
A standard incident response methodology that may be implemented by an organization includes
the following steps:
1. Preparation
2. Identification (Detection)
3. Containment (Response)
4. Eradication (Mitigation)
5. Recovery
6. Lessons learned (post incident activity, postmortem, or reporting)
1. Preparation:
The preparation phase includes steps taken before an incident occurs. These include training,
writing incident response policies and procedures, and providing tools such as laptops with
sniffing software, crossover cables, original OS media, removable drives, etc.
2. Identification (Detection):
One of the most important steps in the incident response process is the detection phase.
Detection, also called identification, is the phase in which events are analyzed in order to
determine whether these events might comprise a security incident.
3. Containment (Response):
The response phase, or containment, of incident response is the point at which the incident
response team begins interacting with affected systems and attempts to keep further damage
from occurring as a result of the incident.
4. Eradication (Mitigation):
This phase involves finding the root cause of the incident and removing affected systems from the production environment.
5. Recovery:
This phase involves ensuring that no threat remains and permitting affected systems back into the production environment.
6. Lessons learned (post incident activity, postmortem, or reporting):
Prepare complete documentation of the incident, investigate the incident further, and understand
what was done to contain it and whether anything in the incident response process could be
improved.
4. Define initial response and explain the steps of volatile data collection from a Windows system?
Initial response is one of the first steps of any preliminary investigation: obtaining enough information to determine an appropriate response.
Volatile data collection from a Windows system:
1. Run a Trusted cmd.exe:
As discussed earlier, investigators should be careful about traps implemented by an attacker that can mislead the investigator into performing the wrong response. For this reason, commands should be executed from a trusted command shell brought by the responder rather than the shell on the compromised system.
2. Recording the system time and date:
After executing the trusted command shell, it is a good idea to capture the local system date and
time settings. This is important to correlate the system logs, as well as to mark the times at which
you performed your response. The time and date commands are a part of the cmd.exe
application.
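A minimal Python sketch of this step is shown below, assuming a trusted Python interpreter is part of the response toolkit; the log file name is illustrative. In practice the same record can be made by redirecting the output of date /t and time /t from the trusted cmd.exe.

    # Record the responder's own clock and the target system's date/time settings.
    import subprocess
    from datetime import datetime

    with open("response_log.txt", "a") as log:
        log.write(f"Responder timestamp: {datetime.now().isoformat()}\n")
        for cmd in ("date /t", "time /t"):
            result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            log.write(f"{cmd}: {result.stdout.strip()}\n")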
3. Identify who has logged on to the system and who the remote access users are:
It is necessary to identify which user accounts have remote access rights on the target system in order to respond to a system that offers remote access via modem. If several accounts access the system via Remote Access Service (RAS), you need to decide whether to pull the telephone lines from the system at the time of response.
4. Record creation, access, and modification times of files:
The dir command is used to get a listing of all files and directories on the target machine; it includes the size and the access, modification, and creation times.
dir /t:a /a /s /o   Provides a recursive directory listing of all the access times on the drive
dir /t:w /a /s /o   Provides a recursive directory listing of all the modification times on the drive
dir /t:c /a /s /o   Provides a recursive directory listing of all the creation times on the drive
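Alongside the dir listings, the same information can be gathered with a short script. The sketch below, assuming a trusted Python interpreter and an illustrative output file name, walks a drive and writes one CSV row per file with its access, modification, and creation times:

    # Walk a directory tree and record access, modification, and creation times per file.
    import csv, os
    from datetime import datetime

    def record_mac_times(root, out_path="mac_times.csv"):
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["path", "accessed", "modified", "created"])
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    full = os.path.join(dirpath, name)
                    try:
                        st = os.stat(full)
                    except OSError:
                        continue  # skip unreadable entries rather than guessing
                    writer.writerow([full,
                                     datetime.fromtimestamp(st.st_atime).isoformat(),
                                     datetime.fromtimestamp(st.st_mtime).isoformat(),
                                     datetime.fromtimestamp(st.st_ctime).isoformat()])

    record_mac_times("C:\\")  # st_ctime reports the creation time on Windows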
5. Identifying open ports:
There are several networking commands available; Netstat can be used to determine which ports are open. It also lists all listening ports and current connections to those ports. Volatile data, such as recently terminated connections and current connections, can be recorded using Netstat.
6. List of applications that are associated with those ports:
Knowing which services listen on which ports is helpful. The free tool Fport can be used to list the listening ports and the processes that own them.
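Fport is a third-party tool; a rough equivalent of steps 5 and 6 using only built-in commands pairs netstat -ano (which prints the owning PID for each connection) with tasklist. The Python sketch below is illustrative, and the output file name is an assumption:

    # Snapshot open/listening ports and map each owning PID to its process name.
    import csv, subprocess

    netstat = subprocess.run(["netstat", "-ano"], capture_output=True, text=True).stdout
    tasklist = subprocess.run(["tasklist", "/fo", "csv"], capture_output=True, text=True).stdout

    # tasklist /fo csv: first column is the image name, second is the PID.
    pid_to_name = {}
    for row in csv.reader(tasklist.splitlines()[1:]):
        if len(row) >= 2:
            pid_to_name[row[1]] = row[0]

    with open("ports_and_processes.txt", "w") as out:
        for line in netstat.splitlines():
            parts = line.split()
            if parts and parts[0] in ("TCP", "UDP"):  # these lines end with the PID
                out.write(f"{line.strip()}  ->  {pid_to_name.get(parts[-1], 'unknown')}\n")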
7. List of all running processes:
It is necessary to record all the processes that are currently executing on the system before
turning off the target system. Unplugging the power cable will destroy this information.
8. List of current and recent connections:
To know who is connected or has connected recently, networking commands such as Netstat, ARP, and Nbtstat are useful. For many Windows systems, these utilities might be the only way to determine whether a remote system is connecting to the workstation. Many experts prefer the Netstat command to list the ports that are open on a system.
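Steps 7 and 8 can be captured together before the system is powered down. The sketch below simply runs the built-in commands and saves their output with a collection timestamp; the command set and output file name are illustrative, not a fixed procedure:

    # Capture running processes and current/recent network state to a single file.
    import subprocess
    from datetime import datetime

    commands = {
        "running processes":  ["tasklist", "/v"],
        "open connections":   ["netstat", "-an"],
        "ARP cache":          ["arp", "-a"],
        "NetBIOS name cache": ["nbtstat", "-c"],
    }

    with open("volatile_snapshot.txt", "w") as out:
        out.write(f"Collected at {datetime.now().isoformat()}\n")
        for label, cmd in commands.items():
            result = subprocess.run(cmd, capture_output=True, text=True)
            out.write(f"\n=== {label} ({' '.join(cmd)}) ===\n{result.stdout}\n")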
5. Illustrate forensic duplication tool requirements?
Forensic duplication tools must satisfy the following criteria:
1. The tool shall make a bit stream duplicate or an image of an original disk or partition.
2. The tool shall not alter the original disk.
3. The tool will be able to verify the integrity of a disk image file.
4. The tool shall log I/O errors.
5. The tool’s documentation shall be correct.
6. The tool should create a mirror image or forensic duplicate of the original storage media.
7. The tool must be able to handle read errors.
8. The tool should not make any changes to the source medium.
9. The tool must have the capability to be held up to scientific review. Results must be verifiable
by a third party.
10. If there are no errors accessing the source, then the tool shall create a bitstream duplicate or
image of the source.
11. If there are I/O errors accessing the source, then the tool shall create a qualified bitstream
duplicate or image of the source.
12. The tool shall log I/O errors in an accessible and readable form, including the type of error
and location of the error.
13. The tool shall be able to access disk drives through one or more well-defined interfaces.
14. Documentation shall be correct, insofar as the mandatory and any implemented optional
requirements are concerned, that is, if a user following the tool’s documented procedures
produces the expected result, then the documentation is deemed correct.
15. If the tool copies a source to a destination that is larger than the source, then it will document
the contents of the areas on the destination that are not part of the copy.
16. If the tool copies a source to a destination that is smaller than the source, then the tool will
notify the user, truncate the copy, and log this action.
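The behaviour these requirements describe can be sketched in a few lines of Python, purely as an illustration: read the source in fixed-size blocks without ever writing to it, produce an image, log any read error with its type and offset (a qualified duplicate), and hash the result so a third party can verify it. Real duplication is done with validated tools, and the device path below is only an example.

    # Illustrative bit-stream duplication with I/O error logging and hash verification.
    import hashlib, logging

    BLOCK = 512  # sector-sized blocks keep the span of any read error small

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        return h.hexdigest()

    def duplicate(source_path, image_path, log_path="duplication_errors.log"):
        logging.basicConfig(filename=log_path, level=logging.INFO)
        offset = 0
        # The source is opened read-only, so the original medium is not altered.
        with open(source_path, "rb") as src, open(image_path, "wb") as img:
            while True:
                try:
                    block = src.read(BLOCK)
                except OSError as err:
                    # Qualified duplicate: log the error type and location, substitute zeros.
                    logging.info("read error at offset %d: %s", offset, err)
                    src.seek(offset + BLOCK)
                    block = b"\x00" * BLOCK
                if not block:
                    break
                img.write(block)
                offset += len(block)
        # Hash of the finished image; a third party can recompute it to verify integrity.
        return sha256_of(image_path)

    print(duplicate(r"\\.\PhysicalDrive1", "evidence.dd"))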