CSF Unit-4
INTRODUCTION
Computer Forensics is a scientific method of investigation and analysis in order
to gather evidence from digital devices or computer networks and components
which is suitable for presentation in a court of law or legal body. It involves
performing a structured investigation while maintaining a documented chain of
evidence to find out exactly what happened on a computer and who was
responsible for it.
TYPES
Disk Forensics: It deals with extracting raw data from the primary or
secondary storage of the device by searching active, modified, or deleted
files.
Network Forensics: It is a sub-branch of Computer Forensics that involves
monitoring and analyzing the computer network traffic.
Database Forensics: It deals with the study and examination of databases
and their related metadata.
Malware Forensics: It deals with the identification of suspicious code and
studying viruses, worms, etc.
Email Forensics: It deals with emails and their recovery and analysis,
including deleted emails, calendars, and contacts.
Memory Forensics: Deals with collecting data from system memory (system
registers, cache, RAM) in raw form and then analyzing it for further
investigation.
Mobile Phone Forensics: It mainly deals with the examination and analysis of
phones and smartphones and helps to retrieve contacts, call logs, incoming
and outgoing SMS, and other data present on the device.
CHARACTERISTICS
Identification: Identifying what evidence is present, where it is stored, and
how it is stored (in which format). Electronic devices can be personal
computers, Mobile phones, PDAs, etc.
Preservation: Data is isolated, secured, and preserved. This includes
prohibiting unauthorized personnel from using the digital device, so that
digital evidence is not tampered with, mistakenly or purposely, and making a
copy of the original evidence.
Analysis: Forensic lab personnel reconstruct fragments of data and draw
conclusions based on evidence.
Documentation: A record of all the visible data is created. It helps in
recreating and reviewing the crime scene. All the findings from the
investigations are documented.
Presentation: All the documented findings are produced in a court of law for
further investigations.
PROCEDURE:
The procedure starts with identifying the devices used and collecting preliminary
evidence at the crime scene. A court warrant is then obtained, which allows the
evidence to be seized. The evidence is then transported to the forensics lab for
further investigation; the documented process of moving the evidence from the
crime scene to the lab is called the chain of custody. The evidence is then copied
for analysis and the original is kept safe, because analysis is always done on the
copied evidence, never on the original.
The copied evidence is then analyzed for suspicious activity and the findings are
documented in a nontechnical tone. The documented findings are then presented
in a court of law for further proceedings.
Some Tools used for Investigation:
Tools for Laptop or PC –
COFEE – A suite of tools for Windows developed by Microsoft.
The Coroner’s Toolkit – A suite of programs for Unix analysis.
The Sleuth Kit – A library of tools for both Unix and Windows.
Tools for Memory:
Volatility
WindowsSCOPE
Tools for Mobile Devices:
MicroSystemation XRY/XACT
APPLICATIONS
Intellectual Property theft
Industrial espionage
Employment disputes
Fraud investigations
Misuse of the Internet and email in the workplace
Forgery-related matters
Bankruptcy investigations
Issues concerning regulatory compliance
Advantages of Computer Forensics:
Produces evidence for the court, which can lead to the punishment of the
culprit.
Helps companies gather important information when their computer systems
or networks may have been compromised.
Efficiently tracks down cybercriminals from anywhere in the world.
Helps to protect the organization's money and valuable time.
Allows investigators to extract, process, and interpret factual evidence, so
that cybercriminal actions can be proved in court.
Disadvantages of Computer Forensics:
Before digital evidence is accepted into court, it must be proved that it has
not been tampered with.
Producing and keeping electronic records safe is expensive.
Legal practitioners must have extensive computer knowledge.
The evidence produced must be authentic and convincing.
If the tool used for digital forensics does not meet specified standards, the
evidence can be ruled inadmissible in a court of law.
A lack of technical knowledge on the part of the investigating officer might
not yield the desired result.
A computer forensics investigation typically begins with an engagement contract,
which covers:
Non-Disclosure Agreement (NDA)
Authorization
Confidentiality
Payment
Consent and acknowledgement
Limitation of liability
1 Introduction
Digital forensics has developed differently from other types of forensics. In other
forensic sciences, methodologies have often been based on scientific discoveries and on
ad-hoc research [1, 2, 3]. However, due to the rapid growth of digital investigations,
scientific processes are now integrated into investigations in order to make digital
forensic evidence acceptable in court [4, 3]. Work has also been undertaken to provide
ways to help juries understand the value of digital evidence [4]. Robertson describes the
evolution of the forensic sciences over the next ten years and illustrates some of the
current challenges: “New technologies and improved instrumentation will continue to
emerge. Forensic science usually has a lag period before these are adopted into the
forensic area, and this is to be expected, given the conservative nature of the legal
arena. In the past, however, the lag period has been too long. Forensic scientists need to
be quicker to recognize the potential applications to forensic problems and they also
need to be able to carry out research aimed at helping to interpret what analytical data
means.” [5]
As digital forensics is still an immature science, legal issues still exist which need to be
overcome. Meyers and Rogers [6] highlighted the main issues faced within computer
forensics:
• Admissibility of evidence. In order to ensure that evidence is admissible in court,
investigators should follow rigorous procedures. These need to be standardised or, at a
minimum, guidelines should be defined. Unfortunately, even when closely following
recommendations, errors can be made. This is often because each case is different, and
it is not possible to create a manual which covers all possibilities. For this reason, it is
essential that digital investigators develop appropriate skills to get around these
problems.
• Standards and certifications. Certifications are a good way to develop investigators’
skills. This has already been applied successfully in other computer fields, such as
computer security [7]. A small number of certifications are available, such as the
Certified Forensic Analyst [8] certification, which is provided by the Global Information
Assurance Certification (GIAC) founded by the SANS Institute. Nonetheless,
certifications and standards are not only applicable to investigators.
The International Organization for Standardization (ISO), in association with the
International Electrotechnical Commission (IEC), created this standard in order to
provide laboratories with general requirements for carrying out tests, calibrations and
sampling. The main requirements are the following:
• Management system
• Document control
• Subcontracting of tests and calibrations
• Purchasing services and supplies
• Service to the customer
• Complaints
• Corrective action
• Preventive action
• Test and calibration methods and method validation
• Assuring the quality of test and calibration results
• Reporting the results
Projects have been carried out by organisations in order to evaluate Digital Forensic
Tools. The most well-known project is undertaken by the National Institute of
Standards and Technology (NIST) under the Computer Forensics Tool Testing (CFTT)
project [10]. The results of these tests are released to the public. The Scientific Working
Group on Digital Evidence (SWGDE) and the DoD [11] also assess Digital Forensic
Tools; however, their results are available only to U.S. law enforcement agencies [12].
The choice not to release this information is difficult to understand, because computer
forensics, as with any other science, is based on information sharing. Even if all results
were available, it would not be possible for these organisations to keep up with the fast
pace of tool development [13]. In addition, many practitioners rely too much on
vendors’ capability to validate their own tools [3]. Furthermore, Beckett [3] argued
that, in order to comply with the standard’s requirements, laboratories should not rely
on testing performed by another organisation.
The risk of failure in Digital Forensic Tools has been demonstrated by several authors.
NIST [14] showed that the well-known acquisition tool dd [15] was not able to retrieve
the last sector of a hard drive if the drive had an odd number of sectors. These results
were confirmed by Kornblum [16]. However, he explained that the behaviour did not
come from the tool implementation; instead, he argued that the issue came from the
Linux Kernel 2.4 and was absent from the Linux Kernel 2.6, which demonstrates that
organisations which validate DFTs can make mistakes. If results are followed blindly by
laboratories, a major issue might arise if errors have been introduced in the testing
procedure.
The previous example discussed issues related to software. However, investigators
might also have to use tools which combine hardware and software, such as write
blockers. NIST produced an evaluation methodology for this type of product and
evaluated multiple write blockers. Beckett [3] properly explained the risks that a
laboratory might encounter if no additional testing is carried out: each device needs to
be tested before it can be used in the field, because a manufacturing fault may be
present, or the device may have been damaged, for instance, during transport.
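A simple screen for the dd defect described above is to check the acquired image's size against the drive geometry: an image must contain exactly sector_count * sector_size bytes, so a dropped final sector shows up in the file size alone. A minimal Python sketch, not from the paper (the image name and geometry are hypothetical):

import os

def check_acquisition_size(image_path, sector_count, sector_size=512):
    # Flag images that are missing bytes, e.g. a dropped final sector.
    expected = sector_count * sector_size
    actual = os.path.getsize(image_path)
    if actual == expected:
        return "OK: image size matches drive geometry"
    return f"WARNING: expected {expected} bytes, got {actual}"

# Hypothetical drive with an odd number of sectors.
print(check_acquisition_size("suspect_drive.img", sector_count=39070079))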
NIST Standardised Approach to Tool Evaluation
In the Computer Forensics Tool Testing (CFTT) project, NIST developed methodologies
to validate a range of forensics tools, initially focusing on data acquisition tools [21, 22]
and write blockers [23, 24] (software and hardware based). Figure 2 illustrates the
methodology used to assess the tools [10]. When a tool is to be tested, the NIST
methodology starts with acquiring the tool and reviewing its documentation. If
documentation is non-existent, the tool is analysed in order to generate it, which leads
to a list of features along with the requirements for those features, and thus a test
strategy. This methodology is based on rigorous and scientific methods, and the results
are reviewed by both stakeholders (vendor and testing organization), ensuring a certain
level of fairness. However, this is also the major weakness of the methodology, as the
time required for an evaluation can be significant. The resources needed to carry out
each test do not enable a single organisation to test all tools along with all their versions
[13]. Thus, by the time results are publicly available, the version of the tested tool
might be deprecated. In addition, the requirements of features might evolve, and this
needs to be reflected in the test strategy. Moreover, the time needed to define the
requirements of a single function must be counted in years. NIST has defined standards
for string searching tools [25], but since then little additional work has been made
publicly available; the specifications for digital data acquisition tools have remained in
draft since 2004 [21]. These examples show that it is not viable for law enforcement
agencies to rely only on organisations which evaluate DFTs. Moreover, some categories
of tools commonly used in digital investigation, such as file carving tools, are simply not
covered. For these reasons, it is essential for digital investigators to validate DFTs
themselves.
2.3 Validation and Verification of Digital Forensics Tools with Reference Sets
Beckett [3] explained that testing may not find all errors of a DFT, because a complete
evaluation of a product would require extensive resources. The requirements defined by
ISO 17025:2005 [9] specify that validation is a balance between cost, risk and technical
possibilities. However, testing should be able to provide information on the reliability of
the tool.
These devices are often powered from the source or from the suspect machine. However, a
forensic analyst who relies on forensic hardware must ensure that all possible connectors are
available prior to starting a job. Some of the advantages include:
1. The embedded development has already been completed, saving space and time and generally
simplifying the acquisition process.
2. The increased speed of acquiring digital data using hardware devices compared to using software.
Hardware protection devices offer a simple method of acquiring an image of a suspect drive with much
less fiddling with configuration settings in software. This makes the process simpler and less prone to error.
NoWrite
NoWrite prevents data from being written to the hard disk. It supports hard disk drives with high
capacities. It is compatible with all kinds of devices, including USB or FireWire boxes, adapters, and IDE
interface cables, and it supports communication between common IDE interfaces.
NoWrite is only functional on native IDE devices. It supports all USB features such as plug-and-play. It is
compatible with most operating systems and drive formats. NoWrite is transparent to the operating system
and the application programs ...
One of the challenges faced by EE practitioners is how to assure the reliability (or forensic
soundness) of digital evidence acquired by EE investigation tools (NRC, 2009). As today's EE
investigations rely heavily on automated software tools, the reliability of investigation outcomes
is predominantly determined by the validity and correctness of such tools and their application
process. Therefore, an insistent demand has been raised by law enforcement and other agencies
to validate and verify EE tools in order to assure the reliability of digital evidence.
Another factor demanding the validation and verification of EE tools is the push to bring the
EE discipline in line with other established forensic disciplines (e.g. DNA and ballistics) (Beckett
and Slay, 2007). One main way to achieve this goal is to gain external accreditation, such as
ISO 17025 laboratory accreditation (ISO 17025E). EE laboratories and agencies are tested
against developed criteria and have to satisfy the extensive requirements outlined within this
document to gain accreditation. As a part of the accreditation, the EE tools and their utilization
process need to be tested.
In this work, we propose a functionality orientated paradigm for EE tool validation and
verification based on Beckett's work (Beckett and Slay, 2007). Within this paradigm, we dissect
the EE discipline into several distinct functional categories, such as searching, data recovery and
so on. For each functional category, we further identify its details, e.g. sub-categories and
components. We call this dissection process function mapping. Our focus in this work is
the searching function. We map the searching function, specify its requirements, and design a
reference set for testing EE tools that possess searching functions.
The rest of this paper is organized as follows. Section 2 explains the necessity for EE tool
validation and verification. It also reviews the related work of traditional EE tools testing in the
EE discipline. Section 3 discusses the previous work and identifies their limitations. In Section 4,
we present our functionality orientated testing paradigm in detail, which includes its fundamental
methodology and unique features. Section 5 presents detailed searching function mapping. The
requirements of the searching function are identified in Section 6. Lastly, we develop a focused
pilot reference set for testing the searching function in Section 7. Section 8 concludes the paper.
2. Background and related work
2.1. Validation and verification of software
The methods and technologies that provide confidence in system software are commonly called
software validation and verification (VV). There are two approaches to software VV: software
inspection and software testing (Fisher, 2007). While software inspection takes place at all stages
of the software development life-cycle, inspecting requirements documents, design diagrams and
program code, software testing runs an implementation of the target software to check whether
the software is produced correctly or as intended. The VV work proposed in this paper falls into
the software testing category.
Since its introduction in the early 1990s, the concept of validation and verification has been
interpreted in a number of contexts. The following are some examples.
1) In IEEE standard 1012–1998, validation is described as the process of evaluating a system or
component during or at the end of the development process to determine whether it satisfies
requirements. Verification is the process of evaluating a system or component to determine
whether the products of a given development phase satisfy the conditions imposed at the start of
that phase.
2) ISO (17025E) describes validation as the confirmation by examination and the provision of
objective evidence that the particular requirements for a specific intended use are fulfilled.
3) Boehm (1997), from the software engineering point of view, succinctly defines validation and
verification as “validation: Are we building the right product?” and “verification: Are we
building the product right?”
4) The only available description of software validation in the EE discipline is given by the
Scientific Working Group on Digital Evidence (SWGDE, 2004) as an evaluation to determine if
a tool, technique or procedure functions correctly and as intended.
Taking into consideration all these definitions and keeping in mind the requirements of ISO
17025, we adopt the definitions of validation and verification of forensic tools (Beckett and Slay,
2007) as follows.
• Validation is the confirmation by examination and the provision of objective evidence that a
tool, technique or procedure functions correctly and as intended.
• Verification is the confirmation of a validation with laboratory tools, techniques and
procedures.
2.2. Demands of EE tools validation and verification
The process of using automated software has served law enforcement and the courts very well,
and experienced detectives and investigators have been able to use their well-developed policing
skills, in conjunction with the automated software, so as to provide sound evidence. However,
the growth in the field has created a demand for new software (or increased functionality to
existing software) and a means to verify that this software is truly forensic, i.e. capable of
meeting the requirements of the ‘trier of fact’. Another factor demanding EE tools validation and
verification is for the EE discipline to move in line with other established forensic disciplines.
However, the EE community is now facing a complex and dynamic environment with regard to
EE tools. On one hand, the technology field has become very dynamic and the types of digital
devices, such as notebook computers, iPods, cameras and mobile phones, have changed
incredibly rapidly. And thus the digital evidence acquired from those devices has also changed.
On the other hand, in such a dynamic technological environment, there is no individual tool that
is able to meet all the needs of a particular investigation (Bogen and Dampier, 2005). Therefore,
the world has been witnessing an explosive boom in EE tools in the last decade. Although these
EE tools are currently being used by law enforcement agencies and EE investigators, we must be
aware that while some of them (e.g. EnCase, FTK) were originally developed for the forensic
purpose, others were designed to meet the needs of particular interest groups (e.g. JkDefrag
(Kessels) is a disk defragmenter and optimizer for Windows 2000/2003/XP/Vista/2008/X64).
Hence, to guarantee that the digital evidence is forensically sound, EE investigators must
validate and verify the EE tools that they use to collect, preserve and analyze digital
evidence.
2.2.2. Laboratory accreditation
The establishment of digital forensic laboratories within Australia has predominantly been
aligned with law enforcement agencies. While these laboratories or teams have worked
successfully since their establishment, the discipline is now developing to a stage where the
procedures, tools and people must be gauged against a quality and competency framework.
To achieve this goal, one main method is to comply with the ISO 17025 Laboratory Accreditation
standard. ISO 17025 specifies the general requirements for the competence to carry
out tests and/or calibrations. It encompasses testing and calibration performed by the laboratory
using standard methods, non-standard methods, and laboratory-developed methods. A laboratory
complying with this standard will also meet the quality management system requirements of ISO
9001. Among these requirements (e.g. document control, internal audits, etc.), one item
relating to the subject of this paper is “test and calibration methods and method validation”. Due
to the lack of verification and validation of EE tools, the EE branch may not achieve accreditation
in the way that other law enforcement branches (e.g. DNA and ballistics) already have. As
a result, preventing the EE branch from becoming the weakest link among law enforcement
departments calls for the verification and validation of EE tools.
2.3. Existing works of EE tools validation and verification
In the last few years, although extensive research efforts have been conducted in the EE
discipline, ranging from generic frameworks and models (Reith et al., 2002, Bogen and Dampier,
2005, Brian, 2006) to practical guidelines and procedures (Beebe and Clark, 2005, Ruibin et al.,
2005, Good practice guide), there is still very little on the validation and verification of digital
evidence and EE tools.
Some efforts in the past (Palmer, 2001, Bor-Wen Hsu and Laih, 2005) have been made to
investigate the “trustworthiness” of digital evidence, that is, the product of the process. In these
works, the digital evidence (the outcome of the tools) is examined rather than the tools
themselves being validated and verified. The question can be asked as to why we should not also
validate the forensic development of such tools.
The National Institute of Standards and Technology (NIST) is one of the pioneers pursuing
the validation and verification of computer forensic tools. Within NIST, the Computer Forensics
Tool Testing (CFTT) project (NIST, 2002) was established to test EE tools. The activities
conducted in forensic investigations are separated into discrete functions or categories, such as
write protection, disk imaging, string searching, etc. A test methodology is then developed for
each category. So far, several functionalities and tools have been tested and documented, such as
write blockers (NIST, 2003), disk imaging (NIST, 2004, NIST, 2005), string search (NIST,
2009) and mobile devices associated tools (NIST, 2008).
Developing extensive and exhaustive tests for digital investigation tools is a lengthy and
complex process, which the CFTT project at NIST has taken on. To fill the gap between
extensive tests from NIST and no public tests, Carrier (Brian, 2005) has been developing small
test cases, called Digital Forensics Tool Testing Images (DFTT). These tests include keyword
searching, data carving, extended partitions and Windows memory analysis.
Another research entity that is interested in the validation and verification of EE tools is the
Scientific Working Group on Digital Evidence (SWGDE). Rather than developing specific test
cases, the SWGDE recommended general guidelines for validation testing of EE tools (SWGDE,
2004). These guidelines include purpose and scope of testing, requirements to be tested,
methodology, test scenario selection, test data and documenting test data used.
The validation and verification of EE tools can also be conducted by the vendors that produce
these tools. For example, EnCase and FTK are two widely used digital forensic investigation
tools. Their developers, Guidance Software and Access Data, have conducted some validation
and verification work on EnCase and FTK, which can be found on their bulletin boards.
There are a few points that need to be noted. First, vendor validation has been widely
undocumented and not proven publicly, except through rhetoric and hearsay on bulletin boards.
Many published documents in this field discuss repeatability of process with other tools as the
main validation technique, but no documented record can be found in the discipline that expands
on the notion of two tools both being wrong (Beckett and Slay, 2007).
Secondly, this validation work treats the EE software package as a single inseparable entity.
Tool orientated validation methods would usually invalidate a tool package when one of its
functions fails validation, even though all other functions pass the test. In most cases a
forensic tool package is quite complex and provides hundreds of specific functions (Wilsdon and
Slay, 2005), of which only a few may ever be used by an examiner. Therefore, according to the
traditional tool orientated validation method, a digital forensic software suite in which most
functions are valid and could be partially utilized will either not be utilized at all, or must wait for
the complete validation of the entire function set. Because the cost of purchasing such software is
so great, it is infeasible to discount an entire package due to a single function, or a small group of
functions, failing validation.
Following the second point is the cost and complexity issue of the tool orientated VV approach.
Currently, to keep abreast of the broad range and rapid evolution of technology, many EE tools
(and their updated versions) are constantly emerging. These tools are either designed solely for
forensic purposes or designed to meet the needs of particular interest groups. In such a
complex and diverse environment of EE tools, conservative estimates indicate that even trivial
testing of all functions of a forensic tool, for every version, under all conditions, would incur
significant cost (Beckett and Slay, 2007).
3.1.2. Functionality orientated VV approach
NIST/CFTT and DFTT perform the validation and verification of EE tools from another angle:
functionality driven. Instead of targeting the EE software tool, they start the validation by
looking at the EE discipline itself. They identify various activities required in forensic
investigation procedures and separate them into functionalities or categories, such as write
protection, disk imaging, string searching, etc. Then, they specify requirements that need to be
fulfilled for each function category. Based on the requirements specification, testing cases are
then designed to test functions of candidate EE tools.
The difference between the functionality orientated VV approach and the tool orientated VV
approach is that the former does not treat an EE tool as a single entity. Instead, it parses an EE
tool (or package) into various functions and tests each function against the requirements specified
by practitioners and expert advisory groups. For example, in the case of disk imaging testing
(NIST, 2005), EnCase LinEn 6.01 was selected as a test target and only its imaging function was
tested. Clearly, the functionality orientated VV approach outperforms the tool orientated VV
approach in terms of effectiveness and cost.
3.2. Open issues in previous work
Despite the considerable achievements of previous EE work (including validation and
verification of digital evidence and investigation tools), we discover two potential issues
remaining unsolved, which motivate our proposed work.
The first open issue is that the operational focus in the digital forensics domain to date has been
to solve each problem as it presents itself, and not to look at the process of analysis as a whole. For
example, when dealing with the issue of analyzing an image obtained from a new device (e.g. a
new iPod), researchers and practitioners may design an investigation tool specifically for that
device, rather than examining what the impact will be on digital forensics as a scientific
discipline.
Digital forensics is very much an emerging discipline and has developed in an ad-hoc fashion
(Beckett and Slay, 2007) without much of the scientific rigour of other scientific disciplines,
such as DNA, ballistics, and fingerprints. Although the scientific foundations of the EE field and
the functions which together make up the EE process exist, they have never been formally or
systematically mapped and specified (scientific foundations), or stated and characterized
(functions). Though there have been recent efforts to formalize a definitive theory of digital
forensics, and research dissertations focusing on the process model have started to appear
(Brian, 2006), there is still no adequately deep description of the specific functions of the
discipline.
The second open issue regarding the validation and verification of EE tools is that the
methodologies proposed by NIST/CFTT and DFTT are broad and offer no conclusive, detailed
identification of what needs to be tested. In other words, there is still a lack of a systematic and
definitive description of the EE field as a scientific discipline. For example, what basic procedures
make up an EE investigation? What fundamental functionalities are needed in an EE
investigation? What are the requirements of each functionality?
In this work, we use the CFSAP (computer forensic-secure, analyze, present) model (Mohay
et al., 2003) to describe the basic procedures of EE investigation. In this model, four fundamental
procedures are identified: Identification, preservation, analysis and presentation. In the context of
validation and verification, identification and presentation are skill-based concepts, while
preservation and analysis are predominately process, function and tool driven concepts and are
therefore subject to tool validation and verification. In Beckett's previous work (Beckett and
Slay, 2007), the processes of preservation and analysis are preliminarily dissected into several
fundamental functions at an abstract level. The functions in the data preservation procedure are
forensic copy, verification, write protection and media sanitation. The data analysis procedure
involves eight functions: searching, file rendering, data recovery, decryption, file identification,
processing, temporal data and process automation. An ontology of such function mapping is
shown in Fig. 1.
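The function mapping lends itself to a simple hierarchical representation. As an illustration, the two dissected procedures and their functions listed above can be sketched as a Python dictionary:

FUNCTION_MAP = {
    "preservation": ["forensic copy", "verification", "write protection",
                     "media sanitation"],
    "analysis": ["searching", "file rendering", "data recovery", "decryption",
                 "file identification", "processing", "temporal data",
                 "process automation"],
}

# Enumerate every function that is subject to tool validation and verification.
for procedure, functions in FUNCTION_MAP.items():
    for function in functions:
        print(f"{procedure}: {function}")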
• The tool shall find a keyword in the file slack.
• The tool shall find a keyword in a deleted file.
• The tool shall find a regular expression in a compressed file.
Based on the requirement specification, we then develop a reference set in which each test case
(or scenario) is designed to correspond to one function requirement. With the reference set, an EE
tool or its functions can be validated and verified independently.
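To make the reference set idea concrete, the sketch below plants a known keyword at a single known offset in a small test file, then checks that a searching function reports exactly that offset. This is only a toy illustration of the principle (real reference sets also cover file slack, deleted files and compressed files); the file name, keyword and offset are made up:

KEYWORD = b"EVIDENCE"
OFFSET = 4096  # the single known location of the keyword

def build_reference_file(path, size=16384):
    # Zero-filled file, so the planted keyword is the only possible hit.
    data = bytearray(size)
    data[OFFSET:OFFSET + len(KEYWORD)] = KEYWORD
    with open(path, "wb") as f:
        f.write(data)

def naive_search(path, keyword):
    # Stand-in for the tool under test: return every offset of the keyword.
    with open(path, "rb") as f:
        blob = f.read()
    hits, pos = [], blob.find(keyword)
    while pos != -1:
        hits.append(pos)
        pos = blob.find(keyword, pos + 1)
    return hits

build_reference_file("reference.bin")
result = naive_search("reference.bin", KEYWORD)
print("PASS" if result == [OFFSET] else f"FAIL: expected [{OFFSET}], got {result}")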
Our proposed VV methodology can be presented as follows. If the domain of computer
forensic functions is known and the domain of expected results (i.e. the requirements of each
function) is known, that is, the range and specification of the results, then the process of
validating any tool can be as simple as providing a set of references with known results. When a
tool is tested, a set of metrics can also be derived to determine the fundamental scientific
measurements of accuracy and precision. In summary, if the discipline can be mapped in terms
of functions (and their specifications) and, for each function, the expected results are identified
and mapped as a reference set, then any tool, regardless of its original design intention, can be
validated against known elements.
• Extensibility: with a defined function, there exists a set of specifications for components that
must be satisfied for the result of a function to be valid. This means that as new specifications are
found, they can be added to the schema that defines the specification.
• Tool (and tool version) neutrality: if the results, or range of expected results, are known for a
particular function, then it does not matter what tool is applied; what matters is that the results it
returns for a known reference set can be measured. As a tool naturally develops over time, new
versions are common, but the basic premise of this validation and verification paradigm means
that new versions can be measured in exactly the same way.
• Transparency: a set of known references described as a schema is auditable and
independently testable.
Besides validating and verifying EE tools, a standard test paradigm is also useful for
proficiency testing (certification), training (competency) and the development of procedures.
According to a survey conducted by the National Institute of Justice (Appel and Pollitt, 2005),
only 57% of agencies in the US required specific training to duplicate, examine and analyze
evidence, and more than 70% of practitioners had no or minimal (less than a few hours of)
training in this discipline. The situation in Australia is no different: there are few new
investigators, and the pool of seasoned detectives with advanced IT qualifications is drying up.
Although some modern IT security certifications include aspects of forensic computing as part
of incident response, there is no formal Australian certification, or established training standards,
for Electronic Evidence. Through the development of these standard tests, the skills necessary to
carry out a particular test may similarly be specified. For example, if we know that a piece of
software must be able to search for a keyword in an image, then we can also specify that the
investigator will only be certified as competent if he or she can use the software to analyze the
image and find the same specified keyword.
Generally speaking, in the computer forensic domain, searching relates to finding and
locating information of interest in digital devices. Naturally, several questions arise when
performing a search: what do we search for? how do we search? and where do we search?
To answer these questions, we divide the searching function category into three sub-
categories: searching target, searching mode and searching domain, as shown in the figure.
Fingerprint recognition and iris scanning are the most well-known forms of
biometric security. However, facial recognition and (finger and palm) vein
pattern recognition are also gaining in popularity. In this article we consider the
pros and cons of all these different techniques for biometric security.
1. Fingerprint recognition
An identification system based on fingerprint recognition looks for specific
characteristics in the line pattern on the surface of the finger. The bifurcations, ridge
endings and islands that make up this line pattern are stored in the form of an image.
However, this technique has drawbacks: some line patterns are so similar that in practice
they can result in a high false acceptance rate.** Fingerprints can also wear away as you get older, if you do a
lot of DIY or a particular kind of work, for example. As a result, some people may
find that their fingerprints cannot be recognised (false rejection**) or even recorded.
There is even a hereditary disorder that results in people being born without
fingerprints!
On the other hand, fingerprint identification is already familiar to much of the public
and is therefore accepted by a large number of users to use as biometric security. The
technology is also relatively cheap and easy to use. It should be noted, however, that
quality can vary significantly from one fingerprint recognition system to another, with
considerable divergence between systems in terms of false acceptance and false
rejection rates.
** Find out more about false acceptance and false rejection in our article ‘FAR and
FRR: security level versus user convenience’.
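The two error rates have simple definitions: FAR is the fraction of impostor attempts that are wrongly accepted, and FRR is the fraction of genuine attempts that are wrongly rejected. A small illustration in Python (the counts are invented):

def far(false_accepts, impostor_attempts):
    # False Acceptance Rate: impostors wrongly matched.
    return false_accepts / impostor_attempts

def frr(false_rejects, genuine_attempts):
    # False Rejection Rate: genuine users wrongly refused.
    return false_rejects / genuine_attempts

# Hypothetical trial: 10,000 impostor attempts and 5,000 genuine attempts.
print(f"FAR = {far(13, 10_000):.2%}")
print(f"FRR = {frr(95, 5_000):.2%}")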
2. Facial recognition
A facial recognition system analyses the shape and position of different parts of the
face to determine a match. Surface features, such as the skin, are also sometimes taken
into account.
However, facial recognition also has a number of significant drawbacks. For example,
the technology focuses mainly on the face itself, i.e. from the hairline down. As a
result, a person usually has to be looking straight at the camera to make recognition
possible. And even though the technology is still developing at a rapid pace, the level
of security it currently offers does not yet rival that of iris scanning or vein pattern
recognition.
3. Iris recognition
When an iris scan is performed a scanner reads out the unique characteristics of an
iris, which are then converted into an encrypted (bar)code. Iris scanning is known to
be an excellent biometric security technique, especially if it is performed using
infrared light.
Lastly, it is important to bear in mind that although iris scanning offers a high level of
biometric security, this may come at the expense of speed. Incidentally, systems have
recently been developed that can read a person’s iris from a (relatively short) distance.
4. Finger vein pattern recognition
Another point to bear in mind is that very cold fingers and ‘dead’ fingers (such as
those of people suffering from Raynaud’s syndrome) are impossible or difficult to
read using finger vein pattern recognition. Perhaps the greatest drawback, however, is
that this type of biometric security is still relatively unknown.
5. Palm vein pattern recognition
The technology, which cannot be copied (or only with extreme difficulty), is currently
regarded as the best available method in the area of biometric security, alongside iris
scanning. Palm scanning is fast and accurate and offers a high level of user
convenience.
Access control systems based on palm vein pattern recognition are relatively
expensive. For that reason such systems are mainly used within sectors that have
exacting demands when it comes to security, such as government, the justice system
and the banking sector.
Please note that this recognition method is sometimes confused with hand geometry.
However, that is an outdated form of biometrics that is based on the shape of the hand
and involves even fewer unique characteristics than fingerprint recognition.
Forensic artifacts are objects that have some forensic value: any object that
contains data or evidence of something that has occurred, such as logs,
registry hives, and many more. In this section, we will go through some of the
forensic artifacts that a forensic investigator looks for while performing a
forensic analysis on Windows.
1. Recycle Bin: The Windows Recycle Bin contains some great artifacts, such as:
The $I file containing the metadata. You can find this file under the
path C:\$Recycle.Bin\SID*\$Ixxxxxx
The $R file containing the contents of the deleted file. This file can be located
under the path C:\$Recycle.Bin\SID*\$Rxxxxxx
The $I file can be parsed using a tool such as $I Parse (a minimal parsing
sketch follows after this list).
2. Browsers: Web browsers contain a lot of information like:
Cookies.
<="" li="" style="box-sizing: border-box;">
In this section, we will be discussing some of the open-source tools that are
available for conducting forensic analysis on the Windows operating system.
1. Magnet Encrypted Disk Detector: This tool checks for encrypted physical
drives. It supports PGP, SafeBoot encrypted volumes, BitLocker, etc.
2. Magnet RAM Capture: This tool captures the physical memory of the
system for analysis.
3. Wireshark: This is a network analyzer and capture tool that is used to see
what traffic is going through your network.
4. RAM Capture: As the name suggests, this is a free tool that extracts the
entire contents of volatile memory, i.e. RAM.
5. NMAP: This is the most popular tool for finding open ports on a target
machine and identifying the services that may be vulnerable.
6. Network Miner: This tool is used as a passive network sniffer to capture or
detect operating systems, ports, sessions, hostnames, etc.
7. Autopsy: This is a GUI-based tool that is used to analyze hard disks and
smartphones.
8. Forensic Investigator: This is a Splunk toolkit that provides HEX
conversion, Base64 conversion, metascan lookups, and many other features
that are useful in forensic analysis.
9. HashMyFiles: This tool calculates the SHA1 and MD5 hashes of files. It
works on all the latest versions of Windows.
10. Crowd Response: This tool gathers system information for incident
response.
11. ExifTool: This tool is used to read, write, and edit meta information in a
number of file types.
12. FAW (Forensic Acquisition of Websites): This tool acquires web pages
(images, HTML, and source code) and can be integrated with Wireshark.
LINUX FORENSICS
System configuration
Logfile analysis
Note:
Since there are many Linux distributions, this article can't cover all of
them; the artifacts below are presented for Debian. Fortunately, it is
trivial to find similar artifacts in other distributions.
The article assumes a dead-box situation, which means that you
only have the hard disk(s) from the targeted machine.
For the forensic investigation, you may want to mount a copy of the
original image on another Linux machine. The sketch below illustrates
how to mount a raw image on a Debian Linux machine:
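The mounting commands themselves did not survive in this copy of the article; one common approach is a read-only loop mount at the partition's byte offset. A sketch, wrapped in Python to match the other examples here (paths and the offset are hypothetical, and the commands need root):

import subprocess

IMAGE = "evidence.raw"      # hypothetical raw image of the suspect disk
MOUNTPOINT = "/mnt/evidence"
OFFSET = 2048 * 512         # byte offset of the first partition (sector 2048)

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["mkdir", "-p", MOUNTPOINT])
# Read-only, non-executable loop mount honouring the partition offset.
run(["mount", "-o", f"ro,loop,noexec,offset={OFFSET}", IMAGE, MOUNTPOINT])
# ... examine the files under the mountpoint, then:
run(["umount", MOUNTPOINT])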
System Configuration
Host Name is useful to identify the computer name that the hard
disk belongs to. Furthermore, it can be used to correlate with other
logs and network traffic based on the hostname.
Network configuration:
Login information:
There are three places to find this information:
(1) /var/log/auth.log records connections/authentication to the
Linux host. The command “grep -v cron auth.log*|grep -v
sudo|grep -i user” filters out most of the unnecessary data and
leaves only information regarding connection/disconnection.
(2) /var/log/wtmp maintains the status of the system: system
reboot times and user logins (providing the time, username and IP
address if available).
(3) /var/log/btmp records failed login attempts.
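wtmp and btmp are binary files made of fixed-size utmp records, so they can be decoded directly. The sketch below assumes the glibc x86_64 layout (384-byte records); check the format against the target system's /usr/include/utmp.h before relying on it:

import struct
from datetime import datetime, timezone

UTMP_FORMAT = "<hxxi32s4s32s256shhiii4i20x"   # glibc x86_64 struct utmp (assumed)
UTMP_SIZE = struct.calcsize(UTMP_FORMAT)      # 384 bytes

def logins(path):
    with open(path, "rb") as f:
        while (record := f.read(UTMP_SIZE)) and len(record) == UTMP_SIZE:
            (ut_type, _pid, line, _id, user, host, _term, _exit,
             _session, tv_sec, _tv_usec, *_addr) = struct.unpack(UTMP_FORMAT, record)
            if ut_type == 7:   # USER_PROCESS: a real user login
                yield (datetime.fromtimestamp(tv_sec, tz=timezone.utc),
                       user.rstrip(b"\x00").decode(errors="replace"),
                       line.rstrip(b"\x00").decode(errors="replace"),
                       host.rstrip(b"\x00").decode(errors="replace"))

for when, user, line, host in logins("/var/log/wtmp"):
    print(when, user, line, host)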
Mounted disks: these provide more insight into how the Linux box is set up.
Noticeably, attackers may mount a particular path to RAM; hence,
it will not survive a reboot.
Persistence mechanisms:
- Cron jobs are often used for persistence. Cron jobs can be
examined in /etc/crontab (system-wide crontab)
and /var/spool/cron/crontabs/<username> (user-wide
crontab)
- Bash shell initialization: when starting a shell, it will first
execute ~/.bashrc and ~/.bash_profile for each user.
/etc/bash.bashrc and /etc/profile are the system-wide versions
of ~/.bashrc and ~/.bash_profile. (If another shell is used, check
that shell's documentation for similar configuration files.)
- Service start-up: System V (configuration files are in
/etc/init.d/* and /etc/rc[0-6].d/*), Upstart (configuration files
are in /etc/init/*) and systemd (configuration files are
in /lib/systemd/system/* and /etc/systemd/system/*). For more
information regarding service start-up, please refer to How To
Configure a Linux Service to Start Automatically After a Crash or
Reboot — Part 2: Reference
- RC (Run-control) is a traditional way with init to start
services/programs when run level changes. Its configuration can be
found at /etc/rc.local:
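A quick triage pass can simply list whatever exists in the persistence locations named above. A minimal Python sketch using the Debian paths from this article (point root at a mounted evidence image, such as the hypothetical mountpoint from the mounting step):

import glob
import os

LOCATIONS = [
    "etc/crontab", "var/spool/cron/crontabs/*",          # cron
    "etc/bash.bashrc", "etc/profile",                    # shell initialization
    "etc/init.d/*", "etc/rc[0-6].d/*",                   # System V
    "etc/init/*",                                        # Upstart
    "lib/systemd/system/*", "etc/systemd/system/*",      # systemd
    "etc/rc.local",                                      # run-control
]

def triage(root="/mnt/evidence"):
    # Print every persistence artifact present under the evidence root.
    for pattern in LOCATIONS:
        for path in sorted(glob.glob(os.path.join(root, pattern))):
            print(path)

triage()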
The word “forensics” means the use of science and technology to investigate
and establish facts in criminal or civil courts of law. Forensics is the procedure
of applying scientific knowledge for the purpose of analyzing the evidence and
presenting them in court.
Network forensics is a subcategory of digital forensics that deals with
examining a network, and the traffic going across it, when the network is
suspected of being involved in malicious activity (for example, a network
spreading malware to steal credentials) and with investigating cyber-attacks.
As the internet grew, cybercrime grew along with it, and so did the
significance of network forensics, with the development and acceptance of
network-based services such as the World Wide Web, e-mail, and others.
With the help of network forensics, entire transactions can be retrieved,
including messages, file transfers, e-mails, and web browsing history, and
reconstructed to expose the original exchange. It is also possible that only the
payload in the uppermost-layer packet winds up on disk, while the envelopes
used to deliver it are captured only in the network traffic. Hence, the network
protocol data that encloses each dialog is often very valuable.
To identify attacks, investigators must understand the relevant network
protocols and applications, such as web protocols, email protocols, file
transfer protocols, etc.
Investigators use network forensics to examine network traffic data gathered
from networks that are involved, or suspected of being involved, in cybercrime
or any other type of cyber-attack. The experts then look for data that points in
the direction of file manipulation, human communication, and so on. With the
help of network forensics, investigators and cybercrime experts can generally
track down all the communications and establish timelines based on network
event logs logged by the NCS.
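As a tiny example of handling captured traffic directly, the sketch below walks the classic libpcap file format (a 24-byte global header followed by a 16-byte header per packet) and counts the packets in a capture; the file name is hypothetical and pcapng files are out of scope:

import struct

def count_packets(pcap_path):
    with open(pcap_path, "rb") as f:
        magic = struct.unpack("<I", f.read(24)[:4])[0]   # global header
        if magic == 0xA1B2C3D4:
            endian = "<"                                 # little-endian capture
        elif magic == 0xD4C3B2A1:
            endian = ">"                                 # big-endian capture
        else:
            raise ValueError("not a classic pcap file")
        count = 0
        while True:
            header = f.read(16)                          # per-packet header
            if len(header) < 16:
                break
            _sec, _usec, incl_len, _orig_len = struct.unpack(endian + "IIII", header)
            f.seek(incl_len, 1)                          # skip the packet data
            count += 1
    return count

print(count_packets("capture.pcap"))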
Advantages:
Network forensics helps in identifying security threats and vulnerabilities.
It analyzes and monitors network performance demands.
Network forensics helps in reducing downtime.
Network resources can be used in a better way by reporting and better
planning.
It helps in a detailed network search for any trace of evidence left on the
network.
Disadvantage:
The only disadvantage of network forensics is that it is difficult to implement.
E-Mail Investigation:
Spoofing
Anonymous Re-emailing
Here, the email server strips identifying information from the email
message before forwarding it further, which creates another big
challenge for email investigations.
Header Analysis
Server investigation
Network Device Investigation
Sender Mailer Fingerprints
Software Embedded Identifiers
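As a starting point for header analysis, the Python snippet below reads an EML file given on the command line and prints the headers an investigator typically looks at first (the header list is illustrative; only the standard library is used):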
from argparse import ArgumentParser, FileType
from email import message_from_file

def main(eml_file):
    # Parse the EML file and print the headers most relevant to an investigation.
    message = message_from_file(eml_file)
    for header in ('From', 'To', 'Subject', 'Date', 'Message-ID', 'Received'):
        print(f"{header}: {message.get(header)}")

if __name__ == '__main__':
    parser = ArgumentParser('Extracting information from EML file')
    parser.add_argument("EML_FILE", help="Path to EML File", type=FileType('r'))
    args = parser.parse_args()
    main(args.EML_FILE)
Mobile forensics, a subtype of digital forensics, is concerned with retrieving data
from an electronic source. The recovery of evidence from mobile devices such
as smartphones and tablets is the focus of mobile forensics. Because
individuals rely on mobile devices for so much of their data sending, receiving,
and searching, it is reasonable to assume that these devices hold a significant
quantity of evidence that investigators may utilize.
Mobile devices may store a wide range of information, including phone records
and text messages, as well as online search history and location data. We
frequently associate mobile forensics with law enforcement, but they are not the
only ones who may depend on evidence obtained from a mobile device.
Uses of Mobile Forensics:
The military uses data gathered from mobile devices when planning
operations or investigating terrorist activity. A corporation may use mobile evidence if it fears
its intellectual property is being stolen or an employee is committing fraud.
Businesses have been known to track employees’ personal usage of business
devices in order to uncover evidence of illegal activity. Law enforcement, on the
other hand, may be able to take advantage of mobile forensics by using
electronic discovery to gather evidence in cases ranging from identity theft to
homicide.
Process of Mobile Device Forensics: