Network Forensics PDF
NETWORK SECURITY
CPRE 537
Spring 2013
Anusha Chennaka
Electrical and Computer Engineering
Iowa State University
ABSTRACT
A topic of concern is the detection of intruders attempting to break into sensitive networks. This can be
done by analyzing the network traffic, which requires significant resources. This report surveys the
forensic techniques and tools available that make a network easier to monitor.
Network forensics is a branch of digital forensics that focuses on the monitoring and analysis of
network traffic. Unlike other areas of digital forensics that focus on stored or static data, network
forensics deals with volatile and dynamic data. Network forensics involves detecting anomalous traffic
and identifying intrusions. The other face, relating to law enforcement, involves capturing and
analyzing network traffic and can include tasks such as reassembling transferred files, searching for
keywords, and parsing human communication such as emails or chat sessions.
This report first presents the basics of network forensics, followed by the process models and
frameworks. Then a study of the different Network Forensic Analysis Tools (NFATs) is presented.
Next, the different techniques that can be used to implement forensics in a network to effectively
capture traffic and logs are studied. The report concludes by pointing out the role of network
forensics in times of emergency, taking a recent incident as an example.
INTRODUCTION
Man’s foray into science and technology has opened the frontiers for astronomical development in
almost all day-to-day activities. No longer does one need to embark on a tiring journey to meet one's
manager: a single Skype call does it all. The USPS no longer has to employ hundreds of people for
reliable delivery of documents, thanks to online file-sharing tools like Google Drive and Dropbox.
All these advanced applications rely heavily on networks. The health, structural integrity and
security of these networks are the most important factors determining their reliable performance.
Security becomes paramount when the data carried is highly confidential, and it is equally necessary
for networks that maintain or transfer data belonging to a particular organization. These networks are
under continuous threat from attackers and hackers who intend to manipulate things on the network. This
survey focuses on attacks that use the Internet as the transport medium. These attacks can be broadly
classified into the following categories:
Protocol attacks
Every piece of software that is built has an underlying protocol associated with it. Often there
are flaws in the protocol itself or in the software's implementation. By exploiting these flaws
an attacker can break into the software. Two examples of such scenarios are:
Buffer Overflow
Wi-Fi implementation using WEP/WPA/WPA2
Other protocol attacks include cross-site scripting and cookie hijacking. An attacker might also
embed data in least-expected places, such as the options field of ICMP messages, which is generally
not examined by network intrusion-detection/prevention systems.
Malware
This includes software that uses extreme measures to get itself executed on a machine. It
includes Trojans, worms and viruses. A worm replicates itself without any human action, while a
virus requires a human action. A Trojan is an illegitimate piece of software that disguises itself
as useful software but damages the system once installed. A well-known example is the MyDoom
worm, which led recipients to believe they were being resent an e-mail that could not be delivered
the first time, enticing them to double-click the attachment, which then executed its malicious
payload.
A network is constantly under threat from various kinds of attacks, and the number of unsuccessful
attacks is typically far higher than the number of successful ones. In a normal scenario an attacker
is detected only when the attack succeeds, by which time the loss may have already taken place. So
the system must be constantly monitored to gather any evidence that might prove the presence of an
anomaly in the network.
Forensics
Forensics refers to using the evidence left after an attack to determine how the attack was
carried out and what the attacker did. There is, however, a complication: unsuccessful attacks often
go undetected, while successful attacks may leave nothing disclosed, since the attacker can erase the
logs and core dumps once he gets hold of the system.
Digital Forensics
Digital forensics is a science concerned with the recovery and investigation of material found in
digital artifacts, often as part of a criminal investigation. Digital artifacts can include computer systems,
storage devices, electronic documents, or even sequences of data packets transmitted across a
computer network.
Two other general-purpose uses of forensics are:
To gain information about how computer systems work for the purposes of debugging them,
optimizing their performance, or reverse engineering them.
To recover data in case of hardware or software failure.
Forensic analysis is in essence a real-time analysis coupled with other security mechanisms.
An important thing to remember is that network forensics does not aim to protect a system; in many
cases it is a post-mortem analysis and investigation of the attack, and it mostly starts after a
crime notification. A main goal of network forensics is to ensure that the attacker must spend more
time and energy covering his tracks, making the attack costlier. Network forensic systems fall into
two categories:
Catch-it-as-you-can systems: These systems first capture the traffic and write it to storage; analysis
is subsequently done in batch mode. The advantage is that they do not require a high-end processor
for the analysis; however, they do require large amounts of storage.
Stop-look-and-listen systems: These perform a rudimentary analysis on each packet and save only
certain information for future analysis. They do, however, need a high-end processor.
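The two designs can be contrasted on a toy packet stream; this is an illustrative sketch (field names and traffic are made up), not taken from any NFAT:

```python
def catch_it_as_you_can(packets):
    """Store every packet first; analyze later in batch (storage-heavy)."""
    stored = list(packets)                               # full capture to "storage"
    alerts = [p for p in stored if p["flags"] == "SYN"]  # batch analysis afterwards
    return stored, alerts

def stop_look_and_listen(packets):
    """Analyze each packet in-line; keep only a summary (CPU-heavy)."""
    summary = {}
    for p in packets:                        # rudimentary per-packet analysis
        key = (p["src"], p["flags"])
        summary[key] = summary.get(key, 0) + 1
    return summary                           # far smaller than the full trace

traffic = [{"src": "10.0.0.1", "flags": "SYN"},
           {"src": "10.0.0.1", "flags": "SYN"},
           {"src": "10.0.0.2", "flags": "ACK"}]
stored, alerts = catch_it_as_you_can(traffic)
summary = stop_look_and_listen(traffic)
```

The batch variant keeps the whole trace (storage-bound); the streaming variant keeps only per-source counters (CPU-bound), mirroring the trade-off described above.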
Network forensic Process Models
Network forensics has long remained a branch of digital forensics. Researchers are now trying to make
it independent of digital forensics by proposing various process models that are quite different from
the digital forensic process models. Recently a generic model was proposed in the paper "Network
forensic frameworks: Survey and research challenges."
The initial phase is to select and categorize different NFATs and then associate them with a particular
step of the model which would correctly implement the intended functions of the stage.
The following are the different phases of a network forensic process model:
Preparation
In this stage the background is set for the uphill task. Since many tools need to be employed at
various points of the network and also since they need to have access to sensitive data on the
network, the prime duty is to obtain required authorizations and legal warrants to ensure that the
privacy is not violated.
Detection
NFATs employed: TCPDump, Wireshark, PADS, Sebek, Ntop, P0f, Bro, Snort
The tools are employed and alerts are generated in case of any anomaly. These tools might detect a
security breach or a policy violation. The anomalies can be analyzed further against various
parameters to determine the presence and nature of an attack. A validation process then follows,
whose result either continues the analysis or terminates the process, regarding the alert as a
false alarm. If the analysis continues, it branches into the incident-response and data-collection
stages.
Incident Response
The response initiated in this stage depends on the type of attack identified and on organizational
policy and legal and business constraints. An action plan is made, consisting of activities to defend
against future attacks and to recover from the current one, and a decision is taken on whether to
continue the investigation to gather more evidence.
Collection
NFATs employed: TCPDump, Wireshark, TCPFlow, NfDump, PADS, Sebek, SiLK, TCPReplay, Snort, Bro
This is the most critical stage of the analysis, as traffic data changes rapidly and it might not be
possible to generate the same trace at a later time. A well-defined procedure using reliable
hardware and software tools must therefore be in place to gather maximum evidence of the attack. The
system must also be prepared to allocate a large amount of storage for the logs, as the number of
logs generated will be huge.
Preservation
NFATs employed: TCPDump, Wireshark, TCPFlow, NfDump, PADS, Sebek, SiLK, TCPReplay, Bro,
Snort
This stage ensures that a copy of the network data is preserved to satisfy legal requirements,
which may demand that the results obtained by the investigation be reproducible when the process
is repeated on the original data. A hash of the data is preserved, and a copy of the data is
analyzed while the original is kept untouched.
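The hash-and-copy discipline of the preservation stage can be sketched with Python's standard hashlib; the capture bytes here are a stand-in for a real trace file:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest used as the evidentiary fingerprint."""
    return hashlib.sha256(data).hexdigest()

# stand-in for a captured pcap file (\xd4\xc3\xb2\xa1 is the pcap magic number)
original_capture = b"\xd4\xc3\xb2\xa1" + b"fake capture payload"
evidence_hash = sha256_of(original_capture)   # recorded when the trace is seized

working_copy = bytes(original_capture)        # all analysis happens on the copy
# ... analysts examine working_copy ...

# repeating the hash on the untouched original must reproduce the record
assert sha256_of(original_capture) == evidence_hash
```

Re-hashing the original at presentation time proves it was never modified during the analysis.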
Examination
NFATs employed: TCPDump, Wireshark, TCPFlow, Flow-tools, NfDump, PADS, Argus, Nessus, Sebek,
TCPTrace, Ntop, TCPStat, NetFlow, TCPDstat, Ngrep, TCPXtract, SiLK, TCPReplay, P0f, Nmap, Bro,
Snort
The data obtained in the previous step may contain redundant or contradictory data. The
examination stage therefore ensures that a methodical search is conducted so that no crucial
information is lost. A data set containing the least data and the highest possible evidence is
also identified.
Analysis
NFATs employed: TCPDump, Wireshark, TCPFlow, Flow-tools, NfDump, PADS, Argus, Nessus, Sebek,
TCPTrace, Ntop, TCPStat, NetFlow, TCPDstat, Ngrep, TCPXtract, SiLK, TCPReplay, P0f, Nmap, Bro,
Snort.
In this stage statistical, soft computing and data mining approaches are used to search the data and
match the attack patterns. The attack patterns are then put together, reconstructed and replayed to
understand the intention and methodology of the attacker. Some of the important parameters are
related to network connection establishment, DNS queries, packet fragmentation, protocol and
operating system fingerprinting.
Investigation
The investigation phase provides data for incident response and for prosecution of the attacker. This
phase uses the results from the previous stages to obtain a path from the victim to the point of
attack origination through any intermediate systems and communication pathways. It may require
additional features from the analysis phase, and hence these two phases are performed iteratively
to arrive at the conclusion. IP spoofing and stepping-stone attacks, through which the attacker
hides himself, are still open problems.
Presentation
This is the final stage of the process model, in which:
System documentation is done to meet legal requirements.
Visualization is employed to present the conclusions for easy grasping.
The entire case is documented for future reference.
A network forensic framework implements the process model. The various frameworks can be
categorized as follows:
Distributed System Based Frameworks
This is the most popular framework as this represents the LANs and Internet wherein these
networks are distributed as the servers and clients are present at different physical locations.
One of the frameworks proposed was based on distributed techniques, providing an integrated
platform for automatic evidence collection and efficient data storage. It allows easy integration of
known attribution methods and uses an attack-attribution-graph mechanism. The model is based on a
proxy-and-agent architecture: data collection, reduction, processing and analysis are done by the
agents, whereas the proxies are responsible for the attack attribution graph. The merit of this model
is that it provides automatic evidence collection and quick response to attacks.
Aggregation framework
Aggregation frameworks are deployed to improve the strength of existing tools rather than
developing new tools from scratch.
Vandenberghe (2008) proposed the Network Traffic Exploration (NTE) application, developed by
Defence Research and Development Canada (DRDC) for security-event and packet analysis. This tool
combines six key functional areas into a single package: intrusion detection (signature- and
anomaly-based), traffic analysis, scripting tools, packet playback, visualization features and
impact assessment. NTE has three layers, with MATLAB as the development environment, a low-level
packet-analysis library, and a unified application front end. It provides an environment in which
statistical analysis, session analysis and protocol analysis can exchange data.
NETWORK FORENSICS ANALYSIS TOOLS
Network Forensics Analysis Tools support the notion of defense in depth as they can analyze the
network traffic and correlate data from other security tools. This is required by the security
administrators, who need multiple layers of defense. According to an Information Security
Magazine article, NFATs can be defined as follows:
NFAT products capture and retain all network traffic and provide the tools for forensic analysis. An
NFAT user can replay, isolate and analyze an attack or suspicious behavior, then bolster network
defenses accordingly. Some of the functions of NFATs mentioned in the article include:
· IP protection
· Detection of employee misuse/abuse of company networks and/or computing resources
· Risk assessments
· Exploit attempt detection
· Data aggregation from multiple sources, including firewalls, IDSs and sniffers
· Incident recovery
· Prediction of future attack targets
· Anomaly detection
· Network traffic recording and analysis
· Network performance
· Determination of hardware and network protocols in use
NFATs share some general characteristics:
Gather evidence: They listen to the network and gather evidence.
No alteration of data: Since NFATs are non-intrusive by design, they do not alter the data on
the network.
Replay features: This enables NFATs to give evidence to administrators so that they can prove
an attack or breach without altering the evidence on the machine.
NFATs in general help security administrators monitor security firewalls, obtain information about
rogue servers, manage the data flow going into and out of the network, and offer a better way to
record all events. Next, a brief introduction to some NFATs is presented.
SilentRunner
Main focus: This tool mainly focuses on the internal threats in a network by giving the
administrators a 3-dimensional view of the network and allows them to monitor all the
packets passing through a network. If an abnormality is detected in network traffic, it
alerts the appropriate personnel.
Unique feature: SilentRunner has a powerful analysis engine aiding the administrator in
acquiring evidence about a certain event.
Architecture of SilentRunner
Collector: This component captures the packets and is installed on a
system using a SilentRunner-modified Network Driver Interface
Specification (NDIS) packet driver. It captures packets without
interrupting network performance and reassembles sessions based on
HTTP, POP, IMAP, SMTP, Telnet and NNTP content. An inference engine
draws conclusions about network events.
Analyzer: It obtains the data from the collector and uses n-gram analysis to
define patterns for the administrator. It can also analyze firewall logs and IDS
data.
Visualizer: Visualizer component is responsible to create a 3-dimensional
picture of the entire network.
Further analysis: The information from the NFAT next passes to a tool which informs
the administrator of the changes in the network. For instance, an Intrusion Detection
Service can be setup and an administrator can be notified of the changes in the
network.
Demerit: If the traffic cannot be analyzed by the IDS or any other tool, the
administrator is responsible for analyzing the large number of log files,
which becomes an overhead.
NetIntercept
Main focus: It analyzes traffic in batches and can store a large amount of
data through its built-in CD-RW drive. In other words, it is an example of a
catch-it-as-you-can system.
Functions
Network traffic capture: NetIntercept continually captures traffic but does
not retain it indefinitely. An administrator must save the traffic if
required; otherwise the older data is replaced by new traffic.
Analysis of the network traffic: A batch of traffic is reassembled into meaningful
information so that analysis can be performed by the analysis engine. The
engine then attempts to understand the content of the data stream rather than
report on packet information.
Data discovery: In this stage, prior records archived into NetIntercept are used
for trend analysis by the administrator. Different types of records may be
generated based on network traffic, content, user behavior and breaches.
Unique Feature: It can decrypt SSH-2 sessions and only accepts secure remote
administration into a device. It also allows its log files to be inspected and analyzed by
other tools.
NetDetector:
Passive tool: It is a passive NFAT that captures, analyzes and reports on network
traffic.
Unique feature: As alerting mechanisms it uses GUI pop-ups, email or a pager. It
can also be coupled with an IDS to run a complete forensic investigation for the
security administrator.
Advantages:
It supports many common network interfaces (such as 10/100/1000
Ethernet, T1, FDDI) and protocols (such as TCP/IP and Frame Relay).
Has a large storage supply and exports data via HTTP, FTP and SCP.
The tools described above offer a powerful range of analysis options for network monitoring and for
assessing insider threats, zero-day exploits and targeted malware in commercial organizations.
Other tools focus merely on traffic analysis or embed a traffic-analysis engine for the
investigation. They are described below.
Ngrep
Description: It is a simple, low-level network traffic debugging tool for Unix.
Functions: Filters and collects data
Wireshark
Description: It is a widely used network traffic analysis tool and forms the basis of many
network forensics studies.
Functions: Filters and collects data
Driftnet
Description: It listens to network traffic and picks out images; used in Backtrack v5.
Functions: Filters and collects data
NetworkMiner
Description: It is a network forensic analysis tool that can be used as a passive network
sniffer/packet-capturing tool.
Functions:
Filter and collect data
Analyse the log
Reassembly of data stream
Correlation of data
Kismet
Description: Network detector, network packet sniffer, and intrusion-detection system for
wireless LANs.
Functions: Filters and collects data.
NetStumbler
Description: Widely used wireless LAN analysis tool for devices and network traffic analysis.
Functions: Filters and collects data.
Xplico
Description: Network forensic analysis tool that allows for data extraction from traffic
captures; used in Backtrack v5.
Functions: Filters and collects data.
DeepNines
Description : Provides real-time identity-based network defense for content and
applications, along with basic network forensics.
Functions: Filters and collects data.
Argus
Description : Used for network forensics, nonrepudiation, detecting very slow scans, and
supporting zero-day attacks.
Functions:
Filter and collect data
Analyse the log
Fenris
Description : It is a suite of tools for code analysis, debugging, protocol analysis, reverse
engineering, network forensics, diagnostics, security audits, vulnerability research.
Functions: Filters and collects data.
Flow-Tools
Description : Software package for collecting and processing NetFlow data from Cisco and
Juniper routers.
Functions:
Filter and collect data
Analyse the log
EtherApe
Description : It is a graphical network monitor for capturing network traffic.
Functions: Filters and collects data
Honeyd
Description : It improves cyber security by providing mechanisms for traffic monitoring,
threat detection, and assessment.
Functions: Filters and collects data
Snort
Description : It is widely used, popular tool for network intrusion detection and prevention,
as well as for network forensic analysis.
Functions: Filters and collects data
Omnipeek, Etherpeek
Description: It is a low-level traffic analyzer for network forensics.
Functions:
Filter and collect data
Analyse the log
Reassembly of data stream
Savant
Description : It is an appliance for live forensic analysis, surveillance, network analysis, and
critical infrastructure reporting.
Functions:
Filter and collect data
Reassembly of data stream
Dragon IDS
Description : Provides network, host intrusion detection and network forensic capture
analysis.
Functions:
Filter and collect data
Analyse the log
Reassembly of data stream
Correlation of data
Infinistream, nGenius
Description : Appliance for network forensics, incident analysis combined with session
reconstruction and playback
Functions:
Filter and collect data
Reassembly of data stream
Correlation of data
RSA EnVision
Description : It provides live network forensics analysis, log management, network security
surveillance, data leakage protection.
Functions:
Filter and collect data
Analyse the log
Reassembly of data stream
Correlation of data
Provides Application layer view.
NetWitness
Description : It addresses network forensic analysis, insider threat, data leakage protection,
compliance verification, designer malware, and 0-day detection.
Functions:
Filter and collect data
Analyze the log
Reassembly of data stream
Correlation of data
Provides Application layer view.
Solera DS
Description : Appliance for live network forensics, application classification, metadata
extraction, and analysis tools.
Functions:
Filter and collect data
Reassembly of data stream
Correlation of data
Provides Application layer view
NETWORK FORENSIC TECHNIQUES
This section describes some of the techniques available for conducting network forensic research.
IP TRACEBACK TECHNIQUES
The IP traceback technique allows a victim to identify the network paths traversed by attack traffic
without requiring interactive operational support from Internet Service Providers. It is mainly used
to deal with masquerade attacks, which can be produced at different layers: a different MAC address
can be used at the link layer, a different IP address at the Internet layer, and a different TCP or
UDP port at the transport layer.
Definition: If the connection path between the attacker and the victim is given by h1 → h2 → … → hn,
then the IP traceback problem is to find the hosts h1, h2, …, hn-1 given the IP address of the victim hn.
However, the security functions of the networks and intermediate spoofing by hosts make this technique
complicated.
Input Debugging
Attack signature: It can be defined as the common feature contained in all the attack
packets.
Procedure: The victim recognizes that it is being attacked and communicates the attack
signature to the upstream router, which then installs filters that prevent the attack packets
from being forwarded and determines their port of entry. This is repeated recursively on
the upstream routers until the originating site is reached or the trace leaves the boundary of
the network provider or ISP, which is then requested to carry on the procedure.
Limitation: A considerable management overhead at the ISP level to communicate and
coordinate the traceback across the domains limits the effectiveness of input debugging.
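The recursive upstream walk can be sketched on a toy topology; router names, ports and the "EVIL" signature are all hypothetical:

```python
# upstream[router] maps each input port to the neighbor reachable through it
upstream = {
    "R1": {"p1": "R2", "p2": "R5"},
    "R2": {"p1": "R3"},
    "R3": {"p1": "attacker"},
}

# traffic currently observed on each router's input ports
traffic = {
    "R1": {"p1": ["EVIL payload", "noise"], "p2": ["normal"]},
    "R2": {"p1": ["EVIL payload"]},
    "R3": {"p1": ["EVIL payload"]},
}

def port_seeing_signature(router, signature):
    """Ask a router which input port carries packets matching the signature."""
    for port, pkts in traffic.get(router, {}).items():
        if any(signature in p for p in pkts):
            return port
    return None

def input_debug_trace(first_hop_router, signature):
    """Walk upstream port by port until the origin or the domain edge."""
    path, router = [first_hop_router], first_hop_router
    while router in upstream:
        port = port_seeing_signature(router, signature)
        if port is None:
            break                       # trace left our administrative domain
        router = upstream[router][port]  # recurse one hop upstream
        path.append(router)
    return path
```

Each iteration corresponds to one round of filter-and-query coordination with the next upstream router, which is exactly the management overhead the limitation above describes.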
Controlled Flooding:
In this technique the victim has to obtain the map of the Internet topology initially. The victim then
selects hosts iteratively which could be coerced to flood each of the incoming links of the upstream
router.
Basic idea: The victim forces hosts to flood the links into the upstream router. Since the router
buffer is shared by all incoming links, and since the attacker also uses one of the links, flooding
causes some of the attacker's packets to be dropped. The victim then observes the change in the rate
of attack packets received and infers which upstream link the attack traffic traverses. Once that
router is identified, the same procedure is applied until the source is reached.
Limitations:
An accurate topology map is needed for selecting the hosts.
It is a complicated procedure to use flooding to detect distributed DoS attacks
when multiple upstream links may be contributing to the attack.
ICMP Traceback
ICMP traceback was implemented using a scheme called iTrace.
Applicability: This scheme is applicable for attacks that originate from a few sources and
consist of flooding.
Procedure: Each router samples, with low probability, one of the packets it is forwarding
and copies its contents into an ICMP traceback message. This message contains information
about the adjacent routers and is sent to the destination.
Limitations:
ICMP traffic is often filtered or rate-limited compared to normal traffic.
All the routers in the attack must be enabled with iTrace else the destination would
have to reconstruct several possible attack paths that have a sequence of participating
routers.
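A toy simulation of the iTrace idea; router names are hypothetical and the sampling probability is far higher than a deployed scheme would use, so the trace converges in a few hundred packets:

```python
import random

random.seed(1)  # deterministic demo

def forward(path, n_packets, sample_p=0.05):
    """Each router on the path samples forwarded packets with probability
    sample_p and emits an iTrace-style message naming itself and the next hop."""
    messages = set()
    for _ in range(n_packets):
        # path[0] is the attacker and emits nothing; routers are path[1:-1]
        for i, router in enumerate(path[1:-1], start=1):
            if random.random() < sample_p:
                messages.add((router, path[i + 1]))  # (me, adjacent next hop)
    return messages

def reconstruct(messages, destination):
    """Chain adjacency links backwards from the destination."""
    next_of = {nxt: r for r, nxt in messages}  # who forwards toward whom
    hop, hops = destination, []
    while hop in next_of:
        hop = next_of[hop]
        hops.append(hop)
    return list(reversed(hops)) + [destination]

attack_path = ["attacker", "R3", "R2", "R1", "victim"]
msgs = forward(attack_path, 500)
recovered = reconstruct(msgs, "victim")  # router chain; the origin itself stays hidden
```

Note that the reconstruction stops at the first router: the attacker, who emits no messages, remains one hop beyond the recovered path, echoing the limitation that every router must participate.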
Analysis:
The probability that a router marks a packet is p and is the same for all routers. The probability
of receiving a packet marked by a router d hops away and not re-marked by any other router is
p(1-p)^d.
Fig.2 Plot showing the relation between the number of hops and probability of a router marking
a packet
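The expression p(1-p)^d can be evaluated numerically to see the geometric falloff with hop distance (a sketch, with p = 0.2 chosen arbitrarily):

```python
def mark_survival_prob(p, d):
    """p(1-p)**d: the router d hops away marks the packet (p) and none of
    the d routers between it and the victim re-mark it ((1-p)**d)."""
    return p * (1 - p) ** d

# survival probability falls off geometrically with distance from the victim
probs = [mark_survival_prob(0.2, d) for d in range(6)]
```

Closer routers dominate the received marks, which is why ordering the mark counts recovers the path, as discussed under convergence time below.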
Threshold Probability
An interesting feature is that as the number of intermediate routers increases, the chance that
at least one router in the path marks the packet also increases.
Definition: The threshold probability is the minimum probability value to be assigned to every
router in the path to ensure that at least one router on the path marks the packet. It decreases
as the number of hops increases.
Fig.3. Plot showing the relation between the number of hops and Threshold Probability
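Reading "ensure" as "with a chosen confidence c" (an assumption, since a literal guarantee would require p = 1), the threshold probability solves 1 - (1-p)^hops >= c:

```python
def threshold_probability(hops, confidence=0.95):
    """Smallest per-router marking probability p such that at least one of
    `hops` independent routers marks a packet with the given confidence:
    1 - (1 - p)**hops >= confidence."""
    return 1 - (1 - confidence) ** (1 / hops)

# the threshold decreases as the path gets longer, matching the plot's trend
vals = [threshold_probability(h) for h in (1, 3, 10, 18)]
```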
Convergence time
Definition: It is the minimum threshold number of packets required to determine the sequence
of routers that form the attack path.
To determine the order of routers in the attack path, each router in the path must have marked a
different number of packets. If an n-hop attack path is to be determined, the following conditions
must hold:
the victim should receive enough packets that every router in the attack path has marked at
least one packet, and
the number of packets marked by router Ri should be strictly greater than the number marked by
Ri-1, for any 2 ≤ i ≤ n.
Fig. 4 Plots showing the Probability of marking vs Convergence time for 3 hops and 18 hops
respectively
The two figures above show the probability of marking at the router for a 3-hop attack and an
18-hop attack respectively. It can be inferred from these graphs that the minimum convergence
time increases as the hop count of the attack path increases. To lower the convergence time on
attack paths with larger hop counts, it is essential to assign the routers a lower probability
of marking.
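A small simulation (parameters made up) estimates the convergence time as the first packet count at which the victim's per-router mark counts are all positive and strictly ordered:

```python
import random

random.seed(7)  # deterministic demo

def convergence_time(hops, p, max_packets=100_000):
    """Simulate probabilistic packet marking: each router on the path marks
    with probability p, later (closer-to-victim) routers overwrite earlier
    marks. Return the packet count at which the victim's counts are all
    positive and strictly increasing toward itself, i.e. the path order
    can be read off."""
    counts = [0] * hops            # counts[i]: surviving marks from router i
    for sent in range(1, max_packets + 1):
        mark = None
        for i in range(hops):      # packet traverses routers 0..hops-1 in order
            if random.random() < p:
                mark = i           # a closer router overwrites the mark
        if mark is not None:
            counts[mark] += 1
        if all(c > 0 for c in counts) and \
           all(counts[i] < counts[i + 1] for i in range(hops - 1)):
            return sent
    return None

t3 = convergence_time(3, 0.3)      # packets needed for a 3-hop path
```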
It is possible to trace the attack path of even a single packet that traversed various Autonomous
Systems (ASes), some of which might not have deployed SPIE (the Source Path Isolation Engine, a
hash-based single-packet traceback system whose per-AS traceback managers, STMs, answer queries
about packets seen in their domain).
In the above figure the attacker launches an attack from AS10 on AS1. When the victim reports
the attack packet to the STM in AS1, that STM queries the STMs of its one-hop neighbor ASes
about the attacker. In this case it queries the STM in AS7, which conducts an internal traceback
and sends a negative reply, upon which AS1 queries its two-hop neighbors AS3 and AS4. Their STMs
send positive replies indicating the attack path within their ASes. AS3 redirects AS1 to AS10,
and upon a query from AS1 the STM of AS10 conducts an internal traceback to identify the attacker
in its system and reports the attack path.
Payload Attribution
In most cases an investigator may not have any header information about a packet of interest
but is aware of part of the payload he expects it to contain.
Definition: Payload attribution requires identifying the sources, destinations and times of
appearance on a network of all packets that contained a given payload. The problem is hard
because payloads are usually large and information about numerous substrings needs to be
stored.
Bloom Filters
They are space-efficient data structures used to support membership queries. An empty Bloom filter
is a bit vector of m bits, together with k different hash functions, each of which maps a key value
to one of the m positions in the vector. To insert an element into the filter, the k hash values
are computed and the bits at the corresponding k positions are set to 1.
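The description maps directly onto a few lines of Python; using salted SHA-256 digests for the k hash functions is one possible choice, not prescribed by the text:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit vector and k hash functions."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _positions(self, item: bytes):
        # derive k positions from per-function salted SHA-256 digests
        for salt in range(self.k):
            h = hashlib.sha256(bytes([salt]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item: bytes):
        # no false negatives; false positives occur with small probability
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add(b"GET /admin HTTP/1.1")       # a payload excerpt seen on the wire
assert b"GET /admin HTTP/1.1" in bf  # an inserted item is always reported
```

For payload attribution, each captured payload's substrings would be inserted together with time and flow identifiers, letting an investigator later ask whether an excerpt ever appeared.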
Classification of Honeypots
Low interaction honeypots: A limited number of configured services are available for an
adversary to probe the system.
High interaction honeypots: An adversary can access all aspects of operating systems
and launch further network attacks.
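A low-interaction honeypot can be sketched in a few lines of Python: a single fake service that banners, logs the peer and closes. The FTP banner and addresses are hypothetical, and nothing real is exposed:

```python
import socket
import threading

log = []   # source addresses that probed the honeypot

def start_honeypot(host="127.0.0.1"):
    srv = socket.socket()
    srv.bind((host, 0))              # port 0: let the OS pick a free port
    srv.listen(1)

    def serve_once():
        conn, peer = srv.accept()
        log.append(peer[0])          # data capture: record who touched us
        conn.sendall(b"220 ftp.example.com FTP server ready\r\n")  # fake banner
        conn.close()

    threading.Thread(target=serve_once, daemon=True).start()
    return srv.getsockname()[1]      # the port actually chosen

port = start_honeypot()
# an "attacker" probes the fake service
client = socket.create_connection(("127.0.0.1", port))
banner = client.recv(64)
client.close()
```

Since no legitimate user has any reason to connect, every entry in `log` is suspect by definition; a high-interaction honeypot would instead expose a full, instrumented operating system.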
Honeywall: To protect non-honeypot systems from attacks originating from a compromised
honeypot, a honeywall can be set up with several data-control and data-capture features. It can
also nullify the effect of malicious packets that specifically target vulnerabilities on other
systems.
A honeywall can capture and monitor the traffic entering, leaving or inside the honeypot, which
can then be used to track the procedure the attacker adopted to take hold of the honeypot.
Attackers sometimes use encryption software to encrypt the communication between their machine
and the honeypot, which the honeywall cannot decode. In such cases a tool called Sebek can be
deployed on the honeypot to capture the data before it is encrypted.
Serial architecture: In this architecture the honeywall filters all the traffic intended for the
production system's firewall, and all traffic for the production system passes through the
honeynet. If the honeynet is attacked, the production system is alerted. After enough evidence
is collected, the honeynet is stopped and analysis of the malicious packets begins.
Merit: It protects a production system from direct attacks.
Demerit: A delay is suffered by every incoming and outgoing packet, including benign
traffic.
Parallel architecture: This mainly aims at reducing the delay by exposing both the production
system and the honeynet to the Internet. The honeynet's function is now to analyze the malicious
packets and update the firewall, which then imposes stricter conditions on the traffic.
Merit: Delay is reduced.
Demerit: This presents a high risk as the system is directly under attack.
FORENSICS IN THE NEWS
Boston Bombing Incident
Boston witnessed two deadly explosions at the marathon on April 15th 2013. The culprits were caught a
few days after the incident owing to the meticulous work of the police and forensic experts. Here we
look into the forensic aspect of the case.
Evidence: The police collected evidence such as videos and photographs from survivors, which could
contain information about the culprits. Video logs from surveillance cameras were also collected.
The videos were analyzed to find any suspicious persons near the location.
Analysis: Social media was very active in spreading the news, so network forensics experts
expected that the attackers might be actively combing the news for the latest updates.
Investigators, in turn, were expected to search the media for any online discussions planning the
attack. This is the network-forensics part, where analysts help by inspecting network logs for
traffic going to or coming from any suspect IP address, and to or from any website or page hosting
an active discussion of the incident containing certain "triggering" phrases. Here the IP
traceback techniques could be used.
Analysts could also examine traffic volumes on the network: if the traffic from a particular
address had been exceptionally high during the preceding days, that too could be taken into
consideration.
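The two steps just described, filtering log entries against a watchlist of IPs and trigger phrases, and counting per-source traffic to spot unusually high volumes, can be sketched as below. All addresses, URLs, and phrases here are fabricated for illustration; no real log format is assumed.

```python
# Illustrative log-triage sketch: flag entries that touch a watch-listed
# IP or contain a trigger phrase, and count per-source traffic to spot
# unusually chatty hosts. All data and field names are fake examples.
from collections import Counter

WATCHLIST = {"203.0.113.7"}                 # hypothetical suspect IPs
TRIGGERS = ("explosion", "marathon")        # hypothetical trigger phrases

def triage(log_entries):
    flagged, volume = [], Counter()
    for entry in log_entries:
        volume[entry["src"]] += 1           # per-source traffic count
        if (entry["src"] in WATCHLIST or entry["dst"] in WATCHLIST
                or any(t in entry.get("url", "").lower() for t in TRIGGERS)):
            flagged.append(entry)
    return flagged, volume

logs = [
    {"src": "198.51.100.2", "dst": "203.0.113.7", "url": "/forum"},
    {"src": "198.51.100.3", "dst": "192.0.2.1", "url": "/news/marathon"},
    {"src": "198.51.100.2", "dst": "192.0.2.1", "url": "/home"},
]
flagged, volume = triage(logs)
print(len(flagged), volume["198.51.100.2"])  # → 2 2
```

A real investigation would of course work over carrier- or ISP-scale captures with proper tooling; the sketch only shows the shape of the filtering logic.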
Investigators also inspected the logs of cell-phone calls made at the time of the incident,
obtained from the mobile carriers. One possible approach is to trace the calls made from the
location and see whether they yield any useful information. If the calls were encrypted, a
decryption mechanism would have been needed. Another possible forensic direction is to obtain
the subscriber records of the registered phone owners and match them against persons seen in
the video.
Further, the video logs obtained from the surveillance cameras and from the public were kept
under constant review. Software was employed to log even minor details, and forensic analysts
thoroughly inspected the logs gathered. The additional footage contributed by the public
helped the police accurately put together the timeline of events.
Forensic analysts are also looking for evidence in deleted files and caches, which are among
the first places to find digital evidence. The culprits' online social connections are under
scrutiny as well.
CONCLUSIONS
Forensic challenges
Network forensics has an important role to play in new and developing areas such as social
networking, data mining, and data visualization.
Social Networks
Recently the Twitter account of the Associated Press, which has 1.9 million followers, was
attacked. A false post reporting two explosions in the White House, published on 23rd April,
triggered a drop in the Standard & Poor's 500 Index that wiped out $136 billion in market
value. Such attacks can cause public panic and also affect financial markets.
Security has not been given much importance in social networks, leading to inevitable risks.
Advanced forensic tools are needed for this important area of usage, but as of now only
traditional digital and network forensic tools are available.
Data mining
Data mining can be exploited to discover relevant patterns and thereby generate profiles from
large volumes of data. The extraction of historical data from supervisory control and data
acquisition (SCADA) systems is an important area that combines network forensics and data
mining, and a forensic model for SCADA environments still needs to be developed. However,
there is a distinct difference between network-forensics-based data-mining investigations
(where time-based data is analyzed to detect potential malware intrusions) and incident
response and recovery (where the key purpose is to respond to an alarm and implement
recovery).
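As a minimal sketch of the kind of time-based analysis mentioned above, one can compare a host's recent event rate against its historical baseline and flag large deviations. The data, window, and threshold below are illustrative assumptions, not a prescribed method.

```python
# Toy time-based anomaly check: flag a host whose recent event rate
# exceeds its historical mean by k standard deviations. The baseline
# numbers and threshold k are illustrative only.
from statistics import mean, pstdev

def anomalous(history, recent, k=3.0):
    """Return True if `recent` exceeds the historical mean by k std devs."""
    mu, sigma = mean(history), pstdev(history)
    return recent > mu + k * max(sigma, 1e-9)  # guard against zero variance

baseline = [10, 12, 11, 9, 13, 10, 11]   # events per hour over the past week
print(anomalous(baseline, 12))   # → False (within normal variation)
print(anomalous(baseline, 60))   # → True  (sudden spike)
```

Real data-mining investigations would use far richer features and models; the point of the sketch is only the contrast with alarm-driven incident response, where such historical analysis is skipped in favor of immediate recovery.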
Data Visualization
Data visualization is the graphical representation of high-dimensional data, suitable for
obtaining an overall view of a dataset and locating important aspects within it. Visualizing
the data obtained from such investigations is a developing area: it can display significant
volumes of data whose dimensionality, complexity, or volume prohibits manual analysis,
presenting investigators with an overall view that helps them better understand the data and
gather important evidence from it.
Cloud Computing
Many systems have moved into the cloud, but little research has been done on implementing
cloud forensics. The main obstacle is that tools, processes, and methodologies capable of
retrieving data from physical storage still have to be developed. This is very difficult
because locating data in physical storage requires accounting for timestamps and for the
governing authority at different locations. Even where conventional tools work, data
collection remains the key issue in cloud computing. Hence new tools need to be developed
that can visualize physical and logical data locations and use the cloud itself as a
discovery engine.
This survey has presented various aspects of network forensics: a comparison of the tools and
techniques intended for it, an analysis of its role in a real-world scenario, and finally the
challenges it currently faces.
REFERENCES
[1] Emmanuel S. Pilli, R.C. Joshi, and Rajdeep Niyogi, "A Generic Framework for Network Forensics."
[2] "Network Forensics Primer," ENP Newswire.
[3] Sudhakar Parate and S. M. Nirkhi, "A Review of Network Forensics Techniques for the Analysis of Web Based Attack."
[4] Sherri Davidoff and Jonathan Ham, "Network Forensics: Tracking Hackers through Cyberspace."
[5] Ray Hunt and Sherali Zeadally, "Network Forensics: An Analysis of Techniques, Tools, and Trends."
[6] Emmanuel S. Pilli, R.C. Joshi, and Rajdeep Niyogi, "A Generic Framework for Network Forensics."
[7] Vicka Corey, Charles Peterman, Sybil Shearin, Michael S. Greenberg, and James Van Bokkelen, "Network Forensic Analysis."
[8] Emmanuel S. Pilli, R.C. Joshi, and Rajdeep Niyogi, "Network forensic frameworks: Survey and research challenges."
[9] Natarajan Meghanathan, Sumanth Reddy Allam, and Loretta A. Moore, "Tools and techniques for network forensics."
[10] http://www.computerworld.com/s/article/9238541/Consumer_tech_key_in_Boston_Marathon_bombing_probe
[11] http://www.cbsnews.com/8301-201_162-57579984/boston-bombing-investigators-focus-on-possible-suspect-in-surveillance-video/
[12] http://www.criminaljusticedegreeschools.com/boston-marathon-forensics-0421133/
[13] http://www.forensicon.com/forensicon-news/digital-video-photo-forensics-boston-marathon-bombers/
[14] http://www.masslive.com/news/boston/index.ssf/2013/04/state_police_official_says_for.html
[15] http://www.popsci.com/technology/article/2013-04/how-analyze-thousands-hours-boston-bombing-video
[16] http://www.bloomberg.com/news/2013-04-16/forensic-investigators-discover-clues-to-boston-bombing.html
[17] http://usnews.nbcnews.com/_news/2013/04/21/17852502-terrorists-may-leave-digital-breadcrumbs-for-investigators?lite