Unit 4

The document provides an overview of network forensics, including the roles of various network protocols and layers, such as the Physical, Network, Transport, and Application layers, and their significance in data transmission and analysis. It discusses the importance of network packet analysis, the collection of network-based evidence, and intrusion detection systems (IDS) for identifying and responding to unauthorized access and malicious activities. Additionally, it highlights tools like tcpdump and Wireshark for packet analysis, and the use of routers in forensic investigations, including commands for gathering relevant data.


Network Forensics

Introduction to network protocols:


A protocol is an agreement between computers that specifies how the computers will work together.
Protocols vs. Programs: What is the relationship between a protocol and a program? A protocol formally
specifies the type and form of communication that will take place between computers. It partly dictates how a
program using the protocol must work, but it is not a program itself.

Physical Layer: Ultimately, a network must define how computers in the network are connected together and
how 0s and 1s are transmitted between them. For example, on old modems that transmitted digital data over
voice telephone lines, the protocol specified that the originating modem would send a 0 as a 1070 Hz
tone and a 1 as a 1270 Hz tone, whereas the responding modem would send a 0 as a 2025 Hz tone
and a 1 as a 2225 Hz tone.
Network Layer: The Network Layer provides for a uniform model for getting information across the Internet,
but it does so with some very severe limitations.
At the Network Layer, all information is sent in small chunks called IP packets.
• An IP packet (header plus data) is limited to 65,535 bytes, roughly 64 KB.
• In addition to the actual data, the IP packet contains other information, including the sender's and recipient's IP
addresses, a checksum for error detection (something we'll explore during our Security lectures), and an
indication of how many bytes are actually being sent.
• If you're sending something larger than one packet can hold, it will have to be broken into multiple IP packets.
• Packets are not guaranteed to arrive in the order in which they were sent, and, as we've previously seen,
they are not guaranteed to arrive at all.
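Two of the ideas above, the ones'-complement checksum used for IP header error detection (RFC 1071) and splitting data into packet-sized chunks, can be sketched as follows. The example header in the usage note is a standard textbook IPv4 header.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum used in the IPv4 header (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold the carry back in
    return ~total & 0xFFFF

def fragment(payload: bytes, max_data: int = 65_515) -> list[bytes]:
    """Split a payload into chunks that each fit one IP packet
    (65,535 bytes total minus a minimal 20-byte header)."""
    return [payload[i:i + max_data] for i in range(0, len(payload), max_data)]
```

Verifying a received header is simple: recomputing the checksum over a header that already contains a valid checksum field yields 0.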
Transport Layer :
• TCP allows us to send data from one IP address to another IP address.
• It breaks down our data into appropriate IP Packet sized chunks for us.
• If the packets arrive in the wrong order, it properly reorders them for us.
• If packets are lost by the underlying IP protocol, TCP arranges for retransmission: the receiver's
acknowledgments tell the sender which data never arrived, and the sender resends it.
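The reordering and loss detection described above can be sketched as follows (greatly simplified; real TCP tracks byte sequence numbers per connection and handles overlaps and retransmissions):

```python
def reassemble(segments):
    """Reorder out-of-order TCP-style segments and report missing data.

    segments: list of (seq, payload), where seq is the byte offset of the payload.
    Returns (data, gaps): the reordered byte stream and the offsets at which
    data is missing (where a receiver would solicit a retransmission).
    """
    ordered = sorted(segments, key=lambda s: s[0])   # fix out-of-order arrival
    data, gaps, expected = b"", [], 0
    for seq, payload in ordered:
        if seq > expected:
            gaps.append(expected)                    # lost segment detected
        data += payload
        expected = seq + len(payload)
    return data, gaps
```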

Application Level :
• SMTP (Simple Mail Transfer Protocol) is the protocol that sends email through the Internet.
• Once your mail server receives an email message via SMTP, it sits on the mail server. When you use an email
program to read your messages from the server, the email program probably either uses POP (Post Office
Protocol) or IMAP (Internet Message Access Protocol) to get the messages from the mail server to your
device.
• Packet Switching and Circuit Switching
• Additional Network Topics
• Internet vs. Intranet
• Transport Layer Security (TLS)
Network packet analysis

Packet analysis is defined as the process of examining data packets to understand network traffic, utilizing tools
like tcpdump, tshark, and Wireshark to analyze and interpret packet contents and network behavior.
• Transport Layer:
There are several different protocols at this level, but the most well-known is the Transmission Control
Protocol, or TCP.
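As a minimal illustration of what a packet analyzer does, the fixed 20-byte IPv4 header can be dissected with Python's struct module, pulling out the same fields tcpdump or Wireshark would display:

```python
import struct
import socket

def parse_ipv4_header(packet: bytes) -> dict:
    """Dissect a raw IPv4 header: version, header length, protocol, addresses."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, cksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl": (ver_ihl & 0x0F) * 4,        # header length in bytes
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                   # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }
```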
Collecting Network Based Evidence
Locating and gathering information that is often present among network devices and along the traffic paths
inside a network. This data gathering is essential in the event of an incident where an outside threat source is
trying to control internal systems or steal information from the network. When evaluating host evidence,
network-based evidence is particularly helpful since it offers a second source of event corroboration, which is
crucial for identifying an incident's primary cause.
1. Via sniffers
A network's traffic can be a useful source of information about intrusions or unusual connections.
Network sniffers, also known as packet sniffers, are tools that can intercept and record network traffic; they
were created to collect exactly this kind of data. Sniffers place network interface cards (NICs) in
promiscuous mode, allowing them to listen to and record all data sent over the network segment. On switched
networks, hardware taps and spanned (mirrored) ports make sniffing practical. In addition to physical- and
data-link-layer traffic, sniffers also capture network- and transport-layer traffic. Because of its monitoring and
analysis capabilities, a packet sniffer is used in network forensics to manage traffic, monitor network
components, and detect breaches. Forensic investigators use sniffers to examine any suspicious application or
device. A few examples of sniffers are as follows:
A. Sniffing tool: tcpdump
Tcpdump prints a description of the contents of each packet on a network interface that matches a Boolean
filter expression. The -w flag instructs the program to save the raw packet data to a file for subsequent
analysis, and the -r flag instructs the program to read packets from a saved capture file rather than a network
interface. Tcpdump examines only packets that match the supplied expression. Run without the -c flag,
tcpdump captures packets until it is stopped by an interrupt (SIGINT) or termination (SIGTERM) signal; run
with the -c flag, it stops after the specified number of packets have been processed, or when it is interrupted
by one of those signals.
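A small helper, hypothetical and purely for illustration, assembling a tcpdump command line from the flags just described (-i interface, -c count, -w write capture, -r read capture):

```python
def tcpdump_args(expression, write_file=None, read_file=None, count=None, iface=None):
    """Build a tcpdump argv list from a filter expression and common flags."""
    args = ["tcpdump", "-n"]               # -n: skip DNS lookups during capture
    if iface:
        args += ["-i", iface]
    if count:
        args += ["-c", str(count)]
    if write_file:
        args += ["-w", write_file]         # save raw packets for later analysis
    if read_file:
        args += ["-r", read_file]          # read from a saved capture file
    if expression:
        args.append(expression)
    return args
```

For example, capturing 100 packets to and from a suspect host into an evidence file, then replaying that file offline, correspond to two different argument lists built by the same helper.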
B. Sniffing tool: wireshark
Wireshark is a GUI network protocol analyzer. It allows the investigator to interactively browse packet data
from either a capture file or a live network. The native capture file format for Wireshark is libpcap, which is also
the format supported by tcpdump and a number of other utilities. Investigators can perform live capture
and offline analysis, along with thorough inspection of hundreds of protocols, using
Wireshark. It works with several operating systems, including Windows, Linux, macOS, Solaris, FreeBSD, and
NetBSD. Wireshark can read any capture file that has been compressed with gzip; the .gz extension is not
necessary for Wireshark to recognize this, as it detects the compression directly from the file [11]. Like other
protocol analyzers, Wireshark's main window displays three views of a packet: a summary line describing the
contents of the packet, a protocol tree that enables the investigator to drill down to the specific protocol or
field of interest, and a hex dump showing exactly how the packet appears as it travels across the wire.
Collecting Network Based Evidence
2. Via security information and event management system (SIEM)
• A major issue many organizations face is how logging is handled on network devices. Due to a lack of
storage, log files are frequently rolled over, with new log files written over previous ones. As a result, an
organization may retain only a few days' or even a few hours' worth of crucial logs. If a potential incident
occurred several weeks earlier, the incident response team will be missing crucial pieces of evidence
[12].
• An enterprise-wide technology that has gained popularity is the SIEM system. These appliances have the
capacity to gather log and event information from several network sources and consolidate it in one place.
This eliminates the need to look at individual systems and enables the computer security incident response
team (CSIRT) and other security experts to monitor activities across the whole network [13], [14].
• Logs are sent to the SIEM from a number of sources, including structured query language (SQL)
databases and security controls. In this example, the SQL database at 10.100.20.18 logged that a user account
was used to copy a database to the remote server at 10.88.6.12. The SIEM makes this kind of behavior quick
to examine: if the account is discovered to have been compromised, CSIRT analysts can instantly search the
SIEM for any activity involving that account and see the log record indicating the database copy to the remote
computer. Without the SIEM, CSIRT analysts would have to search every single system that may have been
accessed, which can be a time-consuming procedure [15], [16].
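A toy sketch of the SIEM workflow described above: events from multiple sources are consolidated in one place, then searched by account. The event schema and source names are invented for illustration.

```python
class MiniSIEM:
    """Toy central log store: ingest events from many sources, search by account."""

    def __init__(self):
        self.events = []

    def ingest(self, source, account, action, ts):
        """Consolidate one log record from a network source."""
        self.events.append({"source": source, "account": account,
                            "action": action, "ts": ts})

    def activity_for(self, account):
        """All activity involving one account, across every source, in time order."""
        return sorted((e for e in self.events if e["account"] == account),
                      key=lambda e: e["ts"])
```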
Intrusion in Cybersecurity
An intrusion occurs when an attacker gains unauthorized access to a device, network, or system. Cybercriminals
use advanced techniques to sneak into organizations without being detected. Common methods include:
• Address Spoofing: Hiding the source of an attack by using fake, misconfigured, or unsecured proxy
servers, making it hard to identify the attacker.
• Fragmentation: Sending data in small pieces to slip past detection systems.
• Pattern Evasion: Changing attack methods to avoid detection by IDS systems that look for specific
patterns.
• Coordinated Attack: Using multiple attackers or ports to scan a network, confusing the IDS and
making it hard to see what is happening.
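Fragmentation evasion can be demonstrated with a toy signature matcher: checking each fragment on its own misses a signature that straddles a fragment boundary, while checking the reassembled stream catches it. The signature here is an arbitrary example string.

```python
import re

SIGNATURE = re.compile(rb"/etc/passwd")   # toy IDS signature

def naive_match(packets):
    """Per-fragment matching: blind to signatures split across fragments."""
    return any(SIGNATURE.search(p) for p in packets)

def reassembled_match(packets):
    """Reassemble the stream first, then match: defeats fragmentation evasion."""
    return SIGNATURE.search(b"".join(packets)) is not None

# The attacker splits the payload so the signature crosses a boundary.
fragments = [b"GET /etc/pas", b"swd HTTP/1.1"]
```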
Network Intrusion detection
An intrusion detection system (IDS) observes network traffic for malicious transactions and
sends immediate alerts when such activity is observed. It is software that checks a network or system for
malicious activities or policy violations.
• An IDS (Intrusion Detection System) monitors the traffic on a computer network to detect any suspicious
activity.
• It analyzes the data flowing through the network to look for patterns and signs of abnormal behavior.
• The IDS compares the network activity to a set of predefined rules and patterns to identify any activity that
might indicate an attack or intrusion.
• If the IDS detects something that matches one of these rules or patterns, it sends an alert to the system
administrator.
• The system administrator can then investigate the alert and take action to prevent any damage or further
intrusion.
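The rule-matching loop described above can be sketched as follows; the rule names, thresholds, and event fields are hypothetical:

```python
# Predefined rules: each maps an event to True when it looks suspicious.
RULES = [
    {"name": "ssh-brute-force",
     "match": lambda e: e.get("port") == 22 and e.get("failed_logins", 0) >= 5},
    {"name": "port-scan",
     "match": lambda e: e.get("distinct_ports", 0) > 100},
]

def evaluate(event):
    """Compare one traffic event against the rule set; return alert names
    for the administrator to investigate."""
    return [rule["name"] for rule in RULES if rule["match"](event)]
```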
Network Intrusion detection
Classification of Intrusion Detection System(IDS)
• Network Intrusion Detection System (NIDS):
Network intrusion detection systems (NIDS) are set up at a planned point within the network to examine traffic
from all devices on the network. It performs an observation of passing traffic on the entire subnet and matches
the traffic that is passed on the subnets to the collection of known attacks. Once an attack is identified or
abnormal behavior is observed, the alert can be sent to the administrator. An example of a NIDS is installing it
on the subnet where firewalls are located in order to see if someone is trying to crack the firewall.
• Host Intrusion Detection System (HIDS): Host intrusion detection systems (HIDS) run on independent
hosts or devices on the network. A HIDS monitors the incoming and outgoing packets from the device only
and will alert the administrator if suspicious or malicious activity is detected. It takes a snapshot of existing
system files and compares it with the previous snapshot. If the analytical system files were edited or deleted,
an alert is sent to the administrator to investigate. An example of HIDS usage can be seen on mission-critical
machines, which are not expected to change their layout.
• Protocol-Based Intrusion Detection System (PIDS): A protocol-based intrusion detection system (PIDS)
comprises a system or agent that consistently resides at the front end of a server, controlling and
interpreting the protocol between a user/device and the server. It tries to secure a web server by
regularly monitoring the HTTPS protocol stream and the HTTP protocol it carries. Because HTTPS traffic is
encrypted on the wire and only decrypted just before it enters the web presentation layer, the system must
reside at this interface in order to inspect the protocol.
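The HIDS snapshot-and-compare step described for host-based systems can be sketched with content hashes. For simplicity the "files" here are passed in as a mapping of path to bytes; a real HIDS would read them from disk.

```python
import hashlib

def snapshot(files):
    """Take a snapshot of monitored files: {path: sha256 of contents}."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}

def diff(old, new):
    """Compare snapshots: report files edited or deleted since the last one,
    so an alert can be sent to the administrator."""
    modified = [p for p in old if p in new and new[p] != old[p]]
    deleted = [p for p in old if p not in new]
    return modified, deleted
```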
Network Intrusion detection
Classification of Intrusion Detection System(IDS)
• Application Protocol-Based Intrusion Detection System (APIDS): An application Protocol-based Intrusion
Detection System (APIDS) is a system or agent that generally resides within a group of servers. It identifies
the intrusions by monitoring and interpreting the communication on application-specific protocols. For
example, this would monitor the SQL protocol explicitly to the middleware as it transacts with the database in
the web server.
• Hybrid Intrusion Detection System: A hybrid intrusion detection system is made by combining two or
more approaches to intrusion detection. In a hybrid intrusion detection system, host agent or
system data is combined with network information to develop a complete view of the network. The
hybrid intrusion detection system is more effective than the other types of intrusion detection system.
Prelude is an example of a hybrid IDS.
Benefits of IDS

• Detects Malicious Activity: IDS can detect any suspicious activities and alert the system administrator before
any significant damage is done.
• Improves Network Performance: IDS can identify any performance issues on the network, which can be
addressed to improve network performance.
• Compliance Requirements: IDS can help in meeting compliance requirements by monitoring network
activity and generating reports.
• Provides Insights: IDS generates valuable insights into network traffic, which can be used to identify any
weaknesses and improve network security.
IDS
Advantages

• Early Threat Detection: IDS identifies potential threats early, allowing for quicker response to prevent damage.
• Enhanced Security: It adds an extra layer of security, complementing other cybersecurity measures to provide
comprehensive protection.
• Network Monitoring: Continuously monitors network traffic for unusual activities, ensuring constant vigilance.
• Detailed Alerts: Provides detailed alerts and logs about suspicious activities, helping IT teams investigate and
respond effectively.

Disadvantages
• False Alarms: IDS can generate false positives, alerting on harmless activities and causing unnecessary concern.
• Resource Intensive: It can use a lot of system resources, potentially slowing down network performance.
• Requires Maintenance: Regular updates and tuning are needed to keep the IDS effective, which can be time-
consuming.
• Doesn’t Prevent Attacks: IDS detects and alerts but doesn’t stop attacks, so additional measures are still needed.
• Complex to Manage: Setting up and managing an IDS can be complex and may require specialized knowledge.
Network Intrusion detection
The goal of intrusion detection is to identify, preferably in real time, unauthorized use, misuse, and abuse of
computer systems by both system insiders and external penetrators.
1. ComputerWatch:
• The ComputerWatch audit trail analysis tool provides a significant amount of audit data reduction and limited
intrusion-detection capability. The tool uses an expert system approach to summarize security-sensitive events
and to apply rules to detect anomalous behavior. It also provides a method for detailed analysis of user actions
in order to track suspicious behavior.
• ComputerWatch does no real-time analysis of events. There are three levels of detection statistics: system,
group, and user. Statistical information for system-wide events is provided in a summary report; statistical
information for user-based events is provided by detection queries; statistical information for group-based
events will be a later enhancement.
2. Discovery
• Discovery is an expert system tool developed by TRW for detecting unauthorized accesses to its credit
database. The Discovery system itself is written in COBOL, while the expert system is written in an AI shell.
Both run on IBM 3090s. Their goal is not to detect attacks on the operating system, but to detect abuses of the
application, namely, the credit database.
Investigating Routers

• Routers play many different roles during incidents. They can be targets of attack, stepping-stones for
attackers, or tools for use by investigators. They can provide valuable information and evidence that allow
investigators to resolve complex network incidents.
• Routers lack the data storage and functionality of many of the other technologies we have examined in
previous chapters, and thus they are less likely to be the ultimate target of attacks. (One notable exception is
that routers are targets during denial-of-service attacks, which we will examine closely.) Routers are more
likely to be springboards for attackers during network penetrations. The information stored on routers, such as
passwords and routing tables, makes them valuable both to attackers and to investigators.
Investigating Routers: Example-Cisco
Show Commands
Most of the required information to be collected from the router will be obtained using the Cisco "show" commands. The main
commands that you need to become familiar with are:

• show clock detail
• show interfaces
• show version
• show tcp brief all
• show running-config
• show ip sockets
• show startup-config
• show ip nat translations verbose
• show reload
• show ip cache flow
• show ip route
• show ip cef
• show ip arp
• show snmp user
• show users
• show snmp group
• show logging
• show ip interface
Investigating Routers: Example-Cisco
• Show Audit

The Router Security Audit Logs feature allows for the creation of audit trails. If these are configured, they may
be used to track changes that have been made to a router that is running Cisco IOS software.
The "show audit" command displays the contents of an audit file. The syntax of the command is:
show audit [filestat]
The optional "filestat" keyword displays the rollover counter for the circular buffer and the number of messages
that have been received. The rollover counter, which indicates the number of times the circular buffer has been
overwritten, is reset when the audit file size is changed (via the audit filesize command). This command runs from
privileged EXEC mode. The audit feature also creates a hash of the information from the "show version" command.

• Show Clock Detail

Accurate time information is important to forensic investigations. This command is used to display the time of
day and the status of the SNTP server (if one is configured) that is used by the router.
• Show Version
The "show version" command is a powerful tool. It can display:
• the version of the IOS on the router
• the version of the ROM bootstrap
• the version of the boot loader
• how the router was last powered on (i.e. warm reboot or a system panic.)
• the time and date when the system was last started
• the "uptime" (i.e. how long the router has been running from the last power-on)
• the image file that the device last started
• how much RAM the device has and other hardware information such as:
✓ the processor board ID can be used to determine the version of the router's motherboard
✓ the number and type of each interface on the router
✓ the number of terminal lines on the router and if asynchronous serial lines are used
✓ the amount of nonvolatile RAM (NVRAM) used to hold the SAVED version of the configuration file or
startup-config
✓ The size and type of Flash memory on the router
• The configuration register on the device
• The hostname of the device
Investigating Routers: Example-Cisco
• Show Access Lists
This command displays the content of all access lists (or one specified access list) on the router.
show access-lists [access-list-name] [applied]
The access-list-name keyword is used to display a specified access list and the show access-lists applied
command is used to show the ACLs that are currently being applied to an interface and the configured behavior
per interface.
• Show Users
This command lists the users who are logged into a Cisco router.
• Show Routing Table
This command will display the routing table used by the router. This will aid in determining whether an attacker has:
✓ injected routing information (e.g. RIP poisoning attacks), or
✓ deleted routes (e.g. to remove the path to a logging server).
• Show Banners
This will display any banners that are configured on the router.
Investigating Routers: Example-Cisco
• Show ARP & Show IP Arp
This command displays ARP statistics associated with the router interfaces. It can be set to display a specified
interface, a specified host, a specified IP address, or a specified MAC hardware address. This command will
aid in determining hardware address information (the MAC Address) of locally connected hosts and if MAC
spoofing has occurred.
• Show TCP
The commands show ip sockets, show udp, and show tcp are used to display the router's active connections,
statistics about those protocols, and the ports on which the router is listening.
• Show tech-support
As of Cisco IOS Software Release 11.2, the command "show tech-support" has allowed for the collection of
multiple sources of information concerning the router in a single command. This one command will output the
same as running all of the following commands:
✓ show version
✓ show running-config
✓ show stacks
✓ show interface
✓ show controller
✓ show process cpu
✓ show process memory
✓ show buffers
Investigating Routers: Example-Cisco
• Show Stacks
The "show stacks" EXEC command is used to monitor the stack usage of processes and interrupt
routines. The show stacks output is one of the most indispensable sources of information to collect when the
router crashes. It is also one of the most detailed commands for the analysis of the router's memory and is
useful in analyzing router compromises.

• Advanced Data Collection


The most effective way to capture and analyze the router involves the creation of a core dump. A core dump
will contain the complete memory image of the router at the time it was created.
Cisco has included an IOS command to test or trigger a core dump: #write core
Use this command in privileged exec mode (enable mode). This command will cause a crash, and the content of
the memory will be dumped accordingly. When a core dump is generated, the entire setup and config can be
reviewed forensically.
• A core dump can be saved to:
1. An FTP server
a. ip ftp username username
b. ip ftp password password
c. exception protocol ftp
d. exception dump a.b.c.d
• Advanced Data Collection (continued). A core dump can also be saved to:
2. A TFTP server (exception dump a.b.c.d)
3. Via RCP:
✓ exception protocol rcp
✓ exception dump a.b.c.d
4. A Flash disk (exception flash <procmem|iomem|all> <device_name[:partition_number]> <erase | no_erase>)

• Core Analysis
Cisco IOS is essentially a single ELF binary that runs as a large, statically linked UNIX-style program
loaded by ROMMON. Written in C, the IOS dump can be reverse engineered in order to analyze the system. A Cisco
IOS core dump contains a complete image of the router’s:
• main memory,
• IO memory, and
• the PCI memory (if used).

Core dumps are useful as they contain the complete image of the Cisco device at an instant. They can even be
used to extract network traffic from IO memory into a PCAP file for analysis.
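As one illustration of the last point, extracting traffic starts by locating libpcap magic numbers inside the raw memory image. This is a simplified carving sketch; real IOS core dumps require structure-aware parsing of IO memory.

```python
# pcap magic number in little- and big-endian byte order
PCAP_MAGICS = (b"\xd4\xc3\xb2\xa1", b"\xa1\xb2\xc3\xd4")

def carve_pcap_offsets(core: bytes):
    """Scan a raw memory image for libpcap magic numbers: a first step
    toward extracting captured traffic from a router core dump."""
    hits = []
    for magic in PCAP_MAGICS:
        start = 0
        while (i := core.find(magic, start)) != -1:
            hits.append(i)
            start = i + 1
    return sorted(hits)
```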
Email Tracing

Email tracing is about finding where an email came from and how it got to you. Email tracking, on the other
hand, lets you see what happens to your email after you send it.
The main difference between email tracing and tracking is that tracing looks back at an email's journey, while
email tracking follows what happens next.

Understanding email tracing


Technical insights
• Decoding Headers and Metadata: Tracing an email starts with analyzing its headers and metadata,
which contain information like the sender's IP address, the mail servers used, and timestamps. This data is
essential in mapping the email's journey from sender to receiver. Understanding this information is key to
revealing the path and potential stops an email has made.
• Tools and Techniques: Various tools exist for email tracing, from simple online header analyzers to more
sophisticated software. These tools check the complex data in email headers, making tracing emails more
accessible. Each tool offers different capabilities, from basic route mapping to in-depth analysis.
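Python's standard email module can walk the Received chain the way a header analyzer does. The message below is fabricated for illustration, using the documentation IP ranges 203.0.113.0/24 and 198.51.100.0/24.

```python
from email import message_from_string

RAW = """\
Received: from mail.example.net (mail.example.net [203.0.113.9])
    by mx.victim.org with ESMTP; Tue, 01 Oct 2024 10:00:00 +0000
Received: from sender-pc ([198.51.100.7])
    by mail.example.net with SMTP; Tue, 01 Oct 2024 09:59:58 +0000
From: alice@example.net
To: bob@victim.org
Subject: test

body
"""

def trace(raw):
    """Return Received hops sender-first: each relay prepends its own header,
    so the bottom Received is the hop closest to the original sender."""
    msg = message_from_string(raw)
    return list(reversed(msg.get_all("Received")))
```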
Email Tracing
Practical use cases
• Combating Spam and Malware: Email tracing is often used to identify the origins of spam or malicious
emails. Pinpointing the source can help prevent future attacks. This approach is integral in building defenses
against continuously evolving threats.
• Investigating Scams: In phishing attempts or email scams, tracing an email can uncover the scammer's
location or network. This information is crucial for further investigation or legal action and often leads to
preventive measures against similar future scams.
Legal and ethical aspects
• Legal and ethical aspects of email tracing mainly revolve around privacy and data protection laws. Legally,
you must follow laws like GDPR in Europe or similar regulations in other regions. This means you can't just
trace an email without considering the privacy rights of the people involved. Ethically, it's about respecting
others' privacy. You shouldn't trace emails to get information about someone without a good reason, and even
then, it should be done carefully and responsibly. So, while tracing emails can be useful, it's important to do it
in a way that respects both the law and people's privacy.
Internet Fraud

• Internet fraud involves using online services and software with access to the internet to defraud or take
advantage of victims. The term "internet fraud" generally covers cybercrime activity that takes place over the
internet or on email, including crimes like identity theft, phishing, and other hacking activities designed to
scam people out of money.
Types of Internet Fraud
• Cyber criminals use a variety of attack vectors and strategies to commit internet fraud. This includes
malicious software, email and instant messaging services to spread malware, spoofed websites that steal user
data, and elaborate, wide-reaching phishing scams.
1. Phishing and spoofing: The use of email and online messaging services to dupe victims into sharing personal
data, login credentials, and financial details.
2. Data breach: Stealing confidential, protected, or sensitive data from a secure location and moving it into an
untrusted environment. This includes data being stolen from users and organizations.
3. Denial of service (DoS): Maliciously interrupting access to an online service, system, or network, typically
by flooding it with traffic.
4. Malware: The use of malicious software to damage or disable users’ devices or steal personal and sensitive
data.
5. Ransomware: A type of malware that prevents users from accessing critical data and then demands payment
in exchange for restoring access. Ransomware is typically delivered via phishing attacks.
6. Business email compromise (BEC): A sophisticated form of attack targeting businesses that frequently make
wire payments. It compromises legitimate email accounts through social engineering techniques to submit
unauthorized payments.
Dark Web

• The dark web refers to encrypted online content and allows individuals to hide their identity and location from
others. Dark web content is not indexed by conventional search engines. To access the dark web, users must
install a private browser, like the TOR Browser, use a Virtual Private Network (VPN), and ensure their
computer remains safe and secure.
• The dark web is a part of the internet that's made up of hidden sites you can't find through
conventional web browsers. Instead, you must rely on the Tor browser—a web browser that anonymizes
your web traffic within its internal network—and search engines designed specifically to unearth these hidden
sites.
• On dark web marketplaces you might find people selling: stolen credit card numbers, Social Security
numbers and other private information, drugs, firearms, designer knockoffs, pornography, stolen account
login information, fake diplomas from Ivy League schools, fake passports, and malware.
How the Dark Web Works

• The dark web refers to encrypted online content not indexed by


conventional search engines.
• Specific browsers, such as TOR Browser, are required to reach the
dark web.
• The dark web pulls up sites using information that isn't indexed
online, such as bank accounts, email accounts, and databases.
• It also has a reputation for being associated with illicit and unethical
activities.
Pros and Cons of the Dark Web
• The dark web helps people to maintain privacy and freely express their views. Privacy is essential for many
innocent people terrorized by stalkers and other criminals. The popularity of the dark web with criminals
makes it a perfect way for undercover police officers to communicate.
• However, some may abuse the power of the dark web by making it easier to engage in criminal activity. While
the dark web promises privacy to its users, it can also be used to violate the privacy of others. Private photos,
medical records, and financial information have all been stolen and shared on the dark web.
Risks and threats of the dark web
• Criminals: There’s a chance you will find websites run by criminals. Beyond selling illegal goods and
services, they may seek to exploit you and steal from you.
• Breaking the law: You can be prosecuted for things you do on the dark web. It’s important to behave in an
appropriate and legal manner.
• Suspicious links: If you click on any links, you may be taken to material you might not want to see. It’s also
possible that clicking a link or downloading a file could infect your device with malware.
• Law enforcement: Law enforcement officials operate on the dark web to catch people engaged in criminal
activity. Like others on the dark web, law enforcement can do their work under a cloak of anonymity.
• Viruses: Some websites could infect your devices with viruses, and there are a lot of different types of
viruses to watch out for. Remember to never download anything from websites you don’t trust.
• Hackers: You can find hacker forums on the dark web, and you can hire computer hackers to do illegal
activities. Not surprisingly, a lot of these people would be willing to hack your devices, too.
• Webcam hijacking: A website on the dark web may try to get a remote administration tool—also known as a
“RAT”—onto your device. That can lead to someone hijacking your webcam, essentially letting them see
what you’re up to through your device’s camera lens. It’s a smart practice to cover your webcam with a piece
of paper or tape if you’re not using it.
Is it illegal to access the dark web?
• It is not illegal to visit the dark web. But you can face criminal charges if you use the dark web to sell or
purchase illegal firearms, drugs, pornography, stolen passwords, hacked credit card account numbers, or other
items.
Dark Web vs. Deep Web

• The terms dark web and deep web are often erroneously used interchangeably. The dark web is one part of the
deep web, also called the invisible web or the hidden web. Information on the deep web is typically gated behind
logins or paywalls and isn't found in search indexes; it includes the pages that don't appear when you run a web
search, and everything that requires a login, such as content from:
• Online banking
• Pay websites, such as Netflix and Amazon Prime
• File hosting services, such as Dropbox and its competitors
• Private databases
TOR network

• Tor—short for the Onion Routing project—is an open-source privacy network that enables anonymous
web browsing. The worldwide Tor computer network uses secure, encrypted protocols to ensure that
users' online privacy is protected. Tor users' digital data and communications are shielded using a
layered approach that resembles the nested layers of an onion.
• The Tor network is a secure, encrypted protocol that can ensure privacy for data and communications on
the web.
• Short for the Onion Routing project, the system uses a series of layered nodes to hide IP addresses,
online data, and browsing history.
• Originally developed by the U.S. government, critics consider Tor to be dangerous in the hands of some
people, who may use the Tor network for illegal or unethical purposes.
TOR network

• Tor (an acronym for The Onion Router) is essentially a network that masks online traffic. Tor browser is an
open-source platform managed by volunteers and, due to its onion routing, creates anonymity for users who
access websites and servers through this network. The browser is often used legitimately by journalists and
other users who need to protect their identities, for example, while investigating the opposition in a legal
dispute, or researching competitors.

• In the simplest terms, the Tor browser is software that allows users to browse the internet with a relatively high
degree of privacy. The network and browser take their name from the fact that they direct all web activity
through several routers—called nodes—much like going through the layers of an onion, making it difficult to
track and identify users.
• However, there is a close association between Tor and the dark web because the Tor browser is often used for
illicit activity, even though there was never any intention for Tor to enable criminality. Although the Tor
browser is legal in many countries, some do not allow residents to access the network.
How does onion routing work?

• If you browse the internet with a normal web browser such as Chrome or Firefox, you request
webpages by making simple GET requests to servers, without any intermediary. It is a single
connection between a client and a server, and someone sniffing your network can see which
server your computer is contacting.
• Onion routing works differently. The connection is maintained between different nodes; it hops from
one server to another, and the last server on the circuit is the one we actually wanted to contact. That
server processes our request and serves the desired webpage, which is sent back to us through the
same network of nodes.
• Why is it called the onion router? Because the message we send and the responses we receive are
encrypted with different keys, with a unique encryption key for every hop (server visit).
• The client has access to all the keys, but each server has access only to the key used for
encryption/decryption at that server.
• Since this process wraps the message under layers of encryption that must be peeled off
one at a time at each hop, just like the layers of an onion, it is called an onion router.
Example-onion routing

1. The client, which has access to all the encryption keys (Key 1, Key 2, and Key 3), encrypts the message (a
GET request) three times, wrapping it under three layers like an onion that must be peeled one layer at a time.
2. This triple-encrypted message is then sent to the first server, Node 1 (the input node).
3. Node 1 only has the address of Node 2 and Key 1. It decrypts the message using Key 1, sees that the
result still makes no sense (two layers of encryption remain), and passes it on to Node 2.
4. Node 2 has Key 2 and the addresses of the input and exit nodes. It decrypts the message using
Key 2, sees that it is still encrypted, and passes it on to the exit node.
5. Node 3 (the exit node) peels off the last layer of encryption, finds a GET request for youtube.com,
and passes it on to the destination server.
6. The server processes the request and serves up the desired webpage as a response.
7. The response passes through the same nodes in the reverse direction, with each node adding
a layer of encryption using its specific key.
8. It finally reaches the client as a triple-encrypted response, which the client can decrypt
because it has access to all the keys.
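The steps above can be sketched in a few lines of Python. This is a toy model only: the XOR "cipher", the node behavior, and the key strings are hypothetical stand-ins chosen for readability, not how Tor actually encrypts (real circuits use AES-encrypted layers negotiated per hop over TLS).

```python
# Toy illustration of the layered encryption in onion routing. The XOR
# "cipher" and the key strings are hypothetical stand-ins for illustration.
def xor_cipher(data, key):
    """Symmetric toy cipher: applying it twice with the same key restores data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_onion(message, keys):
    """Client wraps the message; the innermost layer uses the exit node's key."""
    onion = message
    for key in reversed(keys):        # Key 3 (exit) applied first, Key 1 (entry) last
        onion = xor_cipher(onion, key)
    return onion

def relay_through_nodes(onion, keys):
    """Each node peels exactly one layer using only its own key."""
    for key in keys:                  # entry -> middle -> exit
        onion = xor_cipher(onion, key)
    return onion

keys = [b"key-1-entry", b"key-2-middle", b"key-3-exit"]
request = b"GET youtube.com"
wrapped = build_onion(request, keys)

assert wrapped != request                              # input node sees only ciphertext
assert relay_through_nodes(wrapped, keys) == request   # exit node recovers the request
```

Note how each "node" call uses only its own key, mirroring the property that no single relay can read the message and know both endpoints.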
How does it provide anonymity?
• Imagine if there is a sniffer listening in at the first connection(client – input node) all it can know is the address of
the input node and a thrice encrypted message that doesn’t make sense. So all the attacker/sniffer knows that you
are browsing tor.
Similarly, if sniffing starts at the exit node all the sniffer sees is a server contacting another server but it can’t track
the client or the source of the request generated.
But now you may think that if someone is listening in at Node 2 they will know the address of the input and exit and
can trace the client and the destination server. But it’s not that simple, each of these nodes has hundreds of
concurrent connections going on, and to know which one leads to the right source and destination is not that easy. In
our circuit, Node 2 is a middle node but it can be a part of another circuit on a different connection where it acts as
the input node receiving requests or an exit node serving up webpages from various servers.

• Vulnerability in Onion Routing

The main security flaw in onion routing is traffic-correlation analysis. If someone can listen at both ends of a
circuit at the same time, they can match a request observed at the destination server with a request made by a
client on the other side of the network a fraction of a second earlier, by comparing the length and frequency of
the characters in the intercepted traffic (timestamps on requests and responses also help). Correlating the two
ends lets the attacker track the user, learn their online activity, and shatter the idea of anonymity. This is hard
to do, but not impossible, and removing this flaw from Tor is virtually impossible.
Features of onion routing:
• Encryption: Onion routing encrypts each layer of data, making it difficult for an attacker to
intercept and decode the data.
• Anonymity: Onion routing provides anonymity by masking the IP address of the sender and the
receiver, making it difficult for an attacker to identify them.
• Relays: Onion routing uses a series of relays to route data through the network, with each relay
only aware of the previous and next relays in the chain, adding another layer of anonymity.
• Decentralized: Onion routing is decentralized, with no central authority or control over the
network.
• Resistance to traffic analysis: Onion routing makes it difficult for an attacker to analyze the
traffic patterns and identify the source and destination of the communication.
• Hidden Services: Onion routing can also be used to provide hidden services, which allow websites
and other services to be hosted on the network without revealing their location or IP address.
• Onion routing provides a powerful technique for enhancing the security and privacy of internet
communications, particularly in situations where anonymity and resistance to traffic analysis are
important. It is commonly used by activists, journalists, and others who require a high level of
security and privacy in their online communications.
Advantages of Onion Routing:
• Enhanced Security: Onion routing provides enhanced security by encrypting data multiple times and routing it through several servers,
making it difficult for attackers to intercept or tamper with the communication.
• Anonymity: Onion routing provides anonymity by masking the IP address of the sender and the receiver, making it difficult for anyone to
identify them.
• Resistance to Traffic Analysis: Onion routing makes it difficult for attackers to analyze the traffic patterns and identify the source and
destination of the communication, thereby enhancing privacy and security.
• Decentralized: Onion routing is decentralized, with no central authority or control over the network, making it more resilient to attacks.
• Evades Censorship: Onion routing can help users bypass censorship and access content that may be restricted or blocked by governments or
internet service providers.
• Protects Whistleblowers: Onion routing can provide a safe and anonymous means for whistleblowers to communicate sensitive information
without fear of reprisal or retaliation.
• Enhances Privacy: Onion routing can help protect user privacy by preventing internet service providers, advertisers, and other third parties
from tracking or monitoring their online activity.
• Increased Accessibility: Onion routing can allow users to access content or services that may be geographically restricted or blocked, such as
streaming services or websites.
• Improved Network Performance: Onion routing can improve network performance by distributing traffic across multiple servers, reducing
the load on any one server and potentially reducing latency.
• Flexible Routing: Onion routing allows for flexible routing of data, as it is not limited to a specific route or set of nodes. This can allow for
more efficient and customized routing based on network conditions or user preferences.
• Secure Communications for Sensitive Data: Onion routing can provide secure communication channels for sensitive data, such as financial
transactions, personal information, or confidential business communications.
• Protection Against Network Surveillance: Onion routing can protect against network surveillance by government agencies or other
malicious actors who may be monitoring internet traffic for surveillance purposes.
• Cross-Platform Compatibility: Onion routing is compatible with multiple operating systems and devices, making it accessible to a wide
range of users.
Disadvantages of Onion Routing:

• Slow Performance: Onion routing can result in slow performance due to the multiple layers of encryption
and the need to route data through several servers.
• Limited Accessibility: Onion routing is not widely accessible, and users may need specialized software to
use it.
• Malicious Use: Onion routing can be used for malicious purposes, such as to facilitate illegal activities,
making it a target for law enforcement agencies.
• Vulnerability to Endpoints: While onion routing provides enhanced security and anonymity during
transmission, the endpoints of the communication may still be vulnerable to attacks, making it important to
secure the endpoints as well.
• Resource Intensive: Onion routing can be resource-intensive, requiring a large number of servers to route
data, which can result in high bandwidth usage and increased costs.
• Vulnerable to Exit Nodes: The exit nodes of the onion routing network can be vulnerable to attacks, making
it important to use reputable and trusted exit nodes.
• Limited Quality of Service: Onion routing can result in limited quality of service, with slower connection
speeds and reduced network capacity, which may not be suitable for certain applications such as streaming or
online gaming.
• Difficult to Debug: Debugging problems in onion routing networks can be difficult due to the multiple layers
of encryption and the decentralized nature of the network.
How to Use Tor
• To access the privacy and security features of Tor, you need to install the Tor browser. For that, you need an
Internet connection and a compatible operating system.
How Tor Works
• Tor uses an onion-style routing technique for transmitting data. When you use the Tor browser to digitally
communicate or access a website, the Tor network does not directly connect your computer to that website.
Instead, the traffic from your browser is intercepted by Tor and bounced to a random number of other Tor users’
computers before passing the request to its final website destination.
• This same process is reversed to enable the destination website to communicate with you, the Tor user. The
encryption process that the Tor software uses obscures users' identities, requests, communications, and
transactions while still enabling them to use the Internet as they normally would.

The Tor browser uses onion routing to direct and encrypt all traffic, offering users a high level of anonymity. The
network transmits traffic through three layers of international network nodes, called onion routers:
• Entry (guard) nodes, which form the first hop and connect the client to the Tor network.
• Middle nodes, which relay the still-encrypted traffic and know only the previous and next hops in the circuit.
• Exit nodes, which remove the final layer of encryption and pass the traffic on to the destination server.
• Because onion routing effectively encrypts and relays data through multiple network layers, the Tor browser is
highly effective at protecting user data and concealing IP addresses.
Who Uses Tor and Why

• Government agencies: Tor can be used to protect and securely share sensitive government information.
• For-profit enterprises: Companies that use Tor can benefit from increased data privacy and security.
• Illicit organizations: Criminals sometimes use Tor to shield their online activity.
• Private individuals: Anyone wishing for more online privacy and better cybersecurity can benefit from the
Tor browser. Journalists, activists, and people facing censorship may choose to interact online via Tor.
Is Tor Legal?
• Tor is legal to use. It is not designed or intended to enable Tor users or Tor network operators to break the law.
Benefits of a Tor browser

• The Tor browser does have several advantages, which is why some internet users can benefit from using it.
However, not all of these will be relevant to regular internet users. Here are some of the main reasons why
some users choose to use a Tor onion browser:
• The browser is a free, open-source program
• IP addresses and browsing history are masked
• Enjoy heightened network security because the Tor browser operates on secure, encrypted networks
• Easy access to non-indexed pages that conventional search engines cannot reach
Disadvantages of Tor browser
• Because of the way it routes traffic, Tor connections are very slow, especially when compared to VPNs, and
downloading large files is not practical.
• Activity may not be completely anonymous, and it may be possible to de-anonymize a user
• Some countries and companies can block the Tor browser, and its usage can even be illegal in certain
countries.
• The use of this browser can be suspicious, even if it is legal
• Not all websites function on Tor
Here are a few security concerns to be aware of:

• The final part of the data relay on the Tor network—between the exit node and destination server—is
not encrypted, giving third parties an opportunity to monitor and track web traffic.
• It may still be possible for third parties to deploy fingerprinting to identify users; for example, if they
use Tor to access compromised websites with JavaScript enabled, it can be possible to track mouse
movements.
• Tor onion browser is still vulnerable to being compromised by security bugs.
• It is important to use the latest versions of Tor browsers, as obsolete versions may have various
vulnerabilities, enabling malicious actors to impersonate the user.
How to stay safe while using the Tor browser
• Ensure that the Tor browser and any associated apps or extensions are always up to date.
• Use the Tor browser in conjunction with a VPN.
• Employ a firewall to protect the computer’s network.
• Use antivirus software.
• Avoid logging into personal accounts, such as social media profiles or emails.
• Use the Tor browser randomly, so that it is hard to create identifiable patterns.
• Use the highest level of security available on the chosen Tor browser, so that it executes the
least amount of browser code and helps protect devices from malware.
• Use an extension that protects your privacy and only accesses secure HTTPS websites, such
as extensions that will automatically rewrite a URL to use HTTPS instead of HTTP.
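The URL-upgrading behavior such an extension performs can be sketched in a few lines. The function name is hypothetical, and real extensions also handle redirects and site-specific exception lists:

```python
# Sketch of the HTTPS-upgrade behavior an extension performs:
# rewrite plain-HTTP URLs to HTTPS before the request is sent.
from urllib.parse import urlparse, urlunparse

def upgrade_to_https(url):
    parts = urlparse(url)
    if parts.scheme == "http":
        # ParseResult is a namedtuple, so _replace swaps the scheme cleanly
        parts = parts._replace(scheme="https")
    return urlunparse(parts)

print(upgrade_to_https("http://example.com/login"))   # https://example.com/login
print(upgrade_to_https("https://example.com/"))       # already secure, unchanged
```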
Application of Big Data techniques for Log Analysis
• Big data techniques can be used to analyze logs and user behavior on a web page and to detect anomalous
behavior, errors, and exceptions.
• Log files: Logs are computer-generated files that capture network and server operation data, containing full
information about user activity. These activities are written to various log files, such as web logs, firewall logs,
network logs, and router logs, and such files can hold millions of entries, so log analysis takes a long time to
investigate. Analyzing log files has been very important in resolving many issues; server, router, and other
device logs may be our best line of defense.
• Big Data: Basically, Big Data is defined as data sets that could not be perceived, acquired, managed and
processed within a reasonable time by traditional IT and software or hardware tools. Big data exceeds the
processing capacity of conventional database systems. The data is huge and massive, it moves at a very high
speed, and does not have to fit in the structures of existing database architectures. To gain value from this data,
there must be an alternative way to process it. Big data analytics is the key to unlock the insights from all data
types as it enables us to analyze all structured, semi-structured and unstructured data together. It is powerful
because it enables the organization to combine, integrate and analyze all data at once regardless of their source,
type, size or format in order to generate the insights needed to address a wide range of business challenges.
• Hadoop-MapReduce framework provides parallel distributed processing and reliable data storage for large
volumes of log files. Here Hadoop’s characteristic of moving computation to the data rather than moving data
to computation helps to improve response time. Hadoop takes log files analysis to the next level by speeding
and improving security forensics and providing a low-cost platform to detect compliance violations
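As a rough sketch of the map/reduce pattern Hadoop applies to log files (mappers process each record in isolation; a reducer aggregates per key), here is a minimal single-process Python analogy. The log lines and the status-code counting are hypothetical examples, not Hadoop API calls:

```python
# Minimal map/reduce-style log aggregation in plain Python, illustrating the
# Hadoop-MapReduce pattern: mappers emit (key, 1) pairs per record in
# isolation, and a reduce step sums the counts per key.
from collections import defaultdict

log_lines = [
    "192.168.1.5 - GET /index.html 200",
    "192.168.1.9 - GET /admin 403",
    "192.168.1.5 - POST /login 200",
    "192.168.1.9 - GET /admin 403",
]

def mapper(line):
    """Emit (status_code, 1) for one record, independent of all other records."""
    status = line.rsplit(" ", 1)[-1]
    yield (status, 1)

def reducer(pairs):
    """Sum counts per key, as a Hadoop reducer would across all mapper output."""
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

counts = reducer(kv for line in log_lines for kv in mapper(line))
print(counts)  # {'200': 2, '403': 2}
```

In a real cluster the mapper tasks would run on the nodes holding the log blocks (data locality), and a node failure would simply restart those tasks elsewhere.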
CONSIDERATIONS AND CHALLENGES IN BIG DATA LOG ANALYSIS
• Millions of records are generated by devices on a network, and they must be reviewed and handled within an
appropriate response time. To meet this challenge, we need to manage various computational and information-
security complexities and methods for analyzing logs.
• The logs are generated in different formats with different structures and contain noise that needs to be
cleaned.
• When working with large logs, practitioners often face issues such as scarce storage, incapable analysis tools,
inaccurate capture and replay of logs, and inadequate privacy. Researchers have devised some practical
solutions, but important challenges remain:
• Data Distribution: Performing computation on large volumes of log files has been done before, but what makes
Hadoop different is its simplified programming model and its efficient, automatic distribution of data, along
with its portability.
• Isolation of Processes: Each individual record is processed by a task isolated from other tasks, which limits the
communication overhead between processes and makes the whole framework more reliable. In MapReduce,
mapper tasks process records in isolation, and an individual node failure can be worked around by restarting
its tasks on another machine.
• Type of Data: Log files are rows of semi-structured or unstructured records. Hadoop is compatible with most
types of data, and it remains suitable even when log files are mapped to structured data that exceeds RDBMS
size limits. It works well for simple text files too.
CONSIDERATIONS AND CHALLENGES IN BIG DATA LOG ANALYSIS

• Fault Tolerance: A Hadoop cluster solves the problem of data loss. Blocks of the input file are replicated by a
replication factor across multiple machines in the cluster, so even if one machine goes down, another machine
holding the same block takes over further processing.
• Data Locality and Network Bandwidth: Log files are spread across HDFS as blocks, so the node that operates
on a subset of files is chosen by data locality; the purpose is to reduce the strain on network bandwidth and
avoid unnecessary network traffic.
What log data can show
• Log data is data that machines generate. It comes from a variety of sources including software applications,
network nodes, components and data center servers, connected devices and sensors, consumer online activities
and transactions.
• Because of all the places that generate it, log data contains useful information around:
✓ The behavior of the system
✓ End-user activities and habits
✓ Machine performance and patterns that contain insights on potential incidents, events and outcomes in the future
• When a user or software performs an action on the system, all the different parts of the system keep track of
what's happening and what the system looks like at that moment. More precisely, each part of the technology
system involved performs a specific sequence of steps. At each step, information is collected and recorded about:
✓ The current state of the system
✓ The computing request that was made
✓ The new state of the system after the request has been processed
What log data can show
• In other words, every action that a user or software makes on a technology system generates a log of
information about that action and its effects on the system. These logs are metadata, or data about data. This
metadata, when looked at together, has information that includes things like:
✓ The time certain action(s) occurred
✓ What part of the system was involved (networking protocols)
✓ Any errors that might have occurred
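To make the metadata fields above concrete, here is a hedged sketch of extracting them from one hypothetical syslog-style line; the line format, field names, and regular expression are illustrative assumptions, not a standard:

```python
# Parse one (hypothetical) syslog-style line into the metadata fields the
# text lists: when the action occurred, which part of the system was
# involved, and whether an error occurred.
import re

line = "2024-03-01T10:15:32Z sshd[812]: error: Failed password for root from 10.0.0.7"

pattern = re.compile(
    r"(?P<time>\S+)\s+"                  # the time the action occurred
    r"(?P<component>[\w-]+)\[\d+\]:\s+"  # the system component (and its PID)
    r"(?P<message>.*)"                   # what happened (may describe an error)
)
entry = pattern.match(line).groupdict()
entry["is_error"] = "error" in entry["message"].lower()

print(entry["time"], entry["component"], entry["is_error"])
```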
How to perform Log Analysis

• Collect and centralize data : This involves gathering logs from various sources such as servers, applications,
network devices, and security systems. Implement a log collection mechanism that captures logs in real-time
or scheduled intervals and store them in a centralized location or log management system. Centralizing logs
can pose numerous challenges, including storage quantity and budget constraints.
• Analyze data: There are several approaches to log analysis, including manual inspection, using log analysis
tools, or employing machine learning and data mining techniques.
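The two steps above can be sketched as follows. The log sources, record format, and fixed failure threshold are all hypothetical simplifications of what a real log-management system would do:

```python
# Step 1: collect and centralize logs from several (hypothetical) sources.
# Step 2: analyze them, here by flagging IPs whose failure count crosses a
# simple threshold -- a stand-in for real anomaly-detection techniques.
from collections import Counter

server_logs = ["10.0.0.7 FAIL", "10.0.0.7 FAIL", "10.0.0.7 FAIL", "10.0.0.2 OK"]
firewall_logs = ["10.0.0.7 FAIL", "10.0.0.9 OK"]

# Step 1: gather everything into one central store
central_store = server_logs + firewall_logs

# Step 2: count failures per source IP and flag anomalies
failures = Counter(line.split()[0] for line in central_store if "FAIL" in line)
THRESHOLD = 3
suspicious = [ip for ip, n in failures.items() if n >= THRESHOLD]
print(suspicious)  # ['10.0.0.7']
```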
Investigating Routers
• A router is a network-layer device or software application that determines the next network point to which a
data packet should be forwarded in a packet-switched network.
• As a hardware device, a router can execute specific tasks just like a switch; the difference is that routers
are more sophisticated. They have access to network-layer (layer 3 of the OSI model) addresses and contain
software that enables them to determine which of several possible paths between those addresses is most
suitable for a particular transmission.
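When a router must be examined live, investigators commonly gather its volatile state over the console using IOS `show` commands and capture the output with a terminal logger rather than rebooting the device. A sketch of frequently cited commands follows; availability and exact output vary by platform and IOS version:

```text
! Record the router's idea of time first, to anchor the evidence timeline
show clock detail
! Hardware, IOS version, and uptime (an unexpected reboot may indicate tampering)
show version
! Volatile configuration in effect; compare against the saved startup-config
show running-config
show startup-config
! Current routing and neighbor state
show ip route
show ip arp
! Who is (or was) logged in, and what the router has logged
show users
show logging
```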
References
• https://www.studocu.com/ph/document/university-of-nueva-caceres/computer-forensic/router-forensics/45678507
• https://www.geeksforgeeks.org/intrusion-detection-system-ids/
• https://www.mailbutler.io/blog/email/email-tracing-vs-email-tracking/
• https://www.fortinet.com/resources/cyberglossary/internet-fraud
• https://www.investopedia.com/terms/d/dark-web.asp
• https://in.norton.com/blog/how-to/how-can-i-access-the-deep-web
• https://www.investopedia.com/terms/t/tor.asp
• https://www.kaspersky.com/resource-center/definitions/what-is-the-tor-browser
• https://www.geeksforgeeks.org/onion-routing/
• https://www.sans.org/blog/cisco-router-forensics/
• https://www.splunk.com/en_us/blog/learn/log-analytics.html
• Shendi, M. M., H. M. Elkadi, and M. H. Khafagy. "A study on the big data log analysis: goals, challenges, issues, and tools." International Journal of Artificial
Intelligence and Soft Computing 7.2 (2019): 5-12.
