
Malware Analysis Lab

Digital Assignment - 2

Name - Ajay Naidu


Reg No - 21BCI0074
Description: HTTP packets are useful for observing requests made to web servers and responses, especially when analyzing unencrypted traffic.

Description: TCP packets show the basic structure of data transmission, including establishing connections and managing packet sequence.

Description: DNS packets help in identifying domain name resolutions and understanding how domain requests are handled.
Capture Settings

1. Network Interface:

o Selected Interface: Ethernet/Wi-Fi (whichever you are using).

o Rationale: This interface was chosen because it is the active network interface on the system, through which all internet traffic is routed.

2. Capture Filter:

o Filter Used: No specific capture filter was applied.

o Rationale: A broad capture without filters was used to ensure that all
types of network traffic (including HTTP, DNS, TCP, and others) were
captured. This provides a more comprehensive dataset for analysis.

3. Capture Duration:

o Duration: 2+ hours.

o Rationale: This extended period allowed capturing a wide variety of network traffic, from routine background system traffic to active browsing sessions, providing diverse packet types for analysis.

4. HTTP Traffic Focus:

o Visited Websites: Specific attention was given to capturing HTTP traffic, especially by visiting jayakumars.in and other HTTP sites with form submissions.

o Rationale: HTTP is unencrypted, making it easier to observe packet contents like requests, responses, and form data. This is essential for analyzing how information is transmitted in the clear, which is useful for network traffic analysis and understanding potential vulnerabilities.

5. Promiscuous Mode:

o Setting: Promiscuous mode was enabled.

o Rationale: This setting ensures that all packets on the network, not just
those directed to the host, were captured. This provides a more complete
view of network activity.

6. Packet Length:

o Setting: Full packet capture.


o Rationale: Full packet capture was selected to ensure all parts of the
packets, including headers and payloads, were retained for detailed
analysis.
Connection Duration

 First UDP Arrival Time: 16:58:58.97

 Last UDP Arrival Time: 17:37:22.95

Calculation:

1. Convert Times to Seconds:

o First Arrival: 61,138.97 seconds

o Last Arrival: 63,442.95 seconds

2. Calculate Duration:

o Duration = Last Arrival - First Arrival

o Duration = 63,442.95 - 61,138.97 = 2,303.98 seconds

3. Convert to Minutes and Seconds:

o 38 minutes and 24 seconds

The active duration of the connection is 38 minutes and 24 seconds.
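The conversion and subtraction above can be reproduced with a short script, using the timestamps from the capture:

```python
# Compute the active connection duration from the first and last
# UDP arrival times recorded in the capture (HH:MM:SS.ss format).
def to_seconds(ts: str) -> float:
    h, m, s = ts.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

first = to_seconds("16:58:58.97")   # 61,138.97 seconds
last = to_seconds("17:37:22.95")    # 63,442.95 seconds

duration = last - first             # 2,303.98 seconds
minutes, seconds = divmod(duration, 60)
print(f"{int(minutes)} min {seconds:.0f} s")  # 38 min 24 s
```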


Key Protocols Present in Your Capture:

1. Ethernet (0%):

o No Ethernet traffic was detected directly, which could mean you're capturing higher-level protocols like IPv6/IPv4 over a network where Ethernet frames are abstracted away (e.g., through VPN or other encapsulations).

2. Internet Protocol Version 6 (IPv6) (52.6% of traffic):

o This shows that a significant amount of your network traffic is using IPv6.

o Packets: 1319 packets.

o Bytes: 52760 bytes.

3. User Datagram Protocol (UDP):

o Both IPv6 and IPv4 UDP traffic is detected.

o IPv6 UDP: 1319 packets, 10552 bytes.

o IPv4 UDP: 1189 packets, 9512 bytes.

o UDP is a lightweight protocol often used for DNS, QUIC, and other fast,
connectionless services.

4. DHCPv6:

o Dynamic Host Configuration Protocol for IPv6.

o A small number of packets (4) with 380 bytes. This protocol is used for
assigning IP addresses dynamically over IPv6.
5. IPv4:

o About 47.4% of the traffic was IPv4 packets.

o Packets: 1189.

o Bytes: 23780 bytes.

6. DNS (Domain Name System):

o 29.8% of traffic related to DNS, probably for resolving hostnames to IP addresses.

o This is critical for almost all web traffic as DNS translates domain names to IPs.

o Bytes: 75392 bytes.

Breakdown of the Packet Details:

1. Acknowledgment Number:

o Raw Acknowledgment Number: 918417999 indicates that the sender acknowledges receipt of all bytes up to this number.

2. TCP Header:
o Header Length: 20 bytes (5) indicates the length of the TCP header.

o Flags: 0x018 (PSH, ACK):

 PSH (Push): Indicates that the sender is pushing the data to the
receiving application.

 ACK (Acknowledgment): Indicates that this segment acknowledges the receipt of data.

o Window Size: 510; with the window scale factor (x256) this gives a calculated window size of 130560 bytes, the amount of data that can be sent without waiting for an acknowledgment.

o Checksum: 0x0338 [unverified] is used for error-checking the header and payload. "Unverified" means Wireshark did not confirm the checksum.

o Urgent Pointer: 0 indicates no urgent data in this segment.

o Timestamps: Often present for round-trip time calculations and can help
with performance analysis.

3. TCP Payload:

o Payload Length: 74 bytes, indicating the amount of data in the TCP segment.

o Transport Layer Security:

 TLSv1.3 Record Layer: Specifies that this packet is part of the TLS
v1.3 protocol.

 Content Type: Change Cipher Spec (20) indicates that this packet
is changing the cipher specifications, preparing to switch to a new
encryption method.

 Version: TLS 1.2 (0x0303). In TLS 1.3 the record-layer version field is deliberately kept at 0x0303 for backward compatibility, so this does not mean TLS 1.2 is actually in use.

 Length: 1 indicates the length of the Change Cipher Spec message.

o Application Data:

 This is the critical part where you see the actual encrypted
application data.

 Opaque Type: Application Data (23) indicates this is TLS application data.

 Version: Again, shows TLS 1.2 (0x0303).


 Length: 63 indicates the length of the encrypted application data.

4. Encrypted Application Data:

o This is the encrypted payload you are interested in analyzing. The encrypted data is represented in hexadecimal:

o This data is part of an HTTPS transaction, typically representing a request or response between a client and server.

For each protocol, provide a brief explanation of its purpose and significance in a
network.

1. Ethernet

 Purpose: Ethernet is a widely-used data link layer protocol that defines the
standards for network communication over wired networks.

 Significance: It enables devices on the same local area network (LAN) to communicate with each other using unique hardware addresses (MAC addresses). Ethernet supports various speeds (10 Mbps to 100 Gbps and beyond) and is foundational for most wired networks.

2. IPv6 (Internet Protocol Version 6)

 Purpose: IPv6 is the most recent version of the Internet Protocol, designed to
replace IPv4.

 Significance: It addresses the limitations of IPv4, primarily its address exhaustion, by providing a vastly larger address space (128-bit addresses). IPv6 improves routing efficiency, supports autoconfiguration, and includes built-in security features, making it essential for the continued growth of the internet.

3. UDP (User Datagram Protocol)

 Purpose: UDP is a transport layer protocol that enables fast, connectionless data transmission between applications.

 Significance: It is used in applications where speed is more critical than reliability, such as video streaming, gaming, and voice over IP (VoIP). UDP does not guarantee delivery, ordering, or error-checking, making it lightweight and suitable for real-time applications.

4. QUIC (Quick UDP Internet Connections)

 Purpose: QUIC is a modern transport protocol developed by Google that integrates features of TCP and TLS for secure and low-latency communication.

 Significance: Designed to reduce connection and transport latency, QUIC is particularly beneficial for web applications. It improves performance for HTTP/3 traffic and enables faster page loads and reduced buffering for streaming services.

5. DNS (Domain Name System)

 Purpose: DNS translates human-readable domain names (like www.example.com) into IP addresses that computers use to identify each other on the network.

 Significance: It is a critical component of the internet, enabling users to access websites using easy-to-remember names instead of numerical IP addresses. DNS enhances user experience and facilitates efficient network communication.

6. DHCPv6 (Dynamic Host Configuration Protocol for IPv6)

 Purpose: DHCPv6 automatically assigns IPv6 addresses and configuration settings to devices on a network.

 Significance: In large networks, manual IP address configuration can be impractical. DHCPv6 simplifies network management by dynamically assigning addresses, reducing the potential for conflicts and ensuring that devices can connect to the network seamlessly.
Excessive DNS Queries:

 Detect potential DNS tunneling or DDoS attacks by filtering for excessive DNS
requests:
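The filter itself is not shown above; in Wireshark, plain queries can be isolated with the display filter `dns.flags.response == 0` and counted via the Statistics tools. The counting logic can also be sketched in Python. The threshold, window size, and sample data below are illustrative assumptions, not values taken from this capture:

```python
# Flag sources issuing an excessive number of DNS queries, a possible
# indicator of DNS tunneling or flooding. Input is a list of
# (timestamp_seconds, source_ip, queried_name) tuples, as might be
# exported from Wireshark (File > Export Packet Dissections).
QUERY_THRESHOLD = 50  # illustrative: max queries per source per window

def flag_excessive_sources(queries, window=10.0, threshold=QUERY_THRESHOLD):
    """Return source IPs that issued more than `threshold` DNS
    queries within any sliding window of `window` seconds."""
    flagged = set()
    by_src = {}
    for ts, src, _name in sorted(queries):
        times = by_src.setdefault(src, [])
        times.append(ts)
        # Drop timestamps that have fallen out of the sliding window.
        while times and ts - times[0] > window:
            times.pop(0)
        if len(times) > threshold:
            flagged.add(src)
    return flagged
```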
ARP Spoofing:

 Filter for ARP packets that might indicate spoofing:
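The ARP filter is likewise not shown; a reasonable starting point in Wireshark is the display filter `arp.opcode == 2` (replies only), since spoofing typically shows one IP address being claimed by multiple MAC addresses. That core check can be sketched as follows (the sample addresses in the test are illustrative, not from this capture):

```python
# Detect a classic ARP spoofing indicator: a single IP address
# claimed by more than one MAC address in observed ARP replies.
def detect_arp_spoofing(replies):
    """Given (sender_ip, sender_mac) pairs from ARP replies, return a
    dict mapping each conflicted IP to the set of MACs claiming it."""
    claims = {}
    for ip, mac in replies:
        claims.setdefault(ip, set()).add(mac)
    return {ip: macs for ip, macs in claims.items() if len(macs) > 1}
```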

1. DNS Traffic Observation

It looks like the source IP 192.168.125.94 is querying the destination IP 192.168.125.245 frequently. The high frequency of DNS requests within a short time span (e.g., under 10 seconds) suggests potential issues, such as:
 DNS Flooding or Amplification Attacks: A flood of DNS queries could be a
symptom of a DNS amplification attack, where an attacker sends numerous
queries to overload the DNS resolver.

 Potential Malware Activity: Malware often uses DNS queries to communicate with command-and-control (C2) servers or exfiltrate data through DNS tunneling.

2. Key Findings in DNS Responses

Here are a few suspicious or noteworthy domain names and responses found in the
logs:

 beacons.gcp.gvt2.com: This is related to Google services but could also be exploited by malware for tracking or communication with C2 servers.

 update.googleapis.com: Although this is a Google service, repeated queries to update servers could indicate suspicious activity, such as malware trying to bypass security patches.

 edge.microsoft.com and bing.com: These are legitimate services, but repeated queries could be tied to malware that mimics legitimate traffic to evade detection.

3. Identifying Security Issues

1. DNS Tunneling

 Explanation: DNS tunneling involves encoding data into DNS queries, allowing attackers to bypass network firewalls and exfiltrate data via DNS traffic.

 Indicators: The large number of DNS queries sent to the same destination in a
very short time frame (e.g., beacons.gcp.gvt2.com) could indicate DNS
tunneling.

2. Potential C2 (Command-and-Control) Communication

 Explanation: Malware frequently communicates with external servers to receive instructions from attackers or send back data.

 Indicators: Look for requests to unusual or unfamiliar domains like jayakumars.in. These could indicate an attempt by malware to communicate with a C2 server.
Here are proposed countermeasures:

1. DNS Traffic Monitoring and Analysis

 Implement DNS Logging: Ensure that DNS queries are logged for future analysis. Use tools that can capture and analyze DNS traffic to detect anomalies, such as high-frequency requests to the same domain.

 Real-time Monitoring: Utilize network monitoring tools that can provide real-time alerts for unusual DNS activity, such as unexpected spikes in traffic or queries to known malicious domains.

2. Threat Intelligence Integration

 Use Threat Intelligence Feeds: Integrate threat intelligence feeds to automatically block queries to known malicious domains. This can help prevent communication with command-and-control servers or harmful domains.

 Domain Whitelisting/Blacklisting: Maintain a list of approved domains (whitelist) and block known malicious domains (blacklist). Regularly update these lists based on the latest threat intelligence.

3. Network Segmentation

 Isolate Critical Systems: Segment the network to separate critical systems from less secure areas. This can prevent potential malware from spreading if it originates from a compromised device.

 Use Separate DNS Servers: For sensitive environments, consider using dedicated DNS servers that only resolve approved domains to minimize exposure to potentially harmful DNS queries.

4. Enhanced Endpoint Protection

 Deploy Endpoint Detection and Response (EDR): Utilize EDR solutions that
can detect and respond to suspicious activity on endpoints, including unusual
DNS queries.

 Regularly Update and Patch Software: Ensure that all systems and
applications are regularly updated to mitigate vulnerabilities that could be
exploited by malware.

5. User Education and Awareness

 Conduct Training Sessions: Educate employees about the risks associated with
phishing and suspicious domains. Training can help users identify and report
unusual activity.
 Encourage Reporting: Create a culture of awareness where users can easily
report suspicious activities without fear of repercussions.

6. Implement DNS Security Extensions (DNSSEC)

 Use DNSSEC: Implement DNS Security Extensions (DNSSEC) to protect against DNS spoofing and cache poisoning attacks. DNSSEC ensures the integrity and authenticity of DNS responses.

7. Firewall and Intrusion Prevention Systems (IPS)

 Configure Firewalls: Use firewalls to block outbound traffic to known malicious domains and implement rules to restrict DNS queries to approved DNS servers only.

 Deploy Intrusion Prevention Systems (IPS): An IPS can detect and block malicious traffic patterns, including unusual DNS requests indicative of malware or data exfiltration attempts.

8. Incident Response Plan

 Develop and Maintain an Incident Response Plan: Create a comprehensive incident response plan that outlines the steps to take in case of a security incident. This should include roles, responsibilities, and communication strategies to mitigate damage quickly.

 Conduct Regular Drills: Perform regular incident response drills to ensure that the team is prepared to respond to real threats efficiently.
Key Information from the Capture:

 Standard Query Response for A Record:

o The domain jayakumars.in resolved to the IP address 46.28.45.212.

 Standard Query Response for AAAA Record:

o An IPv6 address 2302:4780:11:1428:8:29:7043:2 was also assigned for jayakumars.in.

 Standard Query for HTTPS:

o The domain appears to have HTTPS communication through the server identified by its name servers, such as ns1.dns-parking.com.

There are numerous DNS queries from the IP address 192.168.125.94 to 192.168.125.245.

 Key Observations:

o Multiple queries for domains like activity.windows.com, google.com, and others.

o High frequency of queries in a short period, suggesting potential automated activity.
Key Observations:

 GET Requests:

o Numerous GET requests are being sent to URLs under the path /d/msdownload/update/others/. These seem to be related to Microsoft update files, which end in .cab, .crl, etc.

o The requests are for downloading various files, possibly for updating software like Microsoft Office, as indicated by terms like Office/Data.

 Server Details:

o The destination server IP for many of these requests is 103.53.14.4 (presumably belonging to Microsoft or a CDN server delivering the update files).

o The IPv6 addresses in your capture (such as 2409:40f4:101a:c24f) are communicating with these servers as well, suggesting dual-stack support (both IPv4 and IPv6).

Significance of Identifying Server Details:

 IP and Domain Mapping:

o Mapping the domain jayakumars.in to its IP address helps you identify which server is responsible for handling the traffic.

o If the server is misconfigured or vulnerable, it poses a security risk.

 Potential Security Risks of HTTP Methods:

o GET Requests: May expose sensitive information in URLs or query parameters.

o POST Requests: Can be used to send sensitive data (like login credentials), and if not properly secured (e.g., sent over HTTP instead of HTTPS), could lead to data leakage.

o Identifying these methods helps in assessing whether secure communication methods (like HTTPS) are being used or if there are vulnerabilities, such as potential data exposure.
Potential Risks:

 Lack of Encryption: If any of the captured traffic is over HTTP instead of HTTPS, it could be intercepted, leading to a man-in-the-middle (MITM) attack.

 Exposure of Sensitive Information: If any sensitive data is being transmitted in GET or POST requests without encryption, this could be a major vulnerability.

Top Three Protocols by Bandwidth Consumption

1. Hypertext Transfer Protocol (HTTP)

o Bytes: 4,086,481 (approx. 4.09 MB)

o Packets: 1,160

o Percentage of Total Bandwidth: 32.7%

o Analysis: HTTP is a major contributor to bandwidth usage, indicating that a significant amount of web traffic is being transmitted. This could include page loads, images, scripts, and other web content.

2. User Datagram Protocol (UDP)

o Bytes: 499,392 (approx. 499.39 KB)

o Packets: 1,732
o Percentage of Total Bandwidth: 7.0%

o Analysis: UDP is used for various applications such as video streaming, gaming, and real-time communications. Its lower overhead allows for faster transmission, but it lacks reliability features, which may affect performance if packets are lost.

3. Transmission Control Protocol (TCP)

o Bytes: 528,893 (approx. 528.89 KB)

o Packets: 183

o Percentage of Total Bandwidth: 58.6%

o Analysis: TCP ensures reliable transmission of data, making it suitable for applications where accuracy is critical. Despite a modest packet count (183), its substantial share of bandwidth suggests large segments, while handshakes and acknowledgments add overhead that could be affecting overall network performance.

Source/Destination IPs:

 The TCP packets are between different IPv6 addresses and the same 192.168.125.245.

TLSv1.2:

 Multiple entries indicate that the traffic includes TLS (Transport Layer Security) packets, showing a secure communication channel is being used.
 The client key exchange and encrypted handshake messages suggest the
establishment of a secure connection.

HTTP Traffic:

 There are several entries indicating HTTP traffic, specifically showing HEAD requests being sent to the server and 200 OK responses, confirming successful retrieval of resources.

TCP Flags:

 Flags such as SYN, ACK, FIN, and retransmissions are present, indicating the
connection establishment and teardown process.

Timestamps:

 Similar to UDP, TCP packets also have timestamps.

observing numerous DNS queries (and responses) between two IP addresses:

 Source: 192.168.125.94 (likely your machine or a local device)

 Destination: 192.168.125.245 (your local DNS server).

The DNS queries are returning various A, AAAA, and CNAME records for different Microsoft services, like:

 download.windowsupdate.com

 events.data.microsoft.com

 edge.microsoft.com
 officecdn.microsoft.com

 ecn.dev.virtualearth.net

These are typically part of Microsoft's update services, telemetry, and cloud
connections.

Key DNS Records:

 A Records: These map domain names to IPv4 addresses.

 AAAA Records: These map domain names to IPv6 addresses.

 CNAME Records: These are alias records that point one domain name to
another domain name.

Key Points:

 Source/Destination: The main source IP is 192.168.125.245, and the primary destination appears to be 103.53.14.4. Other external IPs such as 49.44.197.202 and 52.140.67.125 are also involved in the traffic.

 HTTP Methods: The most common methods observed are GET and HEAD.
 Responses: Several HTTP responses indicate success (200 OK) and partial
content (206 Partial Content), which is typically used when a resource is too
large and is sent in chunks.

 JSON Content: There are also requests that return JSON content, indicating
some API interactions.

Based on the observations of the HTTP traffic in your provided data, where there are repeated GET and HEAD requests and instances of partial content transfers, optimizing network bandwidth could involve several strategies. Here's a breakdown of potential strategies:

1. Caching Mechanism:

 Observation: The repeated GET and HEAD requests for similar resources (e.g., .cab files for Office Data) suggest that some files might be downloaded multiple times.

 Strategy: Implement a robust caching mechanism where frequently requested static resources (like .cab files) are stored locally. This way, when the same resources are requested again, they can be served from the cache instead of being fetched from the server, saving bandwidth.

o HTTP Headers for Caching: Utilize HTTP headers such as Cache-Control and ETag to manage cache expiration and validation.

2. Content Compression:

 Observation: Large data transfers are occurring with partial content responses
(206 Partial Content), indicating that files are likely being transferred in chunks.

 Strategy: Apply content compression techniques (like Gzip or Brotli) to reduce the size of the data being transferred. By compressing resources, especially large ones like .cab files, you can significantly reduce the bandwidth usage.

o Ensure that the server supports compressed content, and the clients are
set to accept compressed responses.
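As a small illustration of the strategy above, Python's standard gzip module shows how much a repetitive text payload shrinks under compression (the payload here is synthetic, not taken from the capture):

```python
import gzip

# A repetitive text payload, standing in for a compressible HTTP response.
payload = b"GET /d/msdownload/update/others/ HTTP/1.1\r\n" * 200

compressed = gzip.compress(payload)
ratio = len(compressed) / len(payload)
# The compressed size is a small fraction of the original for
# repetitive text; decompression restores the payload exactly.
print(f"{len(payload)} -> {len(compressed)} bytes ({ratio:.1%} of original)")
```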

3. Optimizing Partial Content Delivery:

 Observation: Multiple occurrences of 206 Partial Content indicate that resources are being delivered in chunks, which could be due to large file sizes.

 Strategy: Instead of transferring large files in multiple parts, consider optimizing the process by:

o Reducing file sizes where possible (e.g., by compressing or removing unnecessary components).

o If file chunking is required, minimizing overhead by adjusting chunk sizes to optimize throughput and reduce fragmentation.

o Using HTTP/2, which supports multiplexing multiple streams over a single TCP connection, thereby reducing latency.

4. Reducing Redundant Requests:

 Observation: Redundant requests for similar resources (same .cab files) from the same source indicate inefficiencies in handling these resources.

 Strategy: Implement proper resource validation using HTTP headers like If-None-
Match and Last-Modified so that servers return 304 Not Modified for unchanged
content. This ensures that clients do not download the same resources
unnecessarily, saving bandwidth.
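The validation flow described above can be sketched server-side as a comparison of the client's If-None-Match header against the resource's current ETag (the function and values here are illustrative, not from the capture):

```python
def conditional_response(request_headers, resource_etag, resource_body):
    """Return (status, headers, body) for a GET request, honouring
    If-None-Match so unchanged resources cost no body bandwidth."""
    if request_headers.get("If-None-Match") == resource_etag:
        # Client's cached copy is current: 304 with no body.
        return 304, {"ETag": resource_etag}, b""
    return 200, {"ETag": resource_etag}, resource_body

# First fetch: full body is sent along with the ETag.
status, hdrs, body = conditional_response({}, '"v1"', b"data")
# Revalidation with a matching ETag: 304 Not Modified, empty body.
status2, _, body2 = conditional_response({"If-None-Match": '"v1"'}, '"v1"', b"data")
```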

5. Content Delivery Network (CDN) Usage:

 Observation: Requests are being sent to various IPs, some of which could be
geographically distant from the client (52.140.67.125, 49.44.197.202).

 Strategy: Utilize a Content Delivery Network (CDN) to serve static resources closer to the end users. CDNs replicate content across multiple locations worldwide, allowing users to download resources from the nearest server, reducing latency and bandwidth consumption across the network.

o CDNs also help optimize bandwidth usage by offloading traffic from the origin server.

6. Connection Reuse (Keep-Alive):

 Observation: If multiple requests are being made from the same client to the
same server (e.g., between 192.168.125.245 and 103.53.14.4), each connection
establishment may be adding overhead.

 Strategy: Enable persistent HTTP connections (Keep-Alive) to allow multiple requests to be sent over the same TCP connection without needing to establish new connections for each one. This reduces the overhead caused by connection setup and teardown.

o Persistent connections reduce the number of TCP handshakes and associated overhead, conserving bandwidth.
Reflection on Running Wireshark for an Extended Period

Running Wireshark for an extended period of time provided several advantages, but also introduced various challenges that needed to be managed effectively.

Advantages:

1. Comprehensive Network Visibility:

o Capturing network traffic over an extended period allowed for more thorough analysis and identification of network patterns and potential anomalies. This provided a holistic view of the network's behavior, enabling the detection of rare events, intermittent issues, or specific attack vectors that might otherwise be missed in short-term captures.

2. Trend Analysis:

o Extended captures made it easier to observe trends, such as repetitive HTTP requests, bandwidth-heavy transfers, and potential security vulnerabilities. These trends could help in optimizing network performance, detecting persistent threats, and troubleshooting issues like packet loss or slow response times.

3. Network Baseline Creation:

o Running Wireshark over a longer period helps establish a baseline for typical network activity. By understanding the normal behavior of the network, deviations from this baseline, such as spikes in traffic or unusual patterns, can be more easily identified and investigated.

4. Historical Analysis:

o With extended capture, it was possible to look back at past traffic events, providing historical data that could be crucial in investigating incidents or breaches that occurred hours or even days earlier.
Running Wireshark for a prolonged duration brought several challenges, both technical
and practical. Here’s a reflection on these challenges and the strategies employed to
overcome them:

1. Storage and File Size Management:

o Challenge: Wireshark captures can result in extremely large files, especially when capturing on high-traffic networks over extended periods. This can quickly consume disk space and slow down analysis.

o Solution: To manage storage, capture filters were applied to limit the data collected to only specific protocols (e.g., HTTP) or IP addresses of interest. Additionally, Wireshark supports ring buffers and file rotation, which can break up large capture files into smaller chunks, making them more manageable and conserving disk space. Compression techniques were also applied to archive older captures efficiently.

2. Performance Impact:

o Challenge: Capturing all network traffic for extended periods can consume significant CPU and memory resources, especially when processing large volumes of traffic, which could potentially slow down the host system running Wireshark.

o Solution: One approach to mitigate this was to capture only specific traffic (using display filters like TCP or HTTP) to reduce the load on the system. Additionally, offloading the capture to a dedicated machine, instead of a personal workstation, helped avoid performance degradation in other activities.

3. Analysis Complexity:

o Challenge: With a large volume of data captured over time, pinpointing specific issues or anomalies became more complex due to the sheer amount of information to sift through.

o Solution: Using display filters and color rules in Wireshark made the data easier to parse and helped narrow down relevant traffic more efficiently. Automated tools and scripts (e.g., Tshark) were also used to extract relevant traffic and identify suspicious patterns. Exporting the capture to other network analysis tools (like Splunk) allowed for deeper analysis.

4. Privacy and Security Concerns:


o Challenge: Capturing network traffic raises privacy concerns, as sensitive information (such as credentials or personal data) may be inadvertently logged.

o Solution: To mitigate this, captures focused on traffic already protected by encrypted protocols like HTTPS, and sensitive information was anonymized or masked in reports. Where applicable, only public or non-sensitive traffic was filtered for analysis, and proper authorization was obtained before conducting the captures.

5. Potential Legal and Ethical Issues:

o Challenge: Capturing all traffic on a network may violate privacy policies or legal regulations, especially if it includes traffic from users who haven't consented to monitoring.

o Solution: Before running Wireshark for an extended period, it was crucial to obtain appropriate permissions from network administrators and ensure that the activity complied with organizational privacy policies and any relevant legal regulations. Capture was limited to specific segments of the network where monitoring was authorized.

6. Packet Loss:

o Challenge: During high-traffic periods, there's a risk that Wireshark might drop packets, particularly if the hardware capturing the data isn't capable of keeping up with the traffic load.

o Solution: Optimizing the capture filters to reduce the amount of unnecessary traffic being logged, and upgrading the capture machine's hardware (such as adding more RAM or processing power), helped mitigate packet loss. Additionally, capturing traffic at lower layers (e.g., using span ports or taps on the network switch) can reduce the workload on the capture device.

7. Monitoring in Real-Time:

o Challenge: Real-time monitoring of network traffic over an extended period requires vigilance and can lead to missed opportunities for immediate action when an anomaly occurs.

o Solution: Implementing alerting systems and automating traffic flagging through Tshark scripts helped by notifying the team of abnormal traffic in real time. This allowed for quicker response times when suspicious activities were detected.
