CAS-005 CompTIA Exam Updated Questions
Dumpsinfo provides the best material for you to test all the related CompTIA exam topics. By using the
CAS-005 exam dumps questions and practicing your skills, you can increase
your confidence and chances of passing the CAS-005 exam.
Instant Download
Free Update in 3 Months
Money back guarantee
PDF and Software
24/7 Customer Support
In addition, Dumpsinfo provides unlimited access: you can get all Dumpsinfo files at the lowest price.
1.A security engineer is assisting a DevOps team that has the following requirements for container
images:
Ensure container images are hashed and use version controls.
Ensure container images are up to date and scanned for vulnerabilities.
Which of the following should the security engineer do to meet these requirements?
A. Enable clusters on the container image and configure the mesh with ACLs.
B. Enable new security and quality checks within a CI/CD pipeline.
C. Enable audits on the container image and monitor for configuration changes.
D. Enable pulling of the container image from the vendor repository and deploy directly to operations.
Answer: B
Explanation:
Implementing security and quality checks in a CI/CD pipeline ensures that:
Container images are scanned for vulnerabilities before deployment.
Version control is enforced, preventing unauthorized changes.
Hashes validate image integrity.
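As a minimal sketch of such pipeline steps, assuming the open-source Trivy scanner and an illustrative image name:
# build the image, record a content hash for integrity tracking, and fail the build on known vulnerabilities
docker build -t registry.example.com/app:1.4.2 .
docker save registry.example.com/app:1.4.2 | sha256sum
trivy image --exit-code 1 --severity HIGH,CRITICAL registry.example.com/app:1.4.2
Because the scan step exits non-zero on findings, a vulnerable image never leaves the pipeline, and the recorded hash supports later integrity verification.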
Other options:
A (Configuring ACLs on mesh networks) improves access control but does not ensure scanning.
C (Audits on container images) detect changes but do not enforce best practices.
D (Pulling from a vendor repository) does not ensure vulnerability scanning.
Reference: CASP+ CAS-005 - DevSecOps and Secure Containerization
2.A user reports being unable to access the human resources system. A security analyst reviews the NGFW logs and notes that the user connected from two geographically distant locations within a few minutes.
Which of the following is most likely the reason for the issue?
A. The user inadvertently tripped the geoblock rule in NGFW.
B. A threat actor has compromised the user's account and attempted to log in.
C. The user is not allowed to access the human resources system outside of business hours.
D. The user did not attempt to connect from an approved subnet.
Answer: A
Explanation:
The logs show that the user connected from Toronto (104.18.16.29) and Los Angeles (95.67.137.12)
within minutes. The sudden location change is a typical trigger for geoblocking in a Next-Generation
Firewall (NGFW), leading to the HR System being denied.
A compromised account (B) would show failed login attempts or unusual activities, but all other
access attempts were allowed.
Business hours restriction (C) is unlikely since the user was granted access earlier. Approved subnet
issues (D) would affect all applications, not just HR System access.
Reference: CompTIA SecurityX (CAS-005) Exam Objectives - Domain 4.0 (Security Operations),
Section on Firewall Rules and Network Traffic Analysis
3.SIMULATION
You are tasked with integrating a new B2B client application with an existing OAuth workflow that
must meet the following requirements:
• The application does not need to know the users' credentials.
• An approval interaction between the users and the HTTP service must be orchestrated.
• The application must have limited access to users' data.
INSTRUCTIONS
Use the drop-down menus to select the action items for the appropriate locations. All placeholders
must be filled.
Answer:
Select the Action Items for the Appropriate Locations:
Authorization Server:
Action Item: Grant access
The authorization server's role is to authenticate the user and then issue an authorization code or
token that the client application can use to access resources. Granting access involves the server
authenticating the resource owner and providing the necessary tokens for the client application.
Resource Server:
Action Item: Access issued tokens
The resource server is responsible for serving the resources requested by the client application. It
must verify the issued tokens from the authorization server to ensure the client has the right
permissions to access the requested data.
B2B Client Application:
Action Item: Authorize access to other applications
The B2B client application must handle the OAuth flow to authorize access on behalf of the user
without requiring direct knowledge of the user's credentials. This includes obtaining authorization
tokens from the authorization server and using them to request access to the resource server.
Detailed Explanation:
OAuth 2.0 is designed to provide specific authorization flows for web applications, desktop
applications, mobile phones, and living room devices.
The integration involves multiple steps and components, including:
Resource Owner (User):
The user owns the data and resources that are being accessed.
Client Application (B2B Client Application):
Requests access to the resources controlled by the resource owner but does not directly handle the
user's credentials. Instead, it uses tokens obtained through the OAuth flow.
Authorization Server:
Handles the authentication of the resource owner and issues the access tokens to the client
application upon successful authentication.
Resource Server:
Hosts the resources that the client application wants to access. It verifies the access tokens issued by
the authorization server before granting access to the resources.
OAuth Workflow:
The resource owner accesses the client application.
The client application redirects the resource owner to the authorization server for authentication.
The authorization server authenticates the resource owner and asks for consent to grant access to the client application.
Upon consent, the authorization server issues an authorization code or token to the client application.
The client application uses the authorization code or token to request access to the resources from
the resource server.
The resource server verifies the token with the authorization server and, if valid, grants access to the
requested resources.
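To make the code-for-token exchange and the subsequent resource request concrete, here is a hedged sketch using curl; every URL, identifier, and secret below is a hypothetical placeholder (the authorization code value is borrowed from the RFC 6749 examples):
# the client redeems the authorization code for an access token
curl -X POST https://auth.example.com/oauth/token \
  -d grant_type=authorization_code \
  -d code=SplxlOBeZQQYbYS6WxSbIA \
  -d redirect_uri=https://client.example.com/callback \
  -d client_id=b2b-client \
  -d client_secret=client-secret-value
# the client presents the returned bearer token to the resource server
curl https://resource.example.com/api/profile -H "Authorization: Bearer ACCESS_TOKEN_VALUE"
Note that the token request carries the client's own credentials, never the user's, and adding a scope parameter to the authorization request is the usual way to satisfy the limited-access requirement.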
Reference: CompTIA Security+ Study Guide: Provides comprehensive information on various
authentication and authorization protocols, including OAuth.
OAuth 2.0 Authorization Framework (RFC 6749): The official documentation detailing the OAuth 2.0
framework, its flows, and components.
OAuth 2.0 Simplified: A book by Aaron Parecki that provides a detailed yet easy-to-understand
explanation of the OAuth 2.0 protocol.
By ensuring that each component in the OAuth workflow performs its designated role, the B2B client
application can securely access the necessary resources without compromising user credentials,
adhering to the principle of least privilege.
4.Recent reports indicate that a software tool is being exploited. Attackers were able to bypass user
access controls and load a database. A security analyst needs to find the vulnerability and
recommend a mitigation.
The analyst generates the following output:
Which of the following would the analyst most likely recommend?
A. Installing appropriate EDR tools to block pass-the-hash attempts
B. Adding additional time to software development to perform fuzz testing
C. Removing hard coded credentials from the source code
D. Not allowing users to change their local passwords
Answer: C
Explanation:
The output indicates that the software tool contains hard-coded credentials, which attackers can
exploit to bypass user access controls and load the database. The most likely recommendation is to
remove hard-coded credentials from the source code.
Here’s why:
Security Best Practices: Hard-coded credentials are a significant security risk because they can be
easily discovered through reverse engineering or simple inspection of the code. Removing them
reduces the risk of unauthorized access.
Credential Management: Credentials should be managed securely using environment variables,
secure vaults, or configuration management tools that provide encryption and access controls.
Mitigation of Exploits: By eliminating hard-coded credentials, the organization can prevent attackers
from easily bypassing authentication mechanisms and gaining unauthorized access to sensitive
systems.
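As a brief sketch of the remediation, assuming a HashiCorp Vault deployment and illustrative names throughout:
# before (vulnerable): the tool ships with a credential baked into its source
#   connect(host="db.internal", user="app", password="P@ssw0rd1")
# after: the secret is injected at run time from a managed vault
export DB_PASSWORD="$(vault kv get -field=password secret/app/db)"
./app --user app --password "$DB_PASSWORD"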
Reference: CompTIA Security+ SY0-601 Study Guide by Mike Chapple and David Seidl
OWASP Top Ten: Insecure Design
NIST Special Publication 800-53: Security and Privacy Controls for Information Systems and
Organizations
5.A company's public web server is experiencing performance issues caused by a high volume of legitimate bot traffic. The company wants to mitigate the load without adding server resources.
Which of the following should the security engineer recommend?
A. Block all bot traffic using the IPS.
B. Monitor legitimate SEO bot traffic for abnormalities.
C. Configure the WAF to rate-limit bot traffic.
D. Update robots.txt to slow down the crawling speed.
Answer: D
Explanation:
Comprehensive and Detailed Step by Step
Understanding the Scenario: The problem is legitimate bot traffic overloading the web server, causing performance issues. The goal is to mitigate this without adding more server resources.
Analyzing the Answer Choices:
A. Block all bot traffic using the IPS: This is too drastic. Blocking all bot traffic can negatively impact
legitimate bots, like search engine crawlers, which are important for SEO.
Reference: While IPS (Intrusion Prevention System) is a valuable security tool, blocking legitimate
traffic is generally undesirable. CASP+ emphasizes understanding the business impact of security
decisions.
B. Monitor legitimate SEO bot traffic for abnormalities: Monitoring is good practice, but it doesn't
actively solve the performance issue caused by the legitimate bots.
Reference: Monitoring aligns with CASP+ objectives related to threat and vulnerability management,
but it's a passive approach in this particular case.
C. Configure the WAF to rate-limit bot traffic: Rate limiting is a good option, but it might be too
aggressive if not carefully tuned. It could still impact the legitimate bots' ability to function correctly. A
WAF is better used to identify and block malicious traffic.
Reference: WAFs (Web Application Firewalls) are a key topic in CASP+. However, the question
states the bot traffic is legitimate, making rate-limiting a less-than-ideal solution initially.
D. Update robots.txt to slow down the crawling speed: This is the most appropriate solution. The
robots.txt file is a standard used by websites to communicate with web crawlers (bots). It can specify
which parts of the site should not be crawled and, crucially in this case, suggest a crawl delay.
Reference: The proper use of robots.txt is a common web security practice covered in CASP+
material. It's the least intrusive method to manage the behavior of legitimate bots.
Why D is the Correct Answer
robots.txt provides a way to politely request that well-behaved bots reduce their crawling speed. The
Crawl-delay directive can be used to specify a delay (in seconds) between successive requests.
This approach directly addresses the performance issue by reducing the load caused by the bots
without completely blocking them or requiring complex WAF configurations.
CASP+ Relevance: This solution aligns with the CASP+ focus on understanding and applying web
application security best practices, managing risks associated with web traffic, and choosing
appropriate controls based on specific scenarios.
How it works (elaboration based on web standards and security practices)
robots.txt: This file is placed in the root directory of a website.
Crawl-delay directive: Crawl-delay: 10 would suggest a 10-second delay between requests.
Respectful Bots: Well-behaved crawlers honor robots.txt directives. Some, such as Bingbot, support Crawl-delay directly, while Googlebot ignores the directive and has its crawl rate managed through Google Search Console instead.
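A minimal robots.txt illustrating the directive (the ten-second delay is an assumed example value):
User-agent: *
Crawl-delay: 10
This asks compliant crawlers to wait ten seconds between successive requests, directly reducing the load the bots place on the server.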
In conclusion, updating the robots.txt file to slow down the crawling speed is the best solution in this
scenario because it directly addresses the issue of aggressive bot traffic causing performance
problems without blocking legitimate bots or requiring significant configuration changes. It is a
targeted and appropriate solution aligned with web security principles and CASP+ objectives.
6.Operational technology often relies upon aging command, control, and telemetry subsystems that were created with the design assumption of:
A. operating in an isolated/disconnected system.
B. communicating over distributed environments.
C. untrustworthy users and systems being present.
D. an available Ethernet/IP network stack for flexibility.
E. anticipated eavesdropping from malicious actors.
Answer: A
Explanation:
Legacy operational technology was typically designed for isolated, air-gapped environments, under the assumption that no untrusted users or networks would ever reach it. Security was provided by physical isolation rather than being built into the protocols themselves, which is why connecting these aging subsystems to modern networks introduces significant risk.
Reference: NIST Special Publication 800-82: Guide to Industrial Control Systems (ICS) Security
7.A security architect is establishing requirements to design resilience in an enterprise system that will be extended to other physical locations.
The system must:
• Survive one environmental catastrophe
• Be recoverable within 24 hours of a critical loss of availability
• Be resilient to active exploitation of one site-to-site VPN solution
Which of the following should the security architect implement to meet these requirements?
A. Load-balance connection attempts and data ingress at internet gateways
B. Allocate fully redundant and geographically distributed standby sites.
C. Employ layering of routers from diverse vendors
D. Lease space to establish cold sites throughout other countries
E. Use orchestration to procure, provision, and transfer application workloads to cloud services
F. Implement full weekly backups to be stored off-site for each of the company's sites
Answer: B
Explanation:
To design resilience in an enterprise system that can survive environmental catastrophes, recover
within 24 hours, and be resilient to active exploitation, the best strategy is to allocate fully redundant
and geographically distributed standby sites.
Here’s why:
Geographical Redundancy: Having geographically distributed standby sites ensures that if one site is
affected by an environmental catastrophe, the other sites can take over, providing continuity of
operations.
Full Redundancy: Fully redundant sites mean that all critical systems and data are replicated,
enabling quick recovery in the event of a critical loss of availability.
Resilience to Exploitation: Distributing resources across multiple sites reduces the risk of a single
point of failure and increases resilience against targeted attacks.
Reference: CompTIA Security+ SY0-601 Study Guide by Mike Chapple and David Seidl
NIST Special Publication 800-34: Contingency Planning Guide for Federal Information Systems
ISO/IEC 27031:2011 - Guidelines for Information and Communication Technology Readiness for
Business Continuity
8.SIMULATION
You are a security analyst tasked with interpreting an Nmap scan output from the company's privileged
network.
The company’s hardening guidelines indicate the following:
There should be one primary server or service per device.
Only default ports should be used.
Non-secure protocols should be disabled.
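For example, a hypothetical scan result such as the following would violate the guidelines:
PORT    STATE SERVICE
443/tcp open  https
23/tcp  open  telnet
Here the device would be recorded as a web server on its default port 443, and port 23 should be closed because telnet is a non-secure, cleartext protocol.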
INSTRUCTIONS
Using the Nmap output, identify the devices on the network and their roles, and any open ports that
should be closed.
For each device found by Nmap, add a device entry to the Devices Discovered list, with the following
information:
The IP address of the device
The primary server or service of the device (Note that each IP should be associated with one
service/port only)
The protocol(s) that should be disabled based on the hardening guidelines (Note that multiple ports
may need to be closed to comply with the hardening guidelines)
If at any time you would like to bring back the initial state of the simulation, please click the Reset All
button.
Answer:
9.Within a SCADA environment, a business needs access to the historian server in order to gather metrics about the functionality of the environment.
Which of the following actions should be taken to address this requirement?
A. Isolating the historian server for connections only from the SCADA environment
B. Publishing the C$ share from SCADA to the enterprise
C. Deploying a screened subnet between IT and SCADA
D. Adding the business workstations to the SCADA domain
Answer: A
Explanation:
The best action to address the requirement of accessing the historian server within a SCADA system
is to isolate the historian server for connections only from the SCADA environment. Here’s why:
Security and Isolation: Isolating the historian server ensures that only authorized devices within the
SCADA environment can connect to it. This minimizes the attack surface and protects sensitive data
from unauthorized access.
Access Control: By restricting access to the historian server to only SCADA devices, the organization
can better control and monitor interactions, ensuring that only legitimate queries and data retrievals
occur.
Best Practices for Critical Infrastructure: Following the principle of least privilege, isolating critical
components like the historian server is a standard practice in securing SCADA systems, reducing the
risk of cyberattacks.
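As a minimal host-firewall sketch of that isolation (the SCADA subnet and historian database port below are illustrative assumptions):
# permit established sessions, accept new connections only from the SCADA subnet, drop everything else
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -s 10.10.20.0/24 -p tcp --dport 5432 -j ACCEPT
iptables -A INPUT -j DROP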
Reference: CompTIA Security+ SY0-601 Study Guide by Mike Chapple and David Seidl
NIST Special Publication 800-82: Guide to Industrial Control Systems (ICS) Security
ISA/IEC 62443 Standards: Security for Industrial Automation and Control Systems
10.During a gap assessment, an organization notes that BYOD usage is a significant risk. The
organization implemented administrative policies prohibiting BYOD usage. However, the organization
has not implemented technical controls to prevent the unauthorized use of BYOD assets when
accessing the organization's resources.
Which of the following solutions should the organization implement to best reduce the risk of BYOD
devices? (Select two).
A. Cloud IAM to enforce the use of token-based MFA
B. Conditional access, to enforce user-to-device binding
C. NAC, to enforce device configuration requirements
D. PAM, to enforce local password policies
E. SD-WAN, to enforce web content filtering through external proxies
F. DLP, to enforce data protection capabilities
Answer: B, C
Explanation:
To reduce the risk of unauthorized BYOD (Bring Your Own Device) usage, the organization should
implement Conditional Access and Network Access Control (NAC).
Why Conditional Access and NAC?
Conditional Access:
User-to-Device Binding: Conditional access policies can enforce that only registered and compliant
devices are allowed to access corporate resources.
Context-Aware Security: Enforces access controls based on the context of the access attempt, such as user identity, device compliance, location, and more.
Network Access Control (NAC):
Device Configuration Requirements: NAC ensures that only devices meeting specific security
configurations are allowed to connect to the network.
Access Control: Provides granular control over network access, ensuring that BYOD devices comply
with security policies before gaining access.
Other options, while useful, do not address the specific need to control and secure BYOD devices
effectively:
A. Cloud IAM to enforce token-based MFA: Enhances authentication security but does not control
device compliance.
D. PAM to enforce local password policies: Focuses on privileged account management, not BYOD
control.
E. SD-WAN to enforce web content filtering: Enhances network performance and security but does
not enforce BYOD device compliance.
F. DLP to enforce data protection capabilities: Protects data but does not control BYOD device
access and compliance.
Reference: CompTIA SecurityX Study Guide
"Conditional Access Policies," Microsoft Documentation
"Network Access Control (NAC)," Cisco Documentation
11.An organization is implementing Zero Trust architecture. A systems administrator must increase
the effectiveness of the organization's context-aware access system.
Which of the following is the best way to improve the effectiveness of the system?
A. Secure zone architecture
B. Always-on VPN
C. Accurate asset inventory
D. Microsegmentation
Answer: D
Explanation:
Microsegmentation is a critical strategy within Zero Trust architecture that enhances context-aware
access systems by dividing the network into smaller, isolated segments. This reduces the attack
surface and limits lateral movement of attackers within the network. It ensures that even if one
segment is compromised, the attacker cannot easily access other segments. This granular approach
to network security is essential for enforcing strict access controls and monitoring within Zero Trust
environments.
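A minimal sketch of one such segment boundary on a Linux gateway (the subnets and the database port are assumed values):
# the application segment may reach the database segment only on the database port; all other east-west traffic is dropped
iptables -A FORWARD -s 10.0.1.0/24 -d 10.0.2.0/24 -p tcp --dport 5432 -j ACCEPT
iptables -A FORWARD -s 10.0.1.0/24 -d 10.0.2.0/24 -j DROP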
Reference: CompTIA SecurityX Study Guide, Chapter on Zero Trust Security, Section on
Microsegmentation and Network Segmentation.
12.After an incident response exercise, a security administrator reviews the following table:
Which of the following should the administrator do to best support rapid incident response in the
future?
A. Automate alerting to IT support for phone system outages.
B. Enable dashboards for service status monitoring
C. Send emails for failed log-in attempts on the public website
D. Configure automated isolation of human resources systems
Answer: B
Explanation:
Enabling dashboards for service status monitoring is the best action to support rapid incident
response. The table shows various services with different risk, criticality, and alert severity ratings. To
ensure timely and effective incident response, real-time visibility into the status of these services is
crucial.
Why Dashboards for Service Status Monitoring?
Real-time Visibility: Dashboards provide an at-a-glance view of the current status of all critical services, enabling rapid detection of issues.
Centralized Monitoring: A single platform to monitor the status of multiple services helps streamline
incident response efforts.
Proactive Alerting: Dashboards can be configured to show alerts and anomalies immediately,
ensuring that incidents are addressed as soon as they arise.
Improved Decision Making: Real-time data helps incident response teams make informed decisions
quickly, reducing downtime and mitigating impact.
Other options, while useful, do not offer the same level of comprehensive, real-time visibility and
proactive alerting:
A. Automate alerting to IT support for phone system outages: This addresses one service but does
not provide a holistic view.
C. Send emails for failed log-in attempts on the public website: This is a specific alert for one type of
issue and does not cover all services.
D. Configure automated isolation of human resources systems: This is a reactive measure for a
specific service and does not provide real-time status monitoring.
Reference: CompTIA SecurityX Study Guide
NIST Special Publication 800-61 Revision 2, "Computer Security Incident Handling Guide"
"Best Practices for Implementing Dashboards," Gartner Research
13.Users are experiencing a variety of issues when trying to access corporate resources. Examples include:
• Connectivity issues between local computers and file servers within branch offices
• Inability to download corporate applications on mobile endpoints while working remotely
• Certificate errors when accessing internal web applications
Which of the following actions are the most relevant when troubleshooting the reported issues?
(Select two).
A. Review VPN throughput
B. Check IPS rules
C. Restore static content on the CDN.
D. Enable secure authentication using NAC
E. Implement advanced WAF rules.
F. Validate MDM asset compliance
Answer: A, F
Explanation:
The reported issues suggest problems related to network connectivity, remote access, and certificate
management:
A. Review VPN throughput: Connectivity issues and the inability to download applications while
working remotely may be due to VPN bandwidth or performance issues. Reviewing and optimizing
VPN throughput can help resolve these problems by ensuring that remote users have adequate
bandwidth for accessing corporate resources.
F. Validate MDM asset compliance: Mobile Device Management (MDM) systems ensure that mobile
endpoints comply with corporate security policies. Validating MDM compliance can help address
issues related to the inability to download applications and certificate errors, as non-compliant devices
might be blocked from accessing certain resources.
B. Check IPS rules: While important for security, IPS rules are less likely to directly address the
connectivity and certificate issues described.
C. Restore static content on the CDN: This action is related to content delivery but does not address
VPN or certificate-related issues.
D. Enable secure authentication using NAC: Network Access Control (NAC) enhances security but
does not directly address the specific issues described.
E. Implement advanced WAF rules: Web Application Firewalls protect web applications but do not
address VPN throughput or mobile device compliance.
Reference: CompTIA Security+ Study Guide
NIST SP 800-77, "Guide to IPsec VPNs"
CIS Controls, "Control 11: Secure Configuration for Network Devices"
14.During a security assessment using an EDR solution, a security engineer generates the following
report about the assets in the system:
After five days, the EDR console reports an infection on the host OWIN23 by a remote access Trojan.
Which of the following is the most probable cause of the infection?
A. OWIN23 uses a legacy version of Windows that is not supported by the EDR.
B. LN002 was not supported by the EDR solution and propagates the RAT.
C. The EDR has an unknown vulnerability that was exploited by the attacker.
D. OWIN29 spreads the malware through other hosts in the network.
Answer: A
Explanation:
OWIN23 is running Windows 7, which is a legacy operating system. Many EDR solutions no longer
provide full support for outdated operating systems like Windows 7, which has reached its end of life
and is no longer receiving security updates from Microsoft. This makes such systems more vulnerable
to infections and attacks, including remote access Trojans (RATs).
A. OWIN23 uses a legacy version of Windows that is not supported by the EDR: This is the most
probable cause because the lack of support means that the EDR solution may not fully protect or
monitor this system, making it an easy target for infections.
B. LN002 was not supported by the EDR solution and propagates the RAT: While LN002 is
unmanaged, it is less likely to propagate the RAT to OWIN23 directly without an established vector.
C. The EDR has an unknown vulnerability that was exploited by the attacker: This is possible but less
likely than the lack of support for an outdated OS.
D. OWIN29 spreads the malware through other hosts in the network: While this could happen, the
status indicates OWIN29 is in a bypass mode, which might limit its interactions but does not directly
explain the infection on OWIN23.
Reference: CompTIA Security+ Study Guide
NIST SP 800-53, "Security and Privacy Controls for Information Systems and Organizations"
Microsoft's Windows 7 End of Support documentation
16.An organization determines existing business continuity practices are inadequate to support critical
internal process dependencies during a contingency event. A compliance analyst wants the Chief
Information Officer (CIO) to identify the level of residual risk that is acceptable to guide remediation
activities.
Which of the following does the CIO need to clarify?
A. Mitigation
B. Impact
C. Likelihood
D. Appetite
Answer: D
Explanation:
Comprehensive and Detailed Step by Step
Understanding Residual Risk:
Residual risk is the amount of risk remaining after controls and mitigations have been applied. Risk
appetite defines the level of risk an organization is willing to accept before taking additional actions.
Why Option D is Correct:
The CIO must clarify the organization’s "Risk Appetite" to determine how much residual risk is
acceptable.
If risk exceeds the appetite, additional security measures need to be implemented.
This aligns with ISO 31000 and NIST Risk Management Framework (RMF).
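As an illustrative calculation with assumed numbers: if a contingency scenario carries an inherent risk score of 8 out of 10 and existing controls reduce exposure by 50 percent, the residual risk is 8 x 0.5 = 4. Against a stated appetite of 5, that residual risk is acceptable; against an appetite of 3, remediation must continue. Without the CIO clarifying the appetite figure, the comparison cannot be made.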
Why Other Options Are Incorrect:
A (Mitigation): Mitigation refers to reducing risk, but it doesn’t define the acceptable level of residual
risk.
B (Impact): Impact assessment measures potential damage, but it does not determine what is
acceptable.
C (Likelihood): Likelihood is the probability of risk occurring, but not what level is acceptable.
Reference: CompTIA SecurityX CAS-005 Official Study Guide: Risk Management & Business
Continuity
NIST SP 800-37: Risk Management Framework
ISO 27005: Risk Tolerance & Acceptance
18.A security analyst detected unusual network traffic related to program updating processes. The
analyst collected artifacts from compromised user workstations. The discovered artifacts were binary
files with the same names as existing, valid binaries but with different hashes.
Which of the following solutions would most likely prevent this situation from reoccurring?
A. Improving patching processes
B. Implementing digital signatures
C. Performing manual updates via USB ports
D. Allowing only files from internal sources
Answer: B
Explanation:
Implementing digital signatures ensures the integrity and authenticity of software binaries. When a
binary is digitally signed, any tampering with the file (e.g., replacing it with a malicious version) would
invalidate the signature. This allows systems to verify the origin and integrity of binaries before
execution, preventing the execution of unauthorized or compromised binaries.
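As a minimal sketch of such verification, assuming GnuPG with the publisher's signing key already imported and illustrative file names:
# verify the detached signature before allowing the update binary to execute
gpg --verify updater.bin.sig updater.bin && ./updater.bin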
A. Improving patching processes: While important, this does not directly address the issue of verifying
the integrity of binaries.
B. Implementing digital signatures: This ensures that only valid, untampered binaries are executed,
preventing attackers from substituting legitimate binaries with malicious ones.
C. Performing manual updates via USB ports: This is not practical and does not scale well, especially
in large environments.
D. Allowing only files from internal sources: This reduces the risk but does not provide a mechanism
to verify the integrity of binaries.
Reference: CompTIA Security+ Study Guide
NIST SP 800-57, "Recommendation for Key Management"
OWASP (Open Web Application Security Project) guidelines on code signing
19.A company’s internal network is experiencing a security breach, and the threat actor is still active.
Due to business requirements, users in this environment are allowed to utilize multiple machines at
the same time.
Given the following log snippet:
Which of the following accounts should a security analyst disable to best contain the incident without
impacting valid users?
A. user-a
B. user-b
C. user-c
D. user-d
Answer: C
Explanation:
User user-c is showing anomalous behavior across multiple machines, attempting to run
administrative tools such as cmd.exe and appwiz.CPL, which are commonly used by attackers for
system modification. The activity pattern suggests a lateral movement attempt, potentially indicating a
compromised account.
user-a (A) and user-b (B) attempted to run applications but only on one machine, suggesting less
likelihood of compromise.
user-d (D) was blocked running cmd.com, but user-c’s pattern is more consistent with an attack
technique.
Reference: CompTIA SecurityX (CAS-005) Exam Objectives - Domain 4.0 (Security Operations),
Section on Threat Intelligence and Indicators of Attack
20.SIMULATION
A security engineer needs to review the configurations of several devices on the network to meet the
following requirements:
• The PostgreSQL server must only allow connectivity in the 10.1.2.0/24 subnet.
• The SSH daemon on the database server must be configured to listen to port 4022.
• The SSH daemon must only accept connections from a single workstation.
• All host-based firewalls must be disabled on all workstations.
• All devices must have the latest updates from within the past eight days.
• All HDDs must be configured to secure data at rest.
• Cleartext services are not allowed.
• All devices must be hardened when possible.
INSTRUCTIONS
Click on the various workstations and network devices to review the posture assessment results.
Remediate any possible issues or indicate that no issue is found.
Click on Server A to review output data. Select commands in the appropriate tab to remediate
connectivity problems to the PostgreSQL database via SSH.
WAP A
PC A
Laptop A
Switch A
Switch B
Laptop B
PC B
PC C
Server A
Answer:
WAP A: No issue found. The WAP A is configured correctly and meets the requirements.
PC A: Disable host-based firewall
This option will turn off the host-based firewall and allow all traffic to pass through, which complies
with the requirement and also improves the connectivity of PC A to other devices on the network.
However, it also reduces the security of PC A and makes it more vulnerable to attacks. Therefore, it is
recommended to use other security measures, such as antivirus, encryption, and password
complexity, to protect PC A from potential threats.
Laptop A: Patch management
This option will install the updates that are available for Laptop A and ensure that it has the most
recent security patches and bug fixes. This will comply with the requirement and also improve the
performance and stability of Laptop A. However, this option may also require a reboot of Laptop A
and some downtime during the update
process. Therefore, it is recommended to backup any important data and close any open applications
before applying the updates.
Switch A: No issue found. The Switch A is configured correctly and meets the requirements.
Switch B: No issue found. The Switch B is configured correctly and meets the requirements.
Laptop B: Disable unneeded services
This option will stop and disable the telnet service that is using port 23 on Laptop B. Telnet is a
cleartext service that transmits data in plain text over the network, which exposes it to eavesdropping,
interception, and modification by attackers. By disabling the telnet service, you will comply with the
requirement and also improve the security of Laptop B. However, this option may also affect the
functionality of Laptop B if it needs to use telnet for remote administration or other purposes.
Therefore, it is recommended to use a secure alternative to telnet, such as SSH or HTTPS, that
encrypts the data in transit.
PC B: Enable disk encryption
This option will encrypt the HDD of PC B using a tool such as BitLocker or VeraCrypt. Disk encryption
is a technique that protects data at rest by converting it into an unreadable format that can only be
decrypted with a valid key or password. By enabling disk encryption, you will comply with the
requirement and also improve the confidentiality and integrity of PC B’s data. However, this option
may also affect the performance and usability of PC B, as it requires additional processing time and
user authentication to access the encrypted data. Therefore, it is recommended to backup any
important data and choose a strong key or password before encrypting the disk.
PC C: Disable unneeded services
This option will stop and disable the SSH daemon that is using port 22 on PC C. SSH is a secure
service that allows remote access and command execution over an encrypted channel. However, port
22 is the default and well-known port for SSH, which makes it a common target for brute-force attacks
and port scanning. By disabling the SSH daemon on port 22, you will comply with the requirement
and also improve the security of PC C. However, this option may also affect the functionality of PC C
if it needs to use SSH for remote administration or other purposes. Therefore, it is recommended to
enable the SSH daemon on a different port, such as 4022, by editing the configuration file using the
following command:
sudo nano /etc/ssh/sshd_config
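Within that file, the relevant directives might look like the following (the workstation's user and address are assumed examples):
Port 4022
AllowUsers admin@10.1.2.50
The SSH service must then be restarted for the change to take effect (for example, sudo systemctl restart sshd).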
Server A: Need to select the following:
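Although the selectable commands are not reproduced here, the resulting PostgreSQL configuration would conceptually resemble the following sketch (the server address is an assumed example; the subnet comes from the stated requirements):
# postgresql.conf: bind to the server's address
listen_addresses = '10.1.2.10'
# pg_hba.conf: accept only the 10.1.2.0/24 subnet with encrypted password authentication
host    all    all    10.1.2.0/24    scram-sha-256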
21.A software company deployed a new application based on its internal code repository. Several
customers are reporting anti-malware alerts on workstations used to test the application.
Which of the following is the most likely cause of the alerts?
A. Misconfigured code commit
B. Unsecure bundled libraries
C. Invalid code signing certificate
D. Data leakage
Answer: B
Explanation:
The most likely cause of the anti-malware alerts on customer workstations is unsecure bundled
libraries. When developing and deploying new applications, it is common for developers to use third-
party libraries. If these libraries are not properly vetted for security, they can introduce vulnerabilities
or malicious code.
Why Unsecure Bundled Libraries?
Third-Party Risks: Using libraries that are not secure can lead to malware infections if the libraries
contain malicious code or vulnerabilities.
Code Dependencies: Libraries may have dependencies that are not secure, leading to potential
security risks.
Common Issue: This is a frequent issue in software development where libraries are used for
convenience but not properly vetted for security.
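One way to vet bundled libraries before release is an automated dependency audit in the build pipeline; a minimal sketch, assuming a Python project and the pip-audit tool:
# fail the build if any declared dependency has a known vulnerability
pip-audit -r requirements.txt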
Other options, while relevant, are less likely to cause widespread anti-malware alerts:
A. Misconfigured code commit: Could lead to issues but less likely to trigger anti-malware alerts.
C. Invalid code signing certificate: Would lead to trust issues but not typically anti-malware alerts.
D. Data leakage: Relevant for privacy concerns but not directly related to anti-malware alerts.
Reference: CompTIA SecurityX Study Guide
"Securing Open Source Libraries," OWASP
"Managing Third-Party Software Security Risks," Gartner Research
23.An audit finding reveals that a legacy platform has not retained logs for more than 30 days. The
platform has been segmented due to its interoperability with newer technology. As a temporary
solution, the IT department changed the log retention to 120 days.
Which of the following should the security engineer do to ensure the logs are being properly retained?
A. Configure a scheduled task nightly to save the logs
B. Configure event-based triggers to export the logs at a threshold.
C. Configure the SIEM to aggregate the logs
D. Configure a Python script to move the logs into a SQL database.
Answer: C
Explanation:
To ensure that logs from a legacy platform are properly retained beyond the default retention period,
configuring the SIEM to aggregate the logs is the best approach. SIEM solutions are designed to
collect, aggregate, and store logs from various sources, providing centralized log management and
retention. This setup ensures that logs are retained according to policy and can be easily accessed
for analysis and compliance purposes.
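As a minimal sketch of such aggregation, assuming the legacy platform can emit syslog and using a placeholder SIEM hostname:
# /etc/rsyslog.d/50-siem.conf: forward all log messages to the SIEM collector over TCP
*.* @@siem.example.com:514
The SIEM then applies its own 120-day (or longer) retention policy centrally, independent of the platform's local limits.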
Reference: CompTIA SecurityX Study Guide: Discusses the role of SIEM in log management and retention.
NIST Special Publication 800-92, "Guide to Computer Security Log Management": Recommends the use of centralized log management solutions, such as SIEM, for effective log retention and analysis.
"Security Information and Event Management (SIEM) Implementation" by David Miller: Covers best practices for configuring SIEM systems to aggregate and retain logs from various sources.
24.A company plans to implement a research facility with intellectual property data that should be
protected.
The following is the security diagram proposed by the security architect:
25.A systems administrator works with engineers to process and address vulnerabilities as a result of
continuous scanning activities. The primary challenge faced by the administrator is differentiating
between valid and invalid findings.
Which of the following would the systems administrator most likely verify is properly configured?
A. Report retention time
B. Scanning credentials
C. Exploit definitions
D. Testing cadence
Answer: B
Explanation:
When differentiating between valid and invalid findings from vulnerability scans, the systems
administrator should verify that the scanning credentials are properly configured. Valid credentials
ensure that the scanner can authenticate and access the systems being evaluated, providing
accurate and comprehensive results. Without proper credentials, scans may miss vulnerabilities or
generate false positives, making it difficult to prioritize and address the findings effectively.
Reference: CompTIA SecurityX Study Guide: Highlights the importance of using valid credentials for
accurate vulnerability scanning.
"Vulnerability Management" by Park Foreman: Discusses the role of scanning credentials in obtaining
accurate scan results and minimizing false positives.
"The Art of Network Security Monitoring" by Richard Bejtlich: Covers best practices for configuring
and using vulnerability scanning tools, including the need for valid credentials.