System Design Assignment 2

The document discusses various technical concepts including redundancy setups (active-active vs active-passive), consistency models, event-driven architecture challenges, authentication vs authorization, types of software testing, automated vs manual testing, CI/CD pipelines, security testing methods, and the differences between penetration testing and vulnerability scanning. Each section outlines key definitions, challenges, strategies, and objectives related to the respective topics. The information serves as a comprehensive overview for understanding these critical aspects of software development and security.

Uploaded by Shivam Singh

1. What is the difference between active-active and active-passive redundancy setups, and how do they impact system availability?

Solution:

Active-Active vs. Active-Passive

Operation
 Active-Active: Both systems are active, handling and balancing the load.
 Active-Passive: Only the primary system operates; the secondary system is on standby.

Failover
 Active-Active: Seamless fail-over with no downtime; the secondary system immediately takes over if needed.
 Active-Passive: Fail-over leads to brief downtime while the backup system is activated and synchronized.

Resource Utilization
 Active-Active: Both systems are actively engaged, maximizing resources.
 Active-Passive: Only the primary system utilizes full resources; the secondary is idle during normal operations.

System Availability
 Active-Active: Very high availability, as both systems handle traffic continuously.
 Active-Passive: High availability, but some brief downtime during fail-over until the secondary system activates.

Performance
 Active-Active: Performance is enhanced through load balancing between both systems.
 Active-Passive: Performance is limited, as only one system handles requests at a time.

Cost
 Active-Active: Higher, due to the need to keep both systems running and synchronized.
 Active-Passive: Generally lower, since only the primary system operates under normal conditions.

Complexity
 Active-Active: More complex to configure, monitor, and maintain, with both systems working.
 Active-Passive: Simpler to maintain, as the backup is idle until needed, reducing operational complexity.

Recovery Speed
 Active-Active: Instant recovery without disruption, as there is no need for fail-over.
 Active-Passive: Recovery is slower due to the process of activating and synchronizing the backup system.
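The two fail-over behaviors can be sketched in a few lines of Python. This is a minimal illustration, not a real load balancer: the Server class, its health flag, and the routing functions are all hypothetical.

```python
import itertools

class Server:
    """Hypothetical server with a simple health flag (illustration only)."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def handle(self, request):
        return f"{self.name} handled {request}"

def route_active_passive(primary, standby, request):
    # Traffic normally goes to the primary; the standby takes over only
    # when a health check fails (real systems add activation/sync delay here).
    server = primary if primary.healthy else standby
    return server.handle(request)

def make_active_active_router(servers):
    # Both servers are live; a load balancer spreads requests across them.
    cycle = itertools.cycle(servers)
    return lambda request: next(cycle).handle(request)

primary, standby = Server("primary"), Server("standby")
print(route_active_passive(primary, standby, "req-1"))  # primary handles req-1
primary.healthy = False
print(route_active_passive(primary, standby, "req-2"))  # standby takes over

route = make_active_active_router([Server("a"), Server("b")])
print(route("req-3"), route("req-4"))  # requests alternate between a and b
```

Note how the active-active router uses every server on every rotation (higher utilization), while the active-passive router touches the standby only after the primary's health flag drops.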
2. Explain the Types of Consistency Models, Challenges of Maintaining
Consistency and Strategies for Managing Consistency.

Solution:

 Types of Consistency Models

1. Strong Consistency: Guarantees that after a write, all reads will return the latest
written value. This ensures immediate synchronization across all replicas but may
introduce delays due to synchronization mechanisms.
2. Session Consistency: Guarantees that a user will always see their most recent
write during a session. However, once the session ends, consistency across users
may not be guaranteed.
3. Weak Consistency: No guarantee of consistency between replicas, meaning data
can be inconsistent temporarily. Often used in systems that value responsiveness
over perfect accuracy.

 Challenges of Maintaining Consistency

1. Latency: Achieving consistency across distributed systems introduces delays, especially when replicas are spread across various geographical locations. This latency can impact user experience and system performance.
2. Concurrency: Simultaneous updates to the same data by multiple users can
result in conflicts, requiring mechanisms to ensure that data remains consistent
despite conflicting operations.
3. CAP Theorem: The theorem states that, in the presence of a network partition, a distributed system can guarantee only two of Consistency, Availability, and Partition Tolerance. This presents a fundamental challenge for designing systems that need to balance these three properties.

 Strategies for Managing Consistency

1. Quorum-Based Replication: In quorum-based systems, a majority of replicas must confirm a write operation to ensure consistency, preventing any node from diverging drastically from others. This balances consistency and availability.
2. Leader-Based Replication: One node is designated as the leader and handles all
write operations, ensuring consistency by serializing updates. Other nodes act as
followers, replicating the leader’s updates. This model is simple but introduces a
single point of failure.
3. Two-Phase Commit (2PC): A consensus protocol ensuring that all participants
in a distributed transaction either commit or abort an operation, ensuring all
nodes in the transaction agree. This guarantees consistency but can cause delays
and vulnerabilities in case of failure.
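The quorum idea above rests on a simple overlap rule: with n replicas, a write acknowledged by w of them and a read that queries r of them are guaranteed to intersect whenever w + r > n. The sketch below illustrates this with plain dictionaries as stand-in replicas; the function names and version-number scheme are assumptions for illustration, not any particular database's API.

```python
import random

def quorum_write(replicas, w, key, value, version):
    """Acknowledge the write once w of the n replicas have stored it."""
    for i in random.sample(range(len(replicas)), w):
        replicas[i][key] = (value, version)

def quorum_read(replicas, r, key):
    """Query r replicas and return the value carrying the highest version."""
    answers = [replicas[i][key]
               for i in random.sample(range(len(replicas)), r)
               if key in replicas[i]]
    return max(answers, key=lambda v: v[1])[0] if answers else None

# With n=3, w=2, r=2 we have w + r > n, so every read set overlaps every
# write set: at least one queried replica always holds the latest version.
replicas = [{}, {}, {}]
quorum_write(replicas, 2, "k", "old", version=1)
quorum_write(replicas, 2, "k", "new", version=2)
print(quorum_read(replicas, 2, "k"))  # "new", despite the random sampling
```

Even though writes and reads land on randomly chosen replicas, the pigeonhole overlap makes the read deterministic: it can never return the stale value.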
3. What are some common challenges associated with implementing an event-
driven architecture, and how can they be addressed?

Solution:

 Complexity in Event Handling


 Challenge: Managing a large number of events and event handlers can be complex,
especially when systems grow. Different services may generate, process, or consume
events in a variety of formats and systems.
 Solution: Use standardized event formats and serialization protocols (e.g., JSON, Avro) to ensure that events can be easily interpreted across services. Additionally, employing event streaming platforms or message brokers (e.g., Kafka, RabbitMQ) can centralize and streamline event handling.
 Event Ordering and Message Duplication
 Challenge: In distributed systems, events may arrive out of order, or messages may be
duplicated, potentially causing inconsistent processing or errors.
 Solution: Implement event sequencing and deduplication strategies. Event IDs,
timestamps, or sequence numbers can help order events, and idempotency (ensuring
operations can safely run multiple times without side effects) can help handle duplicates.
 Failure Handling and Reliability
 Challenge: If an event is lost or not properly handled due to system failures (e.g.,
network disruptions, service crashes), it can result in inconsistencies across systems.
 Solution: Use reliable messaging systems like Kafka or Amazon SNS/SQS that provide
persistence and allow for retries. Implement dead-letter queues (DLQs) to capture
failed events, which can be reprocessed after resolving the issue.
 Debugging and Monitoring
 Challenge: It’s difficult to trace and debug issues in an event-driven system, particularly
when events flow between multiple services. The asynchronicity of events makes
debugging challenging.
 Solution: Leverage centralized logging (e.g., the ELK Stack), metrics and monitoring (e.g., Prometheus), and distributed tracing (e.g., Jaeger, Zipkin) to track event flows and service interactions. Visualizing the event-processing flow and setting up alerts for failures or performance bottlenecks helps in quick issue identification.
 Event Sourcing and Data Consistency
 Challenge: Storing and managing event data over time, especially when using event sourcing, can lead to issues in data consistency, particularly when events change the state of a system in unpredictable ways.
 Solution: Implement careful event versioning to manage schema changes in events. You
can also use CQRS to separate event handling (commands) from query operations,
ensuring a more manageable state while improving data consistency.
 Scalability and Performance
 Challenge: Event-driven systems often require significant resources to scale, especially
as the number of events and subscribers increases. If poorly designed, this can lead to
performance bottlenecks.
 Solution: Adopt a microservices architecture, where services can independently scale
based on event demands. Utilizing event stream processors (like Apache Kafka Streams)
or serverless platforms (AWS Lambda, Azure Functions) helps to dynamically scale
without managing infrastructure.
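The deduplication and idempotency strategy described above can be sketched as follows. This is a minimal in-memory illustration: the event shape, the processed-ID set, and the balances dictionary are assumptions standing in for a durable store.

```python
# In-memory stand-ins for a processed-event-ID store and application state.
processed_ids = set()
balances = {"acct-1": 0}

def handle_deposit(event):
    """Apply a deposit event at most once, keyed by its event ID."""
    if event["id"] in processed_ids:
        return "skipped duplicate"   # redelivered message: no side effects
    balances[event["account"]] += event["amount"]
    processed_ids.add(event["id"])   # record the ID only after applying
    return "applied"

event = {"id": "evt-42", "account": "acct-1", "amount": 50}
print(handle_deposit(event))   # applied
print(handle_deposit(event))   # skipped duplicate (at-least-once redelivery)
print(balances["acct-1"])      # 50, not 100
```

Because the handler checks the event ID before mutating state, an at-least-once broker can safely redeliver the same message without double-crediting the account.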
4. What are the key differences between authentication and authorization in the
context of securing a web application?

Solution:

 Definition:

 Authentication: The process of verifying the identity of a user, typically through credentials like usernames and passwords, or more advanced methods like biometrics, multi-factor authentication (MFA), or tokens.
 Authorization: The process of granting or denying access to specific resources or
actions based on the user's identity and their permissions or roles within the
application.

 Purpose:

 Authentication: Confirms who the user is. It establishes the identity of a user
attempting to access the system.
 Authorization: Determines what the authenticated user is allowed to do within the
application. It assigns or restricts privileges based on the user’s role or access rights.

 Process:

 Authentication: Involves verifying credentials (e.g., login form input, security questions, or tokens) to validate the user's identity.
 Authorization: After authentication, the system checks the user's permissions, roles,
or policies to decide which resources or actions are allowed.

 Example:

 Authentication: A user logs into their account with a username and password.
 Authorization: The user is granted access to certain pages or actions within the web
application, based on their role, such as an "Admin" being able to view user data,
while a "User" can only access their profile.

 Order:

 Authentication: Always happens before authorization. The user must be authenticated before the application can check their access permissions.
 Authorization: Happens after successful authentication to grant or deny access
based on user permissions.

 Focus:

 Authentication: Focuses on verifying identity and confirming that the user is who
they say they are.
 Authorization: Focuses on what the authenticated user can do or access within the
system.
5. What are the main types of software testing, and how do they differ in terms of
objectives and techniques?

Solution:

 Unit Testing

 Objective: Verify that functions or components of code work correctly in isolation.


 Technique: Automated testing frameworks (e.g., JUnit, NUnit) test small units of
code to ensure correct behavior.

 Integration Testing

 Objective: Ensure that different modules or services work together as expected.


 Technique: Tests the interaction between integrated components or APIs.

 System Testing

 Objective: Validate the complete and integrated application against the specified
requirements.
 Technique: End-to-end tests are executed to ensure that the entire system functions
properly as a whole.

 Regression Testing

 Objective: Ensure that new code changes don’t affect existing functionality.
 Technique: Re-runs previous test cases (often automated) after updates or
enhancements to confirm no regressions.

 Acceptance Testing

 Objective: Verify if the application meets requirements and is ready for deployment.
 Technique: Often conducted by users or stakeholders (User Acceptance Testing),
focusing on functionality, usability, and business scenarios.

 Performance Testing

 Objective: Assess how the application performs under different levels of load,
identifying potential bottlenecks.
 Technique: Load testing and stress testing using tools like Apache JMeter or
LoadRunner to simulate varying traffic.

 Security Testing

 Objective: Identify security vulnerabilities and ensure the system is resistant to potential threats and attacks.
 Technique: Penetration testing, vulnerability scanning, and reviewing the application's security measures.
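As a small illustration of the unit-testing category above, the hypothetical apply_discount function below is tested in isolation with Python's built-in unittest framework (the function itself is an assumption made up for this example):

```python
import unittest

def apply_discount(price, percent):
    """Function under test (hypothetical): apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main(argv=["example"], exit=False)
```

Each test exercises one behavior of one unit, including the error path; an integration or system test would instead combine this unit with the modules that call it.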
6. How does automated testing compare to manual testing, and what are the
advantages and limitations of each?

Solution: Speed and Efficiency

 Automated Testing: Executes faster, especially for repetitive tests such as regression testing. Automation can run tests continuously and requires no human intervention once set up.
 Manual Testing: Slower, since testers must manually execute each test. It is suitable for scenarios that are not easily automated but can be more resource-intensive over time.

 Reusability

 Automated Testing: Once created, automated tests can be reused in multiple testing
cycles. Automated scripts are efficient for testing during development phases.
 Manual Testing: No need to write scripts, but every test is done from scratch. It's not
reusable and requires human effort for each round of testing.

 Accuracy

 Automated Testing: Delivers high precision and consistency, reducing human error in
repetitive tasks. Test results will be the same each time, ensuring accuracy.
 Manual Testing: While capable of detecting nuanced issues, it is prone to human
error, especially when testers become fatigued or overlook details.

 Cost-Effectiveness

 Automated Testing: High initial setup costs, including creating the test scripts and tools. However, it is cost-effective in the long term for projects with repetitive testing needs.
 Manual Testing: Lower upfront cost, as it does not require writing scripts. However, for large applications it becomes costly, as it relies on human effort and time for each test.

 Test Coverage

 Automated Testing: Capable of executing a large number of tests quickly, covering broad functionalities across different environments or platforms.
 Manual Testing: More suited for testing complex, visual, or user-experience-related
scenarios. Coverage can be limited by time and resources.

 Adaptability to Changes

 Automated Testing: Adapting to changes in the application may require time and
effort to update the test scripts. However, once updated, it can easily handle
frequent code changes.
 Manual Testing: Easily adaptable to application changes without needing script
revisions. However, it can be inconsistent and time-consuming with frequent
updates.
7. How can continuous integration and continuous deployment (CI/CD) pipelines
enhance software quality assurance practices?

Solution: Frequent and Consistent Testing

 Continuous Integration (CI) ensures that code is integrated into a shared repository
several times a day. Every time code changes are pushed, automated tests are
triggered to verify the new build's quality.
 Benefit: Ensures consistent testing, identifying defects early and preventing issues
from piling up. This leads to more reliable code at all times.

 Early Detection of Bugs

 CI/CD allows for automated testing to run after every code commit or pull request.
This frequent testing helps in identifying and fixing bugs at an early stage.
 Benefit: Minimizes the time and cost to fix bugs by addressing issues immediately,
improving overall code quality.

 Faster Feedback for Developers

 CI/CD provides fast feedback on each commit, helping developers identify issues in
real-time. This means that if tests fail, developers can address them before pushing
further changes.
 Benefit: Promotes a proactive approach in fixing issues, which increases the
efficiency of development cycles.

 Consistent Deployment Process

 Continuous Deployment (CD) automates the deployment of code changes to production after passing automated tests, making the deployment process repeatable, reliable, and less error-prone.
 Benefit: Reduces manual errors, improves system stability, and ensures that all new
code versions are properly tested before release.

 Automation and Reduced Human Error

 The CI/CD pipeline reduces the need for manual intervention by automating testing,
building, and deployment processes.
 Benefit: This minimizes the risk of human error, ensuring tests are executed
consistently every time, and reduces the time spent on manual testing and
deployment tasks.

 Improved Collaboration and Transparency

 CI/CD promotes collaboration among development, QA, and operations teams by providing a shared platform where everyone can see the status of tests.
 Benefit: Enhances communication, transparency, and coordination, making the QA
process more integrated within the software development lifecycle.
8. What are some common methods used in security testing to identify
vulnerabilities in a software application?

Solution: Penetration Testing

 Description: Simulates attacks on the application to identify vulnerabilities that could be exploited by malicious users. Ethical hackers attempt to breach the system by exploiting weaknesses.
 Purpose: Identify real-world vulnerabilities and assess the effectiveness of current
security defenses.

 Static Application Security Testing (SAST)

 Description: Analyzes source code, binaries, or bytecode of the application without executing it. SAST tools scan for coding errors, security flaws, and potential vulnerabilities within the codebase.
 Purpose: Detect vulnerabilities early in the development process to prevent security
flaws in the final product.

 Dynamic Application Security Testing (DAST)

 Description: Tests the application while it is running by simulating attacks such as input validation exploits, SQL injection, cross-site scripting (XSS), and others.
 Purpose: Assess security weaknesses in real-time, including those related to external
interfaces or execution environment vulnerabilities.

 Code Review

 Description: Security experts conduct manual or semi-automated reviews of the application's source code to identify potential security issues.
 Purpose: Identify flaws that automated tools might miss, focusing on the code’s logic
and the developer's implementation of security practices.

 Black Box Testing

 Description: Involves testing the software without knowledge of its internal structure or source code. Testers, acting as attackers, treat the application as an external entity to find security flaws.
 Purpose: Mimic real-world attacks where the attacker has no knowledge of the
application's inner workings.

 White Box Testing

 Description: Involves testing the internal workings of the software, including source
code, configuration settings, and user input validation mechanisms. Testers have full
knowledge of the system's design.
 Purpose: Focuses on identifying security vulnerabilities at every level of the application, from code logic to configuration.
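The input-validation flaws that DAST and penetration testing probe for can be demonstrated with a minimal SQL injection sketch using Python's built-in sqlite3 module. The users table and both query functions are assumptions made up for this illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'top-secret')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL string,
    # so the input can change the meaning of the query itself.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # injection matches every row: [('alice',)]
print(find_user_safe(payload))    # safe query matches nothing: []
```

A DAST tool would discover the first function by firing payloads like this at a running application and observing the anomalous response; SAST or code review would flag the string interpolation in the source itself.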
9. How does penetration testing differ from vulnerability scanning, and what role
does each play in a comprehensive security testing strategy?

Solution:

 Objective

 Penetration Testing: The main goal is to simulate real-world cyberattacks to identify and exploit vulnerabilities within the system to gain unauthorized access. Penetration testing actively attempts to exploit security weaknesses to understand the potential impact of an attack.
 Vulnerability Scanning: The focus is on scanning an application or network to
identify known vulnerabilities and weaknesses, such as outdated software versions
or missing security patches. It does not attempt to exploit or penetrate the system.

 Approach

 Penetration Testing: Conducted manually or with the help of tools by ethical hackers
(pen testers) who imitate an attacker’s behavior. This test often goes beyond known
vulnerabilities, exploring unanticipated attack vectors.
 Vulnerability Scanning: Automated and routine. Tools like Nessus continuously scan
the system or network for known issues based on a database of vulnerabilities.

 Depth of Testing

 Penetration Testing: Comprehensive and deeper, covering the real-world potential impact of vulnerabilities and actively attempting to exploit them. The tester tries to escalate privileges or exfiltrate data, simulating a malicious attack.
 Vulnerability Scanning: More surface-level. It scans primarily for common, known security flaws but typically does not test the system's actual response to the discovered vulnerabilities (i.e., it does not exploit the flaws).

 Frequency

 Penetration Testing: Performed periodically, usually at critical milestones such as after major updates or before a software release. It is often conducted annually or quarterly.
 Vulnerability Scanning: Done more frequently, such as weekly or daily, especially in
continuous security practices. It is often a regular part of maintaining a secure
environment.

 Role in Comprehensive Security Testing Strategy

 Penetration Testing: Acts as a “red team” approach, offering deeper insights into the
potential real-world impact of exploiting vulnerabilities and helping to identify new
or unique attack vectors that automated tools might not detect.
 Vulnerability Scanning: Serves as an ongoing monitoring tool, helping organizations
stay on top of known security flaws and outdated systems. It is excellent for
continuous security assessment but often needs to be complemented by penetration
testing to understand the real impact of identified weaknesses.
10. What are the key principles of Structured Analysis and Structured Design
(SA/SD), and how do they contribute to effective system design?

Solution:

 Separation of Data and Processes:

 This principle involves clearly distinguishing data from the processes that operate on it.
Data structures are defined independently of the functions.
 Contribution: It simplifies the system, improving clarity, maintainability, and scalability.
 Decomposition:
 The system is broken down into smaller, manageable components, starting from high-
level functions and gradually detailed into sub-functions.
 Contribution: Allows for simpler development and troubleshooting by focusing on
individual, smaller components.
 Modularization:
 Dividing the system into well-defined, independent modules that can be developed and
tested separately.
 Contribution: Enhances reusability, reduces complexity, and facilitates changes without
impacting other parts of the system.
 Top-Down Approach:
 The design begins at a high level and progresses to lower levels of detail.
 Contribution: Ensures alignment with business requirements while providing a broad
understanding before delving into details.
 Focus on Data Flow:
 Systems are designed with an emphasis on how data flows through the processes, using
tools like Data Flow Diagrams (DFDs) to map data movement.
 Contribution: Helps ensure data consistency and correct processing, improving system
efficiency.
 Information Hiding:
 Each module hides its internal workings, exposing only necessary interfaces to other
components.
 Contribution: Reduces dependencies between modules, promoting flexibility and ease
of maintenance.
 Iterative Refinement:
 The design process is iterative, with the system being continuously refined as more
details emerge.
 Contribution: Allows the design to improve progressively and adapt as requirements become clearer.

Contribution to Effective System Design:

 Organized and Maintainable System: SA/SD helps structure systems clearly, making
them easier to develop and modify.
 Modular Approach: By emphasizing modularization, systems become easier to
maintain, test, and upgrade.
 Risk Mitigation: Tools like DFDs allow potential issues to be identified early in the
design phase, preventing future problems.
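The information-hiding principle above can be illustrated with a small hypothetical module: callers depend only on the public interface, so the internal representation can change without breaking them. The BankAccount class and its cents-based storage are assumptions made up for this sketch.

```python
class BankAccount:
    """Hypothetical module: internal state hidden behind a small interface."""

    def __init__(self):
        # Internal detail: balance stored in integer cents. The leading
        # underscore marks it as private by Python convention.
        self._balance_cents = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance_cents += int(round(amount * 100))

    def balance(self):
        # Callers never touch _balance_cents directly, so the representation
        # (cents, Decimal, a database row) can change behind this interface.
        return self._balance_cents / 100

account = BankAccount()
account.deposit(12.50)
account.deposit(7.25)
print(account.balance())  # 19.75
```

Because other modules call only deposit() and balance(), replacing the cents integer with, say, a persistent store would be a purely local change, which is exactly the reduced coupling the principle promises.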
