
KCA UNIVERSITY

BSD 2103: SOFTWARE ENGINEERING PRINCIPLES


LECTURER: GLADYS MANGE
JANUARY - APRIL 2024
ASSIGNMENT 2
GROUP MEMBERS
George Okola Muthami - 22/05768
Wagereka Teddie Muthami - 22/05775
Mwau Rose Kanini - 22/08105
Kyose Lydia Nditi - 22/05615
Kuria Brian Maina - 22/04054
Rotich Stanley Kipkoech - 22/06164
Atieno Derrick Odhiambo - 22/06167

1. Specific quality objectives for the e-learning platform and the key
performance indicators and metrics used.

Objective: to ensure that the learning materials are accurate, engaging and up to date.
The KPIs and metrics used include: content accuracy rate, which measures the percentage of correct
information in the learning materials; content update frequency, which tracks how often content
is reviewed and updated to reflect changes in the subject matter; and user engagement metrics,
which capture the time spent per session and the completion rate of courses.

Objective: to provide a positive learning experience for users.

The KPIs and metrics used include: user feedback rating, the average rating provided by users for
course materials, platform usability and the overall experience; and customer support response
time, the time taken to resolve user queries and issues.

Objective: to measure the effectiveness of the platform in achieving learning objectives.

The KPIs and metrics used: knowledge retention - measures how well learners retain and apply the
knowledge gained from the platform. Skill progression - advances in acquiring specific skills as
demonstrated through assessments or practical applications. Assessment scores - average scores
achieved by learners in quizzes, tests or assignments.

Objective: to ensure that the platform is accessible to all users, regardless of their abilities or
backgrounds.

KPIs and metrics used: user accessibility feedback - feedback from users regarding the
accessibility features and usability of the platform.

Objective: to encourage continued usage and active engagement with the platform.

KPIs and metrics used: user retention rate - percentage of users who continue to use the platform
over a specific period. Frequency of visits - average number of visits per user within a defined
timeframe. Social interaction metrics - participation in discussion forums, collaborative activities
or peer-to-peer interactions.
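To make these KPIs concrete, below is a minimal sketch of how a few of them could be computed from raw platform data. All function and record field names (completion_rate, "completed", and so on) are illustrative assumptions, not part of any particular platform.

# Minimal KPI-calculation sketch; field names are illustrative assumptions.

def completion_rate(enrollments):
    """Percentage of enrolled learners who finished their course."""
    finished = sum(1 for e in enrollments if e["completed"])
    return 100 * finished / len(enrollments)

def retention_rate(users_at_start, users_still_active):
    """Percentage of users who continue to use the platform over a period."""
    return 100 * len(users_still_active & users_at_start) / len(users_at_start)

def average_feedback_rating(ratings):
    """Average rating users give for materials and overall experience."""
    return sum(ratings) / len(ratings)

# Example usage with made-up data:
enrollments = [{"completed": True}, {"completed": False}, {"completed": True}]
print(completion_rate(enrollments))                      # 66.7
print(retention_rate({"a", "b", "c", "d"}, {"a", "c"}))  # 50.0
print(average_feedback_rating([4, 5, 3, 4]))             # 4.0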

2. A comprehensive testing strategy for the e-learning platform.


Functional testing
Testing the registration process for different types of users (students, teachers, administrators) and
their respective account management features.
Validating the creation, modification and deletion of courses, as well as the enrollment and
dropping of students.
Ensuring that all learning materials (videos, text, quizzes, assignments) are accessible and
functional.

Verifying the accuracy of scores, grades and feedback for assessments.

Testing levels used

Unit testing, because it tests the individual components or modules of the e-learning
platform in isolation to ensure they function properly (a minimal example follows this list).

Integration testing, because it verifies the interaction and integration between different
components of the e-learning platform.

System testing, because it validates the end-to-end functionality and performance of the
entire platform as a complete system.
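A minimal example of the unit testing level, using Python's built-in unittest module. The grade_quiz function is a hypothetical platform component invented for illustration; it stands in for any module tested in isolation.

# Minimal unit-testing sketch; grade_quiz is a hypothetical component.
import unittest

def grade_quiz(answers, key):
    """Return the percentage score for a learner's quiz answers."""
    if len(answers) != len(key):
        raise ValueError("answers and key must have the same length")
    correct = sum(1 for a, k in zip(answers, key) if a == k)
    return 100 * correct / len(key)

class GradeQuizTest(unittest.TestCase):
    def test_all_correct(self):
        self.assertEqual(grade_quiz(["a", "b"], ["a", "b"]), 100.0)

    def test_half_correct(self):
        self.assertEqual(grade_quiz(["a", "c"], ["a", "b"]), 50.0)

    def test_length_mismatch_rejected(self):
        with self.assertRaises(ValueError):
            grade_quiz(["a"], ["a", "b"])

if __name__ == "__main__":
    unittest.main()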

Non-functional testing

Testing the system’s performance under varying loads to ensure it can handle peak usage times
without crashing or significant slowdowns (a load-test sketch follows this section).

Validating the user interface’s intuitiveness and ease of use for all types of users.

Ensuring that user data is protected and that there are no vulnerabilities that could be exploited.

Verifying that the platform meets accessibility standards to accommodate users with disabilities.

Testing levels used

Usability testing, because it evaluates the usability, accessibility and user-friendliness of the
e-learning platform from the end user’s perspective.
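A minimal sketch of the load test referenced above, using only the Python standard library. The endpoint URL, worker count and request count are placeholder assumptions; a real project would typically use a dedicated load-testing tool.

# Minimal load-testing sketch: fire concurrent requests at an endpoint and
# report how many succeed and the slowest response time.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://elearning.example.com/health"  # hypothetical endpoint

def timed_request(_):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    return ok, time.monotonic() - start

with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(timed_request, range(500)))

successes = sum(1 for ok, _ in results if ok)
print(f"{successes}/{len(results)} requests succeeded")
print(f"slowest response: {max(t for _, t in results):.2f}s")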

Usability testing

Checking if the user interface is self-explanatory, intuitive and responsive.

Measuring the percentage of tasks users can complete without assistance.

Collecting user feedback on their experiences using the platform.

Security testing

Testing whether the system accepts only valid inputs and rejects invalid ones.

Verifying that only authorized users can access certain features or data.

Checking that users can only perform actions permitted by their roles.

Data encryption - ensuring that sensitive data is stored and transmitted securely.
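A minimal sketch of two of the checks above, input validation and role-based authorization. The email pattern, role names and permission sets are illustrative assumptions.

# Minimal sketch of input validation and role-based authorization checks.
import re

EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$")

def validate_email(value):
    """Accept only inputs that look like an email; reject everything else."""
    if not EMAIL_RE.match(value):
        raise ValueError(f"invalid email: {value!r}")
    return value

ROLE_PERMISSIONS = {  # assumed roles and permissions
    "student": {"view_course", "submit_assignment"},
    "teacher": {"view_course", "grade_assignment", "edit_course"},
    "admin": {"view_course", "edit_course", "manage_users"},
}

def authorize(role, action):
    """Allow an action only if the user's role permits it."""
    if action not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not {action!r}")

validate_email("learner@example.com")     # passes
authorize("teacher", "grade_assignment")  # passes
try:
    authorize("student", "manage_users")  # students lack this permission
except PermissionError as e:
    print(e)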

3. Strategy for conducting usability testing to ensure a positive user experience.

Defining clear objectives - these objectives should outline what aspects of the platform’s usability
one wants to evaluate and improve.

Recruiting diverse participants - recruiting a diverse group of participants ensures the group
represents the target audience of the platform and that the feedback received is comprehensive
and reflective of different user perspectives.

Create realistic scenarios - the scenarios mimic real-world interactions with the platform and
help assess its usability in practical situations. The tasks should cover a range of functionalities,
such as navigating the platform, accessing course materials and submitting assignments.

Choosing appropriate testing methods - the most suitable testing methods are chosen based on
the objectives of the usability testing. Some common methods include: moderated usability
testing, unmoderated remote usability testing and expert reviews.

Observing and collecting data - provide clear instructions and guidance to the participants, then
observe them as they interact with the platform. Collect both quantitative data (such as task
completion rates and time taken) and qualitative feedback (such as comments and observations);
a short sketch of the quantitative side appears after these steps.

Iterate and improve - analyze the data collected from usability testing sessions to identify
common areas of confusion and use the feedback to iterate on the platform’s design and
functionality, making improvements that enhance overall usability.

Collect ongoing user feedback - establish channels for collecting ongoing feedback from
end-users such as feedback forms, suggestion boxes or user forums. Encourage the users to
provide feedback proactively and regularly, addressing their concerns and suggestions to enhance
usability and user experience over time.
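As noted under observing and collecting data, here is a short sketch of the quantitative side: computing task completion rate and average time taken from observation records. The record fields ("completed", "seconds") are illustrative assumptions.

# Minimal sketch of quantitative usability metrics from observation records.
observations = [
    {"task": "enroll in course", "completed": True,  "seconds": 42},
    {"task": "enroll in course", "completed": True,  "seconds": 65},
    {"task": "enroll in course", "completed": False, "seconds": 120},
]

completed = [o for o in observations if o["completed"]]
rate = 100 * len(completed) / len(observations)
avg_time = sum(o["seconds"] for o in completed) / len(completed)

print(f"task completion rate: {rate:.0f}%")         # 67%
print(f"average time on success: {avg_time:.0f}s")  # 54s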

Gathering feedback from end-users

Surveys and questionnaires - include questions about ease of use, satisfaction levels and
suggestions for improvement.

User interviews - conduct one-on-one interviews with end-users to dig deeper into their
experiences and preferences regarding the platform’s usability. These interviews can provide
valuable insight that may not surface through other feedback channels.

Feedback forms within the platform - integrating feedback forms directly within the platform
allows users to submit their comments, suggestions or complaints easily. This real-time feedback
mechanism can capture user sentiments as they engage with the platform.

Analytics data analysis - utilize analytics tools to track user behaviour on the platform, such as
click-through rates and navigation patterns. Analyzing this data can highlight areas of friction or
confusion that require attention.

4. Measures to be taken to perform security testing on the e-learning platform.

Identifying security threats and vulnerabilities - conduct a thorough risk assessment to identify
potential security threats and vulnerabilities specific to the e-learning platform. The threats may
include: data breaches, unauthorized access or denial of service attacks.

Define security testing scope and objectives - define the scope and objectives of security testing,
including the systems, components and functionalities to be tested. Determine whether testing
will focus on specific areas such as network security or data protection.

Select security testing techniques and tools - choose appropriate security testing techniques and
tools to evaluate the e-learning platform’s security posture.

Perform security testing activities - conduct a comprehensive series of security testing activities
based on the selected techniques and tools. These may include web application security testing,
network security testing and data protection testing.

Document and prioritize security findings - document all identified security vulnerabilities,
including their severity, impact and potential exploitability. Also prioritize security findings
based on their risk level, likelihood of exploitation and potential impact on users and the
platform.

Implement security controls and remediation measures - collaborate with developers and system
administrators to implement appropriate security controls to address identified vulnerabilities.
Follow secure coding practices, apply patches and updates and configure security settings to
mitigate security risks effectively.

Conduct regular security audits and reviews - establish a schedule for conducting regular security
audits, reviews and assessments to proactively identify and address emerging security threats and
vulnerabilities.

Provide security awareness training - educate users, administrators and developers about best
practices, common threats and preventive measures through security awareness training
programs.

Potential security threats and vulnerabilities

SQL injection - this involves injecting malicious SQL commands into user inputs to manipulate
the database, which can lead to data breaches and leaks (see the sketch after this list).

Insecure communications - this involves transmitting sensitive data over an insecure channel,
such as HTTP.

Insufficient logging and monitoring - this involves not logging or monitoring system activities,
making it difficult to detect and respond to security incidents.

Lack of security updates - this involves not keeping software up-to-date with the latest security
patches, making it vulnerable to known attacks.

Social engineering attacks - this involves tricking users into revealing sensitive information or
performing unwanted actions through social engineering techniques such as phishing.

Denial of service attacks - this involves overwhelming the system with traffic or requests, making
it unavailable to legitimate users.
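Of these threats, SQL injection is the easiest to demonstrate in code. The sketch below, using Python's built-in sqlite3 module, contrasts an injectable query with a parameterized one; the table, column and payload are illustrative.

# SQL injection demo: string-spliced query vs. parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'student')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# VULNERABLE: user input is spliced directly into the SQL string,
# so the payload changes the query's meaning and matches every row.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())  # leaks all rows

# SAFE: a parameterized query treats the input as data, never as SQL.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # no rows match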

5. Defect management process


Defect identification - involves identifying any deviation from the expected behaviour of the
system or product. Defects can be identified through various means such as testing, user
feedback, monitoring or customer complaints.

Logging - once a defect is identified, it needs to be logged in a defect tracking system. The log
should include details such as a description of the defect, steps to reproduce it, affected
components and any other relevant information. This helps in maintaining a centralized
repository of defects for easy tracking and management (a minimal sketch follows this process).

Prioritization - after logging the defect, it needs to be prioritized based on factors like severity,
impact on users and frequency of occurrence, so that critical issues are addressed first.

Resolution - developers work on fixing the issue. Defects are assigned to developers, who
schedule the fixes according to priority, implement them and finally send a report of the
resolution to the test manager. This process makes defects easy to fix and track.

Verification - after the defect is resolved, it needs to be verified to ensure that the fix has been
effective. It involves retesting the system to confirm that the defect has been successfully
addressed and does not reoccur.

Communication with stakeholders - regular updates on defect status help stakeholders stay
informed and make sound decisions. Communication can be done through various channels
such as status reports, meetings, emails or project management tools.
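A minimal sketch of the logging and prioritization steps above. The Defect fields and severity levels are illustrative assumptions; a real project would use a dedicated defect tracking system.

# Minimal defect logging and prioritization sketch.
from dataclasses import dataclass, field
from datetime import date

SEVERITY_ORDER = {"critical": 0, "major": 1, "minor": 2}

@dataclass
class Defect:
    description: str
    steps_to_reproduce: str
    component: str
    severity: str               # "critical", "major" or "minor"
    status: str = "open"
    logged_on: date = field(default_factory=date.today)

# Centralized repository of logged defects.
log = [
    Defect("grades not saved", "submit quiz, reload", "assessment", "critical"),
    Defect("typo on login page", "open login page", "ui", "minor"),
    Defect("slow video loading", "open any lesson", "content", "major"),
]

# Prioritize so that critical issues are addressed first.
for d in sorted(log, key=lambda d: SEVERITY_ORDER[d.severity]):
    print(d.severity, "-", d.description)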

How to communicate defect status to stakeholders

Regular status reports - provide regular updates on the status of defects including new defects
identified, resolved defects and any pending issues.

Meetings - conduct regular meetings with stakeholders to discuss the overall progress of defect
management and address any concerns or questions.

Email updates - send out email updates highlighting key metrics related to defects such as open
defects, resolved defects or aging defects.

Project management tools - utilize project management tools that allow stakeholders to track
defects in real time and provide visibility into the status of each defect.
