
THE UNIVERSITY OF HUDDERSFIELD

School of Computing and Engineering


ASSIGNMENT SPECIFICATION
Module Details
Module Code: CIS2205
Module Title: Introduction to Artificial Intelligence
Course Title/s: BSc (Hons)/MSci Computer Science, BSc (Hons)/MEng Software Engineering, BSc (Hons) Computer Science with Artificial Intelligence, BSc (Hons) Computer Science with Cyber Security, BSc (Hons) Computer Science with Games Programming, BSc (Hons)/MComp Computing

Assessment Weighting, Type and Contact Details

Title: Assignment 2: Data-driven Artificial Intelligence
Weighting: 50%
Mode of working for assessment task: Individual
Note: there should be no collusion or collaboration whilst working on and subsequently submitting this assignment.
Module Leader: Quratul-Ain Mahesar
Module Tutor/s: Faisal Jamil

Submission and Feedback Details

Hand-out date: 11/11/2024
How to submit your work: Brightspace submission point.
Submission date/s: 13/12/2024 by 12:00 noon – if you have any technical issues submitting your work, please contact the Module Leader as soon as possible.
Expected amount of independent time you should allocate to complete this assessment: 16 hours
Submission type and format: A written report in the form of a PDF document.
Date by which your grade and feedback will be returned: 20/01/2025. Feedback will be delivered through Brightspace.

Additional Guidance and Information

Your responsibility: It is your responsibility to read and understand the University regulations regarding conduct in assessment. Please pay special attention to the assessment regulations (section 10) on Academic Misconduct.
In brief, ensure that you:
1. DO NOT use the work of another student - this includes students from previous years and other institutions, as well as current students on the module.
2. DO NOT make your work available, or leave it insecure, for other students to view or use.
3. Appropriately reference any examples provided by the module tutor, as well as examples from external sources.

Further guidance can be found in the SCEN Academic Skills


Resource and UoH Academic Integrity Resource module in
Brightspace.

If you experience difficulties with this assessment or with time management, please speak to the module tutor/s, your Personal Academic Tutor, or the School’s Guidance Team ([email protected]).

Guidance on using AI: Level 1 – Not Permitted. The use of AI tools is not permitted in any part of this assessment.
School Guidance and Support: If you experience difficulties with this assessment or with time management, please speak to the module tutor/s, your Personal Academic Tutor, or the Student Progress Mentors.
Student Progress Mentor – useful links:
• Brightspace Module - SCE Student Progress Mentors (hud.ac.uk)
• Email - [email protected]
• Booking an appointment - http://hud.ac/rgl
Requesting a Late Submission: It is expected that you complete your assessments by the published deadlines. However, it is recognised that there can be unexpected circumstances which may affect you being able to do so. In such circumstances, you may submit a request for an extension.
Extension applications must be submitted before the published assessment deadline has passed.
To apply for an extension, you should access the Extension System on MyHud.
Extenuating Circumstances (ECs): An EC claim is appropriate in exceptional circumstances, when an extension is not sufficient due to the nature of the request.
You can access details on the procedure for claiming ECs on the Registry website (Consideration of Personal Circumstances - University of Huddersfield), where you can also access the EC Claim Form.
You will need to submit independent, verifiable evidence for your claim to be considered.
Once your EC claim has been reviewed, you will get an EC outcome email from Registry.
An approved EC will extend the submission date to the next assessment period (e.g. July resit period).
Late Submission (No ECs approved): Late submission within 5 working days of the assessment submission deadline, without an approved extension, will result in your grade being capped at a maximum of a pass mark.
Submission after this period will result in a 0% grade for this assessment component.

Tutor Referral available: NO
Resources:
• Please note: you can access free Office365 software and you have 100 GB of free storage space available on Microsoft’s OneDrive – Guidance on downloading Office 365.
Assignment 2: Data-driven Artificial Intelligence
1. Assignment Aim
• To develop proficiency in implementing machine learning workflows in Python.
• To enhance skills in data preprocessing, model selection, evaluation, and visualization for data-driven problem-solving.

2. Learning Outcomes:
• Demonstrate a solid understanding of key stages in a machine-learning workflow.
• Implement and evaluate machine learning models using appropriate performance metrics.
• Effectively use Python tools for data collection, preprocessing, modeling, prediction,
evaluation, and visualization.

3. Assessment Brief:
This assignment consists of four sections, each designed to guide you through an essential stage of a machine learning project.
Section 1 – Data Collection and Preprocessing [20%]
• Select a dataset relevant to a predictive modeling task.
• Provide a brief description of the dataset, including feature descriptions and target
variable.
• Perform the following preprocessing steps:
1. Handle any missing values and outliers.
2. Encode categorical variables as needed.
3. Scale features if necessary.
4. Split the data into training and testing sets (e.g., 70% train, 30% test).
Provide and explain Python code snippets for each preprocessing step. Discuss the data
transformations applied and their relevance.
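
For illustration only, a minimal preprocessing sketch using pandas and scikit-learn is given below. The file name data.csv and the column names target, category and feature1 are placeholders for whichever dataset you choose.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load the chosen dataset (placeholder file name)
df = pd.read_csv("data.csv")

# Handle missing values (dropping rows is the simplest option; imputation is an alternative)
df = df.dropna()

# Example outlier handling: keep rows within the 1st-99th percentile of a numeric feature (placeholder column)
low, high = df["feature1"].quantile([0.01, 0.99])
df = df[df["feature1"].between(low, high)]

# Encode a categorical variable (placeholder column) using one-hot encoding
df = pd.get_dummies(df, columns=["category"])

# Split into features and target, then into 70% training and 30% testing sets
X = df.drop(columns=["target"])
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Scale features: fit the scaler on the training set only, then apply it to both sets
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)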
Section 2 – Model Selection and Training [25%]
• Select two machine learning algorithms suitable for the dataset and prediction task (e.g.,
classification or regression).
• For each model, implement the training process in Python, using the training dataset from
Section 1.
Required Steps:
1. Define each model and provide a brief explanation of why it is suitable for the task.
2. Train both models and save the trained models for evaluation.
Present code snippets and a brief justification for each model choice. Include any
hyperparameter tuning or optimizations performed.
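
For illustration only, a minimal training sketch is given below, assuming a classification task and the X_train and y_train variables from the preprocessing sketch above; logistic regression and a random forest are example choices, not required ones.

from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
import joblib

# Two example models: a linear baseline and a non-linear tree ensemble
log_reg = LogisticRegression(max_iter=1000)
rf = RandomForestClassifier(n_estimators=200, random_state=42)

# Train both models on the training set
log_reg.fit(X_train, y_train)
rf.fit(X_train, y_train)

# Save the trained models so they can be reloaded for evaluation
joblib.dump(log_reg, "log_reg.joblib")
joblib.dump(rf, "random_forest.joblib")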
Section 3 – Prediction and Evaluation [30%]
• Use the trained models from Section 2 to generate predictions on the test dataset.
• Evaluate each model’s performance using appropriate metrics:
o For classification tasks, report metrics such as accuracy, precision, recall, and F1-score.
o For regression tasks, report metrics such as Mean Absolute Error (MAE), Root Mean Square Error (RMSE), or R-squared (R²).
• Compare the models’ performances and discuss which model performs better and why.
Include Python code for generating predictions and calculating performance metrics. Interpret
and compare the results for each model.
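
For illustration only, a minimal evaluation sketch for a classification task is given below, reusing the two example models and the test set from the sketches above. For a regression task, the analogous metrics from sklearn.metrics (mean_absolute_error, mean_squared_error and r2_score) would be used instead.

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

for name, model in [("Logistic Regression", log_reg), ("Random Forest", rf)]:
    # Generate predictions on the unseen test data
    y_pred = model.predict(X_test)
    # Report classification metrics (weighted averaging also handles multi-class targets)
    print(name)
    print("  accuracy :", accuracy_score(y_test, y_pred))
    print("  precision:", precision_score(y_test, y_pred, average="weighted"))
    print("  recall   :", recall_score(y_test, y_pred, average="weighted"))
    print("  F1-score :", f1_score(y_test, y_pred, average="weighted"))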
Section 4 – Visualization and Insights [25%]
• Visualize key aspects of your machine learning project to help understand model
performance and data distribution.
Required Visualizations:
1. For classification tasks, provide a confusion matrix or ROC curve.
2. For regression tasks, plot predicted values against actual values.
3. Visualize feature importance (if applicable) to understand which features
contribute most to predictions.
Include code for each visualization and describe the insights gained from these visualizations.
Summarize your findings and observations based on the entire workflow.
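
For illustration only, a minimal visualization sketch is given below, again assuming the classification example above; it plots a confusion matrix for the random forest and a feature-importance bar chart.

import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

# Confusion matrix for the random forest on the test set
ConfusionMatrixDisplay.from_estimator(rf, X_test, y_test)
plt.title("Random Forest - confusion matrix")
plt.show()

# Feature importance: tree-based models expose feature_importances_
plt.bar(range(len(rf.feature_importances_)), rf.feature_importances_)
plt.xlabel("Feature index")
plt.ylabel("Importance")
plt.title("Random Forest - feature importance")
plt.show()

# For a regression task, a scatter plot of predicted vs. actual values would replace the confusion matrix:
# plt.scatter(y_test, model.predict(X_test)); plt.xlabel("Actual"); plt.ylabel("Predicted"); plt.show()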

Deliverables
1. A Python notebook (or script) containing well-commented code for each section.
2. A brief report summarizing your approach, findings, and key insights across the
assignment.
3. A video presentation demonstrating your project, with a maximum duration of 3 minutes.
**************************** Optional tasks ******************************************

Bonus Challenge (Optional, up to 20%)

For students seeking additional marks, complete one or more of the following tasks:

• Advanced Model Optimization (5%)
o Experiment with hyperparameter tuning using techniques like Grid Search or Randomized Search (an illustrative sketch follows this list).
o Use cross-validation (e.g., k-fold cross-validation) to ensure a robust evaluation of your models.
o Document the optimization process and compare the optimized model’s performance to the original models.
• Ensemble Techniques (5%)
o Combine the models you’ve implemented using ensemble methods such as bagging (e.g., Random Forest), boosting (e.g., XGBoost), or stacking.
o Compare the ensemble model’s performance with the individual models.
o Provide insights into the ensemble’s strengths and any observed improvements.
• Deep Learning Model Implementation (5%)
o Implement a basic neural network (using TensorFlow, PyTorch, or Keras) to solve the same prediction task (an illustrative sketch is given after the note below).
o Compare the deep learning model’s performance with the machine learning models from the assignment.
o Discuss any differences in performance, advantages, or challenges encountered.
• Comprehensive Error Analysis (5%)
o Conduct a deeper error analysis on the misclassifications or prediction errors. For example, examine if specific data patterns contribute to higher error rates.
o Create visualizations highlighting patterns in the errors and propose potential improvements.
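
For illustration only, a minimal sketch of the optimization task is given below, tuning the example random forest from the earlier sketches with Grid Search and 5-fold cross-validation; the parameter grid values are placeholders.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Example parameter grid; the values here are illustrative only
param_grid = {"n_estimators": [100, 200, 400], "max_depth": [None, 5, 10]}

# Grid search with 5-fold cross-validation on the training set
search = GridSearchCV(RandomForestClassifier(random_state=42),
                      param_grid, cv=5, scoring="f1_weighted")
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Best cross-validated score:", search.best_score_)

# Compare the tuned model with the original models on the held-out test set
best_rf = search.best_estimator_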

Note: This section is optional and provides an opportunity for additional marks. Select
one or more tasks from the list above, and include any code, visualizations, and
discussions in your submission.
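
For illustration only, a minimal sketch of the deep learning task is given below, using Keras (TensorFlow) and assuming the scaled training data from the earlier sketches with a binary 0/1 target; for multi-class or regression targets the output layer and loss would change accordingly.

import tensorflow as tf

# A small fully connected network; the layer sizes are illustrative only
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X_train.shape[1],)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # single sigmoid unit assumes a binary 0/1 target
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train with a validation split to monitor overfitting
model.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.2, verbose=0)

# Evaluate on the test set and compare with the scikit-learn models
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print("Neural network test accuracy:", accuracy)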

******************************************************************************************
4. Marking Scheme (Assignment 2)
Marking Criteria – Data Collection and Preprocessing (5 + 5 + 5 + 5 = 20%)

• Dataset Description (5%): Up to 5 points for a complete and clear description of the
dataset, including features and target variable.
• Handling Missing Values and Outliers (5%): Up to 5 points for correctly identifying and
addressing missing values and outliers with clear explanations.
• Feature Scaling/Encoding (5%): Up to 5 points for appropriate feature scaling and
encoding based on the dataset’s requirements.
• Data Splitting (5%): Up to 5 points for correctly splitting the data into training and testing
sets and providing a justification for the chosen split ratio.

Marking Criteria – Model Selection and Training (10 + 5 + 5 + 5 = 25%)

• Model Choice Justification (10%): Up to 10 points for choosing two suitable models
with thorough explanations of why each model fits the prediction task.
• Model Training (5%): Up to 5 points for successful training of both models, including
any tuning/optimization steps.
• Code Quality (5%): Up to 5 points for well-structured and well-documented code.
• Justification of Hyperparameter Choices (5%): Up to 5 points for clear justifications
of hyperparameters used, showing understanding of model settings.

Marking Criteria – Prediction and Evaluation (5 + 5 + 10 + 10 = 30%)

• Prediction Generation (5%): Up to 5 points for generating predictions on the test dataset for both models.
• Performance Metrics Calculation (5%): Up to 5 points for correctly calculating and
reporting performance metrics.
• Model Comparison (10%): Up to 10 points for a critical comparison of the models,
analyzing the strengths and weaknesses of each based on the metrics.
• Interpretation and Discussion (10%): Up to 10 points for clear, insightful
interpretations of the results and conclusions drawn from model performance.

Marking Criteria – Visualization and Insights (2.5 + 2.5 + 2.5 + 7.5 + 10 = 25%)

• Confusion Matrix or ROC Curve (2.5%): Up to 2.5 points for a clear and accurate
visualization of classification model performance (or regression plot, if applicable).
• Feature Importance (2.5%): Up to 2.5 points for correctly visualizing feature
importance (if applicable), with explanations on the feature impacts.
• Additional Visualization (2.5%): Up to 2.5 points for any additional visualization, such
as error analysis or scatter plots of predictions vs. actuals.
• Insightful Analysis (7.5%): Up to 7.5 points for providing meaningful insights based on
the visualizations and summarizing findings effectively.
• Video demonstration (10%): A video presentation demonstrating your project, with a
maximum duration of 3 minutes.

5. Grading Rubric

These criteria are intended to help you understand how your work will be assessed. They describe
different levels of performance of a given criterion.

Criteria are not weighted equally, and the marking process involves academic judgement and
interpretation within the marking criteria.

The grades between Pass and Very Good should be considered as different levels of performance
within the normal bounds of the module. The Exceptional and Outstanding categories allow for
students who, in addition to fulfilling the Excellent requirements, perform at a superior level beyond
the normal boundaries of the module and demonstrate intellectual creativity, originality and
innovation.
INTERMEDIATE (FHEQ LEVEL 5)

90+ Outstanding demonstration of scholarly application and critical understanding of subject area knowledge
• well-structured assessment that addresses the learning outcomes and specific criteria for the module
• critical understanding/application is evident through systematic, relevant and comprehensive coverage of content
• clearly communicated in a style appropriate to the assessment brief
• very limited areas for improvement

80+ Exceptional demonstration of scholarly application and critical understanding of subject area knowledge
• well-structured assessment that addresses the learning outcomes and specific criteria for the module
• critical understanding/application is evident through systematic, relevant and comprehensive coverage of content
• clearly communicated in a style appropriate to the assessment brief

70+ Excellent demonstration of scholarly application and critical understanding of subject area knowledge
• well-structured assessment that addresses the learning outcomes and specific criteria for the module
• critical understanding/application is evident through systematic and relevant coverage of content
• clearly communicated in a style appropriate to the assessment brief

60+ Very good demonstration of the scholarly application and critical understanding of subject area knowledge
• well-structured assessment that addresses the learning outcomes and specific criteria for the module
• critical understanding/application is generally evident in the coverage of content
• clearly communicated in a style appropriate to the assessment brief

50+ Good demonstration of the scholarly application and critical understanding of subject area knowledge
• fairly well-structured assessment that addresses the learning outcomes and specific criteria for the module
• some critical understanding/application is evident through coverage of content that is also descriptive
• good communication in a style appropriate to the assessment brief

40+ Adequate demonstration of scholarly application and critical understanding of subject area knowledge
• adequately structured assessment that addresses the learning outcomes and specific criteria for the module
• largely descriptive with some critical understanding/application evident through coverage of content
• communicates in a style appropriate to the assessment brief

30+ Limited demonstration of scholarly application and critical understanding of subject area knowledge
• poorly structured assessment that does not completely address the module learning outcomes and specific criteria for the module
• work is descriptive in its coverage of the content
• poor communication that does not use a style appropriate to the assessment brief

20+ Minimal demonstration of scholarly application and critical understanding of subject area knowledge
• poorly structured assessment that only addresses a small part of the module learning outcomes and specific criteria for the module
• work is descriptive in its coverage of the content, and in places may be inadequate
• poor communication that does not use a style appropriate to the assessment brief

10+
• poorly structured assessment that does not address the module learning outcomes and specific criteria
• coverage of the content is inadequate or incomplete
• poor communication that does not use a style appropriate to the assessment brief

0+ Poorly structured assessment that does not address at all the learning outcomes and specific criteria for the module
