Milestone 1 Assignment Description
Assignment Instructions:
1. Group Formation:
2. Deliverables:
As a team, prepare a Research Topic Proposal document that includes:
o Group Members' Full Names: the full name of each member.
o Justification: Explain why this research is significant and how it aligns with the course's
objectives.
o Collaboration Plan: Describe how you will divide tasks and manage your workflow
as a team.
3. Submission Guidelines:
o Submit one proposal per group in Word or another editable format. The file
should be named CS670_M1_Proposal_First and Lastname1_First and
Lastname2.doc.
o Upload the document to Kodiak before the deadline: February 3, 2025 (11:59 PM).
Grading Criteria
o A clear, relevant, and feasible research topic aligned with ML/AI and course
objectives.
3. AI in Professional Development:
Investigate how AI can support professional growth, such as personalized learning
recommendations, skills assessment, or career path prediction systems.
8. Energy-Efficient AI Algorithms:
Research and develop algorithms that minimize energy consumption in large-scale
computing environments like data centers or IoT systems.
9. AI in Climate Modeling:
Build ML models to predict and analyze the impacts of climate change, focusing on disaster
preparedness or renewable energy optimization.
Example
• Team Members:
o Member 1: [Full Name]
o Member 2: [Full Name]
• Title:
Enhancing Explainability in AI for Healthcare Decision-Making
• Brief Description:
This research aims to improve the explainability of AI models used in healthcare
decision-making systems. It focuses on methods like SHAP (SHapley Additive
exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations) to interpret
predictions made by ML models in applications such as disease diagnosis and treatment
recommendations. The project will evaluate these methods for interpretability and
usability in real-world clinical settings, contributing to the responsible use of AI in
healthcare.
• Research Questions:
1. What are the key challenges in ensuring the explainability of AI models in
healthcare?
2. How do SHAP and LIME compare in terms of accuracy and usability in
healthcare decision-making?
3. What are the potential trade-offs between model accuracy and explainability?
• Justification:
Explainability is critical in AI applications where decisions impact human lives. In
healthcare, making AI models interpretable fosters trust, enhances adoption, and ensures
ethical deployment. This project aligns with the course objective of addressing technical
and societal challenges in ML/AI.
AI and ML, Department of Computer Science Hanieh Shabanian, PhD
• Collaboration Plan:
Team members will jointly select research questions. One member will focus on
reviewing existing methods (SHAP, LIME), while the other will analyze their
applications in healthcare. The team will collaborate to synthesize findings and prepare
for future milestones.
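The SHAP method named in the example proposal rests on Shapley values from cooperative game theory: a feature's attribution is its average marginal contribution over all coalitions of the other features. The sketch below is a minimal illustration of that idea only, not the `shap` library itself; the linear "risk model" and its weights are entirely hypothetical, and absent features are replaced by a baseline vector (a simplification of the background-sample approach real SHAP implementations use).

```python
import math
from itertools import combinations

import numpy as np


def shapley_values(f, x, baseline):
    """Exact Shapley attributions for a single prediction f(x).

    Features outside a coalition are set to their baseline values.
    Cost is exponential in the number of features, so this is only
    viable for tiny illustrative models.
    """
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Shapley weight for coalitions of this size.
            weight = (math.factorial(size) * math.factorial(n - size - 1)
                      / math.factorial(n))
            for S in combinations(others, size):
                z = baseline.copy()
                z[list(S)] = x[list(S)]          # coalition S present
                without_i = f(z)
                z[i] = x[i]                      # add feature i
                with_i = f(z)
                phi[i] += weight * (with_i - without_i)
    return phi


# Hypothetical linear model: weights are illustrative, not clinical.
w = np.array([0.5, -0.2, 0.3])
f = lambda z: float(z @ w)

x = np.array([2.0, 1.0, 4.0])      # the patient/instance to explain
baseline = np.zeros(3)             # reference "average" input

phi = shapley_values(f, x, baseline)
# For a linear model, phi_i reduces to w_i * (x_i - baseline_i),
# and the attributions sum to f(x) - f(baseline) (efficiency property).
```

In a real project the team would instead call an off-the-shelf explainer (e.g. the `shap` or `lime` packages) on a trained clinical model; the efficiency property checked in the comment is one of the axioms that distinguishes SHAP from ad-hoc importance scores.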