Project Brain

Uploaded by Mabtoor Mabx

Adapted Methodology

VU Process Model for Brain Tumor Segmentation

1. Requirements Gathering (Waterfall Phase):

Objective: Clearly define functional and non-functional requirements for brain tumor segmentation.
Collaborate with Medical Professionals: Engage in sessions with medical professionals to understand workflow, challenges, and requirements.
Key Points: Conduct interviews, workshops, or surveys; identify brain tumor types; explore clinical significance; document detailed requirements.

2. System Design (Waterfall Phase):

Objective: Develop a comprehensive design for the brain tumor segmentation system.
Design the System Architecture: Develop an architecture detailing input/output data, module communication, algorithms, data formats, and storage.
Key Points: Create a High-Level Design Document with architectural diagrams, data flow diagrams, module roles, chosen algorithms, and data structures/interfaces.

3. Implementation (Waterfall Phase):

Objective: Code the modules, incorporate data augmentation, and develop the evaluation module.
Key Points: Use suitable programming languages/frameworks, adhere to coding standards, and implement error handling/logging.
Implement Data Augmentation Techniques: Apply rotation, scaling, flipping, and cropping to diversify the dataset; integrate them seamlessly into the preprocessing pipeline.
Develop the Evaluation Module: Translate the system design into executable code; ensure robust and generalizable deep learning model training.
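The augmentation techniques listed above can be sketched on a toy 2D image. A real pipeline would apply these to full MRI volumes (e.g. via NumPy or a framework's transforms); the helper names here are illustrative assumptions, not the project's actual code.

```python
# Toy sketch of the augmentation step: flips and 90-degree rotations on a
# tiny 2D image given as nested lists.

def flip_horizontal(img):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in img]

def rotate_90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Yield the original image plus simple geometric variants."""
    yield img
    yield flip_horizontal(img)
    yield rotate_90(img)

image = [[1, 2],
         [3, 4]]
variants = list(augment(image))
# flip_horizontal gives [[2, 1], [4, 3]]; rotate_90 gives [[3, 1], [4, 2]]
```

Scaling and cropping would follow the same pattern, each transform taking an image and returning a new variant for the preprocessing pipeline.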
4. Testing (Spiral Phase):

Objective: Emphasize early testing through unit testing, integration testing, and collaboration with medical professionals.

Unit Testing:

Develop detailed test cases.

Execute tests for module functionality validation.

Identify and address defects.

Integration Testing:

Integrate modules and test interactions.

Verify data flow, interfaces, and communication.

Collaborate with medical professionals for tool validation.

Deliverables:

Unit Test Reports: Documenting cases, results, and issue resolutions.

Integration Test Reports: Documenting cases, results, and issue resolutions.

Collaboration Feedback: Input from medical professionals on functionality, usability, and recommendations.
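As one illustration of the unit-testing activity above, a `unittest` case for a hypothetical intensity-normalization helper might look like this; the function and its name are assumptions, not the project's actual code.

```python
import unittest

# Hypothetical preprocessing helper, used only to demonstrate the
# unit-testing activity described above.
def normalize(intensities):
    """Min-max normalize a flat list of voxel intensities to [0, 1]."""
    lo, hi = min(intensities), max(intensities)
    if hi == lo:
        return [0.0 for _ in intensities]
    return [(v - lo) / (hi - lo) for v in intensities]

class TestNormalize(unittest.TestCase):
    def test_output_range(self):
        self.assertEqual(normalize([10, 20, 30]), [0.0, 0.5, 1.0])

    def test_constant_input_does_not_divide_by_zero(self):
        self.assertEqual(normalize([5, 5]), [0.0, 0.0])

if __name__ == "__main__":
    unittest.main(exit=False)
```

Each module-level function would get a similar test case, and the results feed directly into the Unit Test Reports deliverable.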

5. Risk Analysis (Spiral Phase):

Objective: Systematically identify and mitigate risks related to data quality, model
accuracy, and generalization.

Assessment of Risks:

Data Quality: Evaluate risks of insufficient or poor-quality training data.

Model Accuracy: Assess risks of overfitting, underfitting, and noisy data.

Generalization: Evaluate risks related to model generalization.


Mitigation Strategies:

Data Quality: Implement rigorous preprocessing and data augmentation.

Model Accuracy: Apply regularization techniques and validate against ground truth data.

Generalization: Consider imaging protocol variations and demographics.

Importance in Spiral Model: Integral to the iterative and risk-driven approach, ensuring continuous monitoring and proactive risk mitigation.
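One concrete mitigation for the overfitting risk named above is early stopping on a validation metric; a minimal sketch follows, with made-up loss values and an illustrative helper name.

```python
# Early stopping: halt training once the validation loss stops improving.

def early_stop(val_losses, patience=2):
    """Return the epoch at which training should stop: the first epoch
    with no improvement for `patience` epochs, else the last epoch."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return len(val_losses) - 1

stop_epoch = early_stop([0.9, 0.7, 0.6, 0.65, 0.66, 0.7], patience=2)
# best loss is at epoch 2; training stops at epoch 4
```

In practice this would be combined with the other regularization techniques mentioned, such as weight decay and dropout.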

6. Refinement and Iteration (Spiral Phase):

Objective: Iterative refinement based on feedback, performance metrics, and real-world testing.

Activities:

Gather feedback from medical professionals and stakeholders.

Refine the deep learning model based on performance metrics.

Revisit requirements and design, incorporating lessons learned.

7. Final Evaluation (Waterfall Phase):

Objective: Conduct a comprehensive final evaluation for system compliance.

Activities:

Perform extensive testing on the finalized system.

Evaluate system performance against established metrics.

Collaborate with medical professionals for a final assessment.
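For segmentation tasks, the established metrics mentioned above commonly include the Dice coefficient; a minimal sketch over binary masks follows (the helper name is illustrative).

```python
# Dice coefficient over binary masks, a standard segmentation metric.

def dice(pred, truth):
    """2 * |intersection| / (|pred| + |truth|) for equal-length binary
    masks; defined as 1.0 when both masks are empty."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total

score = dice([1, 1, 0, 0], [1, 0, 1, 0])
# one overlapping voxel out of two predicted and two true: 0.5
```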


8. Documentation (Waterfall Phase):

Deliverables:

Provide a user manual for medical professionals.

Key Principles of VU Process Model:

The VU Process Model for Brain Tumor Identification adopts a hybrid approach,
combining waterfall and spiral models for a systematic and iterative development
process. Key principles include sequential progression, flexibility through
adaptation, and collaboration with medical professionals to align with real-world
needs. The methodology encompasses phases such as requirement analysis, data
collection, model development, and collaborative testing, emphasizing user-
friendly tool development. Rigorous model validation involves diverse dataset
testing, performance metrics, and iterative refinement. This comprehensive
approach ensures the creation of a robust brain tumor identification system that
meets practical medical requirements.

Methodology
Our methodology is evaluated on publicly available datasets, and the results demonstrate that our approach achieves state-of-the-art performance in terms of accuracy and robustness. Our methodology has the potential to improve the accuracy and efficiency of brain tumor segmentation, which can lead to better diagnosis, treatment, and research outcomes.

[1] provides a comprehensive survey of brain tumor detection and classification using machine learning, including deep learning techniques. The survey covers the anatomy of brain tumors, publicly available datasets, enhancement techniques, segmentation, feature extraction, classification, and deep learning, transfer learning, and quantum machine learning for brain tumor analysis. [2] presents an automatic brain tumor segmentation method based on deep learning techniques, which uses the public and well-accepted BraTS dataset. [3] proposes a novel coarse-to-fine method for brain tumor segmentation that consists of preprocessing, deep learning network-based classification, and post-processing. [4] presents an intelligent brain tumor segmentation method using improved deep learning techniques, replacing the manual segmentation methodology used when diagnosing brain tumors. [5] proposes a deep multi-task learning framework for brain tumor segmentation, which is based on deep learning and can automate medical image segmentation.

I found a research paper that proposes an efficient brain tumor segmentation method based on adaptive moving self-organizing map and fuzzy k-mean clustering (AMSOM-FKM) [1]. The authors utilized the BraTS18 MRI tumor image database for this study.

Here are some steps that you can consider for an adapted methodology:

1. Data Collection and Preprocessing: Gather a diverse dataset of
brain MRI scans, ensuring data quality and integrity. Preprocess
the images to enhance their suitability for further analysis. Import
the image dataset of brain MRI scans from the BraTS2019 dataset [2].
2. Dataset Splitting: To facilitate model training and evaluation, split
the dataset into distinct sets for training, validation, and testing.
This step ensures that the deep learning model’s performance is
rigorously assessed and prevents overfitting.
3. Deep Learning Model Development: Design, implement, and fine-
tune deep learning models, such as AMSOM-FKM, to accurately
segment brain tumors from MRI images.
4. Model Evaluation: Establish robust evaluation metrics to assess the
performance of the deep learning models, ensuring that the
segmentation results are precise and reliable.
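Step 2 above (dataset splitting) can be sketched as a deterministic subject-level split; the ratios, seed, and helper name are illustrative assumptions.

```python
import random

# Deterministic subject-level train/validation/test split. Splitting by
# subject (not by slice) avoids leaking one patient's data across sets.

def split_subjects(subject_ids, val_frac=0.15, test_frac=0.15, seed=42):
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)  # reproducible shuffle
    n_test = int(len(ids) * test_frac)
    n_val = int(len(ids) * val_frac)
    test = ids[:n_test]
    val = ids[n_test:n_test + n_val]
    train = ids[n_test + n_val:]
    return train, val, test

subjects = ["BraTS19_%03d" % i for i in range(20)]
train, val, test = split_subjects(subjects)
# 20 subjects: 14 train, 3 validation, 3 test
```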

Here is a reference to a research paper on brain tumor segmentation [1]. The paper discusses early detection and segmentation of brain tumors using deep neural networks. It provides an efficient method for brain tumor segmentation based on an Improved Residual Network (ResNet). The proposed improved ResNet addresses all three main components of the existing ResNet: the flow of information through the network layers, the residual building block, and the projection shortcut.

About Dataset

Ample multi-institutional routine clinically-acquired pre-operative multimodal MRI scans of glioblastoma (GBM/HGG) and lower grade glioma (LGG), with
pathologically confirmed diagnosis and available OS, are provided as the training,
validation and testing data for this year’s BraTS challenge. Specifically, the
datasets used in this year's challenge have been updated, since BraTS'18, with
more routine clinically-acquired 3T multimodal MRI scans, with accompanying
ground truth labels by expert board-certified neuroradiologists.

Validation data will be released on July 15, through an email pointing to the
accompanying leaderboard. This will allow participants to obtain preliminary
results on unseen data and also report them in their submitted papers, in addition to
their cross-validated results on the training data. The ground truth of the validation
data will not be provided to the participants, but multiple submissions to the online
evaluation platform (CBICA's IPP) will be allowed.
Finally, all participants will be presented with the same test data, which will be
made available through email during 26 August-7 September and for a limited
controlled time-window (48h), before the participants are required to upload their
final results in CBICA's IPP. The top-ranked participating teams will be invited
before the end of September to prepare slides for a short oral presentation of their
method during the BraTS challenge.

Imaging Data Description


All BraTS multimodal scans are available as NIfTI files (.nii.gz) and describe a)
native (T1) and b) post-contrast T1-weighted (T1Gd), c) T2-weighted (T2), and d)
T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) volumes, and were acquired
with different clinical protocols and various scanners from multiple (n=19)
institutions, mentioned as data contributors here.

All the imaging datasets have been segmented manually, by one to four raters,
following the same annotation protocol, and their annotations were approved by
experienced neuro-radiologists. Annotations comprise the GD-enhancing tumor
(ET — label 4), the peritumoral edema (ED — label 2), and the necrotic and non-
enhancing tumor core (NCR/NET — label 1), as described both in the BraTS
2012-2013 TMI paper and in the latest BraTS summarizing paper (also see Fig.1).
The provided data are distributed after their pre-processing, i.e. co-registered to the
same anatomical template, interpolated to the same resolution (1 mm^3) and skull-
stripped.
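From the label values above (ET = 4, ED = 2, NCR/NET = 1), evaluation typically considers three nested regions: whole tumor, tumor core, and enhancing tumor. A hedged sketch follows, using a flat voxel-label list in place of a real volume; an actual pipeline would read the NIfTI files with a library such as nibabel.

```python
# Deriving the three tumor regions commonly evaluated in BraTS from the
# annotation labels described above.

ET, ED, NCR_NET = 4, 2, 1

def region_masks(labels):
    """Binary masks for whole tumor (all three labels), tumor core
    (ET + NCR/NET), and enhancing tumor (ET only)."""
    whole_tumor = [int(v in (ET, ED, NCR_NET)) for v in labels]
    tumor_core = [int(v in (ET, NCR_NET)) for v in labels]
    enhancing = [int(v == ET) for v in labels]
    return whole_tumor, tumor_core, enhancing

wt, tc, et = region_masks([0, 1, 2, 4, 0])
# wt == [0, 1, 1, 1, 0], tc == [0, 1, 0, 1, 0], et == [0, 0, 0, 1, 0]
```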

Comparison with Previous BraTS datasets


The BraTS data provided since BraTS'17 differs significantly from the data
provided during the previous BraTS challenges (i.e., 2016 and backwards). The
only data that have been previously used and are utilized again (during
BraTS'17-'19) are the images and annotations of BraTS'12-'13, which have been
manually annotated by clinical experts in the past. The data used during
BraTS'14-'16 (from TCIA) have been discarded, as they described a mixture of
pre- and post-operative scans and their ground truth labels have been annotated by
the fusion of segmentation results from algorithms that ranked highly during
BraTS'12 and '13. For BraTS'17, expert neuroradiologists have radiologically
assessed the complete original TCIA glioma collections (TCGA-GBM, n=262 and
TCGA-LGG, n=199) and categorized each scan as pre- or post-operative.
Subsequently, all the pre-operative TCIA scans (135 GBM and 108 LGG) were
annotated by experts for the various glioma sub-regions and included in this year's
BraTS datasets.

This year we provide the naming convention and direct filename mapping between
the data of BraTS'19, BraTS'18, BraTS'17, and the TCGA-GBM and TCGA-LGG
collections, available through The Cancer Imaging Archive (TCIA).

Survival Data Description


The overall survival (OS) data, defined in days, are included in a comma-separated
value (.csv) file with correspondences to the pseudo-identifiers of the imaging data.
The .csv file also includes the age of patients, as well as the resection status. Note
that only subjects with resection status of GTR (i.e., Gross Total Resection) will be
evaluated, and you are only expected to send your predicted survival data for those
subjects.
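Since only GTR subjects are evaluated, the survival .csv can be filtered accordingly; the column names below are assumptions based on the description above, not the file's actual header.

```python
import csv
import io

# A made-up sample standing in for the survival .csv described above.
SAMPLE = """\
BraTS19ID,Age,Survival,ResectionStatus
sub_001,54.3,372,GTR
sub_002,61.0,150,STR
sub_003,47.8,501,GTR
"""

def gtr_subjects(csv_text):
    """Return (id, age, survival_days) tuples for Gross Total Resection rows."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [
        (r["BraTS19ID"], float(r["Age"]), int(r["Survival"]))
        for r in rows
        if r["ResectionStatus"] == "GTR"
    ]

selected = gtr_subjects(SAMPLE)
# only sub_001 and sub_003 would be evaluated
```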

Data Usage Agreement / Citations


You are free to use and/or refer to the BraTS datasets in your own research,
provided that you always cite the following three manuscripts:

[1] B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, et al., "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)", IEEE Transactions on Medical Imaging 34(10), 1993-2024 (2015). DOI: 10.1109/TMI.2014.2377694

[2] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J.S. Kirby, et al., "Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features", Nature Scientific Data, 4:170117 (2017). DOI: 10.1038/sdata.2017.117

[3] S. Bakas, M. Reyes, A. Jakab, S. Bauer, M. Rempfler, A. Crimi, et al., "Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge", arXiv preprint arXiv:1811.02629 (2018)

In addition, if the journal/conference to which you submit your paper imposes no restrictions on citing "Data Citations", please be specific and also cite the following:

[4] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, et al., "Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-GBM collection", The Cancer Imaging Archive, 2017. DOI: 10.7937/K9/TCIA.2017.KLXWJJ1Q

[5] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, et al., "Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-LGG collection", The Cancer Imaging Archive, 2017. DOI: 10.7937/K9/TCIA.2017.GJQ7R0EF

REFERENCES
(1) BRATS21 Dataset | Papers With Code. https://paperswithcode.com/dataset/brats21.

(2) MICCAI BraTS 2017: Data - Perelman School of Medicine. https://www.med.upenn.edu/sbia/brats2017/data.html.

(3) Overview - NVIDIA Docs. https://docs.nvidia.com/launchpad/ai/base-command-brats/latest/bc-brats-overview.html.

(4) Multimodal Brain Tumor Segmentation Challenge 2019: Registration. https://www.med.upenn.edu/cbica/brats2019/registration.html.

(5) BraTS challenge website. http://braintumorsegmentation.org/.

(6) Synapse. https://synapse.org/.

The BraTS challenge is a yearly competition that provides ample multi-institutional routine clinically-acquired pre-operative multimodal MRI scans of glioblastoma (GBM/HGG) and lower grade glioma (LGG), with pathologically confirmed diagnosis and available OS, as the training, validation, and testing data ¹. The datasets used in this year's challenge have been updated since BraTS'18 with more routine clinically-acquired 3T multimodal MRI scans, with accompanying ground truth labels by expert board-certified neuroradiologists ¹.

If you are looking for references to the BraTS dataset, the following may help:

- The official BraTS website ¹.

- The BraTS dataset page on Papers With Code ¹.

- The BraTS dataset page on the Perelman School of Medicine website ³.

- The BraTS dataset page on NVIDIA Docs ⁴.
