Project A1
1. What is Cyberbullying?
Cyberbullying means hurting someone through online messages, like insults, threats, or harmful
comments. Twitter is a platform where this happens often, because people can post short messages
(tweets) freely and anonymously.
This project is all about building a system that can automatically detect whether a tweet is harmful
or bullying in nature, using Artificial Intelligence (AI) and Deep Learning.
In deep learning, we use models that try to mimic how the brain works when processing
information.
A hybrid model means we're combining two types of deep learning models to make a stronger,
smarter system:
• CNN (Convolutional Neural Network): Usually used in image processing, but here it's used
to recognize patterns in text (like spotting bad words or phrases).
• LSTM (Long Short-Term Memory): A special kind of neural network that understands
sequences of words (like grammar or sentence context).
By combining the two, the system can spot harmful patterns AND also understand the context of a sentence (whether it’s really meant to harm or not); a sketch of such a hybrid network follows the list below.
Why it matters:
• Prevents Harm: Helps detect and stop harmful content before it reaches the victim.
• Assists Moderators: Makes the job easier for platforms like Twitter by flagging harmful
content.
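To make the hybrid idea concrete, here is a minimal sketch of such a CNN + LSTM network in Keras (the library named in section 5). The vocabulary size, tweet length, and layer sizes are illustrative assumptions, not values fixed by the project.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000  # assumed vocabulary size, not fixed by the project
MAX_LEN = 50        # assumed maximum tweet length in tokens

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,), dtype="int32"),
    # Turn token IDs into dense word vectors.
    layers.Embedding(input_dim=VOCAB_SIZE, output_dim=128),
    # CNN part: spots local patterns, like abusive words or short phrases.
    layers.Conv1D(filters=64, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # LSTM part: reads the sequence to capture sentence-level context.
    layers.LSTM(64),
    layers.Dense(32, activation="relu"),
    # Single output: probability that the tweet is bullying.
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The Conv1D layer plays the "pattern spotting" role and the LSTM layer the "context" role described above; stacking them is what makes the model hybrid.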
Step-by-Step:
1. Collect Tweets:
o Use Twitter’s API to get real tweets (both normal and bullying ones).
2. Preprocess Tweets:
o Clean the text (remove links, mentions, and stray symbols) and convert the words into numbers the model can read (see the sketch after this list).
3. Train the Hybrid Model:
o Build the CNN + LSTM network with TensorFlow/Keras and train it on the labeled tweets.
4. Deploy the Model:
o Host the model using a web framework so others can use it (like schools, parents, platforms); a minimal serving sketch appears after the technology list below.
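As a concrete example of step 2, here is a minimal preprocessing sketch in Python. The cleaning rules and the TextVectorization settings are assumptions; the project may use different ones.

```python
import re
import tensorflow as tf

VOCAB_SIZE = 20000  # assumed, matching the model sketch above
MAX_LEN = 50        # assumed maximum tweet length in tokens

def clean_tweet(text: str) -> str:
    """Lowercase a tweet and strip links, @mentions, and stray symbols."""
    text = text.lower()
    text = re.sub(r"http\S+", " ", text)   # remove links
    text = re.sub(r"@\w+", " ", text)      # remove @mentions
    text = re.sub(r"[^a-z\s]", " ", text)  # keep letters only
    return re.sub(r"\s+", " ", text).strip()

tweets = ["@user go away, NOBODY likes you http://t.co/x",
          "Great game last night!"]
cleaned = [clean_tweet(t) for t in tweets]

# Map words to integer IDs and pad every tweet to the same length.
vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=VOCAB_SIZE, output_sequence_length=MAX_LEN)
vectorizer.adapt(cleaned)
padded = vectorizer(cleaned)
print(padded.shape)  # (2, 50)
```

The padded integer sequences are what the Embedding layer of the hybrid model expects as input.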
5. Technologies Involved
• TensorFlow / Keras: libraries used to build and train deep learning models
• NLP (Natural Language Processing): used to process and understand human language
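Step 4 above mentions hosting the model with a web framework. Here is a minimal serving sketch using Flask; Flask itself, the endpoint, and the saved-model file name are all assumptions, since the project does not name a specific framework.

```python
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)

# Assumed file name; also assumes the saved model bundles its own text
# preprocessing (e.g. a TextVectorization layer), so it scores raw text.
model = tf.keras.models.load_model("bullying_model.keras")

@app.route("/predict", methods=["POST"])
def predict():
    tweet = request.get_json()["tweet"]  # expects JSON like {"tweet": "..."}
    score = float(model.predict(tf.constant([tweet]), verbose=0)[0][0])
    return jsonify({"tweet": tweet, "bullying": score > 0.5, "score": score})

if __name__ == "__main__":
    app.run(port=5000)
```

A client (a school dashboard, a parent app, or a moderation tool) would POST a tweet to /predict and get back a bullying probability it can act on.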