
TWITTER DATA ANALYSIS

CHAPTER 1

INTRODUCTION
The world today is increasingly being shaped by social media platforms, with Twitter being
one of the most prominent among them. With over 330 million monthly active users, Twitter
generates an enormous amount of data every day, consisting of tweets, likes, retweets, and
other interactions. This data can provide valuable insights into user behavior, public sentiment,
and emerging trends. However, analyzing such massive and complex data can be a challenging
task.

Python, a popular programming language in the data science community, provides powerful
tools for data analysis and visualization. Its extensive range of libraries, including NumPy,
Pandas, and Matplotlib, makes it an ideal choice for analyzing large Twitter datasets. In this
project, we will explore the process of analyzing Twitter data using Python. We will use various
analytical techniques such as sentiment analysis, topic modeling, and network analysis to
extract valuable insights from the data.

The project will begin with accessing and extracting data using the Twitter API, followed by
preprocessing the data to clean up irrelevant information and format it for analysis. We will
then explore various analytical techniques to uncover insights from the data. Sentiment analysis
will help us determine the general public sentiment about a particular topic, while topic
modeling will help identify the most common topics being discussed. Network analysis will
help us identify influential users and communities on Twitter.
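
To make the network-analysis step concrete, here is a minimal sketch using NetworkX that builds a small retweet graph and ranks users by degree centrality; the edge list is invented for the example.

import networkx as nx

# Each pair means "user A retweeted user B" (invented example data)
retweet_edges = [("alice", "bob"), ("carol", "bob"), ("dave", "bob"), ("alice", "carol")]

G = nx.DiGraph()
G.add_edges_from(retweet_edges)

# Users with the highest degree centrality are the most connected, i.e. influential, here
centrality = nx.degree_centrality(G)
for user, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True):
    print(user, round(score, 2))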

By the end of the project, we will have a better understanding of how to effectively analyze
and visualize Twitter data using Python. This will enable us to extract valuable insights from
social media platforms and understand the public sentiment towards a particular topic or brand.
The skills and techniques learned in this project can be applied to a wide range of real-world
scenarios, including business and marketing strategies, political campaigns, and social issues
analysis.

1.1 Introduction To Python Libraries:


Python is a popular programming language that is widely used for data analysis and machine
learning. One of the strengths of Python is its rich ecosystem of libraries that make it easier to
work with data. When it comes to analysing social media data, Twitter is one of the most
popular platforms, and Python has a variety of libraries that can be used for this purpose.


Some of the popular Python libraries for Twitter data analysis include Tweepy, a library for
accessing the Twitter API; TextBlob, a library for natural language processing; NetworkX, a
library for network analysis; and Pandas, a library for data manipulation and analysis. These
libraries make it easier to collect and analyse Twitter data, including tweets, followers,
retweets, and more.
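
As a brief, hedged illustration of data collection with Tweepy, the sketch below searches recent tweets through the Twitter API v2 client; the bearer token is a placeholder, and which endpoints are actually available depends on your API access level.

import tweepy

# Placeholder credential; a valid Twitter/X API bearer token is required
client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

# Fetch up to 10 recent English tweets mentioning "python", excluding retweets
response = client.search_recent_tweets(query="python lang:en -is:retweet", max_results=10)
for tweet in response.data or []:
    print(tweet.text)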

Python libraries for Twitter data analysis can help users gain insights into their social media
presence, identify trends, track sentiment, and monitor their brand reputation. With the right
tools and techniques, Python can be a powerful platform for analyzing social media data and
gaining valuable insights that can inform marketing strategies and improve business
performance.

1.2 Introduction to NumPy:


NumPy is a powerful Python library that is commonly used for scientific computing and data
analysis. When it comes to analyzing Twitter data, NumPy provides a range of tools and
functionalities that can be used to manipulate and analyze large sets of data.

NumPy's main functionality is the ability to work with multi-dimensional arrays and matrices,
which are essential for data analysis. This library allows for efficient data manipulation and
computation, which is particularly useful when working with large Twitter datasets.

NumPy can be used to perform a variety of operations on Twitter data, such as filtering, sorting,
and aggregating data. Additionally, it provides a range of mathematical and statistical functions
that can be used to analyze the data.
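
As a small illustration, the sketch below applies these operations to an array of retweet counts; the numbers are invented for the example.

import numpy as np

retweets = np.array([3, 120, 0, 45, 7, 310, 12])   # made-up retweet counts

popular = retweets[retweets > 50]        # filtering
print(np.sort(retweets))                 # sorting
print(retweets.mean(), retweets.max())   # aggregation and statistics
print(popular)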

NumPy is also widely used in conjunction with other Python libraries for data analysis, such
as Pandas and Matplotlib. Together, these libraries provide a powerful suite of tools for
working with Twitter data and gaining valuable insights.

Overall, NumPy is an essential tool for anyone working with large datasets, including Twitter
data. Its efficient data manipulation capabilities and broad range of functions make it an
indispensable part of any data analysis project.

1.3 Introduction to Pandas:

Pandas is a widely used Python library for data manipulation and analysis. When it comes to
analysing Twitter data, Pandas is particularly useful for handling large datasets and performing
complex data transformations.


Pandas allows for easy manipulation of tabular data, such as Twitter data in the form of CSV
or Excel files. It provides a range of tools for filtering, merging, sorting, and transforming data,
making it easier to extract insights from the data.

With Pandas, it is also possible to perform advanced data analysis techniques, such as grouping
data by category or applying statistical functions to the data. Additionally, Pandas can be used
for data visualization, allowing for easy creation of charts and graphs.
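
A minimal sketch of such a workflow is shown below; the file name tweets.csv and the user, text and likes columns are illustrative assumptions.

import pandas as pd

# Illustrative file and column names
df = pd.read_csv("tweets.csv")

top_tweets = df.sort_values("likes", ascending=False).head(10)   # sorting and filtering
likes_per_user = df.groupby("user")["likes"].mean()              # grouping plus statistics
print(top_tweets[["user", "text", "likes"]])
print(likes_per_user.describe())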

Pandas is often used in conjunction with other Python libraries, such as NumPy and Matplotlib,
to create a powerful data analysis pipeline. Together, these libraries provide a comprehensive
suite of tools for working with Twitter data.

Overall, Pandas is an essential tool for anyone working with large datasets, including Twitter
data. Its flexibility and powerful capabilities make it an indispensable part of any data analysis
project.

1.4 Introduction to Matplotlib:

Matplotlib is a popular Python library for data visualization. When it comes to analysing
Twitter data, Matplotlib can be used to create various types of charts and graphs to better
understand the data.

With Matplotlib, it is possible to create scatter plots, line charts, bar charts, histograms, and
more. These visualizations can be used to gain insights into various aspects of Twitter data,
such as the number of followers, the frequency of tweets, or the sentiment of tweets.
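
For instance, a bar chart of tweet frequency per day takes only a few lines; the counts below are invented for the example.

import matplotlib.pyplot as plt

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
tweet_counts = [120, 95, 140, 80, 160]   # made-up tweet frequencies

plt.bar(days, tweet_counts, color="steelblue")
plt.xlabel("Day of week")
plt.ylabel("Number of tweets")
plt.title("Tweet frequency per day")
plt.show()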

Matplotlib provides a range of customization options for creating visually appealing and
informative visualizations. It allows for customization of colours, labels, axes, and more,
making it possible to create professional-quality visualizations that communicate insights
effectively.

Matplotlib is often used in conjunction with other Python libraries, such as Pandas and NumPy,
to create a comprehensive data analysis pipeline. Together, these libraries provide a powerful
suite of tools for working with Twitter data and gaining valuable insights.

Overall, Matplotlib is an essential tool for anyone working with Twitter data or any other type
of data that requires visualization. Its powerful capabilities and customization options make it
an indispensable part of any data analysis project.


CHAPTER 2

LITERATURE SURVEY

Twitter data analysis has gained popularity in recent years due to the large volume of data
generated by Twitter users. In this literature survey, I will discuss some of the important studies
on Twitter data analysis.
1. Twitter Sentiment Analysis: A Review - This study, published in 2018, reviews the existing
research on sentiment analysis of Twitter data. The study concludes that Twitter sentiment
analysis can be used in a variety of fields such as politics, business, and healthcare.
2. Twitter Data Analytics: Methods, Tools, and Applications - This book, published in 2019,
provides an overview of methods, tools, and applications for Twitter data analytics. It covers
topics such as data collection, preprocessing, visualization, and machine learning.
3. Analyzing Twitter Data with Python: A Comprehensive Guide to Extracting, Processing,
Analyzing, and Visualizing Twitter Data - This book, published in 2019, provides a
comprehensive guide to analyzing Twitter data using Python. It covers topics such as data
collection, preprocessing, sentiment analysis, and network analysis.
4. A survey of Twitter data analysis using computational intelligence - This study, published in
2020, reviews the existing research on Twitter data analysis using computational intelligence
techniques such as machine learning and natural language processing. The study concludes that
these techniques can be used to extract valuable insights from Twitter data.
5. Twitter Data Analysis: An Overview - This study, published in 2020, provides an overview of
Twitter data analysis techniques. The study covers topics such as data collection, preprocessing,
sentiment analysis, and network analysis.
Overall, these studies demonstrate the importance of Twitter data analysis in various fields and
provide a comprehensive guide to analyzing Twitter data using different techniques.


CHAPTER 3

REQUIREMENT ANALYSIS
3.1 SOFTWARE AND HARDWARE SPECIFICATION
Software Requirement
• Python
• Python IDLE
➢ Software Libraries Required
• Pandas
• NumPy
• Matplotlib
• Seaborn

Hardware Requirement

• PC with a minimum of 4 GB RAM and an 80 GB HDD

3.2 SOFTWARE SPECIFICATION

Python
Python is an interpreted high-level general-purpose programming language. Python's design
philosophy emphasizes code readability with its notable use of significant indentation. Its
language constructs as well as its object-oriented approach aim to help programmers write
clear, logical code for small and large-scale projects.

Python is dynamically typed and garbage-collected. It supports multiple programming
paradigms, including structured (particularly procedural), object-oriented and functional
programming. Python is often described as a "batteries included" language due to its
comprehensive standard library.

3.3 Existing System:

• Limited customization and flexibility in Twitter data analysis


• Lack of industry-specific insights and customized analysis
• High cost of use for existing tools
• Limited data analysis customization options
• Limitations in the amount of data that can be analysed and the types of analysis that can be
performed


• Lack of flexibility in data analysis leading to incomplete or inaccurate insights


• Dependence on third-party tools and limitations in customization

3.4 Proposed System:

3.4.1 Functional Requirements:

• Data collection using Tweepy and the Twitter API
• Data cleaning using Pandas
• Data analysis using NumPy and Pandas
• Sentiment analysis using TextBlob
• Data visualization using Matplotlib

3.4.2 Non-Functional Requirements:

• Performance - the system should be able to handle large volumes of Twitter data and
perform analysis efficiently.
• Usability - the system should have a user-friendly interface that allows for easy data input,
analysis, and visualization.
• Reliability - the system should be reliable and accurate in data collection, cleaning, and
analysis.
• Security - the system should ensure the privacy and security of user data.


CHAPTER 4

SYSTEM DESIGN AND DEVELOPMENT

Designing and developing a system for Twitter data analysis involves several steps and
considerations. Here are some key steps that you can follow:

Define the problem and requirements: Start by defining the problem you want to solve and the
requirements of the system. For example, you may want to analyze Twitter data to understand
customer sentiment about a product or service, track the performance of a marketing campaign, or
identify influencers in a particular niche.

Collect and store data: The next step is to collect and store Twitter data. You can use the Twitter
API to collect real-time or historical data, for example through a client library such as Tweepy.
You will also need to choose a data storage solution such as a SQL database or a NoSQL database
like MongoDB or Cassandra.

Data pre-processing: Once you have collected the data, you will need to pre-process it. This
includes cleaning and filtering the data to remove noise and irrelevant information, such as retweets,
duplicate tweets, and spam.
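
As a brief illustration of this step with Pandas, the snippet below drops retweets and duplicate tweets; the text and is_retweet column names are assumptions for the example.

import pandas as pd

# Illustrative tweet table; the column names are assumptions
tweets = pd.DataFrame({
    "text": ["Great product!", "RT @user: Great product!", "Great product!", "Buy now!!!"],
    "is_retweet": [False, True, False, False],
})

cleaned = tweets[~tweets["is_retweet"]]            # drop retweets
cleaned = cleaned.drop_duplicates(subset="text")   # drop duplicate tweets
print(cleaned)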

Data analysis: After pre-processing, you can perform various types of data analysis, such as
sentiment analysis, topic modelling, network analysis, and trend analysis. You can use tools like
Python, R, or MATLAB for data analysis.
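
For example, sentiment analysis with TextBlob can be as simple as the sketch below; the polarity thresholds used to label tweets are an illustrative choice.

from textblob import TextBlob

def label_sentiment(text):
    polarity = TextBlob(text).sentiment.polarity  # value in [-1.0, 1.0]
    if polarity > 0:
        return "positive"
    if polarity < 0:
        return "negative"
    return "neutral"

print(label_sentiment("I love this new phone"))    # positive
print(label_sentiment("This update is terrible"))  # negative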

Visualization and reporting: Finally, you can visualize the results of your analysis and generate
reports to communicate your findings. You can use tools like Tableau, Power BI, or Matplotlib for
data visualization.

Some additional considerations for designing and developing a Twitter data analysis system include:

Scalability: Make sure that the system is scalable to handle large volumes of data and can
accommodate future growth.

Security: Ensure that the system is secure and complies with data privacy regulations.


Real-time analysis: If you need to perform real-time analysis, you will need to design a system that
can process data in real-time, such as using streaming data processing frameworks like Apache
Kafka or Apache Spark.

Collaboration: If you are working in a team, consider using collaborative tools like GitHub or
GitLab for version control and collaboration.

By following these steps and considerations, you can design and develop a robust system for Twitter
data analysis that can provide valuable insights for your business or research.

Use Case Diagram:

Fig 4.1: Use Case Diagram


Description: Use-case diagrams describe the high-level functions and scope of a system.


Data Flow Diagram:

Fig 4.2: Data Flow Diagram


Description: The data flow diagram describes the flow of the Twitter data through the system.


CHAPTER 5

IMPLEMENTATION
5.1 CODING:
# Imports required by the listing (not shown in the original snapshot)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from wordcloud import WordCloud
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Helpers implied by getCleanedText below: a word tokenizer, the English
# stopword list (run nltk.download("stopwords") once if needed) and a stemmer
tokenizer = RegexpTokenizer(r"\w+")
e_stopwords = set(stopwords.words("english"))
ps = PorterStemmer()

# Load the first 10,000 labelled tweets and drop rows with missing values
names = ["clean_text", "category"]
td = pd.read_csv("C:/Users/heman/python/Twitter_data.csv", names=names)[:10000]
td.dropna(axis=0, inplace=True)
tweets = td["clean_text"].tolist()
string = " ".join(tweets)

# Datatype info
td.info()

# Count tweets per sentiment label (1 = positive, 0 = neutral, -1 = negative)
positiveCount = int(td[td["category"] == 1].count().category)
neutralCount = int(td[td["category"] == 0].count().clean_text)
negativeCount = int(td[td["category"] == -1].count().clean_text)

# Pie chart of the sentiment distribution
y = np.array([positiveCount, neutralCount, negativeCount])
labels = ["positive", "neutral", "negative"]
plt.figure(figsize=(3, 3))
plt.pie(y, labels=labels, autopct="%1.1f%%", explode=[0.1, 0.1, 0.1],
        shadow=True, startangle=90)
plt.legend(title="Analysis")
plt.show()

# Word cloud of all tweet text
plt.figure(figsize=(10, 10))
plt.imshow(WordCloud().generate(string))
plt.axis("off")
plt.show()

# 80/20 train/test split
X_train, X_test, y_train, y_test = train_test_split(
    td.clean_text, td.category, random_state=104, train_size=0.8, shuffle=True)

def getCleanedText(text):
    # Lower-case, tokenize, drop stopwords and stem each token
    text = text.lower()
    tokens = tokenizer.tokenize(text)
    new_tokens = [tok for tok in tokens if tok not in e_stopwords]
    stemmed_tokens = [ps.stem(tok) for tok in new_tokens]
    return " ".join(stemmed_tokens)

X_clean = [getCleanedText(x) for x in X_train]
xt_clean = [getCleanedText(x) for x in X_test]

# n_pred and l_pred are the predictions of two classifiers that are not shown
# in this listing (a possible setup is sketched after this section); their
# confusion matrices are plotted as heat maps
cm1 = confusion_matrix(y_test, n_pred)
sns.heatmap(cm1, annot=True)
cm2 = confusion_matrix(y_test, l_pred)
sns.heatmap(cm2, annot=True)
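
The listing above uses n_pred and l_pred without showing the classifiers that produce them. A minimal sketch of one possible setup is given below; the bag-of-words features and the choice of a Multinomial Naive Bayes model for n_pred and a logistic regression model for l_pred are assumptions for illustration, not taken from the original code.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

# Bag-of-words features built from the cleaned tweets (assumed representation)
cv = CountVectorizer(ngram_range=(1, 2))
X_vec = cv.fit_transform(X_clean)
xt_vec = cv.transform(xt_clean)

# Naive Bayes classifier (assumed to be the source of n_pred)
nb = MultinomialNB()
nb.fit(X_vec, y_train)
n_pred = nb.predict(xt_vec)

# Logistic regression classifier (assumed to be the source of l_pred)
lr = LogisticRegression(max_iter=1000)
lr.fit(X_vec, y_train)
l_pred = lr.predict(xt_vec)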

5.2 SNAPSHOTS:
o Clean text of the first five tweets in the Twitter data

Fig 5.1: Clean text of the first five tweets in the Twitter data


o Analysis of positive, neutral and negative counts

Fig 5.2: Analysis of positive, neutral and negative counts

o Word count of twitter data

Fig 5.3: Word count of twitter data


o Classification report of twitter data

Fig 5.4: Classification report of twitter data

o Heat map of user data

Fig 5.5: Heat map of user data


o Heat map of post data

Fig 5.6: Heat map of post data

o Heat map of twitter data

Fig 5.7: Heat map of twitter data


CHAPTER 6

SOFTWARE TESTING

6.1 Testing

Software testing is an investigation conducted to provide stakeholders with information about the
quality of the product or service under test. Software testing can also provide an objective,
independent view of the software to allow the business to appreciate and understand the risks of
software implementation. Test techniques include the process of executing a program or
application with the intent of finding software bugs (errors or other defects). Software testing
involves the execution of a software component or system component to evaluate one or more
properties of interest. In general, these properties indicate the extent to which the component or
system under test:

• meets the requirements that guided its design and development,


• responds correctly to all kinds of inputs,
• performs its functions within an acceptable time,
• is sufficiently usable,
• can be installed and run in its intended environments, and
• achieves the general results its stakeholders desire.
As the number of possible tests for even simple software components is practically infinite, all
software testing uses some strategy to select tests that are feasible for the available time and
resources. As a result, software testing typically (but not exclusively) attempts to execute a program
or application with the intent of finding software bugs (errors or other defects). Testing is an
iterative process: when one bug is fixed, it can illuminate other, deeper bugs, or even create
new ones.

Software testing can provide objective, independent information about the quality of software and
risk of its failure to users and/or sponsors. Software testing can be conducted as soon as executable
software (even if partially complete) exists. The overall approach to software development often
determines when and how testing is conducted. For example, in a phased process, most testing
occurs after system requirements have been defined and then implemented in testable programs. In
contrast, under an Agile approach, requirements, programming, and testing are often done
concurrently.


THE MAIN AIM OF TESTING

The main aim of testing is to analyze the performance and to evaluate the errors that occur when
the program is executed with different input sources and running in different operating
environments.

In this project, we have developed Python code that collects Twitter data, cleans the tweet text,
and classifies tweet sentiment as positive, neutral, or negative. The main aim of testing this
project is to check that the data is cleaned correctly and that the sentiment counts, charts, and
classification results behave as expected when different datasets are given as inputs.

The testing steps are:

• Unit Testing.

• Integration Testing.

• Validation Testing.

• User Acceptance Testing.

• Output Testing.

UNIT TESTING:

Unit testing, also known as component testing, refers to tests that verify the functionality of a
specific section of code, usually at the function level. In an object-oriented environment, this is
usually at the class level, and the minimal unit tests include the constructors and destructors. Unit
testing is a software development process that involves synchronized application of a broad
spectrum of defect prevention and detection strategies in order to reduce software development
risks, time, and costs. The following Unit Testing Table shows the functions that were tested at the
time of programming. The first column gives all the modules which were tested, and the second
column gives the test results. Test results indicate if the functions, for given inputs are delivering
valid outputs.

Function Name                 Tests Results
Loading the dataset           Tested for loading the Twitter CSV file and dropping rows with missing values.
Cleaning tweet text           Tested the getCleanedText function with tweets containing mixed case, stopwords and punctuation.
Sentiment counts and plots    Tested that the positive, neutral and negative counts, the pie chart and the word cloud are produced.
Classification output         Tested that predictions are generated for the test set and that the results are displayed successfully.
Table: 6.1 Unit Testing
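
As one concrete example, the sketch below is a unit test for the getCleanedText function from Chapter 5, written for the pytest framework; the module name analysis is an assumption about where that code would live.

# test_cleaning.py -- run with: pytest test_cleaning.py
from analysis import getCleanedText  # assumed module name for the Chapter 5 code

def test_clean_text_lowercases_and_removes_stopwords():
    cleaned = getCleanedText("This IS a GREAT Day!!!")
    assert cleaned == cleaned.lower()       # output is lower-cased
    assert "is" not in cleaned.split()      # English stopwords are removed
    assert "!" not in cleaned               # punctuation is stripped by the tokenizer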

INTEGRATION TESTING:

Integration testing is any type of software testing that seeks to verify the interfaces between
components against a software design. Software components may be integrated in an iterative way
or all together ("big bang"). Normally the former is considered a better practice since it allows
interface issues to be located more quickly and fixed. Integration testing works to expose defects in
the interfaces and interaction between integrated components (modules). Progressively larger groups
of tested software components corresponding to elements of the architectural design are integrated
and tested until the software works as a system.

VALIDATION TESTING:

At the culmination of integration testing, the software is completely assembled as a package and
interfacing errors have been uncovered and corrected. Validation testing can be defined in many ways; here the
testing validates the software function in a manner that is reasonably expected by the customer. In
software project management, software testing, and software engineering, verification and validation
(V&V) is the process of checking that a software system meets specifications and that it fulfills its
intended purpose. It may also be referred to as software quality control.

USER ACCEPTANCE TESTING

Performance of an acceptance test is actually the user’s show. User motivation and knowledge are
critical for the successful performance of the system. The above tests were conducted on the newly
designed system, which performed to the expectations. All the above testing strategies were carried
out using the following test case designs.


6.2 TEST CASES


o Data set collection test case:
Test Case 1
Name of Test      Data set collection
Input             Collection of the Twitter dataset
Expected output   CSV data format file
Actual output     CSV data format file
Result            Successful
Table 6.2 Data set Collection

o Training test case:


Test Case 2
Name of Test      Training
Input             Load the dataset from the folder, i.e. the CSV file
Expected output   Training on the CSV file to obtain the model file
Actual output     Training on the CSV file to obtain the model file
Result            Successful
Table 6.3 Training Test

o Prediction test case:


Test Case 3
Name of Test      Prediction of tweet sentiment
Input             Data input from the code
Expected output   Prediction of the sentiment and display of the output
Actual output     Prediction of the sentiment and display of the output
Result            Successful
Table 6.4 Prediction Test


CHAPTER 7

CONCLUSION

In conclusion, analyzing Twitter data using Python can provide valuable insights into user
behavior, sentiment, and trends. However, the first step in any Twitter data analysis project is
to clean the data to remove any noise or irrelevant information.
Cleaning the data involves removing duplicates, irrelevant columns, and handling missing
or incorrect data. It also involves removing stopwords, punctuation, and other noise from the
text data. Additionally, text normalization techniques such as stemming and lemmatization can
be applied to reduce the complexity of the data.
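A minimal sketch of stemming versus lemmatization with NLTK is shown below; the word list is invented, and the WordNet data must be downloaded once before the lemmatizer can run.

from nltk.stem import PorterStemmer, WordNetLemmatizer

# nltk.download("wordnet")  # required once for the lemmatizer

ps = PorterStemmer()
lemmatizer = WordNetLemmatizer()

words = ["running", "flies", "better", "tweets"]
print([ps.stem(w) for w in words])                 # ['run', 'fli', 'better', 'tweet']
print([lemmatizer.lemmatize(w) for w in words])    # ['running', 'fly', 'better', 'tweet']
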
Overall, cleaning the Twitter data is a critical step in the analysis process as it ensures that
the data is accurate and relevant for further analysis. By using Python's powerful data cleaning
and manipulation libraries, such as Pandas and NLTK, the cleaning process can be automated,
saving time and increasing efficiency.


CHAPTER 8
FUTURE ENHANCEMENT
❖ Use regular expressions: Regular expressions (regex) are a powerful tool for cleaning textual
data. You can use them to identify and remove URLs, mentions, hashtags, emojis, and other
special characters in tweets (a short sketch follows this list).
❖ Handle missing data: Twitter data may contain missing values or null values. You can handle
these missing data by either dropping the rows or filling them with appropriate values.
❖ Remove duplicates: Twitter data can contain duplicate tweets due to retweets or multiple
users posting the same content. Removing duplicates can improve the accuracy of your
analysis.
❖ Use advanced natural language processing techniques: Advanced natural language
processing (NLP) techniques such as sentiment analysis, topic modeling, and named entity
recognition can provide more insights into Twitter data. You can use these techniques to
analyze the sentiment of tweets, identify trending topics, and extract entities such as people,
organizations, and locations.
❖ Implement data validation: It is important to validate the data before analyzing it. You can
implement data validation techniques to check for outliers, inconsistencies, and errors in the
data.
❖ Handle encoding issues: Twitter data may contain encoding issues due to the use of non-
ASCII characters. You can handle these issues by encoding the data in the appropriate format.
❖ Normalize the data: Twitter data may contain variations in spelling, capitalization, and
punctuation. You can normalize the data by converting the text to lowercase, removing
punctuation, and correcting spelling errors.
❖ Implement automated cleaning: You can automate the cleaning process by creating a pipeline
that performs all the necessary cleaning steps. This can save time and ensure consistency in
your analysis.
❖ Use machine learning algorithms: You can use machine learning algorithms such as
clustering, classification, and regression to analyze Twitter data. These algorithms can
provide more accurate insights and predictions.
❖ Visualize the data: You can use data visualization tools such as matplotlib, seaborn, and
plotly to create visualizations of the cleaned data. This can help you to identify patterns and
trends in the data more easily.
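
As noted in the first point above, regular expressions can strip most tweet noise; the sketch below is a minimal illustration, and the exact patterns are a design choice rather than a fixed recipe.

import re

def clean_tweet(text):
    # Remove URLs, mentions, hashtags and other special characters from a tweet
    text = re.sub(r"http\S+|www\.\S+", "", text)   # URLs
    text = re.sub(r"@\w+", "", text)               # mentions
    text = re.sub(r"#\w+", "", text)               # hashtags
    text = re.sub(r"[^A-Za-z0-9\s]", "", text)     # emojis and special characters
    return re.sub(r"\s+", " ", text).strip()       # collapse whitespace

print(clean_tweet("Loving #Python for data analysis! @user https://t.co/xyz"))
# -> "Loving for data analysis"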


BIBLIOGRAPHY

Referred Books and Papers:

• Bramer, M. (2017). Principles of Data Mining (Third Edition). Springer. This book covers the
  fundamentals of data mining, including the different types of data, data preprocessing, and
  techniques for data analysis, with practical examples of data mining in Python.
• Kulkarni, A., & Nagaraj, R. (2019). Sentiment Analysis of Twitter Data Using Machine Learning
  Techniques. International Journal of Engineering and Advanced Technology (IJEAT), 9(1), 231-235.
  This paper discusses the use of machine learning techniques for sentiment analysis of Twitter
  data, including a detailed methodology for data collection, preprocessing, feature extraction,
  and classification, along with the evaluation of the results.
• Liew, J., & Li, J. (2018). Text Mining and Analysis: Practical Methods, Examples, and Case
  Studies Using SAS. SAS Institute. This book provides an overview of text mining and analysis
  techniques using SAS, many of which can also be applied in Python; it covers text preprocessing,
  sentiment analysis, topic modeling, and social network analysis.
• Lin, C. T., & Lee, C. Y. (2017). Social media analytics: A survey of techniques, tools and
  applications. Journal of Big Data, 4(1), 24. This paper provides an overview of social media
  analytics techniques, tools, and applications, including Twitter data analysis, with a comparison
  of methods for data collection, preprocessing, analysis, and visualization.
• Rizwan, M., Sadiq, S., Khan, M. F., & Zaman, T. (2020). Twitter Sentiment Analysis using Python.
  Journal of Information Processing Systems, 16(4), 999-1014.
