D6 BATCH (2) Template For Mini Project Documentation
CHAPTER 1
INTRODUCTION
In the dynamic landscape of the tourism industry, understanding and harnessing
insights from tourist behavior have become imperative for destinations and
businesses aiming to provide personalized and enriching experiences.
Recognizing this evolving need, our project, titled "Analyzing Tourist Behavior
using Big Data Technology," endeavors to employ advanced data analytics to
unravel the intricate patterns and preferences that define tourist activities. By
leveraging the power of big data technologies, we aim to revolutionize the way
we glean insights from vast and diverse datasets, ultimately contributing to the
enhancement of tourism strategies and services.
Methodology:
Our approach involves clustering tourists with similar interests by applying the
K-Means algorithm to a geo-tagged image dataset. This clustering aids in
grouping users who exhibit comparable behaviors, forming the foundation for
personalized recommendations. When a user submits a query, the clustering
algorithm predicts the relevant cluster, and the system suggests the top 5
popular destinations within that cluster as recommendations.
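The clustering-and-recommendation flow described above can be sketched in plain Python. Everything below is illustrative: the coordinates stand in for photo geo-tags, the destination names and the query point are invented, and a production system would run K-Means at scale over the full dataset rather than this hand-rolled loop.

```python
import random
from collections import Counter

def kmeans(points, k, iters=20, seed=0):
    """Plain K-Means on 2-D points (here, photo geo-tags): centroids + labels."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest centroid.
        labels = [min(range(k),
                      key=lambda c: (p[0] - centroids[c][0]) ** 2
                                  + (p[1] - centroids[c][1]) ** 2)
                  for p in points]
        # Update step: each centroid moves to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, labels

def recommend(query_point, centroids, labels, destinations, top_n=5):
    """Predict the query's cluster, then return its most-visited destinations."""
    k = len(centroids)
    cluster = min(range(k),
                  key=lambda c: (query_point[0] - centroids[c][0]) ** 2
                              + (query_point[1] - centroids[c][1]) ** 2)
    counts = Counter(d for d, l in zip(destinations, labels) if l == cluster)
    return [dest for dest, _ in counts.most_common(top_n)]
```

Given one (latitude, longitude) pair and one destination name per photo, `kmeans` groups the photos, and `recommend` answers a user query with the most popular destinations of the predicted cluster.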
To enhance the model's interpretability, we employ a Random Forest Classifier
trained on user features with the cluster labels as targets. Notably, we use the
SHAP (SHapley Additive exPlanations) framework for model explanation,
replacing LIME. SHAP offers insights into feature contributions, revealing
which aspects play a crucial role in predicting a particular label. This
transparency aids in building user trust and in understanding the model's
decision-making process.
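SHAP's feature attributions are Shapley values from cooperative game theory. For a handful of features they can be computed exactly by enumerating feature coalitions, which the sketch below does. The feature names and the toy additive scoring function are invented for illustration; in the actual pipeline one would instead pass the trained Random Forest to the `shap` library (e.g. its TreeExplainer).

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley value of each feature for value_fn(coalition) -> score."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                # Marginal contribution of f to this coalition.
                total += weight * (value_fn(set(coalition) | {f})
                                   - value_fn(set(coalition)))
        phi[f] = total
    return phi
```

For an additive scoring function each feature's Shapley value reduces to its own coefficient, which makes the sketch easy to sanity-check; the attributions always sum to the difference between the full model's output and the empty coalition's output.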
CHAPTER 2
LITERATURE SURVEY
CHAPTER 3
SYSTEM ANALYSIS
Existing System:
The current methodologies for analyzing tourist behavior often grapple with the
challenges posed by the sheer volume, velocity, and variety of data generated in
the tourism sector. Conventional analytics tools may struggle to keep pace with
the dynamic nature of tourist preferences, leading to limitations in accurately
capturing the nuances of their behavior. The existing systems may not fully
exploit the potential of big data technologies, missing opportunities to provide
deeper and more nuanced insights. Our project seeks to address these limitations
by bridging the gap between traditional analytics and cutting-edge
big-data-driven approaches, unlocking a more comprehensive understanding of
tourist behavior.
Proposed System:
Our proposed system aims to usher in a new era of tourist behavior analysis,
underpinned by the capabilities of big data technology. By leveraging robust
frameworks such as Apache Hadoop or Apache Spark, we seek to process and
analyze vast datasets comprising social media interactions, transaction records,
and geospatial data. Advanced analytics and machine learning algorithms will
be deployed to identify patterns, predict trends, and uncover valuable insights
that can inform decision-making in the tourism sector.
To ensure scalability and real-time adaptability, cloud-based solutions will be
explored, providing a flexible infrastructure for seamless integration and
analysis. The envisioned system not only promises a more nuanced
understanding of tourist behavior but also sets the stage for the development of
personalized and data-driven strategies that can elevate the overall tourism
experience. In essence, this project aspires to redefine the landscape of tourist
behavior analysis, showcasing the transformative potential of big data
technologies in shaping the future of the tourism industry.
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will
have on the organization. The amount of funds that the company can pour into
the research and development of the system is limited, so the expenditures
must be justified. The developed system is thus well within the budget, and
this was achieved because most of the technologies used are freely available.
Only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the
technical requirements of the system. Any system developed must not place a
high demand on the available technical resources, as this would lead to high
demands being placed on the client. The developed system must therefore have
modest requirements; only minimal or no changes are required for implementing
this system.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the
user. This includes the process of training the user to use the system
efficiently. The user must not feel threatened by the system, but must instead
accept it as a necessity. The level of acceptance by the users depends solely
on the methods that are employed to educate the users about the system and to
make them familiar with it. Their level of confidence must be raised so that
they are also able to offer some constructive criticism, which is welcomed, as
they are the final users of the system.
CHAPTER 4
SYSTEM DESIGN
MODULES
Tensorflow
TensorFlow is a free and open-source software library for dataflow and
differentiable programming across a range of tasks. It is a symbolic math
library, and is also used for machine learning applications such as neural
networks. It is used for both research and production at Google.
TensorFlow was developed by the Google Brain team for internal Google use. It
was released under the Apache 2.0 open-source license on November 9, 2015.
Numpy :
NumPy is a general-purpose array-processing package. It provides a
high-performance multidimensional array object, and tools for working with
these arrays.
It is the fundamental package for scientific computing with Python. It contains
various features including these important ones:
A powerful N-dimensional array object
Sophisticated (broadcasting) functions
Tools for integrating C/C++ and Fortran code
Useful linear algebra, Fourier transform, and random number capabilities
Besides its obvious scientific uses, Numpy can also be used as an efficient
multi-dimensional container of generic data. Arbitrary data-types can be defined
using Numpy which allows Numpy to seamlessly and speedily integrate with a
wide variety of databases.
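The N-dimensional array object and broadcasting mentioned above can be shown in a few lines; the coordinate values below are invented stand-ins for photo geo-tags.

```python
import numpy as np

# A 3x2 array: one (latitude, longitude) row per geo-tagged photo.
coords = np.array([[17.4, 78.5],
                   [28.6, 77.2],
                   [19.1, 72.9]])

# Broadcasting: the 1-D row of column means is "stretched" across all three
# rows, centring every coordinate pair in a single vectorised expression.
centred = coords - coords.mean(axis=0)
```

After centring, each column of `centred` sums to zero, which is a quick way to confirm the broadcast did what was intended.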
Pandas
Pandas is an open-source Python library providing high-performance data
manipulation and analysis tools built on its powerful data structures. Before
Pandas, Python was mainly used for data munging and preparation and
contributed very little to data analysis itself; Pandas solved this problem.
Using Pandas, we can accomplish five typical steps in the processing and
analysis of data, regardless of the origin of the data: load, prepare,
manipulate, model, and analyze. Python with Pandas is used in a wide range of
academic and commercial domains, including finance, economics, statistics,
analytics, etc.
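The load-prepare-manipulate-analyze flow above can be sketched with a tiny invented spending table; a real pipeline would load check-in or transaction records with `pd.read_csv` or similar.

```python
import pandas as pd

# Load: an inline frame stands in for reading a transactions file.
df = pd.DataFrame({
    "destination": ["Goa", "Goa", "Agra", "Agra", "Agra"],
    "spend":       [120.0, 80.0, 60.0, None, 90.0],
})

# Prepare: fill the missing spend value with the column mean (87.5 here).
df["spend"] = df["spend"].fillna(df["spend"].mean())

# Manipulate / analyze: average spend per destination.
summary = df.groupby("destination")["spend"].mean()
```

`summary` is a Series indexed by destination, ready for further analysis or plotting.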
Matplotlib
Matplotlib is a Python 2D plotting library which produces publication quality
figures in a variety of hardcopy formats and interactive environments across
platforms. Matplotlib can be used in Python scripts, the Python and IPython
shells, the Jupyter Notebook, web application servers, and four graphical user
interface toolkits. Matplotlib tries to make easy things easy and hard things
possible. You can generate plots, histograms, power spectra, bar charts, error
charts, scatter plots, etc., with just a few lines of code. For examples, see the
sample plots and thumbnail gallery.
For simple plotting, the pyplot module provides a MATLAB-like interface,
particularly when combined with IPython. For the power user, you have full
control of line styles, font properties, axes properties, etc., via an
object-oriented interface or via a set of functions familiar to MATLAB users.
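A "few lines of code" plot of the kind described above might look like this; the monthly arrival counts are invented, and the Agg backend is selected so the script also runs headless, without a display.

```python
import matplotlib
matplotlib.use("Agg")              # non-interactive backend: no display needed
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
arrivals = [1200, 900, 1500, 2100]  # illustrative visitor counts

fig, ax = plt.subplots()
ax.bar(months, arrivals)            # one bar (patch) per month
ax.set_xlabel("Month")
ax.set_ylabel("Tourist arrivals")
ax.set_title("Arrivals per month")
fig.savefig("arrivals.png")         # any hardcopy format matplotlib supports
```

Swapping `ax.bar` for `ax.plot`, `ax.hist`, or `ax.scatter` yields the other chart types listed above with the same object-oriented interface.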
Scikit – learn
Scikit-learn provides a range of supervised and unsupervised learning
algorithms via a consistent interface in Python. It is licensed under a
permissive simplified BSD license and is distributed in many Linux
distributions, encouraging academic and commercial use.
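The "consistent interface" is scikit-learn's estimator API: every model is fitted with `fit` and queried with `predict`, so algorithms can be swapped freely. The toy one-dimensional data below is invented to show two very different classifiers behind one identical interface.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Toy, linearly separable data: small x -> class 0, large x -> class 1.
X = [[0.0], [1.0], [10.0], [11.0]]
y = [0, 0, 1, 1]

# Two very different algorithms, one identical interface: fit, then predict.
models = [LogisticRegression(),
          RandomForestClassifier(n_estimators=50, random_state=0)]
fitted = [m.fit(X, y) for m in models]
predictions = [list(m.predict([[0.5], [10.5]])) for m in fitted]
```

Because both estimators expose the same methods, the surrounding pipeline code does not change when the algorithm does.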
UML DIAGRAMS :
UML stands for Unified Modeling Language. UML is a standardized general-
purpose modeling language in the field of object-oriented software engineering.
The standard is managed, and was created by, the Object Management Group.
The goal is for UML to become a common language for creating models of
object-oriented computer software. In its current form, UML comprises two
major components: a meta-model and a notation. In the future, some form of
method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying,
visualizing, constructing, and documenting the artifacts of software systems,
as well as for business modeling and other non-software systems.
The UML represents a collection of best engineering practices that have proven
successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and
the software development process. The UML uses mostly graphical notations to
express the design of software projects.
GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling language so
that they can develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core
concepts.
3. Be independent of particular programming languages and development
processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher level development concepts such as collaborations,
frameworks, patterns and components.
7. Integrate best practices.
CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language
(UML) is a type of static structure diagram that describes the structure of a
system by showing the system's classes, their attributes, operations (or
methods), and the relationships among the classes. It shows which class holds
which information.
SEQUENCE DIAGRAM:
A sequence diagram in Unified Modeling Language (UML) is a kind of
interaction diagram that shows how processes operate with one another and in
what order. It is a construct of a Message Sequence Chart. Sequence diagrams
are sometimes called event diagrams, event scenarios, and timing diagrams.
ACTIVITY DIAGRAM:
Activity diagrams are graphical representations of workflows of stepwise
activities and actions with support for choice, iteration and concurrency. In the
Unified Modeling Language, activity diagrams can be used to describe the
business and operational step-by-step workflows of components in a system. An
activity diagram shows the overall flow of control.
CHAPTER 5
SYSTEM IMPLEMENTATION
SOFTWARE ENVIRONMENT :
What is Python :
Below are some facts about Python.
Python is currently one of the most widely used multi-purpose, high-level
programming languages.
Python allows programming in Object-Oriented and Procedural paradigms.
Python programs are generally shorter than equivalent programs in other
languages like Java.
Programmers have to type relatively little, and the indentation requirement
of the language keeps the code readable at all times.
Advantages of Python :-
Let’s see how Python dominates over other languages.
1. Extensive Libraries
Python ships with an extensive library containing code for various purposes
like regular expressions, documentation generation, unit testing, web
browsers, threading, databases, CGI, email, image manipulation, and more. So,
we don't have to write the complete code for those manually.
2. Extensible
As we have seen earlier, Python can be extended with other languages. You can
write some of your code in languages like C++ or C. This comes in handy,
especially in performance-critical parts of projects.
3. Embeddable
Complementary to extensibility, Python is embeddable as well. You can put your
Python code in the source code of a different language, like C++. This lets us
add scripting capabilities to our code in the other language.
4. Improved Productivity
When working with Java, you may have to create a class to print ‘Hello
World’. But in Python, just a print statement will do. It is also quite easy to
learn, understand, and code. This is why when people pick up Python, they
have a hard time adjusting to other more verbose languages like Java.
7. Readable
Because it is not such a verbose language, reading Python is much like reading
English. This is the reason why it is so easy to learn, understand, and code. It
also does not need curly braces to define blocks, and indentation is
mandatory. This further aids the readability of the code.
8. Object-Oriented
This language supports both the procedural and object-oriented programming
paradigms. While functions help us with code reusability, classes and objects let
us model the real world. A class allows the encapsulation of data and functions
into one.
10. Portable
When you code a project in a language like C++, you may need to make some
changes to it to run it on another platform. But it is not the same with
Python. Here, you need to code only once, and you can run it anywhere. This is
called Write Once Run Anywhere (WORA). However, you need to be careful enough
not to include any system-dependent features.
11. Interpreted
Lastly, we will say that it is an interpreted language. Since statements are
executed one by one, debugging is easier than in compiled languages.
Advantages of Python Over Other Languages :
1. Less Coding
Almost any task requires less code in Python than it does in other languages.
Python also has awesome standard library support, so you don't have to search
for third-party libraries to get your job done. This is the reason many people
suggest that beginners learn Python.
2. Affordable
Python is free, so individuals, small companies, and big organizations alike
can leverage the freely available resources to build applications. Python is
popular and widely used, so it gives you better community support.
The 2019 GitHub annual survey showed us that Python has overtaken Java
in the most popular programming language category.
3. Python is for Everyone
Python code can run on any machine whether it is Linux, Mac or Windows.
Programmers need to learn different languages for different jobs but with
Python, you can professionally build web apps, perform data analysis and
machine learning, automate things, do web scraping and also build games and
powerful visualizations. It is an all-rounder programming language.
Disadvantages of Python
So far, we’ve seen why Python is a great choice for your project. But if you
choose it, you should be aware of its consequences as well. Let’s now see the
downsides of choosing Python over another language.
1. Speed Limitations
We have seen that Python code is executed line by line. But since Python is
interpreted, it often results in slow execution. This, however, isn't a
problem unless speed is a focal point for the project. In other words, unless
high speed is a requirement, the benefits offered by Python are enough to
outweigh its speed limitations.
2. Weak in Mobile Computing and Browsers
While it serves as an excellent server-side language, Python is rarely seen on
the client side. Besides that, it is rarely ever used to implement
smartphone-based applications; one such application is called Carbonnelle.
The reason it is not so famous despite the existence of Brython is that it
isn't that secure.
3. Design Restrictions
As you know, Python is dynamically-typed. This means that you don’t need to
declare the type of variable while writing the code. It uses duck-typing. But
wait, what’s that? Well, it just means that if it looks like a duck, it must be a
duck.
While this is easy on the programmers during coding, it can raise run-time
errors.
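Duck typing, and the run-time errors it can raise, can be shown in a few lines; the classes and function below are invented illustrations, not project code.

```python
class Duck:
    def quack(self):
        return "quack"

class Tourist:
    def quack(self):              # an unrelated class with the same method
        return "I can quack too"

def make_it_quack(thing):
    # No declared types: any object with a .quack() method is accepted.
    return thing.quack()

# The catch: make_it_quack(42) fails only when the call is actually reached
# at run time, raising AttributeError, rather than at "compile" time.
```

Both `Duck()` and `Tourist()` work with `make_it_quack` despite sharing no base class; an `int` does not, and the mistake surfaces only at run time.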
4. Underdeveloped Database Access Layers
Compared to more widely used technologies like JDBC (Java DataBase
Connectivity) and ODBC (Open DataBase Connectivity), Python’s database
access layers are a bit underdeveloped. Consequently, it is less often applied in
huge enterprises.
5. Simple
No, we're not kidding. Python's simplicity can indeed be a problem, because
its syntax is so simple that the verbosity of code in a more verbose language
like Java can seem unnecessary. This was all about the advantages and
disadvantages of the Python programming language.
History of Python : -
What do the alphabet and the programming language Python have in common?
Right, both start with ABC. If we are talking about ABC in the Python context,
it's clear that the programming language ABC is meant. ABC is a general-
purpose programming language and programming environment, which had been
developed in the Netherlands, in Amsterdam, at the CWI (Centrum Wiskunde &
Informatica). The greatest achievement of ABC was to influence the design of
Python. Python was conceptualized in the late 1980s. Guido van Rossum worked
at that time on a project at the CWI called Amoeba, a distributed operating
system. In an interview with Bill Venners, Guido van Rossum said: "In the
early 1980s, I worked as an implementer on a team building a language called
ABC at Centrum voor Wiskunde en Informatica (CWI). I don't know how well
people know ABC's influence on Python. I try to mention ABC's influence
because I'm indebted to everything I learned during that project and to the
people who worked on it." Later on in the same interview, Guido van Rossum
continued: "I remembered all my experience and some of my frustration with
ABC. I decided
to try to design a simple scripting language that possessed some of ABC's better
properties, but without its problems. So I started typing. I created a simple
virtual machine, a
simple parser, and a simple runtime. I made my own version of the various ABC
parts that I liked. I created a basic syntax, used indentation for statement
grouping instead of curly braces or begin-end blocks, and developed a small
number of powerful data types: a hash table (or dictionary, as we call it), a list,
strings, and numbers."
At the most fundamental level, machine learning can be categorized into two
main types: supervised learning and unsupervised learning. Supervised learning
involves somehow modeling the relationship between measured features of data
and some label associated with the data; once this model is determined, it can be
used to apply labels to new, unknown data. This is further subdivided into
classification tasks and regression tasks: in classification, the labels are discrete
categories, while in regression, the labels are continuous quantities.
We will see examples of both types of supervised learning in the following
section.
Unsupervised learning involves modeling the features of a dataset without
reference to any label, and is often described as "letting the dataset speak for
itself." These models include tasks such as clustering and dimensionality
reduction. Clustering algorithms identify distinct groups of data, while
dimensionality reduction algorithms search for more succinct representations of
the data. We will see examples of both types of unsupervised learning in the
following section.
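The two unsupervised tasks just named, clustering and dimensionality reduction, can be shown side by side with scikit-learn; the two-blob data below is synthetic, with the third dimension being pure noise.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Unlabeled 3-D points: two tight blobs far apart in the first two dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 3)),
               rng.normal(0.0, 0.1, (20, 3)) + [5.0, 5.0, 0.0]])

# Clustering: discover the two distinct groups without any labels.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Dimensionality reduction: a more succinct 2-D view of the same data.
X2 = PCA(n_components=2).fit_transform(X)
```

Neither call was given a label: the cluster assignments and the 2-D representation are both "the dataset speaking for itself."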
Need for Machine Learning
Human beings are, at this moment, the most intelligent and advanced species on
earth because they can think, evaluate, and solve complex problems. On the
other side, AI is still in its initial stages and hasn't surpassed human
intelligence in many aspects. The question, then, is what the need is to make
machines learn. The most suitable reason for doing this is "to make decisions,
based on data, with efficiency and scale".
Lately, organizations are investing heavily in newer technologies like Artificial
Intelligence, Machine Learning and Deep Learning to get the key information
from data to perform several real-world tasks and solve problems. We can call it
data-driven decisions taken by machines, particularly to automate the process.
These data-driven decisions can be used, instead of programming logic, in
problems that cannot be programmed inherently. The fact is that we can't do
without human intelligence, but the other aspect is that we all need to solve
real-world problems with efficiency at a huge scale. That is why the need for
machine learning arises.
Challenges in Machine Learning
Machine learning, however, still has a long way to go. The reason behind this
is that ML has not been able to overcome a number of challenges. The
challenges that ML is facing currently are −
Object recognition
Fraud detection
Fraud prevention
Recommendation of products to customers in online shopping.
How to Start Learning Machine Learning?
Arthur Samuel coined the term “Machine Learning” in 1959 and defined it as
a “Field of study that gives computers the capability to learn without being
explicitly programmed”.
And that was the beginning of Machine Learning! In modern times, machine
learning is one of the most popular (if not the most popular!) career choices.
According to Indeed, Machine Learning Engineer was the best job of 2019, with
344% growth and an average base salary of $146,085 per year.
But there is still a lot of doubt about what exactly machine learning is and
how to start learning it. So this section deals with the basics of machine
learning and also the path you can follow to eventually become a full-fledged
machine learning engineer. Now let's get started!
How to start learning ML?
This is a rough roadmap you can follow on your way to becoming a talented
Machine Learning Engineer. Of course, you can always modify the steps
according to your needs to reach your desired end goal!
Step 1 – Understand the Prerequisites: In case you are a genius, you could
start ML directly, but normally there are some prerequisites that you need to
know, which include Linear Algebra, Multivariate Calculus, Statistics, and
Python. And if you don't know these, never fear! You don't need a Ph.D. in
these topics to get started, but you do need a basic understanding.
(a) Learn Linear Algebra and Multivariate Calculus
Both Linear Algebra and Multivariate Calculus are important in Machine
Learning. However, the extent to which you need them depends on your role as
a data scientist. If you are more focused on application heavy machine learning,
then you will not be that heavily focused on maths as there are many common
libraries available. But if you want to focus on R&D in Machine Learning, then
mastery of Linear Algebra and Multivariate Calculus is very important as you
will have to implement many ML algorithms from scratch.
(b) Learn Statistics
Data plays a huge role in Machine Learning. In fact, around 80% of your time
as an ML expert will be spent collecting and cleaning data. And statistics is a
field that handles the collection, analysis, and presentation of data. So it is no
surprise that you need to learn it!!!
Some of the key concepts in statistics that are important are Statistical
Significance, Probability Distributions, Hypothesis Testing, Regression, etc.
Bayesian thinking is also a very important part of ML, dealing with concepts
like Conditional Probability, Priors and Posteriors, Maximum Likelihood, etc.
(c) Learn Python
Some people prefer to skip Linear Algebra, Multivariate Calculus and
Statistics and learn them as they go along, with trial and error. But the one
thing that you absolutely cannot skip is Python! While there are other
languages you can use for Machine Learning, like R, Scala, etc., Python is
currently the most popular language for ML. In fact, there are many Python
libraries that are specifically useful for Artificial Intelligence and
Machine Learning, such as Keras, TensorFlow, Scikit-learn, etc.
So if you want to learn ML, it’s best if you learn Python! You can do that using
various online resources and courses such as Fork Python available Free on
GeeksforGeeks.
Step 2 – Learn Various ML Concepts
Now that you are done with the prerequisites, you can move on to actually
learning ML (Which is the fun part!!!) It’s best to start with the basics and then
move on to the more complicated stuff. Some of the basic concepts in ML are:
(a) Terminologies of Machine Learning
Model – A model is a specific representation learned from data by
applying some machine learning algorithm. A model is also called a
hypothesis.
Feature – A feature is an individual measurable property of the data. A
set of numeric features can be conveniently described by a feature vector.
Feature vectors are fed as input to the model. For example, in order to
predict a fruit, there may be features like color, smell, taste, etc.
Target (Label) – A target variable or label is the value to be predicted by
our model. For the fruit example discussed in the feature section, the label
with each set of input would be the name of the fruit like apple, orange,
banana, etc.
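The three terms above map directly onto code. In this sketch of the fruit example, the numeric feature encodings are invented, and a simple decision tree stands in for whatever algorithm is actually chosen.

```python
from sklearn.tree import DecisionTreeClassifier

# Feature vectors: [colour, smell, taste] scores for each fruit sample.
X = [[1.0, 0.2, 0.9],   # apple-like
     [0.8, 0.3, 0.8],
     [0.1, 0.9, 0.3],   # orange-like
     [0.2, 0.8, 0.2]]

# Target (label): the value the model should learn to predict.
y = ["apple", "apple", "orange", "orange"]

# Model: the hypothesis learned from the (feature vector, label) pairs.
model = DecisionTreeClassifier(random_state=0).fit(X, y)
```

A new feature vector fed to `model.predict` yields a label, exactly as the definitions above describe.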
Python
Python is an interpreted high-level programming language for general-purpose
programming. Created by Guido van Rossum and first released in 1991, Python
has a design philosophy
that emphasizes code readability, notably using significant whitespace.
Python features a dynamic type system and automatic memory management. It
supports multiple programming paradigms, including object-oriented,
imperative, functional and procedural, and has a large and comprehensive
standard library.
Python is Interpreted − Python is processed at runtime by the interpreter.
You do not need to compile your program before executing it. This is
similar to PERL and PHP.
Python is Interactive − you can actually sit at a Python prompt and
interact with the interpreter directly to write your programs.
Python also acknowledges that speed of development is important. Readable and
terse code is part of this, and so is access to powerful constructs that avoid
tedious repetition of code. Maintainability also ties into this; it may be an
all-but-useless metric, but it does say something about how much code you have
to scan, read, and/or understand to troubleshoot problems or tweak behaviors.
This speed of development, the ease with which a programmer of other languages
can pick up basic Python skills, and the huge standard library are key to
another area where Python excels: all its tools have been quick to implement,
have saved a lot of time, and several of them have later been patched and
updated by people with no Python background, without breaking.
Modules Used in Project :-
The modules used in this project (TensorFlow, NumPy, Pandas, Matplotlib, and
Scikit-learn) are described in Chapter 4, and the Python language itself is
described above.
Download of Python
Step 1: Go to the official Python website and check for the latest and
correct version for your operating system.
Step 2: Click on the Download tab.
Step 3: You can either select the yellow Download Python 3.7.4 button or
scroll further down and click the download link for your required version.
Here, we are downloading the most recent Python version for Windows, 3.7.4.
Step 4: Scroll down the page until you find the Files option.
Step 5: Here you see a different version of python along with the operating
system.
• To download Windows 32-bit python, you can select any one from the three
options: Windows x86 embeddable zip file, Windows x86 executable installer
or Windows x86 web-based installer.
•To download Windows 64-bit python, you can select any one from the three
options: Windows x86-64 embeddable zip file, Windows x86-64 executable
installer or Windows x86-64 web-based installer.
Here we will install the Windows x86-64 web-based installer. This completes
the first part, deciding which version of Python to download. Now we move
ahead with the second part: installing Python.
Note: To know the changes or updates that are made in the version you can
click on the Release Note Option.
Installation of Python
Step 1: Go to Download and Open the downloaded python version to carry out
the installation process.
Step 2: Before you click on Install Now, Make sure to put a tick on Add Python
3.7 to PATH.
With the above steps of the Python installation, you have successfully and
correctly installed Python. Now it is time to verify the installation.
Note: The installation process might take a couple of minutes.
Verify the Python Installation
Step 1: Click on Start
Step 2: In the Windows Run Command, type “cmd”.
Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program
Step 4: To go ahead with working in IDLE, you must first save the file. Click
on File > Click on Save.
Step 5: Name the file, with the save-as type set to Python files, and click on
SAVE. Here the file has been named Hey World.
Step 6: Now, for example, enter a print statement and run it.
CHAPTER 6
TESTING
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the
internal program logic is functioning properly, and that program inputs produce
valid outputs. All decision branches and internal code flow should be validated.
It is the testing of individual software units of the application. It is done
after the completion of an individual unit and before integration. This is
structural testing that relies on knowledge of the unit's construction and is
invasive. Unit tests perform basic tests at component level and test a
specific business process, application, and/or system configuration. Unit
tests ensure that each unique path of a business process performs accurately
to the documented specifications and contains clearly defined inputs and
expected results.
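A minimal, self-contained illustration of these unit-testing ideas, using Python's standard unittest module; the function under test is an invented helper, not actual project code.

```python
import unittest

def top_destinations(counts, n=5):
    """Return the n most-visited destinations, most popular first."""
    if n < 0:
        raise ValueError("n must be non-negative")
    return [d for d, _ in sorted(counts.items(), key=lambda kv: -kv[1])][:n]

class TopDestinationsTest(unittest.TestCase):
    # One clearly defined input with one expected result per test case.
    def test_orders_by_popularity(self):
        counts = {"Agra": 3, "Goa": 7, "Jaipur": 5}
        self.assertEqual(top_destinations(counts, 2), ["Goa", "Jaipur"])

    def test_rejects_negative_n(self):
        with self.assertRaises(ValueError):
            top_destinations({}, -1)

# Run with:  python -m unittest <module_name>
```

Each test exercises one path through the unit, matching the idea that every unique path should have defined inputs and expected results.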
Integration testing
Integration tests are designed to test integrated software
components to determine if they actually run as one program. Testing is event
driven and is more concerned with the basic outcome of screens or fields.
Integration tests demonstrate that although the components were individually
satisfactory, as shown by successful unit testing, the combination of
components is correct and consistent. Integration testing is specifically aimed at
exposing the problems that arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions
tested are available as specified by the business and technical requirements,
system documentation, and user manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be
exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on
requirements, key functions, or special test cases. In addition, systematic
coverage pertaining to identified business process flows, data fields, predefined
processes, and successive processes must be considered for testing.
Unit Testing
Unit testing is usually conducted as part of a combined code and
unit test phase of the software lifecycle, although it is not uncommon for coding
and unit testing to be conducted as two distinct phases.
Test strategy and approach
Field testing will be performed manually and functional tests
will be written in detail.
Test objectives
All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.
Features to be tested
Verify that the entries are of the correct format.
No duplicate entries should be allowed.
All links should take the user to the correct page.
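The first two checks above can be sketched as plain assertions (the sample entries and the format rule below are illustrative stand-ins, not the project's real validation logic):

```python
# Illustrative entries; the assumed format rule is non-empty, trimmed,
# lowercase text (a stand-in for the project's real validation rules).
entries = ["london museum", "paris tower", "london museum"]

def is_correct_format(entry):
    return bool(entry) and entry == entry.strip().lower()

assert all(is_correct_format(e) for e in entries)   # entries are well formed
assert len(entries) != len(set(entries))            # a duplicate is present
unique_entries = list(dict.fromkeys(entries))       # drop duplicates, keep order
assert unique_entries == ["london museum", "paris tower"]
```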
Integration Testing
Software integration testing is the incremental integration testing
of two or more integrated software components on a single platform to produce
failures caused by interface defects.
The task of the integration test is to check that components or software
applications (e.g. components in a software system or, one step up, software
applications at the company level) interact without error.
Test Results: All the test cases mentioned above passed successfully. No
defects encountered.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires
significant participation by the end user. It also ensures that the system meets
the functional requirements.
CHAPTER 7
RESULTS
In this project we implement popular tourist place recommendation using the
big data framework Apache Spark. Spark processes data in a distributed
manner, so it can handle data of any size. In the proposed work we use a Geo-
Tagged images dataset to extract place information; this text data is then
clustered so that tourists with similar behaviour fall into the same cluster.
Whenever a new user enters a query, the clustering algorithm predicts the most
similar cluster and suggests the top 5 most-visited places of similar interest as
recommendations.
To cluster user behaviour we used the KMeans algorithm; the user features and
cluster labels are then trained with a Random Forest Classifier to explain the
model, and for the explanation we used the SHAP framework instead of LIME.
This explanation shows which features contribute most to predicting a
particular label.
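The clustering step can be illustrated with a tiny self-contained KMeans sketch in pure Python (toy two-feature points stand in for the TF-IDF vectors the project clusters with Spark; this is a sketch of the algorithm, not the project's implementation):

```python
import random

def kmeans(points, k, iters=20, seed=42):
    """Minimal KMeans for illustration; points are (x, y) tuples."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)

    def nearest(p):
        return min(range(k), key=lambda i: (p[0] - centroids[i][0]) ** 2
                                         + (p[1] - centroids[i][1]) ** 2)

    for _ in range(iters):
        # Assignment step: attach every point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[nearest(p)].append(p)
        # Update step: move each centroid to the mean of its members.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = (sum(m[0] for m in members) / len(members),
                                sum(m[1] for m in members) / len(members))
    return [nearest(p) for p in points]

# Two well-separated toy groups of "user feature" points.
points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
labels = kmeans(points, k=2)
```

After convergence the three points in each group share one cluster label, mirroring how tourists with similar behaviour end up in the same cluster.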
For this project we have used the tourism dataset below, which is Geo-Tagged
using Flickr.
In the dataset the first row contains the column names and the remaining rows
contain the data values; the columns include "Photo ID", "Description", "tags",
and "favourites" (the number of times visited), among others. Using this dataset
we cluster the records and recommend places for a new user.
SCREEN SHOTS
We have coded this project in a Jupyter notebook; below are the code and
output screens, with comments in blue.
In the above screen we load the tourism dataset using Spark; after loading we
get the output below.
In the above screen we read all the Geo-Tagged text data and convert the
cleaned text into numeric TF-IDF vectors. Each vector holds a weighted
frequency for every word in the vocabulary; if a word does not occur in a
record, its entry in the vector is 0. KMeans then performs clustering on these
vectors.
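As a pure-Python sketch of that vectorisation step (a toy corpus stands in for the cleaned Geo-Tagged text, and a standard tf-idf weighting stands in for the exact formula used in the project):

```python
import math

# Toy corpus standing in for the cleaned Geo-Tagged text records.
docs = ["london museum art", "london sculpture museum", "beach sunset goa"]

vocab = sorted({w for d in docs for w in d.split()})

def tfidf_vector(doc, docs, vocab):
    words = doc.split()
    n = len(docs)
    vec = []
    for term in vocab:
        tf = words.count(term) / len(words)              # term frequency
        df = sum(1 for d in docs if term in d.split())   # document frequency
        idf = math.log(n / df) if df else 0.0            # inverse doc frequency
        vec.append(tf * idf)                             # 0.0 when term absent
    return vec

vectors = [tfidf_vector(d, docs, vocab) for d in docs]
```

Each record becomes a fixed-length numeric vector over the vocabulary, which is the input KMeans needs for clustering.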
The above screen displays the TF-IDF vectors for a few rows of the dataset.
In the above screen we performed clustering on all the text data; in the last
column of the output we can see which row (user place) belongs to which
cluster. We created 10 clusters, so all rows are distributed among clusters 1
to 10.
In the above graph the x-axis represents place names and the y-axis the number
of visits; each dot marks the number of times a place was visited.
In the above screen we define a function that reads a user query and then
recommends similar places from the matching cluster, based on the user's
interest query.
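A hedged sketch of such a function (the records, clusters, and visit counts below are invented for illustration; the real system uses the cluster predicted by KMeans on the Spark dataset):

```python
# Illustrative records: each has a place, its tags, an assigned cluster,
# and a visit count (none of these values come from the real dataset).
records = [
    {"place": "British Museum", "tags": "london museum history", "cluster": 6, "visits": 900},
    {"place": "Tate Modern",    "tags": "london art sculpture",  "cluster": 6, "visits": 700},
    {"place": "Hyde Park",      "tags": "london park walk",      "cluster": 6, "visits": 650},
    {"place": "Louvre",         "tags": "paris museum art",      "cluster": 2, "visits": 980},
]

def recommend(query, records, top_n=5):
    q = set(query.lower().split())
    # Pick the cluster of the record with the largest word overlap
    # with the query, then rank that cluster's places by visit count.
    best = max(records, key=lambda r: len(q & set(r["tags"].split())))
    members = [r for r in records if r["cluster"] == best["cluster"]]
    return sorted(members, key=lambda r: r["visits"], reverse=True)[:top_n]

top = recommend("london sculpture museum", records)
```

For the query "london sculpture museum" this sketch returns the cluster-6 places ordered by visit count, mirroring the top-5 recommendation described above.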
In the above screen I entered the query 'london sculpture museum' in the text
box and pressed Enter to get the top 5 recommended places, shown in the
screen below.
In the above table we got the top 5 recommended places from cluster 6; the
blue text shows the number of users who visited those places. Similarly, you
can enter any query and get popular tourist places from its cluster. This output
can be read as a forecast of tourism places that will be in demand.
In the above screen we added the SHAP modelling tool to explain which
features the model relies on most for its predictions.
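The idea behind SHAP can be illustrated with an exact Shapley-value computation for a toy two-feature model (pure Python; a hedged sketch of the principle, not the shap library itself):

```python
def model(f1, f2):
    # Toy model: feature 1 is three times as influential as feature 2.
    return 3 * f1 + 1 * f2

baseline = (0, 0)   # reference values representing "feature absent"
x = (1, 1)          # the instance being explained

def value(subset):
    # Model output with features outside `subset` held at the baseline.
    args = [x[i] if i in subset else baseline[i] for i in range(2)]
    return model(*args)

def shapley(i):
    # Exact Shapley value for two features: the average of the feature's
    # marginal contribution over both possible orderings.
    other = 1 - i
    return 0.5 * ((value({i}) - value(set()))
                  + (value({i, other}) - value({other})))

phi = [shapley(0), shapley(1)]
# Efficiency property: contributions sum to model(x) - model(baseline).
```

Here phi correctly attributes three times more of the prediction to feature 1, which is the kind of per-feature contribution a SHAP plot visualises.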
CHAPTER 8
CONCLUSION
CONCLUSION
In conclusion, the "Analyzing Tourist Behavior using Big Data Technology"
project revolutionizes tourism analytics by integrating advanced technologies
like Apache Hadoop and Apache Spark. This initiative provides deeper insights
into tourist patterns, preferences, and trends from diverse data sources, enabling
real-time, data-driven decisions. By addressing the limitations of traditional
systems, it sets the stage for personalized strategies and enhanced tourist
experiences. This project highlights the transformative potential of big data in
tourism, paving the way for innovation, better resource allocation, and tailored
experiences, ushering in a new era of informed decision-making and strategic
planning.
CHAPTER 9
REFERENCES
REFERENCES:
1) J. G. Bao and Y. F. Chu, Tourism Geography, Beijing: Higher Education
Press, pp. 1-5, 2013.
2) J. Bowden, "A cross-national analysis of international tourist flows in
China", Tourism Geographies, vol. 5, pp. 257-279, Mar. 2003.
3) Y. F. Ma, T. S. Li and X. P. Liu, Research on Chinese Inbound Tourism,
Beijing:Science Press, pp. 10-15, 1999.
4) V. K. Jansen and R. Spee, "A regional analysis of tourist flows within
Europe", Tourism Management, vol. 16, pp. 73-80, Jan. 1995.
5) G. Richards, "Tourism attraction systems: Exploring cultural behavior",
Annals of Tourism Research, vol. 28, pp. 1048-1064, Apr. 2002.
6) F. J. Liu, J. Zhang and J. H. Zhang, "Analysis of basic method of collecting
the spatial data of tourist flows: A study review and comparison both at home
and abroad" in Tourism Tribune, Beijing, vol. 27, pp. 101-109, Jun. 2012.
7) J. Chen, B. Hu and X. Q. Zuo, "Personal Profile Mining Based on Mobile
Phone Location Data", Geomatics and Information Science of Wuhan
University, Wuhan, vol. 39, pp. 734-738, Jun. 2014.
8) J. White and I. Wells, "Extracting origin destination information from mobile
phone data", 11th International Conference on Road Transport Information and
Control, pp. 30-34, 2002.
9) N. Caceres, J. P. Wideberg and F. G. Benitez, "Deriving origin destination
data from a mobile phone network", IET Intelligent Transport Systems, vol. 1,
pp. 15-26, Jan. 2007.
10) M. S. Iqbal, C. F. Choudhury and P. Wang, "Development of origin-
destination matrices using mobile phone call data", Transportation Research
Part C: Emerging Technologies, vol. 40, pp. 63-74, Jan. 2014.
11) F. Liu, D. Janssens and J. X. Cui, "Building a validation measure for
activity-based transportation models based on mobile phone data", Expert
Systems with Application, vol. 41, pp. 6174-6189, 2014.
12) S. Phithakkitnukoon, T. Horanont and G. D. Lorenzo, "Activity-aware
map: identifying human daily activity pattern using mobile phone data",
Proceedings of the First International Conference on Human Behavior
Understanding, pp. 14-25, 2010.