Stock Price Prediction
A PROJECT REPORT
In partial fulfilment of the requirements for the award of the degree of
BACHELOR OF COMPUTER APPLICATION
Under the guidance of
JOYJIT GUHA
BY
Sankha Subhra Adhikary, Koushik Ghosh, Souradeep Sinha, Ankit Bouri and Bikram Maji
ARDENT COMPUTECH PVT. LTD (ISO 9001:2015)
SDF Building, Module #132, Ground Floor, Salt Lake City, GP Block, Sector V, Kolkata, West Bengal 700091
(Note: All entries of the proforma of approval should be filled up with
appropriate and complete information. Incomplete proforma of approval in
any respect will be summarily rejected.)
MR. JOYJIT GUHA
Approved / Not Approved
Project Proposal Evaluator
DECLARATION
We hereby declare that the project work presented in the project proposal
entitled “STOCK PRICE PREDICTION”, in partial fulfilment of the
requirements for the award of the degree of BACHELOR OF COMPUTER
APPLICATION at ARDENT COMPUTECH PVT. LTD, JADAVPUR, KOLKATA,
WEST BENGAL, is an authentic work carried out under the guidance of MR.
JOYJIT GUHA. To the best of our knowledge and belief, the matter embodied in
this project work has not been submitted elsewhere for the award of any degree.
Date:
CERTIFICATE
This is to certify that this proposal of minor project entitled “STOCK PRICE
PREDICTION” is a record of bonafide work, carried out by Sankha Subhra
Adhikary, Koushik Ghosh, Souradeep Sinha, Ankit Bouri and Bikram Maji
under my guidance at ARDENT COMPUTECH PVT LTD. In my opinion, the
report in its present form is in partial fulfilment of the requirements for the award
of the degree of BACHELOR OF COMPUTER APPLICATION and as per the
regulations of ARDENT®.
To the best of my knowledge, the results embodied in this report are original in
nature and worthy of incorporation in the present version of the report.
Guide / Supervisor
------------------------------------------------
Project Engineer
Ardent Computech Pvt. Ltd (An ISO 9001:2015 Certified Company)
A-1/20, Ramgarh, Ganguly Bagan, Kolkata, West Bengal 700047
ACKNOWLEDGEMENT
We would like to show our greatest appreciation to Mr. Joyjit Guha, Project
Engineer at Ardent, Durgapur. We always feel motivated and encouraged by his
valuable advice and constant inspiration; without his encouragement and
guidance this project would not have materialized.
Words are inadequate to offer our thanks to the other trainees, project
assistants and other members at Ardent Computech Pvt. Ltd. for their
encouragement and cooperation in carrying out this project work. The guidance
and support received from all the members who contributed to this project
were vital to its success.
CONTENTS
• Overview
• History of Python
• Environment Setup
• Basic Syntax
• Variable Types
• Functions
• Modules
• Packages
• Artificial Intelligence
  o Deep Learning
  o Neural Networks
  o Machine Learning
• Machine Learning
  o Supervised and Unsupervised Learning
  o NumPy
  o SciPy
  o Scikit-learn
  o Pandas
  o Regression Analysis
  o Matplotlib
  o Clustering
• Stock Price Prediction
OVERVIEW
Python is Interactive: You can actually sit at a Python prompt and interact with
the interpreter directly to write your programs.
HISTORY OF PYTHON
Python was developed by Guido van Rossum in the late eighties and early nineties at the
National Research Institute for Mathematics and Computer Science in the Netherlands.
Python is derived from many other languages, including ABC, Modula-3, C, C++, Algol-68,
Smalltalk, UNIX shell, and other scripting languages. Python is copyrighted; like Perl, its
source code is available under an open-source licence (the Python Software Foundation
License). Python is now maintained by a core development team, although Guido van Rossum
still holds a vital role in directing its progress.
FEATURES OF PYTHON
Easy-to-learn: Python has few keywords, a simple structure and a clearly defined syntax. This
allows a student to pick up the language quickly.
Easy-to-Read: Python code is more clearly defined and visible to the eyes.
Easy-to-Maintain: Python's source code is fairly easy to maintain.
A broad standard library: The bulk of Python's library is very portable and cross-platform
compatible on UNIX, Windows, and Macintosh.
Interactive Mode: Python has support for an interactive mode which allows interactive testing
and debugging of snippets of code.
Portable: Python can run on a wide variety of hardware platforms and has the same
interface on all platforms.
Extendable: You can add low-level modules to the Python interpreter. These modules enable
programmers to add to or customize their tools to be more efficient.
Databases: Python provides interfaces to all major commercial databases.
GUI Programming: Python supports GUI applications that can be created and ported to many
system calls, libraries, and windows systems, such as Windows MFC, Macintosh, and the X
Window system of Unix.
Scalable: Python provides a better structure and support for large programs than shell
scripting.
Apart from the above-mentioned features, Python has a big list of good features; a few are
listed below:
• It supports functional and structured programming methods as well as OOP. It can
be used as a scripting language or can be compiled to byte code for building large
applications.
• It provides very high-level dynamic data types and supports dynamic type checking.
• It supports automatic garbage collection.
• It can be easily integrated with C, C++, COM, ActiveX, CORBA and Java.
ENVIRONMENT SETUP
Open a terminal window and type "python" to find out if it is already installed and which
version is installed. Python is available on a wide variety of platforms, including the following
(a version check from inside the interpreter is sketched after this list):
• Win 9x/NT/2000
• OS/2
• PalmOS
• Windows CE
• Acorn/RISC OS
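Once Python is installed, the version can also be checked from inside the interpreter:

import sys

# Report the interpreter version and build details
print(sys.version)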
BASIC SYNTAX OF PYTHON PROGRAM
Type the following text at the Python prompt and press Enter:

print "Hello, Python!"

If you are running a newer version of Python, you will need to use the print statement with
parentheses, as in print("Hello, Python!"). However, in Python version 2.4.3, this produces
the following result:
Hello, Python!
Python Identifiers
A Python identifier is a name used to identify a variable, function, class, module or other
object. An identifier starts with a letter A to Z or a to z or an underscore (_) followed by zero
or more letters, underscores and digits (0 to 9).
Python does not allow punctuation characters such as @, $, and % within identifiers. Python
is a case sensitive programming language.
Python Keywords
Python's keywords are reserved words: you cannot use them as constant or variable
names or as any other identifier names. All the Python keywords contain lowercase
letters only.
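The exact set of reserved words for the interpreter you are running can be listed with the standard keyword module:

import keyword

# kwlist holds every reserved word of the running interpreter
print(keyword.kwlist)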
Many programs can be run with an option that prints some basic information about how they
should be used. Python enables you to do this with -h:

python -h
VARIABLE TYPES
Multiple Assignment
Python allows you to assign a single value to several variables simultaneously. For example −
a = b = c = 1
a, b, c = 1, 2, "hello"
There are several built-in functions to perform conversion from one data type to another.
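For example, the built-in functions int(), float(), and str() perform such conversions:

>>> int("10")    # string to integer
10
>>> float(5)     # integer to float
5.0
>>> str(3.14)    # float to string
'3.14'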
FUNCTIONS
Defining a Function
def functionname(parameters):
    "function_docstring"
    function_suite
    return [expression]
def changeme(mylist):
    "This changes a passed list into this function"
    mylist.append([1, 2, 3, 4])
    print("Values inside the function: ", mylist)
    return

mylist = [10, 20, 30]
changeme(mylist)
print("Values outside the function: ", mylist)
Here, we are maintaining a reference to the passed object and appending values to the same
object. So, this would produce the following result:

Values inside the function:  [10, 20, 30, [1, 2, 3, 4]]
Values outside the function:  [10, 20, 30, [1, 2, 3, 4]]
All variables in a program may not be accessible at all locations: variables assigned inside a
function body have local scope, so assigning to a name inside a function does not change a
global variable of the same name.
MODULES
The Python code for a module named aname normally resides in a file named aname.py.
Here's an example of a simple module, support.py:
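The listing for support.py did not survive in this copy; a minimal sketch of such a module (the function name print_func is an assumption) could be:

# support.py -- a minimal example module
def print_func(par):
    print("Hello : ", par)
    return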
You can use any Python source file as a module by executing an import statement in some
other Python source file. The import statement has the following syntax:

import module1[, module2[,... moduleN]]
PACKAGES
A package is a hierarchical file directory structure that defines a single Python application
environment consisting of modules, subpackages, sub-subpackages, and so on.
Consider a file Pots.py available in a Phone directory. This file has the following lines of
source code:

def Pots():
    print("I'm Pots Phone")

In a similar way, we have another two files, Phone/Isdn.py and Phone/G3.py, containing
functions with the same names as their files. To make all of your functions available when
you've imported Phone, you need to put explicit import statements in Phone/__init__.py
as follows:

from Pots import Pots
from Isdn import Isdn
from G3 import G3
ARTIFICIAL INTELLIGENCE
Introduction
According to the father of Artificial Intelligence, John McCarthy, it is “The science and
engineering of making intelligent machines, especially intelligent computer programs”.
AI is accomplished by studying how the human brain thinks, and how humans learn, decide,
and work while trying to solve a problem, and then using the outcomes of this study as a basis
for developing intelligent software and systems.
The development of AI started with the intention of creating in machines an intelligence
similar to the one we find, and regard highly, in humans.
Goals of AI
To Create Expert Systems − Systems which exhibit intelligent behaviour: they learn,
demonstrate, explain, and advise their users.
To Implement Human Intelligence in Machines − Creating systems that understand, think,
learn, and behave like humans.
Applications of AI
AI has been dominant in various fields, such as:
Gaming − AI plays a crucial role in strategic games such as chess, poker, and tic-tac-toe,
where the machine can think of a large number of possible positions based on heuristic knowledge.
Natural Language Processing − It is possible to interact with a computer that understands
natural language spoken by humans.
Expert Systems − There are some applications which integrate machine, software, and special
information to impart reasoning and advising. They provide explanation and advice to the
users.
Vision Systems − These systems understand, interpret, and comprehend visual input on the
computer.
For example: a spying aeroplane takes photographs, which are used to figure out
spatial information or a map of the area. Police use computer software that can
recognize the face of a criminal against the stored portrait made by a forensic artist.
Speech Recognition − Some intelligent systems are capable of hearing and comprehending
language in terms of sentences and their meanings while a human talks to them. They can
handle different accents, slang words, noise in the background, changes in a human's voice
due to a cold, etc.
Handwriting Recognition − The handwriting recognition software reads the text written on
paper by a pen or on screen by a stylus. It can recognize the shapes of the letters and convert
it into editable text.
Intelligent Robots − Robots are able to perform the tasks given by a human. They have
sensors to detect physical data from the real world such as light, heat, temperature,
movement, sound, bump, and pressure. They have efficient processors, multiple sensors and
huge memory, to exhibit intelligence. In addition, they are capable of learning from their
mistakes and they can adapt to the new environment.
Deep Learning
Deep learning is a subset of machine learning. Usually, when people use the term deep
learning, they are referring to deep artificial neural networks, and somewhat less frequently to
deep reinforcement learning. Deep learning models typically:
• use a cascade of multiple layers of nonlinear processing units for feature extraction and
transformation. Each successive layer uses the output from the previous layer as input.
• learn in supervised (e.g., classification) and/or unsupervised (e.g., pattern analysis) manners.
• learn multiple levels of representations that correspond to different levels of abstraction; the
levels form a hierarchy of concepts.
• use some form of gradient descent for training via backpropagation.
NEURAL NETWORKS
Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by
the biological neural networks that constitute animal brains. Such systems learn (progressively
improve performance on) tasks by considering examples, generally without task-specific
programming.
An ANN is based on a collection of connected units or nodes called artificial neurons (analogous
to biological neurons in an animal brain). Each connection between artificial neurons can transmit
a signal from one to another.
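As an illustrative sketch (not taken from the original report), a single artificial neuron computes a weighted sum of its inputs and passes it through an activation function; the inputs, weights and bias below are made up:

import numpy as np

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum followed by a sigmoid."""
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid squashes the output into (0, 1)

# Three made-up inputs with made-up weights
print(neuron(np.array([0.5, 0.1, 0.9]), np.array([0.4, -0.6, 0.2]), 0.1))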
MACHINE LEARNING
Machine learning is a field of computer science that gives computers the ability to learn without
being explicitly programmed.
Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence,
coined the term "Machine Learning" in 1959 while at IBM. Evolved from the study of pattern
recognition and computational learning theory in artificial intelligence, machine learning explores
the study and construction of algorithms that can learn from and make predictions on data.
Machine learning tasks are typically classified into two broad categories, depending on whether
there is a learning "signal" or "feedback" available to the learning system:
SUPERVISED LEARNING
Supervised learning is the machine learning task of inferring a function from labelled training
data. The training data consist of a set of training examples. In supervised learning, each
example is a pair consisting of an input object (typically a vector) and a desired output value.
A supervised learning algorithm analyses the training data and produces an inferred function,
which can be used for mapping new examples. An optimal scenario will allow for the algorithm
to correctly determine the class labels for unseen instances. This requires the learning algorithm
to generalize from the training data to unseen situations in a "reasonable" way.
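As a small illustration (not from the original report; the toy data is made up), scikit-learn can infer such a function from labelled examples and apply it to an unseen input:

from sklearn.neighbors import KNeighborsClassifier

# Labelled training examples: input vectors paired with desired outputs
X_train = [[0, 0], [0, 1], [1, 0], [1, 1]]
y_train = [0, 0, 1, 1]

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X_train, y_train)          # analyse the training data
print(clf.predict([[0.9, 0.2]]))   # generalize to an unseen instance -> [1]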
UNSUPERVISED LEARNING
Unsupervised learning is the machine learning task of inferring a function to describe hidden
structure from "unlabelled" data (a classification or categorization is not included in the
observations). Since the examples given to the learner are unlabelled, there is no evaluation of the
accuracy of the structure that is output by the relevant algorithm—which is one way of
distinguishing unsupervised learning from supervised learning and reinforcement learning.
A central case of unsupervised learning is the problem of density estimation in statistics, though
unsupervised learning encompasses many other problems (and solutions) involving summarizing
and explaining key features of the data.
NUMPY
NumPy is a library for the Python programming language, adding support for large, multidimensional
arrays and matrices, along with a large collection of high-level mathematical functions to operate on
these arrays. The ancestor of NumPy, Numeric, was originally created by Jim Hugunin.
NumPy targets the CPython reference implementation of Python, which is a non-optimizing bytecode
interpreter. Mathematical algorithms written for this version of Python often run much slower than
compiled equivalents.
Using NumPy in Python gives functionality comparable to MATLAB since they are both interpreted,
and they both allow the user to write fast programs as long as most operations work on arrays or
matrices instead of scalars.
NUMPY ARRAY
NumPy's main object is the homogeneous multidimensional array. It is a table of elements
(usually numbers), all of the same type, indexed by a tuple of positive integers. In NumPy,
dimensions are called axes. The number of axes is the rank.
For example, the coordinates of a point in 3D space, [1, 2, 1], form an array of rank 1,
because the array has one axis. That axis has a length of 3. In the example below, the array
has rank 2 (it is 2-dimensional). The first dimension (axis) has a length of 2, the second
dimension has a length of 3.
[[ 1., 0., 0.],
[ 0., 1., 2.]]
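The array above can be built and inspected as follows:

>>> import numpy as np
>>> arr = np.array([[1., 0., 0.],
...                 [0., 1., 2.]])
>>> arr.ndim     # rank: the number of axes
2
>>> arr.shape    # the length of each dimension
(2, 3)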
SCIPY
SciPy is an open-source Python library used for scientific and technical computing. It contains
modules for optimization, linear algebra, integration, interpolation, special functions, FFT, signal and
image processing, ODE solvers and other tasks common in science and engineering.
SciPy builds on the NumPy array object and is part of the NumPy stack which includes tools like
Matplotlib, pandas and SymPy, and an expanding set of scientific computing libraries. This NumPy
stack has similar users to other applications such as MATLAB, GNU Octave, and Scilab. The NumPy
stack is also sometimes referred to as the SciPy stack.
The SciPy package contains key algorithms and functions core to Python's scientific computing
capabilities. Available sub-packages include scipy.cluster, scipy.constants, scipy.fftpack,
scipy.integrate, scipy.interpolate, scipy.io, scipy.linalg, scipy.ndimage, scipy.optimize,
scipy.signal, scipy.sparse, scipy.spatial, scipy.special and scipy.stats.
Data Structures
The basic data structure used by SciPy is a multidimensional array provided by the NumPy module.
NumPy provides some functions for linear algebra, Fourier transforms and random number
generation, but not with the generality of the equivalent functions in SciPy. NumPy can also be used
as an efficient multi-dimensional container of data with arbitrary data-types. This allows NumPy to
seamlessly and speedily integrate with a wide variety of databases. Older versions of SciPy used
Numeric as an array type, which is now deprecated in favour of the newer NumPy array code.
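As a small illustrative example (not from the original report), the scipy.integrate sub-package can evaluate a definite integral numerically:

from scipy import integrate
import numpy as np

# Numerically integrate sin(x) from 0 to pi; the exact answer is 2
result, error = integrate.quad(np.sin, 0, np.pi)
print(result)   # approximately 2.0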
SCIKIT-LEARN
Scikit-learn is a free software machine learning library for the Python programming language. It
features various classification, regression and clustering algorithms including support vector
machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate
with the Python numerical and scientific libraries NumPy and SciPy.
The scikit-learn project started as scikits.learn, a Google Summer of Code project by David
Cournapeau. Its name stems from the notion that it is a "SciKit" (SciPy Toolkit), a
separately developed and distributed third-party extension to SciPy. The original codebase was later
rewritten by other developers. In 2010 Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort and
Vincent Michel, all from INRIA, took leadership of the project and made the first public release on
1 February 2010. Of the various scikits, scikit-learn as well as scikit-image were described as
"well-maintained and popular" in November 2012.
REGRESSION ANALYSIS
In statistical modelling, regression analysis is a set of statistical processes for estimating the
relationships among variables. It includes many techniques for modelling and analysing several
variables, when the focus is on the relationship between a dependent variable and one or more
independent variables (or 'predictors'). More specifically, regression analysis helps one understand
how the typical value of the dependent variable (or 'criterion variable') changes when any one of the
independent variables is varied, while the other independent variables are held fixed.
Regression analysis is widely used for prediction and forecasting, where its use has substantial overlap
with the field of machine learning. Regression analysis is also used to understand which among the
independent variables are related to the dependent variable, and to explore the forms of these
relationships. In restricted circumstances, regression analysis can be used to infer causal relationships
between the independent and dependent variables. However, this can lead to illusory or false
relationships, so caution is advisable.
LINEAR REGRESSION
Linear regression is a linear approach for modelling the relationship between a scalar dependent
variable y and one or more explanatory variables (or independent variables) denoted X. The case of
one explanatory variable is called simple linear regression. For more than one explanatory variable,
the process is called multiple linear regression.
In linear regression, the relationships are modelled using linear predictor functions whose unknown
model parameters are estimated from the data. Such models are called linear models.
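A minimal sketch with made-up data, fitting a simple linear regression with scikit-learn:

import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up data: y is roughly 2x + 1 with a little noise
X = np.array([[1], [2], [3], [4], [5]])
y = np.array([3.1, 4.9, 7.2, 9.0, 11.1])

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)   # estimated slope and intercept
print(model.predict([[6]]))            # prediction for an unseen x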
LOGISTIC REGRESSION
Logistic regression, or logit regression, or the logit model, is a regression model where the dependent
variable (DV) is categorical. Here we consider the case of a binary dependent variable, that is,
where the output can take only two values, "0" and "1", which represent outcomes such as pass/fail,
win/lose, alive/dead or healthy/sick. Cases where the dependent variable has more than two outcome
categories may be analysed with multinomial logistic regression or, if the multiple categories are
ordered, with ordinal logistic regression. In the terminology of economics, logistic regression is an
example of a qualitative response/discrete choice model.
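A minimal sketch of the binary ("0"/"1") case; the pass/fail data below is invented for illustration:

from sklearn.linear_model import LogisticRegression

# Made-up example: hours studied vs. pass (1) / fail (0)
X = [[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]]
y = [0, 0, 0, 1, 1, 1]

clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.0], [3.8]]))   # predicted classes -> [0 1]
print(clf.predict_proba([[2.25]]))   # class probabilities near the boundary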
POLYNOMIAL REGRESSION
Polynomial regression is a form of regression analysis in which the relationship between the
independent variable x and the dependent variable y is modelled as an nth degree polynomial in x.
Polynomial regression fits a nonlinear relationship between the value of x and the corresponding
conditional mean of y , denoted E( y | x ), and has been used to describe nonlinear phenomena such as
the growth rate of tissues, the distribution of carbon isotopes in lake sediments, and the progression of
disease epidemics.
Although polynomial regression fits a nonlinear model to the data, as a statistical estimation problem
it is linear, in the sense that the regression function E(y | x) is linear in the unknown parameters that
are estimated from the data.
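A minimal sketch with made-up data: NumPy's polyfit estimates the coefficients by ordinary least squares, which is possible precisely because the model is linear in those coefficients:

import numpy as np

# Made-up data following a roughly quadratic trend (about y = x**2 + 1)
x = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y = np.array([1.0, 1.8, 5.1, 10.2, 17.0, 26.1])

coeffs = np.polyfit(x, y, deg=2)   # fit an nth degree (here 2nd) polynomial
print(coeffs)                      # approximately [1, 0, 1]
print(np.polyval(coeffs, 6.0))     # evaluate the fitted polynomial at x = 6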
MATPLOTLIB
Matplotlib is a plotting library for the Python programming language and its numerical mathematics
extension NumPy. It provides an object-oriented API for embedding plots into applications using
general-purpose GUI toolkits like Tkinter,wxPython, Qt, or GTK+. There is also a procedural "pylab"
interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB,
though its use is discouraged .SciPy makes use of matplotlib.
EXAMPLE
LINE PLOT
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> a = np.linspace(0, 10, 100)
>>> b = np.exp(-a)
>>> plt.plot(a, b)
>>> plt.show()
SCATTER PLOT
>>> import matplotlib.pyplot as plt
>>> from numpy.random import rand
>>> a = rand(100)
>>> b = rand(100)
>>> plt.scatter(a, b)
>>> plt.show()
PANDAS
In computer programming, pandas is a software library written for the Python programming language
for data manipulation and analysis. In particular, it offers data structures and operations for
manipulating numerical tables and time series. It is free software released under the three-clause BSD
license. The name is derived from "panel data", an econometrics term for multidimensional,
structured data sets.
LIBRARY FEATURES
• DataFrame object for data manipulation with integrated indexing.
• Tools for reading and writing data between in-memory data structures and different file
formats.
• Data alignment and integrated handling of missing data.
• Reshaping and pivoting of data sets.
• Label-based slicing, fancy indexing, and subsetting of large data sets.
• Data structure column insertion and deletion.
• Group-by engine allowing split-apply-combine operations on data sets.
• Data set merging and joining.
• Hierarchical axis indexing to work with high-dimensional data in a lower-dimensional data
structure.
• Time series functionality: date range generation.
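A small sketch of a few of these features; the price table below is made up:

import pandas as pd

# A made-up table of daily closing prices
df = pd.DataFrame({
    "date": pd.date_range("2018-01-01", periods=4),
    "close": [101.2, 102.5, 101.9, 103.4],
})

df["change"] = df["close"].diff()   # column insertion
print(df[df["change"] > 0])         # label-based filtering / subsetting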
CLUSTERING
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the
same group (called a cluster) are more similar (in some sense or another) to each other than to those
in other groups (clusters). It is a main task of exploratory data mining, and a common technique for
statistical data analysis, used in many fields, including machine learning, pattern recognition, image
analysis, information retrieval, bioinformatics, data compression, and computer graphics.
Cluster analysis itself is not one specific algorithm, but the general task to be solved. It can be
achieved by various algorithms that differ significantly in their notion of what constitutes a cluster and
how to efficiently find them. Popular notions of clusters include groups with small distances among
the cluster members, dense areas of the data space, intervals or particular statistical distributions.
Clustering can therefore be formulated as a multi-objective optimization problem.
The appropriate clustering algorithm and parameter settings (including values such as the distance
function to use, a density threshold or the number of expected clusters) depend on the individual data
set and intended use of the results. Cluster analysis as such is not an automatic task, but an iterative
process of knowledge discovery or interactive multi-objective optimization that involves trial and
error. It is often necessary to modify data pre-processing and model parameters until the result
achieves the desired properties.
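As a small sketch (the points below are made up), scikit-learn's KMeans groups such data by distance:

import numpy as np
from sklearn.cluster import KMeans

# Made-up 2-D points forming two loose groups
X = np.array([[1, 2], [1, 4], [1, 0],
              [10, 2], [10, 4], [10, 0]])

kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
print(kmeans.labels_)            # the cluster assigned to each point
print(kmeans.cluster_centers_)   # the learned cluster centres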
ALGORITHM
Data Collection
Data Formatting
Model Selection
Training
Testing
Data Collection: We collected historical stock price data sets from online websites
and downloaded the .csv files in which the information was present.
Data Formatting: The collected data is formatted into suitable data sets. We
check the collinearity of each column with the target price; the columns whose
collinearity is nearest to 1.0 are selected.
Model Selection: We selected different models to minimize the error of the
predicted value. The models used are the Linear Regression linear model, the
Ridge linear model, the Lasso linear model and the Bayesian Ridge linear model.
Training: The data set was divided so that x_train, together with the corresponding
y_train values, is used to train the model, while x_test and y_test are kept
reserved for testing.
Testing: The trained model was used to predict on x_test; the predictions were
stored in y_predict and compared with y_test.
ACTUAL CODES FOR STOCK PRICE PREDICTION
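The code listing itself did not survive in this copy of the report. The sketch below is a minimal reconstruction consistent with the algorithm described above; the file name stock_prices.csv, the feature and target column names, and the 80/20 split are assumptions, not the report's actual code:

import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge, Lasso, BayesianRidge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Load the downloaded .csv of historical prices (file and column names assumed)
data = pd.read_csv("stock_prices.csv")
X = data[["open", "high", "low", "volume"]]   # assumed feature columns
y = data["close"]                             # assumed target column

x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# The four models named in the report, compared by mean squared error
models = {
    "Linear Regression": LinearRegression(),
    "Ridge": Ridge(),
    "Lasso": Lasso(),
    "Bayesian Ridge": BayesianRidge(),
}
for name, model in models.items():
    model.fit(x_train, y_train)
    y_predict = model.predict(x_test)
    print(name, mean_squared_error(y_test, y_predict))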
CONCLUSION
We collected the raw data from online sources, and then took this raw data and
formatted it.
We then selected a few models and compared their prediction errors. We used four
models, namely the linear regression model, the linear Ridge model, the linear Lasso
model and the linear Bayesian Ridge model.
The linear Bayesian Ridge model was the best one, as its MSE (mean squared error)
was about 11.6358, which is lower than that of the other models.
FUTURE SCOPE
The data taken was limited. The project could be extended to a larger number of days. The
stock price was predicted by taking only the last three days into account; it could be predicted
by looking at the last four to seven days as well.
The error could also be minimized by using other algorithms.
THANK YOU