
Finaci AI

1. INTRODUCTION

1.1. Introduction to Project Domain


In today's rapidly evolving and intricate financial landscape, individuals and investors are
confronted with an array of challenges when it comes to making well-informed decisions
about various financial assets, including equities and cryptocurrencies. The intricacies of
modern financial markets demand a deep understanding of market trends, historical data,
risk analysis, and the ability to anticipate future developments. In response to these
challenges, the "Finaci AI" project emerges as a groundbreaking initiative. Finaci AI is a
web-based application that leverages the power of machine learning to provide a
comprehensive and dependable solution for financial forecasting. In this project, we will
explore the dynamic and complex world of finance, which is no longer driven solely by
traditional factors but also by technological advancements, global economic events, and
the ever-expanding universe of digital assets.
In this introduction, we will delve deeper into the problems and challenges encountered
by individuals and investors in the contemporary financial landscape and highlight the
significance of Finaci AI as a forward-thinking response. We will also provide a glimpse
into the key components and objectives of the project, emphasizing its role in shaping the
future of financial decision-making. Finaci AI, the focal point of this project, represents a
pioneering leap into the world of finance. It is a machine learning-based web application
designed to serve as an all-encompassing and dependable financial forecasting tool. At its
core, Finaci AI harnesses the potential of advanced data analytics, machine learning
algorithms, and real-time data streams to offer users invaluable insights, predictions, and
analytical capabilities that were once beyond reach.

1.2. Problem Definition


In the midst of today's intricate and ever-shifting financial landscape, both individuals and
seasoned investors find themselves grappling with a multifaceted challenge – the need to
make well-informed decisions regarding a diverse portfolio of financial assets. This
portfolio encompasses equities, cryptocurrencies, and an array of other financial
instruments. These financial choices, whether for building wealth, preserving assets, or
achieving specific financial goals, are increasingly influenced by a multitude of factors. The

contemporary financial decision-making process extends far beyond conventional analysis.


It involves grappling with intricate market trends, delving into historical data, assessing
and mitigating risks, and, perhaps most significantly, making astute predictions about the
future. It is within this complex environment that the "Finaci AI" project emerges,
conceived to tackle these pressing issues and usher in a new era of financial decision
support. The "Finaci AI" project addresses these challenges by proposing the development
of a machine learning-based web application.

1.3. Available Similar Systems

 Yahoo Finance: Offers comprehensive financial information, including stock market
data, news, and analysis. It also provides historical data and charts for stocks,
cryptocurrencies, and commodities.

 TradingView: This platform provides charts, analysis, and social networking for
traders and investors. It covers a wide range of financial instruments, including
stocks, cryptocurrencies, and commodities.

 CoinMarketCap: Focuses specifically on cryptocurrencies, providing market data,
charts, and information about various digital assets.

1.4. Objectives of Proposed System


 Comprehensive Financial Forecasting: At the core of "Finaci AI" lies the primary
objective of providing a comprehensive financial forecasting tool.
 Incorporating Advanced Machine Learning: A key objective of "Finaci AI" is to
leverage cutting-edge machine learning techniques to facilitate accurate and
forward-looking financial predictions.
 Risk Assessment and Mitigation: Effective risk analysis is paramount in the
financial decision-making process.
 User-Friendly Interface: The system is designed with a user-centric approach as
one of its key objectives.
 Real-Time Updates and Relevance: "Finaci AI" aims to offer real-time updates on
market conditions, asset performance, and economic events.

1.5. Proposed Methodology


Within the framework of the "Finaci AI" project, the concept of modules serves as a crucial
organizational structure:

 Building a stock market prediction project using machine learning involves a


comprehensive methodology, particularly when considering equity, cryptocurrency,
and commodities modules. The initial step entails data collection from reliable sources,
such as financial APIs for equity and cryptocurrency data, and relevant economic
indicators for commodities. Subsequently, data preprocessing is crucial, involving
tasks like handling missing data, normalization, and the creation of lag features.

 Feature engineering follows, where indicators like moving averages, RSI, and MACD
are extracted to enhance model performance. Model selection is a pivotal step, with
choices like LSTM, GRU, Random Forests, and Neural Networks, depending on the
nature of the data. The training and testing phase involves splitting the data,
hyperparameter tuning, and implementing cross-validation techniques for robustness.
Evaluation metrics, including MSE and RMSE, are used to assess model performance.

 Once the models are trained, deployment in a production environment and periodic
updates with new data are essential. Continuous monitoring, improvement, and
adaptation to changing market conditions are crucial for sustained success.
Visualization tools like Matplotlib and Plotly aid in presenting predictions and
performance, while documentation ensures transparency and facilitates future
enhancements. Staying informed about the latest advancements in machine learning
and finance is imperative for ongoing model refinement.
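To make the workflow above concrete, the following is a minimal, hedged sketch of the pipeline in Python. It assumes the yfinance, pandas, scikit-learn and TensorFlow/Keras packages; the ticker, 14-day windows, five lag features and the 32-unit LSTM are illustrative placeholders rather than the project's final configuration.

    # Minimal sketch of the methodology above: data collection, preprocessing,
    # feature engineering (moving average, RSI, lags), LSTM training and RMSE.
    import numpy as np
    import pandas as pd
    import yfinance as yf
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.metrics import mean_squared_error
    from tensorflow.keras import layers, models

    # 1. Data collection: two years of daily closing prices (placeholder ticker).
    prices = yf.download("BTC-USD", period="2y", progress=False)["Close"].squeeze()

    # 2. Preprocessing and feature engineering.
    df = pd.DataFrame({"close": prices})
    df["ma_14"] = df["close"].rolling(14).mean()
    delta = df["close"].diff()
    gain = delta.clip(lower=0).rolling(14).mean()
    loss = (-delta.clip(upper=0)).rolling(14).mean()
    df["rsi_14"] = 100 - 100 / (1 + gain / loss)
    for lag in range(1, 6):
        df[f"lag_{lag}"] = df["close"].shift(lag)   # lag features
    df["target"] = df["close"].shift(-1)            # next-day closing price
    df = df.dropna()

    # 3. Normalization and a chronological train/test split (no shuffling).
    X = MinMaxScaler().fit_transform(df.drop(columns="target").values)
    y = df["target"].values
    X3 = X.reshape(-1, 1, X.shape[1])               # (samples, timesteps, features)
    split = int(len(df) * 0.8)
    X_train, X_test = X3[:split], X3[split:]
    y_train, y_test = y[:split], y[split:]

    # 4. A small LSTM regressor.
    model = models.Sequential([
        layers.Input(shape=(1, X.shape[1])),
        layers.LSTM(32),
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X_train, y_train, epochs=20, batch_size=32, verbose=0)

    # 5. Evaluation with RMSE, as described above.
    pred = model.predict(X_test, verbose=0).ravel()
    print(f"Test RMSE: {np.sqrt(mean_squared_error(y_test, pred)):.2f}")

In the real system this sketch would be extended with hyperparameter tuning, cross-validation and periodic retraining on fresh data, as outlined above.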

1.5.1. System Architecture

Figure 1.1. System Architecture

1.5.2. System Modules


1.5.2.1. Module 1: User

 Login
 Registration
 Search Stocks
 Display Result

1.5.2.2. Module 2: Admin

 Login
 Add Data
 Authorize Users

1.6. Applicability
Amidst the intricacies of the contemporary financial landscape, the "Finaci AI" project
emerges as a beacon of innovation, poised to tackle the multifaceted challenges individuals
and investors face. The project's ambitions extend to a broad spectrum of financial
assets, encompassing equities and cryptocurrencies. It seeks to address the intricate web of
factors that influence financial decision-making, including market trends, historical data,
risk analysis, and future predictions. The overarching goal of the "Finaci AI" project is to
develop a machine learning-based web application that serves as a comprehensive and
reliable financial forecasting tool. In this context, it is crucial to explore the applicability of
the "Finaci AI" system, shedding light on how it promises to revolutionize financial
decision-making for a diverse range of stakeholders.

2. LITERATURE SURVEY

2.1. Related Work


1. Determinants of Credit Ratings and Comparison of the Rating
Prediction Performances of Machine Learning Algorithms: New
machine learning algorithms are dynamically produced in the field of artificial
intelligence engineering and the algorithms are constantly updated with new
parameter estimations. The performance of existing algorithms in various business
areas is still an important topic of discussion. Also, machine learning algorithms are
frequently used in long-term credit ratings, which is a crucially important sub-branch
of finance.
2. Forecasting the movements of Bitcoin prices: an application of
machine learning algorithms: Cryptocurrencies, such as Bitcoin, are one of
the most controversial and complex technological innovations in today's financial
system. This study aims to forecast the movements of Bitcoin prices at a high degree
of accuracy. To this aim, four different Machine Learning (ML) algorithms are
applied, namely, the Support Vector Machines (SVM), the Artificial Neural Network
(ANN), the Naive Bayes (NB) and the Random Forest (RF) besides the logistic
regression (LR) as a benchmark model. In order to test these algorithms, besides the
existing continuous dataset, a discrete dataset was also created and used. For the
evaluation of algorithm performances, the F statistic, the accuracy statistic, the Mean
Absolute Error (MAE), the Root Mean Square Error (RMSE) and the Root Absolute
Error (RAE) metrics were used. The t-test was used to compare the performances of
the SVM, ANN, NB and RF with the performance of the LR. Empirical findings reveal
that, while the RF has the highest forecasting performance in the continuous dataset,
the NB has the lowest; on the other hand, the ANN has the highest and the NB the
lowest performance in the discrete dataset. Furthermore, the discrete dataset
improves the overall forecasting performance of all the algorithms (models) estimated.

3. A Deep Learning-Based Cryptocurrency Price Prediction Model


That Uses On-Chain Data: Studies on cryptocurrency price prediction, especially for BTC,
have used various methods. In order to deal with cryptocurrency prices that have

enormous fluctuations and non-stationarity, a dominant branch of research is based


on machine learning methods [25], [26]. In particular, not only conventional
machine learning algorithms but also more sophisticated methods, such as
reinforcement learning (RL) and deep learning (DL)-based approaches, have been
popularly utilized for handling the volatility of cryptocurrency prices
4. Stock Price Forecasting: Machine Learning Models with K-fold and

Repeated Cross Validation Approaches: Stock exchange price prediction is


one of the most researched topics, attracting interest from both academics and
industry. Various algorithms have been developed since the introduction of Artificial
Intelligence (AI) and have been used to forecast equities market movement. Despite
all this research, less attention has been paid to the use of cross validation (CV)
approaches for better stock price prediction. Objective: The aim of this work is to
predict Nigerian stock prices using machine learning models with K-fold and
repeated K-fold CVs. Methods: In this work, we consider the prediction performance
of machine learning models under two cross validation approaches, namely K-fold
and repeated K-fold CVs, and when no cross validation technique is used (a brief
illustrative sketch of these CV schemes is given at the end of this subsection).
5. Predictability of Greek Systemic Bank Stocks using Machine
Learning Techniques: Considering that financial markets are linked to a
country's economic conditions, attracting major funds from investors and issuing
equities in the public interest, it is therefore crucial to forecast the movement of
stock prices to prevent excessive losses and make relevant investment decisions.
Traditionally, two main approaches have been widely employed in stock price
predictions. These approaches include technical analysis methods that utilize
historical stock prices to predict future prices.
6. Hafiz, F. Broekaert, J. La Torre, D. and Swain, A., 2021. A multi-
criteria approach to evolve sparse neural architectures for stock
market forecasting: The analysis of financial time series aims to correlate data
points over time with a dependent output and, hence, provide a way to predict
future values from historical data points. However, the quasi-immediate
information-adaptation mechanism underlying the Efficient Market Hypothesis
(EMH) severely reduces the signal-to-noise ratio in the financial time series (Fama,
1965) and, hence, caps from the start the forecasting accuracy of any technical
analysis algorithm.


7. Time-Series Prediction of Cryptocurrency Market using Machine


Learning Techniques: An LSTM model structure is used in the research [23].
The analysis shows that, by comparing F1 values, the LSTM model exceeds the time-series
gradient boosting (GB) model, an ensemble machine learning model considered to
be of reasonably good predictive performance; compared to the GB model, the
LSTM improves efficiency by around 7 percent.

8. Bitcoin Price Prediction Using Machine Learning and Artificial


Neural Network Model: This paper explains the working of the Multiple Linear
Regression and Long Short-Term Memory models in predicting the value of a Bitcoin.
Due to its rising popularity, Bitcoin has become an investment and works on
blockchain technology, which also gave rise to other cryptocurrencies. This
makes it very difficult to predict its value, and hence this predictor is tested with the
help of a machine learning algorithm and an artificial neural network model.

9. Ascertaining price formation in cryptocurrency markets with


machine learning: The objective of this paper is to understand the way prices at
either side of the market move. Motivated by related literature, we focus on a
particular measure, the mid-price, which intuitively captures the average of the best
ask (the lowest price sellers are willing to accept) and the best bid (the highest price
buyers are willing to pay). Towards this aim we could, for example, use
Markov chains to model the limit order book (Kelly and Yudovina 2017).

10. A Multi-criteria Approach to Evolve Sparse Neural


Architectures for Stock Market Forecasting: This study proposes Two-
Dimensional Swarms (2DS) to identify parsimonious and efficacious neural
architectures. 2DS was originally developed by the authors to solve the feature
selection problem in (Hafiz et al., 2018b). In this study, we extend the key ideas of
2DS for the ENAS problem. In particular, the key attribute of 2DS is the direct and
explicit integration of architecture complexity into the search process, i.e., instead of
evolving selection likelihoods of only architecture aspects (such as number of
hidden layers, neurons, features and activation functions), these are also evolved for
distinct architectural complexities (discussed at length in Section 5). This key
attribute is demonstrated to identify significantly better neural architectures for the
day-ahead prediction of the NASDAQ index direction of movement.
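As referenced in entry 4 above, the following is a small illustrative sketch (not the cited paper's code) of the two cross-validation schemes it compares, using scikit-learn; the random-forest regressor and the synthetic data are placeholder assumptions.

    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import KFold, RepeatedKFold, cross_val_score

    # Placeholder data standing in for a table of stock-price predictors.
    X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0)

    schemes = {
        "5-fold CV": KFold(n_splits=5, shuffle=True, random_state=0),
        "repeated 5-fold CV (x3)": RepeatedKFold(n_splits=5, n_repeats=3, random_state=0),
    }
    for name, cv in schemes.items():
        scores = cross_val_score(model, X, y, cv=cv,
                                 scoring="neg_root_mean_squared_error")
        print(f"{name}: RMSE = {-scores.mean():.3f} +/- {scores.std():.3f}")

For genuine time-series data, a chronological split such as scikit-learn's TimeSeriesSplit is usually preferred over shuffled folds, since shuffling can leak future information into the training folds.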

2.2 Study of Existing System


1. A SURVEY ON STOCK PRICE PREDICTION USING MACHINE
LEARNING: The objective of this paper is to classify different machine learning
algorithms. The paper introduced concepts from economic derivatives, such as the
"no arbitrage" principle, and foundations of the predictive model, like stochastic process
theory and the efficient market hypothesis (EMH). News articles were analyzed and
the prediction was made on their basis. The steps involved in prediction were data
preparation, analysis, aggregation, and visualization. The results were neutral,
positive, and negative. Data were collected daily and predictions were made with
Logistic Regression. The accuracy obtained was 70%. A generalized linear model
(GLM) (binomial family) is often used as a logistic regression model.

2. Investment advice based on market trends and the financial


distress of the company: This study will mainly focus on developing two
models to refine the investment process. The first model we have proposed is a
stock market prediction based on deep learning techniques. Here we have used a
real-time dataset from Yahoo Finance for a particular company where clients want to
invest. Here we have used different deep learning techniques, but for this research
sequential LSTM has outperformed all the models with the minimum RMSE (root mean
squared error) score. With the help of this we are able to predict the stock closing price
of the company for up to one month. The second model is for Financial Distress
Prediction, based on various Bagging and Boosting techniques integrated with various
SMOTE techniques. For this research the Balanced Bagging Method with ADASYN has
outperformed all the models with an accuracy of 93%. ADASYN (Adaptive Synthetic
Sampling) is a modified version of SMOTE which performed best with our bagging and
boosting model; we have used ADASYN to deal with the class imbalance problem (a
brief illustrative sketch of this resampling idea is given at the end of this subsection).
This empirical research is carried out based on real-world financial data of 3,476
Chinese companies with over 84 financial and non-financial features.

3. LITERATURE SURVEY ON STOCK PRICE PREDICTION USING


MACHINE LEARNING: An LSTM-Method for Bitcoin Price Prediction: A Case
Study of the Yahoo Finance Stock Market, IEEE 2019, Ferdiansyah et al. Bitcoin is a type
of cryptocurrency and currently is a unique kind of investment on the stock market.
Stock markets are influenced by several risks, and Bitcoin is one kind of
cryptocurrency that has kept rising in recent years and sometimes suddenly falls
without a known influence on the stock market. There is a need for automation tools to
predict Bitcoin on the stock market because of its fluctuations. This research
studies how to create a Bitcoin stock market prediction model using LSTM.
Before confirming the results, the paper tries to measure the results using RMSE (the
Root Mean Square Error). The RMSE will at all times be larger than or equal to the MAE.
The RMSE metric assesses how well a model can calculate a continuous value. The
method that is applied in this research to predict Bitcoin on the Yahoo Finance stock
market can forecast a result above $12,600 USD for the next couple of days
after prediction.

4. An SVM-Based Classification Model for Migration Prediction of


Beijing: In this paper, we focus on addressing the questions of who, among the
migrant residents, will stay, who will leave, and how to predict their migration. To
clarify some concepts discussed later in this paper, we provide a few important

definitions as follows. Household registration/Hukou in China is a booklet issued by


Public Security Bureau, and is used to officially identify a person as a resident of a
specific area. Hukou determines where a person can claim his/her social welfare, e.g.
health insurance, school allocation, etc. Such a geographic registration system creates
various inconveniences and limitations for people living away from the area where their
hukou is registered [4]. As we can see, hukou acts as a domestic passport and
imposes restrictions on its holder’s migration. Megacities [5] (as defined by The
State Council of P. R. China in 2014) are the cities with a permanent population that
exceeds 10 million in urban areas. Currently, the megacities in China include Beijing,
Shanghai, Guangzhou and Shenzhen.

5. Predicting Crypto Currency Prices Using Machine Learning and


Deep Learning Techniques: In this paper, several approaches for crypto
currencies like Bitcoin price prediction were investigated. We compared the results
of prediction with Linear Regression, Multiple Linear Regression with Features,
and Recurrent Neural Networks with LSTM cells. The research contribution of this
technique is that we predicted a numerical value of price instead of performing
binary classification, as well as used multiple features to train the model. The LSTM
method performed notably better than the other two approaches, and we believe
that further research on using Neural Networks for time-series prediction is
very promising for financial data analytics and other fields.

6. Price Movement Prediction of Cryptocurrencies Using Sentiment


Analysis and Machine Learning: In this paper, they proved that it is possible
to predict the direction of price movements for the emerging cryptocurrency market
utilizing machine learning and sentiment analysis, techniques that had been
previously utilized for Bitcoin. We evaluated and compared the performance of
three prediction models: MLPs, SVMs and RFs for Bitcoin, Ethereum, Ripple and
Litecoin using Twitter data, market data or both.

7. A Comparative Study of Bitcoin Price Prediction Using Deep


Learning: In this study, they have developed and compared various deep learning-

based Bitcoin price prediction models using Bitcoin blockchain information. More
specifically, we tested the state-of-the-art deep learning models such as deep neural
networks (DNN), long short-term memory (LSTM) models, convolutional neural
networks (CNN), deep residual networks (ResNet), and their combinations. We
addressed both regression and classification problems, where the former predicts
the future Bitcoin price, and the latter predicts whether or not the future price will
go up or down. For regression problems, LSTM slightly outperformed the other
models, whereas for classification problems, DNN slightly outperformed the other
models unlike the previous literature on Bitcoin price prediction. Although CNN and
ResNet are known to be very effective in many applications, including sequence data
analysis, their performance was not particularly good for Bitcoin price prediction.
Overall, there was no clear winner and the performance of all deep learning models
studied in this work was comparable to each other. In addition, although deep
learning models seem to predict the Bitcoin price very well in terms of the
regression analysis, it is still premature to solely use such models for algorithmic
Bitcoin trading.

8. Predicting and Forecasting the Price of Constituents and Index of


Cryptocurrency Using Machine Learning: In this article, they have
presented four different models to predict and forecast the close prices of nine
constituents and cci30 using machine learning approaches. Our models exhibit a
very good performance in overall prediction of the close price of cryptocurrencies,
which can be extremely useful for all including public, private, and government
organizations as through our models, the trends and patterns of these currencies
can be well-understood. However, in the case of forecasting, the K-NN model did not
work very effectively, unlike other models, which happened due to the presence of
noisy random features and extreme volatility. We have compared our work with the
state-of-the-art models from the literature and demonstrated that our models’
performance seems better and competitive. We believe and hope that our model will
be beneficial for people to observe, understand, and choose their own desired
currency from cryptocurrency market.
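As referenced in entry 2 above, the following is a minimal illustrative sketch of the class-imbalance handling idea (ADASYN resampling plus a balanced bagging ensemble) using the imbalanced-learn package; the synthetic data and parameter values are assumptions, not the cited study's 3,476-company dataset.

    from collections import Counter
    from imblearn.ensemble import BalancedBaggingClassifier
    from imblearn.over_sampling import ADASYN
    from sklearn.datasets import make_classification
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Imbalanced toy data: roughly 10% "financially distressed" samples.
    X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                        random_state=0)

    # ADASYN oversamples the minority class before training.
    X_res, y_res = ADASYN(random_state=0).fit_resample(X_train, y_train)
    print("Class counts after ADASYN:", Counter(y_res))

    # Balanced bagging ensemble fitted on the resampled data.
    clf = BalancedBaggingClassifier(n_estimators=50, random_state=0)
    clf.fit(X_res, y_res)
    print("Test accuracy:", round(accuracy_score(y_test, clf.predict(X_test)), 3))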

The surveyed papers are summarized below (Sr. No., Title, Published Year, Abstract,
Conclusion):

1. Determinants of Credit Ratings and Comparison of the Rating Prediction Performances
   of Machine Learning Algorithms (2023)
   Abstract: In the literature, new machine learning algorithms are dynamically produced
   in the field of artificial intelligence engineering and the algorithms are constantly
   updated with new parameter estimations. The performance of existing algorithms in
   various business areas is still an important topic of discussion.
   Conclusion: The reason for the low performance of the Artificial Neural Networks
   algorithm in the study is that it generally creates a learning neural network that
   establishes inter-layer relations with big data and imitates the human brain. This study
   has data limitations. In addition, there is no feature structure suitable for
   distance-based KNN and SVM algorithms in this study.

2. Forecasting the movements of Bitcoin prices: an application of machine learning
   algorithms (2023)
   Abstract: Cryptocurrencies, such as Bitcoin, are one of the most controversial and
   complex technological innovations in today's financial system. This study aims to
   forecast the movements of Bitcoin prices at a high degree of accuracy.
   Conclusion: To this aim, four different Machine Learning algorithms are applied, namely
   the Artificial Neural Network, Random Forest, Support Vector Machines and the Naive
   Bayes, besides the logistic regression (LR) as a benchmark model.

3. A Deep Learning-Based Cryptocurrency Price Prediction Model That Uses On-Chain
   Data (2022)
   Abstract: Cryptocurrency has recently attracted substantial interest from investors due
   to its underlying philosophy of decentralization and transparency. Considering
   cryptocurrency's volatility and unique characteristics, accurate price prediction is
   essential for developing successful investment strategies.
   Conclusion: The authors propose a novel approach that uses multivariate on-chain
   time-series data to predict cryptocurrency prices. BTC price prediction is conducted
   separately based on segmentation within the proposed approach. Unlike traditional
   machine learning-based models, a CPD-based normalization technique enables price
   prediction models to predict unseen price ranges.

4. Stock Price Forecasting: Machine Learning Models with K-fold and Repeated Cross
   Validation Approaches (2022)
   Abstract: Stock exchange price prediction is one of the most researched topics,
   attracting interest from both academics and industry. Various algorithms have been
   developed since the introduction of Artificial Intelligence (AI) and have been used to
   forecast equities market movement. Despite all this research, less attention has been
   paid to the use of cross validation (CV) approaches for better stock price prediction.
   Conclusion: In this work, we have been able to model stock price using the Multiple
   Linear Regression model, RF regression, random tree (CART), ANN and the SVM model.
   Multiple Linear Regression estimates were obtained so as to see the significant
   contributions of the explanatory variables considered. From the result, we observed
   that GDP, inflation rate, exchange rate and interest rate are all significant at 5%. This
   implies that these variables contribute to the price of the stock exchange in Nigeria.

5. On the Predictability of Greek Systemic Bank Stocks using Machine Learning
   Techniques (2022)
   Abstract: Accurate prediction of stock prices is an extremely challenging task because
   of factors such as political conditions, global economy, unexpected events, market
   anomalies, and relevant companies' features. In this work, the random forest has been
   used to forecast the prices of the four major Greek systemic banks.
   Conclusion: Accurate forecasting of bank prices is a challenging task capturing the
   interest of academics and investors. Several machine learning algorithms have been
   widely employed considering their effectiveness in identifying complex relations in the
   stock market data. In this work, the Random Forest machine learning algorithm has
   been employed to predict stock prices in a one-step-ahead out-of-sample forecasting
   exercise.

6. Hafiz, F., Broekaert, J., La Torre, D. and Swain, A., 2021. A multi-criteria approach to
   evolve sparse neural architectures for stock market forecasting. arXiv preprint
   arXiv:2111.08060 (2021)
   Abstract: This study proposes a new framework to evolve efficacious yet parsimonious
   neural architectures for the movement prediction of stock market indices using
   technical indicators as inputs. In the light of a sparse signal-to-noise ratio under the
   Efficient Market Hypothesis, developing machine learning methods to predict the
   movement of a financial market using technical indicators has shown to be a
   challenging problem.
   Conclusion: The problem of day-ahead prediction of NASDAQ index movement was
   explored from the neural design perspective. In particular, attempts have been made to
   clarify the issues related to implications and possible remedies of disparate market
   behaviors prior to and during the ongoing COVID pandemic with respect to the neural
   architecture design.

7. Time-Series Prediction of Cryptocurrency Market using Machine Learning Techniques
   (2021)
   Abstract: In the cryptocurrency market, Bitcoin is the first currency to have gained
   significant importance. To predict the market price and stability of Bitcoin in the
   crypto-market, a machine learning based time series analysis has been applied.
   Conclusion: Our dataset contains the timestamps of yearly, monthly and daily close,
   open, high, low and weighted price of bitcoins. We have pre-processed that data
   according to our requirement of normalization. Then we have applied three machine
   learning algorithms for time series forecasting of the bitcoin prices in the
   cryptocurrency market.

8. Bitcoin Price Prediction Using Machine Learning and Artificial Neural Network Model
   (2021)
   Abstract: This paper explains the working of the Multiple Linear Regression and Long
   Short-Term Memory model in predicting the value of a Bitcoin. Due to its rising
   popularity, Bitcoin has become an investment and works on blockchain technology,
   which also gave rise to other cryptocurrencies.
   Conclusion: The study reveals that the best accuracy rate is shown by Long Short-Term
   Memory rather than Multiple Linear Regression. This study compares the features
   open, close, high, and low only; hence the result may differ if various other features are
   taken into consideration.

9. Ascertaining price formation in cryptocurrency markets with machine learning (2021)
   Abstract: The cryptocurrency market is amongst the fastest-growing of all the financial
   markets in the world. Unlike traditional markets, such as equities, foreign exchange and
   commodities, the cryptocurrency market is considered to have larger volatility and
   illiquidity.
   Conclusion: This paper analyzes a data-driven approach to predict mid-price
   movements in cryptocurrency markets, and covered a number of research questions en
   route regarding parameter settings, design of neural networks and universality of the
   models.

10. A Multi-criteria Approach to Evolve Sparse Neural Architectures for Stock Market
    Forecasting (2021)
    Abstract: This study proposes a new framework to evolve efficacious yet parsimonious
    neural architectures for the movement prediction of stock market indices using
    technical indicators as inputs. In the light of a sparse signal-to-noise ratio under the
    Efficient Market Hypothesis, developing machine learning methods to predict the
    movement of a financial market using technical indicators has shown to be a
    challenging problem.
    Conclusion: The problem of day-ahead prediction of NASDAQ index movement was
    explored from the neural design perspective. In particular, attempts have been made to
    clarify the issues related to implications and possible remedies of disparate market
    behaviors prior to and during the ongoing COVID pandemic with respect to the neural
    architecture design.

11. A SURVEY ON STOCK PRICE PREDICTION USING MACHINE LEARNING (2021)
    Abstract: Stock returns are very fluctuating in nature. They rely upon various factors
    like previous stock prices, current market trends, financial news, etc. To supplement
    their annual income, people have now started viewing stock investments as a
    remunerative option. There are many tools available to investors using technical
    analysis to form decisions.
    Conclusion: Stock investments are of interest to several investors around the world.
    However, making a choice may be a complex task as numerous factors are involved.
    For successful investment, investors are keen to forecast the longer-term situation of
    the securities market. Even the slightest improvement in predictive efficiency will be
    very profitable.

12. Investment advice based on market trends and the financial distress of the company
    (2020)
    Abstract: In this day and age, "investment" has become a necessity and an important
    factor for companies and individuals. An investment is a monetary asset purchased
    with the idea that it will provide a profit in the future. Before investing, people study
    financial performance, background and experience in the industry, company
    uniqueness, effective business model, and large market size of a particular company,
    but they do not focus on minute fingerprints of financial distress. The main aim of this
    research is to draw the factors of investment under a single umbrella and generate
    investment advice.
    Conclusion: The above literature review helped us to understand the research work
    done by different researchers on financial distress prediction. It helps us to understand
    how financial distress prediction was previously performed using statistical analysis
    models like Beaver's univariate model and the Altman Z-score, which were stationary
    models, and then machine learning techniques like SVM, Decision Tree and Naive
    Bayes, which perform well with small-scale data and data with fewer features. As the
    technology evolved, people moved towards creating hybrid models like hybrid
    stepwise-SVM and LDA-SVM, and towards using multiple classifiers like Adaboost-SVM
    and Adaboost-Decision Tree; while using these techniques people focus only on
    increasing performance using multiple classifiers and neglect the class imbalance
    problem.

13. LITERATURE SURVEY ON STOCK PRICE PREDICTION USING MACHINE LEARNING
    (2020)
    Abstract: The stock market has been very successful in attracting people from various
    backgrounds, be it educational or business. The nonlinear nature of the stock market
    has made its research one of the most trending and crucial topics all around the world.
    People decide to invest in the stock market on the basis of some prior research
    knowledge or some prediction. In terms of prediction, people often look for tools or
    methods that would minimize their risks and maximize their profits, and hence stock
    price prediction takes on an influential role in the ever challenging stock market
    business.
    Conclusion: From the research done so far, it could be concluded that the RNN and
    LSTM libraries are very effective in determining the stock price trends relative to the
    actual market trend. At the same time, we could find out that the Python libraries that
    were used as a part of the training process were not very optimal.

14. An SVM-Based Classification Model for Migration Prediction of Beijing (2020)
    Abstract: In this paper, a classification model for the migration of residents without
    local household registration in Beijing is established through the algorithm of Support
    Vector Machine (SVM), and the model is verified using the migration data of Beijing,
    which is collected from various surveys. Our result shows that, compared to BP Neural
    Network and Logistic Regression, SVM performs better in terms of accuracy and
    generalization for these particular classification tasks. We identify ten classification
    features which, we believe, are crucial as the determining factors to predict the
    migration trend in Beijing.
    Conclusion: In this paper, to predict LTNL residents' migration in megacities, we
    established an SVM classification model and used factual survey data to verify the
    model. Through our experiments, we were able to make the following deductions:
    (a) the SVM classification model we established has a great performance, (b) the
    classification and generalization of the SVM model are better than those of the BP
    Neural Network and Logistic Regression models, and (c) SVM models produce more
    stable results.

15. Predicting Crypto Currency Prices Using Machine Learning and Deep Learning
    Techniques (2020)
    Abstract: In the past eight years of Bitcoin's history, the economy has seen the price of
    Bitcoin rapidly grow due to its promising outlook on the future of cryptocurrencies.
    Investors have taken note of several advantages Bitcoin provides over the traditional
    banking system. One such trait is that Bitcoin allows for decentralized banking,
    meaning that Bitcoin cannot be regulated by powerful banks.
    Conclusion: In this paper, several approaches for cryptocurrencies like Bitcoin price
    prediction were investigated. We compared the results of prediction with Multiple
    Linear Regression, Multiple Linear Regression with Features, and Recurrent Neural
    Networks with LSTM cells.

16. Price Movement Prediction of Cryptocurrencies Using Sentiment Analysis and
    Machine Learning (2019)
    Abstract: Cryptocurrencies are becoming increasingly relevant in the financial world
    and can be considered as an emerging market. The low barrier of entry and high data
    availability of the cryptocurrency market make it an excellent subject of study, from
    which it is possible to derive insights into the behavior of markets through the
    application of sentiment analysis and machine learning techniques for the challenging
    task of stock market prediction.
    Conclusion: In this paper, we proved that it is possible to predict the direction of price
    movements for the emerging cryptocurrency market utilizing machine learning and
    sentiment analysis, techniques that had been previously utilized for Bitcoin. We
    evaluated and compared the performance of three prediction models: MLPs, SVMs and
    RFs for Bitcoin, Ethereum, Ripple and Litecoin using Twitter data, market data or both.

17. A Comparative Study of Bitcoin Price Prediction Using Deep Learning (2019)
    Abstract: Bitcoin has recently received a lot of attention from the media and the public
    due to its recent price surge and crash. Correspondingly, many researchers have
    investigated various factors that affect the Bitcoin price and the patterns behind its
    fluctuations, in particular using various machine learning methods. In this paper, we
    study and compare various state-of-the-art deep learning methods such as a deep
    neural network (DNN), a long short-term memory (LSTM) model, a convolutional
    neural network, a deep residual network, and their combinations for Bitcoin price
    prediction.
    Conclusion: The authors developed and compared various deep learning-based Bitcoin
    price prediction models using Bitcoin blockchain information. More specifically, we
    tested state-of-the-art deep learning models such as deep neural networks (DNN),
    long short-term memory (LSTM) models, convolutional neural networks (CNN), deep
    residual networks (ResNet), and their combinations. We addressed both regression and
    classification problems, where the former predicts the future Bitcoin price, and the
    latter predicts whether or not the future price will go up or down. For regression
    problems, LSTM slightly outperformed the other models, whereas for classification
    problems, DNN slightly outperformed the other models, unlike the previous literature
    on Bitcoin price prediction.

18. Predicting and Forecasting the Price of Constituents and Index of Cryptocurrency
    Using Machine Learning (2019)
    Abstract: At present, cryptocurrencies have become a global phenomenon in financial
    sectors as they are among the most traded financial instruments worldwide.
    Cryptocurrency is not only one of the most complicated and abstruse fields among
    financial instruments, but it is also deemed a perplexing problem in finance due to its
    high volatility. This paper makes an attempt to apply machine learning techniques to
    the index and constituents of cryptocurrency with a goal to predict and forecast prices
    thereof. In particular, the purpose of this paper is to predict and forecast the close
    (closing) price of the cryptocurrency index 30 and nine constituents of
    cryptocurrencies using machine learning algorithms and models so that it becomes
    easier for people to trade these currencies.
    Conclusion: In this article, we have presented four different models to predict and
    forecast the close prices of nine constituents and cci30 using machine learning
    approaches. Our models exhibit a very good performance in overall prediction of the
    close price of cryptocurrencies, which can be extremely useful for all, including public,
    private, and government organizations, as through our models the trends and patterns
    of these currencies can be well understood.


3. ANALYSIS

The software development cycle is a combination of different phases such as designing,


implementing and deploying the project. These different phases of the software
development model are described in this section. The SDLC model for the project
development can be understood using the following figure. The chosen SDLC model is the
waterfall model, which is easy to follow and fits best for the implementation of this project.
 Requirements Analysis: At this stage, the business requirements and definitions of use
cases are studied and the respective documentation is generated.
 Design: In this stage, the designs of the data models will be defined and different data
preparation and analysis will be carried out.
 Implementation: The actual development of the model will be carried out in this stage.
Based on the data model designs and requirements from previous stages, appropriate
algorithms, mathematical models and design patterns will be used to develop the
agent’s back-end and front-end components.
 Testing: The developed model based on the previous stages will be tested in this stage.
Various validation tests will be carried out over the trained model.
 Deployment: After the model is validated for its accuracy scores, it is ready to be
deployed or used in simulated scenarios.
 Maintenance: During the use of the developed solution, various inputs/scenarios will
be encountered by the model which might affect the model's overall accuracy, or with
passing time the model might not fit new business requirements. Thus, the model
must be maintained often to keep its desired state of operation.

3.1 System Development Requirements


The system development requirements detail the resources required by the system in the
development phase. The resources are categorized as hardware resources and software
resources.

3.1.1. Hardware Requirements

Table 3.1. Minimum Hardware Requirements

Hardware Type        Minimum Requirement

Processor            2.2 GHz x86/x64 based
Primary Memory       2 GB RAM
Secondary Memory     5 GB HDD/ROM Space
Internet Connection  4 Mbps Stable Connection

3.1.2. Software Requirements

Table 3.2. Minimum Software Requirements

Software Type            Minimum Requirement

Platform (OS)            Windows 10
Front End                Python 3.8, Angular.js
Back End                 MySQL 8.0.22, Django 4.1.0
Design Tool              RSA (Rational Software Architect), Edraw 6.1
Development Tool (IDE)   Visual Studio Code 1.7
Testing Tool             Selenium IDE 3.17.0
Documentation Tool       Microsoft Office 2010

3.2 System Deployment Requirements


The system deployment requirements detail the resources required to deploy, install, and
execute the system on the user's machine. The resources are also categorized as hardware
resources and software resources.

3.2.1. Hardware Requirements

Table 3.3. Minimum Hardware Requirements (Server Side)

Hardware Type        Minimum Requirement

Processor            2.66 GHz x86/x64 based
Primary Memory       2 GB RAM
Secondary Memory     15 GB HDD Space
Internet Connection  4 Mbps Stable Connection

Table 3.4. Minimum Hardware Requirements (Client Side)

Hardware Type        Minimum Requirement

Processor            2.2 GHz x86/x64 based
Primary Memory       2 GB RAM
Secondary Memory     5 GB HDD/ROM Space
Internet Connection  4 Mbps Stable Connection

3.2.2. Software Requirements

Table 3.5. Minimum Software Requirements (Server Side)

Software Type           Minimum Requirement

Platform (OS)           Windows 10
Hosting Server          Local Host
Execution Environment   VS Code, PyCharm
Web Browser             Google Chrome 107.0.5304.107

Table 3.6. Minimum Software Requirements (Client Side)

Software Type           Minimum Requirement

Platform (OS)           Windows 10
Execution Environment   VS Code, PyCharm
Web Browser             Google Chrome 107.0.5304.107

3.3 Functional Requirements


1. Home Page: The system should have a home page to view the Nifty 50, Sensex, and
trending Indian and US stocks.

2. Predict/Search: The system should have search boxes for crypto and stock
predictions, where the user enters the ticker name and the number of days.

3. Ticker List: The system should have a ticker list for finding the ticker symbol of a
specific company or cryptocurrency.

4. Training and Validation: The system should be able to provide functionality for model
training on historical data, considering different time periods and market conditions.

5. Real-time Prediction: The system should be able to make real-time predictions based
on the latest available data.

6. User Interface: Design a user-friendly interface for users to interact with the system.
Provide visualizations, charts, and reports summarizing predictions, historical
performance, and model accuracy.

7. Feedback Mechanism: Include a mechanism for users to provide feedback on


predictions, helping improve the model over time.

8. Integration with External Systems: Enable integration with external financial


platforms and trading systems to facilitate seamless execution of trades based on
predictions.
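As a concrete illustration of requirements 2 and 5, a prediction request could be exposed through the Django back end listed in the software requirements. The sketch below is hypothetical: the URL, the query parameters ("ticker", "days") and the moving-average placeholder forecast stand in for the project's trained model.

    # views.py (illustrative only)
    import yfinance as yf
    from django.http import JsonResponse

    def predict(request):
        ticker = request.GET.get("ticker", "AAPL")
        days = int(request.GET.get("days", 7))
        try:
            history = yf.download(ticker, period="6mo", progress=False)["Close"].squeeze()
        except Exception:
            return JsonResponse({"error": f"could not fetch data for {ticker}"}, status=404)
        if history.empty:
            return JsonResponse({"error": f"no data for {ticker}"}, status=404)
        # Placeholder forecast: repeat the 30-day moving average for `days` steps;
        # the real system would call the trained forecasting model here.
        forecast = [round(float(history.tail(30).mean()), 2)] * days
        return JsonResponse({"ticker": ticker, "days": days, "forecast": forecast})

    # urls.py (illustrative only)
    # from django.urls import path
    # urlpatterns = [path("predict/", predict, name="predict")]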

3.4 Non-Functional Requirements


1. Performance: The system should provide predictions with low latency, ensuring quick
responses to user queries and real-time market conditions.

2. Scalability: The system should be scalable to handle an increasing volume of data and
user interactions over time without compromising performance.

3. Reliability: Ensure high availability of the system, minimizing downtime during critical
market periods

4. Accuracy: Define acceptable levels of accuracy for predictions and continuously monitor
and improve model performance to meet these standards.

5. Security: Implement robust security measures to protect sensitive financial data and
ensure secure communication between the system and external platforms.

6. Access Control: Enforce user authentication and authorization mechanisms to control
access to different system functionalities.

7. Maintainability: Develop the system with modularity and clear documentation to


facilitate ease of maintenance and updates. Ensure that updates can be applied seamlessly
without causing disruptions to the system.

8. Cost Efficiency: Optimize resource usage to ensure cost-effective operation of the


system, considering factors such as cloud service costs and computational expenses.

3.5 Functional Modeling: Data Flow Diagram


A data flow diagram (DFD), also referred to as a 'bubble chart', is a graphical technique
used to represent information flow and the transforms that are applied as data moves
from input to output.

3.5.1 Data Flow Diagram (DFD): Level 0

Figure 3.1. Data Flow Diagram (DFD): Level 0

3.5.2 Data Flow Diagram (DFD): Level 1

Figure 3.2. Data Flow Diagram (DFD): Level 1

3.5.3 Data Flow Diagram (DFD): Level 2

Figure 3.3. Data Flow Diagram (DFD):Level 2

4. PLANNING

4.1 Software Process Model

 Waterfall Model
The waterfall model is a sequential, plan-driven process where you must
plan and schedule all your activities before starting the project. Each activity in the
waterfall model is represented as a separate phase arranged in linear order. Each of these
phases produces one or more documents that need to be approved before the next phase
begins.
It has the following phases:
 Requirement Analysis
 System Design
 Implementation
 Testing
 Deployment
 Maintenance

Fig 4.1. Waterfall Model

4.1.1. Model Description

The Waterfall Model is a sequential design process, often used in software development,
where progress is seen as flowing steadily downward through the phases of conception,
initiation, analysis, design, construction, testing, production/implementation and
maintenance. This model is also called the classic life cycle model, as it suggests a
systematic, sequential approach to software development. It is one of the oldest models
followed in software engineering. The process begins with the communication phase,
where the customer specifies the requirements, and then progresses through other phases
like planning, modeling, construction and deployment of the software.

4.1.2. Model Selection Criteria

There are 5 phases of the waterfall model:

1. Communication: In the communication phase, the major task performed is requirement
gathering, which helps in finding out the exact needs of the customer. Once all the needs of
the customer are gathered, the next step is planning.

2. Planning: In planning, major activities like scheduling, keeping track of the processes,
and estimation related to the project are done. Planning is also used to find the types of
risks involved throughout the project. Planning describes how technical tasks are going to
take place, what resources are needed, and how to use them.

3. Modeling: This is one of the important phases, as the architecture of the system is
designed in this phase. Analysis is carried out and, depending on the analysis, a software
model is designed. Different models for developing software are created depending on the
requirements gathered in the first phase and the planning done in the second phase.

4. Construction: The actual coding of the software is done in this phase. This coding is
done on the basis of the model designed in the modeling phase, so in this phase the
software is actually developed and tested.

5. Deployment: In this last phase, the product is rolled out, delivered and installed at the
customer's end, and support is given if required. Feedback is taken from the customer to
ensure the quality of the product. Over the last two decades, the waterfall model has come
under a lot of criticism due to its efficiency issues, so its advantages and disadvantages
must be weighed.

4.2. Estimations
Estimation is the process of predicting the most realistic amount of effort required to
develop or maintain software based on incomplete, uncertain, and noisy input. Estimation
consists of the following steps:
- Estimate the size in lines of code of each module from empirical data.
- Estimate the effort in person-month or person-hours.
- Estimate the duration in calendar month.
- Estimate the number of people required.
- Estimate the cost in currency.

4.2.1. Historical Data

We have reviewed a project entitled “Finaci AI”. The proposed system has somewhat
similar functionality to it. The project is modularized as shown below

Table 4.1. Size Estimation of Historical Data

Software Module                       LOC

Machine learning algorithm            200
Index page                            800
Backend code                         1000
Ticker Page                           500
Stock Information                    1200
Final Result                          300

Total Estimated Lines of Code (LOC)  4000

We are here considering the approximate size of our product in LOC.

4.2.2. Estimation Technique

COCOMO (Constructive Cost Model) was proposed by Boehm [1981]. COCOMO
predicts the efforts and schedule of a software product based on the size of the software.
According to Boehm, software cost estimation should be done through three stages: Basic
COCOMO, Intermediate COCOMO and Detailed / Complete / Advanced COCOMO. [8,16]
- Basic COCOMO: It is a single-valued, static model that computes software development
effort (and cost) as a function of program size expressed in estimated thousand
delivered source instructions (KDSI) i.e., Lines of code (LOC).
- Intermediate COCOMO: an extension of the Basic model that computes software
development effort as a function of program size by adding a set of "cost drivers," that
will determine the effort and duration of the project, such as assessments of personnel
and hardware.
- Detailed COCOMO: an extension of the Intermediate model that adds effort multipliers
for each phase of the project to determine the cost driver’s impact on each step
(analysis, design, etc.) of the software engineering process.
In our project we are going to use “Basic COCOMO” model for estimations. Basic
COCOMO categorizes projects into three types:
- Organic Mode: (Application Programs such as: data processing, scientific, etc.)
Development projects typically are not complicated and involve small, experienced
teams. The planned software is not considered innovative (i.e., little innovation) and
requires a relatively small number of DSI (typically 2000 to 50,000 LOC). Organic
projects are those developed in a stable development environment and do not have
tight deadlines or constraints.
- Semidetached Mode: (Utility Programs such as: compilers, linkers, analyzers, etc.)
Development projects typically are more complicated than in Organic Mode and involve
teams of people with mixed levels of experience. The software requires no more than
50,000 to 300,000 DSI. The projects require minor innovations and has some deadline &

Department of Computer Engineering 33 D.N.Patel COE Shahada


FinaciAl

constraint restrictions where the development environment is not much stable.


Examples of this type are developing a new database management system.
- Embedded Mode: (System Programs such as: operating system, etc.)Development
projects must fit into a rigid set of requirements because the software is to be
embedded in a strongly joined complex of hardware, software, regulations and
operating procedures. Contains a large highly experienced project team which is
required to do some highly innovative work with very tight deadlines and severe
constraints. The project requires no more than 300,000 DSI.
The Basic COCOMO formula takes the form:
Effort, E = a × (KLOC)^b person-months
Duration, D = c × (E)^d months
Persons, P = E / D persons
Where E is the effort applied in person-months, KLoC is the estimated number of thousands
of delivered lines of code for the project, D is total time duration to develop the system in
months, and P is number of persons required to develop that system.
The coefficients a, c and the exponent b, d are given in the following table.

Table 4.2.Coefficient/Exponent Values of Basic COCOMO

Project Type a b c d
Organic 2.4 1.05 2.5 0.38
Semi-Detached 3.0 1.12 2.5 0.35
Embedded 3.6 1.20 2.5 0.32

This system will fall in the “Embedded” category.

4.2.3. Cost Estimation

4.2.3.1. Size Estimation

Table 4.3.Size Estimation of Current System.

Software Module LOC


Login/signup 200
Homepage 2000
Listing page 1300
Machine learning algorithm 300


Results page 1200


Total Estimated Lines of Code (LOC) 5000

Total lines of code for the proposed system will be approximately 5000.

4.2.3.2. Effort Estimation

The system falls into the Embedded category. The values of a and b for the Embedded mode
are a = 3.6 and b = 1.20.
Total LOC (approx.) of the project is 5000 LOC = 5.00 KLOC
Effort (E) = a × (KLoC)^b
E = 3.6 × (5.00)^1.20
E = 24.83 ≈ 25 Person Months

4.2.3.3. Duration Estimation

The values of c and d for the Embedded mode are c = 2.5 and d = 0.32.

Duration (D) = c × (E)^d
D = 2.5 × (24.83)^0.32
D = 2.5 × 2.7950
D = 6.98 ≈ 7 Months

4.2.3.4. Person Required

Person Required = Effort Applied (E) / Development Time (D)

= 24.83 / 7
= 3.55 ≈ 3 Persons (rounded down)

4.2.3.5. Estimated Cost of System

We assume each team member charges ₹500/- per month, ₹1000/- per month is required for other
resources & miscellaneous purposes, and ₹500/- is required for additional hardware. Thus,
Estimated Cost of System = ((Person Charges × Person Required) + Resource Charges) ×
Duration [+ Hardware Cost]
= ((500 × 3) + 1000) × 7 + 500 = ₹18,000/-
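For reference, the arithmetic of Sections 4.2.3.2 to 4.2.3.5 can be reproduced with a few lines of Python. This is only an illustrative sketch of the Basic COCOMO formulas with the Embedded-mode coefficients from Table 4.2; it is not part of the system itself.

# Illustrative sketch: Basic COCOMO arithmetic used above (Embedded mode).
def basic_cocomo(kloc, a=3.6, b=1.20, c=2.5, d=0.32):
    effort = a * (kloc ** b)        # person-months
    duration = c * (effort ** d)    # months
    persons = effort / duration     # people
    return effort, duration, persons

effort, duration, persons = basic_cocomo(5.0)        # 5000 LOC = 5.0 KLOC
# The report rounds the duration up to 7 months and the persons down to 3.
cost = ((500 * 3) + 1000) * 7 + 500                   # Rs. 18,000 as in Section 4.2.3.5
print(f"E = {effort:.2f} PM, D = {duration:.2f} months, P = {persons:.2f}, Cost = Rs. {cost}")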


4.2.3.6. Estimation Summary

Table 4.4. Summary of different Estimation.

Estimation Value
Size of the Project 5000 LoC
Effort Required 25 Person Months
Duration Required 7 Months
Person Required 3
Cost Required ₹18,000/-

4.3. Team Structure


Team structure addresses the organization of the individual project team. As per
the estimation, the project team consists of four members. The effort assignment,
duties, and details of each member are given below:
Table 4.5. Team Structure.

Sr. No. | Name of Team Member | Phase-I Role | Phase-II Role | E-mail ID
1. | Patil Tushar Vinod | Leader | Leader | [email protected]
2. | Thakur Rushikesh Bharat | Member | Member | rushikeshthakur0101@gmail.com
3. | Patil Urvesh Madhukar | Member | Member | urveshpatil152003@gmail.com
4. | Beldar Faizan Khan Arif Khan | Member | Member | [email protected]

4.4. Project Scheduling


Project scheduling involves plotting project activities against a time frame. The aim of the
process is to ensure that various project tasks are well coordinated, and they meet the
various project objectives including timely completion of the project. The Project Table is a
popular way to perform project scheduling.


Table 4.6. Project Table

Activity Name | Start Date | Actual Start Date | End Date | Actual End Date | Effort Assignment
Introduction | 04/10/23 | 05/10/23 | 08/10/23 | 12/10/23 | Mr. Patil Tushar Vinod, Mr. Thakur Rushikesh Bharat
Literature Survey | 06/10/23 | 06/10/23 | 07/10/23 | 07/10/23 | Mr. Beldar Faizan Khan Arif Khan, Mr. Patil Urvesh Madhukar
Analysis | 10/10/23 | 10/10/23 | 09/11/23 | 09/11/23 | Mr. Patil Tushar Vinod, Mr. Thakur Rushikesh Bharat
Planning | 12/11/23 | 12/11/23 | 10/12/23 | 08/12/23 | Mr. Beldar Faizan Khan Arif Khan, Mr. Patil Urvesh Madhukar
Design | 22/01/24 | 24/01/24 | 30/01/24 | 31/01/24 | Mr. Patil Tushar Vinod, Mr. Thakur Rushikesh Bharat
Phase-I Documentation | 05/12/23 | 06/12/23 | 28/12/23 | 29/12/23 | Mr. Beldar Faizan Khan Arif Khan, Mr. Patil Urvesh Madhukar
Implementation | 24/02/24 | 25/02/24 | 20/03/24 | 21/03/24 | Mr. Patil Tushar Vinod, Mr. Thakur Rushikesh Bharat
Testing | 04/04/24 | 05/04/24 | 09/04/24 | 10/04/24 | Mr. Beldar Faizan Khan Arif Khan, Mr. Patil Urvesh Madhukar
Deployment | 12/04/24 | 13/04/24 | 15/04/24 | 16/04/24 | Mr. Patil Tushar Vinod, Mr. Thakur Rushikesh Bharat
Phase-II Documentation | 20/04/24 | 22/04/24 | 19/05/24 | 20/05/24 | Mr. Beldar Faizan Khan Arif Khan, Mr. Patil Urvesh Madhukar


5. DESIGN

Design uses a combination of text and diagrammatic forms to depict the
requirements for data, function, and behavior in a way that is relatively easy to understand
and, more importantly, straightforward to review for correctness, completeness, and
consistency. A diagram is the graphical presentation of a set of elements, most often
rendered as a connected graph of vertices (things) and arcs (relationships). These diagrams
are drawn to visualize a system from different perspectives, so a diagram is a projection
into a system.

5.1. UML Modeling

The Unified Modeling Language (UML) is a graphical language for visualizing, specifying,
constructing, and documenting the artifacts of a software-intensive system. The UML gives a
standard way to write a system's blueprints, covering conceptual things, such as business
processes and system functions, as well as concrete things, such as classes written in a
specific programming language, database schemas, and reusable software components.

5.1.1. Use Case Diagram

A use case defines the behavioral features of a system. Each use case is named using a verb
phrase that expresses a goal of the system. A use case diagram shows a set of use cases and
actors and their relationships. Use case diagrams address the static use case view of a system.
These diagrams are especially important in organizing and modeling the behaviors of a
system. A use case diagram gives a graphical overview of the functionality provided by the
system in terms of its actors.


Figure 5.1. Use case Diagram for User

Table 5.1. Use case Description for User

Use case | Description
Login | The user can log in to begin working with the application.
Select Stocks to Predict | The user shall be able to select which company stock to predict.
Company Information | The user shall be able to see all the relevant information related to that company.
Results | The user shall be able to see the predicted future value of the selected stock.
Logout | The user can log out to exit the application.


5.1.2. Activity Diagram

An activity diagram is a special kind of statechart diagram that shows the flow from
activity to activity within a system. An activity diagram addresses the dynamic view of a
system. The activity diagram is often seen as part of the functional view of a system because
it describes logical processes, or functions. Each process describes a sequence of tasks and
the decisions that govern when and whether they are performed. The flow in an activity
diagram is driven by the completion of an action.

Figure 5.2. Activity Diagram 1


Figure 5.3. Activity Diagram 2


5.1.3. Sequence Diagram

A sequence diagram is a kind of interaction diagram. It shows an interaction, consisting of a
set of objects and their relationships, including the messages that may be dispatched among
them. A sequence diagram emphasizes the time ordering of messages. As shown in the
figure, we form a sequence diagram by first placing the objects that participate in the
interaction at the top of the diagram, with the object that initiates the interaction at the left
and increasingly subordinate objects to the right. The messages that these objects send
and receive are placed along the Y-axis, in order of increasing time from top to bottom. This gives the
reader a clear visual cue to the flow of control over time.

Figure 5.4. Sequence Diagram


5.1.4. Class Diagram

A class diagram shows a set of classes, interfaces, and collaborations and their relationships.
These diagrams are the most common diagrams found in modeling object-oriented systems.
Class diagrams address the static design view of a system.

Figure 5.5.Class Diagram


Table 5.3. Class Description

Class | Description
Register | The user is able to register with the application.
Login | The user is able to log in to the application.
System | The user can select which company stock to predict via its ticker, or can predict it directly.
Stocks | The user is able to see the future value of the stock that has been selected.
Cryptocurrency | The user is able to see the future value of the cryptocurrency that has been selected.

5.1.5. Component Diagram

A component diagram shows the organization and dependencies among a set of
components. Component diagrams address the static implementation view of a system;
they are one of the two kinds of diagrams found in modeling the physical aspects of
object-oriented systems.

Figure 5.6.Component Diagram


5.1.6. Deployment Diagram

A deployment diagram shows the configuration of run-time processing nodes and the
components that live on them. Deployment diagrams address the static deployment view
of an architecture. They are related to component diagrams in that a node typically encloses
one or more components.

Figure 5.7. Deployment Diagram


6 . IMPLEMENTATION

6.1. Implementation Language: Python


Python is an interpreted, high-level and general-purpose programming language. Created
by Guido van Rossum and first released in 1991, Python’s design philosophy emphasizes
code readability with its notable use of significant whitespace. Its language constructs and
object oriented approach aim to help programmers write clear, logical code for small and
large-scale projects. Python is dynamically typed and garbage-collected. It supports
multiple programming paradigms, including structured (particularly, procedural), object-
oriented, and functional programming. Python is often described as a ”batteries included”
language due to its comprehensive standard library. Python was created in the late 1980s
as a successor to the ABC language. Python 2.0, released in 2000, introduced features like
list comprehensions and a garbage collection system with reference counting. Python 3.0,
released in 2008, was a major revision of the language that is not completely backward-
compatible, and much Python 2 code does not run unmodified on Python 3.

The Python 2 language was officially discontinued in 2020 (first planned for 2015), and
”Python 2.7.18 is the last Python 2.7 release and therefore the last Python 2 release.” [30]
No more security patches or other improvements will be released for it. With Python 2’s
end-of-life, only Python 3.6.x and later are supported.

6.1.1. Features

We have selected Python to implement the system because of the following reasons:

1. Free and Open Source: Python is freely available on its official website. Since it is
open source, its source code is also available to the public, so you can download it,
use it, and share it.
2. Easy to Code: Python is a high-level programming language and is very easy to learn
compared to languages like C, C#, JavaScript, and Java. It is very easy to code in
Python, anybody can learn the basics in a few hours or days, and it is a
developer-friendly language.
3. Easy to Read: As was already established, Python's syntax is really straightforward.
Code blocks are defined by indentation rather than by semicolons or brackets.
4. Object-Oriented Language: One of the key features of Python is object-oriented
programming. Python supports object-oriented concepts such as classes and object
encapsulation.
5. GUI Programming Support: Graphical user interfaces can be made using modules
such as PyQt5, PyQt4, wxPython, or Tk. PyQt5 is the most popular option for
creating graphical apps with Python.
6. High-Level Language: Python is a high-level language. When we write programs in
Python, we do not need to remember the system architecture, nor do we need to
manage memory.
7. Large Community Support: Python has gained popularity over the years, and
questions are constantly answered by the enormous Stack Overflow community.
Many questions about Python have already been answered there, so Python users
can consult these resources as needed.
8. Easy to Debug: Python provides excellent information for error tracing. Once you
understand how to interpret Python's error traces, you can quickly identify and
correct the majority of your program's issues, often just by glancing at the code.

6.1.2. Reason for Selection

We have selected the Python programming language to implement the system because of the
following reasons:

 Rich ML Ecosystem: Python boasts a vast ecosystem of libraries and frameworks


for machine learning, such as TensorFlow, PyTorch, and scikit-learn. These libraries
provide powerful tools and algorithms for training and deploying machine learning
models, making Python a natural choice for ML projects.
 Ease of Prototyping: Python's concise and readable syntax allows for rapid
prototyping and experimentation. This is particularly beneficial in the iterative
process of developing machine learning models, where quick testing and refinement
are crucial.
 Data Access: Python integrates smoothly with financial data sources such as the
yfinance library, which makes it straightforward to fetch historical stock and
cryptocurrency prices and prepare them as inputs for model training.
 Flexibility and Scalability: Python is a versatile language that can be used for both
smallscale experiments and large-scale production systems. Its flexibility allows you
to easily integrate machine learning components into your project and scale them as
needed.

6.1.3. Comparison with Java

Python is one of the most used programming languages due to its simplicity,
readability and extensive library support. Python consists of extensive machine learning
libraries and frameworks. Some of the most used libraries for machine learning models are
NumPy, Pandas, TensorFlow, etc., which are used for training models.

Python is often the first choice of most beginners as well as professional developers around
the world. Why is Python the first choice of machine learning developers? Some of the most
important reasons are given below.

 Python syntax is concise and easy to understand.


 It is suitable for rapid prototyping and experimentation during the development of
machine learning models.
 Extensive library support such as Numpy, PyTorch, TensorFlow, Scikit-learn, etc.
 It is often the first choice among the data science community spread around the
world.

However, Java is also a competitive choice for machine learning developers. In the end, it
depends on the developer's preference and the kind of projects they are currently working on.
Programming languages are just the medium to tell systems the work they need to carry out,
so we should feel free while deciding which programming language to go with and make a
selection based on personal research and preference.


Table 6.1. Comparison of Python with Java

Criteria | Python | Java
Ease of Use | Known for simplicity and readability. Concise syntax facilitates rapid prototyping and development. | More verbose syntax; often requires more boilerplate code for similar tasks.
Rich Ecosystem | Vast ecosystem of data analysis, machine learning, and financial modeling libraries (e.g., NumPy, pandas, scikit-learn, TensorFlow). | Smaller ecosystem for data science and machine learning compared to Python. Libraries like Weka and Deeplearning4j exist but are less extensive.
Community Support | Large and active community of data scientists, researchers, and developers. Abundance of resources, forums, and tutorials available. | Active community but smaller compared to Python's. Resources may be less abundant.
Flexibility | Versatile language supporting procedural and object-oriented programming. Ideal for exploratory data analysis and iterative development. | Often associated with enterprise-grade applications. Capable of implementing machine learning algorithms but may require more effort for experimentation and prototyping.
Integration with Financial APIs | Many financial data providers offer Python libraries and SDKs for accessing and analyzing market data. | Java libraries for financial APIs exist but may be less common compared to Python.
Deployment Options | Various deployment options including cloud services, containerization (e.g., Docker), and serverless computing (e.g., AWS Lambda). | Often used for server-side development and enterprise applications. Deployment options may be more traditional.
Educational Resources | Widely taught in universities and online courses for data science and machine learning. Many aspiring data scientists are already familiar with Python. | Less commonly taught in the context of data science and machine learning compared to Python.

6.2. Implementation Tool(s): VS Code

Visual Studio Code is a source-code editor made by Microsoft for Windows, Linux, and
macOS. Features include support for debugging, syntax highlighting, intelligent code
completion, snippets, code refactoring, and embedded Git.


VS Code has a modern and minimalist user interface, which can be customized through
themes and extensions. It offers a clean and uncluttered coding experience. Eclipse, on the
other hand, has a more traditional IDE interface with multiple views, perspectives, and
toolbars. Eclipse provides a rich set of menus and options, giving you more control and
flexibility in configuring the IDE. VS Code has a vast marketplace of extensions that can
enhance its functionality. It has a strong ecosystem with support for various programming
languages and frameworks.

6.2.1. Features

1. Integrated Development Environment (IDE) Features: Despite being a


lightweight code editor, VS Code offers many features typically found in full-fledged
IDEs, such as code refactoring, code navigation, and project-wide search.

2. Cross-platform: VS Code is available for Windows, macOS, and Linux, making it


accessible to a wide range of developers.

3. IntelliSense: This feature provides smart code completion suggestions, parameter


info, and member lists based on variable types, function definitions, and imported
modules.

4. Debugger: VS Code has built-in support for debugging applications. It allows


developers to set breakpoints, inspect variables, step through code, and perform
other debugging tasks.

5. Extensions: VS Code has a rich ecosystem of extensions that enhance its


functionality. These extensions cover a wide range of use cases, including language
support, debugging, code formatting, and version control integration.

6. Version Control Integration: It comes with built-in Git support, allowing


developers to perform common Git operations such as commit, push, pull, and
branch management directly from the editor.

7. Customization: VS Code is highly customizable. Users can customize themes,


keyboard shortcuts, and other aspects of the editor to suit their preferences and
workflow.


8. Terminal Integration: VS Code includes an integrated terminal, allowing


developers to run command-line tools and scripts without leaving the editor.

9. Task Runner: It provides a task running system that allows developers to define
and run tasks, such as building projects or running tests, directly from the editor.

10. Multi-root Workspaces: VS Code supports working with multiple folders open in
the same window, allowing developers to organize their projects more efficiently.

6.2.2. Reason for Selection

We have selected VS Code to implement the system because of the following reasons:

 Both VS Code and PyCharm have active communities, but PyCharm has a longer
history and a larger user base. It has been around for many years and has a mature
ecosystem with extensive documentation, tutorials, and online resources.
 VS Code has gained significant popularity in recent years and has a growing
community with a focus on web development, including Python. VS Code and
PyCharm are both popular integrated development environments (IDEs) used for
Python development, but they have some key differences in terms of features,
performance, and community support.
 As we are not focused on a feature-rich IDE with comprehensive Python development
tools and instead prioritize simplicity, extensibility, and a lightweight environment,
VS Code is the right choice for us.

6.2.3. Comparison with PyCharm

VS Code is a lightweight, general-purpose code editor that supports multiple languages and
frameworks through extensions. It is easy to set up and use, and it integrates with Git and
other development tools. VS Code is a good choice for developers who aren't exclusively
focused on Python or who work in polyglot programming environments. It offers faster
startup times and better performance when working on large codebases. However, its
debugging tools aren't as advanced as PyCharm's, and Python-specific features require
additional extensions.


PyCharm is an integrated, feature-rich IDE that is specifically tailored for Python. It has a rich
set of features including advanced refactoring and debugging tools, and it supports multiple
Python frameworks. PyCharm is a good choice for developers who want an integrated,
feature-rich IDE that streamlines the Python development process. However, it is more
resource-intensive than VS Code, which can result in slower startup times and higher memory
usage, and it may have a steeper learning curve for beginners.

Table 6.2. Comparison of VS Code with PyCharm

Criteria | VS Code (IDE selected) | PyCharm (IDE to compare)
Pricing | Free and open-source. | Both free and paid versions available.
IDE Features | Lightweight code editor with some IDE features (e.g., IntelliSense, debugger, version control integration). | Full-fledged integrated development environment (IDE) with advanced features for Python development (e.g., code refactoring, database tools, web development support).
Language Support | Supports multiple programming languages with extensions (including Python). | Specifically designed for Python development, with comprehensive support for Python features and libraries.
Performance | Lightweight and fast, suitable for smaller projects and quick edits. | Generally performs well but may be slightly heavier due to its extensive features.
Integration with Tools | Integrates well with various tools and services through extensions. | Seamless integration with JetBrains' suite of tools (e.g., WebStorm, IntelliJ IDEA) and other JetBrains products.
Remote Development | Provides extensions for remote development (e.g., SSH, WSL). | Supports remote development out of the box with features like SSH, Docker, and WSL integration.
Collaboration Features | Supports basic collaboration features through extensions (e.g., Live Share). | Offers more advanced collaboration features built-in (e.g., code review, pair programming).


6.3. Form Design

6.3.1. Signup Form

Figure 6.1. Signup Form


6.3.2. Login Form

Figure 6.2. Login Form


6.3.3. Ticker Prediction Form

Figure 6.3. Ticker Prediction Form

6.3.4. Prediction Result


Figure 6.4. Prediction Result


6.4. Code Snippet

6.4.1. Sample Code Algorithm

Figure 6.5. Code Snippet: Multiple Linear Regression.
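Since Figure 6.5 is reproduced as an image, we give a minimal textual sketch of the same idea below. It assumes the yfinance, pandas, and scikit-learn packages and uses three lagged closing prices as the regression inputs; the project's actual feature set and preprocessing may differ.

# Illustrative sketch only: Multiple Linear Regression on lagged closing prices.
import pandas as pd
import yfinance as yf
from sklearn.linear_model import LinearRegression

close = yf.Ticker("INFY.NS").history(period="2y")["Close"].dropna()

# Supervised framing: the previous three closes predict the next close.
frame = pd.DataFrame({f"lag_{i}": close.shift(i) for i in (1, 2, 3)})
frame["target"] = close
frame = frame.dropna()

model = LinearRegression()
model.fit(frame[["lag_1", "lag_2", "lag_3"]], frame["target"])

# One-step-ahead prediction from the three most recent closes.
latest = pd.DataFrame([[close.iloc[-1], close.iloc[-2], close.iloc[-3]]],
                      columns=["lag_1", "lag_2", "lag_3"])
print("Next predicted close:", float(model.predict(latest)[0]))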


6.4.2. Sign Up (Backend)

Figure 6.6. Code Snippet: Signup (Backend).

6.4.3. Login / Logout (Backend)

Figure 6.7. Code Snippet: Login/ Logout (Backend).
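Figure 6.7 is also an image, so a sketch of one common way to implement the login/logout flow is shown here. It assumes a Django backend with its built-in authentication; the report does not state the web framework in this section, so the view names, template paths, and redirect targets are hypothetical.

# Hypothetical Django-style login/logout views (framework assumed, not confirmed).
from django.contrib.auth import authenticate, login, logout
from django.shortcuts import redirect, render

def login_view(request):
    if request.method == "POST":
        user = authenticate(request,
                            username=request.POST.get("username"),
                            password=request.POST.get("password"))
        if user is not None:
            login(request, user)                 # start the session
            return redirect("home")              # e.g. the index/ticker page
        return render(request, "login.html", {"error": "Invalid credentials"})
    return render(request, "login.html")

def logout_view(request):
    logout(request)                              # clear the session
    return redirect("login")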


6.4.4. Ticker Search Logic

Figure 6.8. Code Snippet: Ticker Search logic.
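Figure 6.8 shows the ticker-search logic as an image. The sketch below illustrates the behaviour that the Chapter 7 test cases exercise (a valid symbol such as INFY.NS leads to the results page, an invalid one such as INFY,CS does not), assuming yfinance is used to validate the symbol; the actual view code may differ.

# Illustrative sketch: validate a ticker symbol before running a prediction.
import yfinance as yf

def is_valid_ticker(symbol: str) -> bool:
    """True if yfinance returns any recent price history for `symbol`."""
    return not yf.Ticker(symbol).history(period="1mo").empty

for symbol in ("INFY.NS", "INFY,CS"):
    page = "results page" if is_valid_ticker(symbol) else "invalid ticker page"
    print(f"{symbol} -> {page}")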


7. TESTING

Testing is an investigation conducted to provide stakeholders with information about


the quality of the product or service under test. Software Testing also provides an objective,
independent view of the software to allow the business to appreciate and understand the
risks in implementation of the software. Test techniques include, but are not limited to, the
process of executing a program or application with the intent of finding software bugs.
Software Testing, depending on the testing method employed can be implemented at
any time in the development process. However, most of the test effort occurs after the
requirements have been defined and the coding process has been completed. As such, the
methodology of the test is governed by the Software Development methodology adopted.

7.1. Testing Approach: Software Testing


Software testing, depending on the testing method employed, can be implemented at any
time in the development process. However, most of the test effort occurs after the
requirements have been defined and the coding process has been completed. As such, the
methodology of the test is governed by the software development methodology adopted.
Different software development models will focus the test effort at different points in the
development process. Newer development models, such as Agile, often employ test driven
development and place an increased portion of the testing in the hands of the developer,
before it reaches a formal team of testers. In a more traditional model, most of the test
execution occurs after the requirements have been defined and the coding process has been
completed.

TYPE OF TESTING USED

 Unit Testing: It is the testing of individual software units of the application. It is
done after the completion of an individual unit, before integration. Unit testing
involves the design of test cases that validate that the internal program logic is
functioning properly and that program inputs produce valid outputs. All decision
branches and internal code flow should be validated. This is structural testing that
relies on knowledge of the unit's construction and is invasive. Unit tests perform basic
tests at component level and test a specific business process, application, and/or
system configuration. Unit tests ensure that each unique path of a business process
performs accurately to the documented specifications and contains clearly defined
inputs and expected results.
 Integration Testing: Integration tests are designed to test integrated software
components to determine if they actually run as one program. Testing is event
driven and is more concerned with the basic outcome of screens or fields.
Integration tests demonstrate that although the components were individually
satisfactory, as shown by successful unit testing, the combination of components is
correct and consistent. Integration testing is specifically aimed at exposing the
problems that arise from the combination of components.
 System Test: To test this application we follow a proper sequence of testing: unit,
integration, validation, and GUI testing, with low-level and high-level test cases
covering the major scenarios. We perform GUI testing first and then integration
testing. After integration testing, we execute the high-level test cases and the major
scenarios that can affect the working of the application. We test the data transmitted
using various inputs and outputs and validate the results.
 WHITE-BOX TESTING: Software testing methods are traditionally divided into white-
and black-box testing. These two approaches describe the point of view that a test
engineer takes when designing test cases. In white-box testing, an internal
perspective of the system, as well as programming skills, is used to design test cases.
 BLACK-BOX TESTING: Black-box testing treats the software as a "black box",
examining functionality without any knowledge of internal implementation. The
testers are only aware of what the software is supposed to do, not how it does it.

7.1.1. Features

Software testing is a critical process in the development of software applications. It involves


evaluating a system or its components with the intent to find whether it satisfies the
specified requirements or not. Testing is essential to ensure that the software is reliable,
functional, secure, and meets the expectations of users. Here are some key features and
aspects of software testing:
• Quality Assurance: Testing helps in ensuring that the software meets the quality standards
set by the organization. It involves processes and activities that ensure the development and
maintenance processes are adequate to meet the goals.
• Bug Detection: One of the primary purposes of testing is to identify and eliminate bugs or
defects in the software. This helps in enhancing the overall quality and reliability of the
software.
• Validation and Verification: Testing is used to validate that the software meets the
requirements specified by the stakeholders and to verify that the software is built according
to the design specifications.
• Functional and Non-functional Testing: Testing can be classified into two broad
categories: functional testing, which focuses on testing the functionality of the software,
and non-functional testing, which focuses on aspects like performance, usability, security,
etc.
• Test Planning and Execution: Testing involves planning the testing activities, creating
test cases, executing the tests, and analyzing the results. It requires a systematic approach
to ensure that all aspects of the software are adequately tested.

7.1.2. Reason for Selection

• Quality Assurance: Testing ensures that the software meets the quality standards
expected by the stakeholders. By identifying and fixing bugs and defects, testing contributes
to the overall reliability and usability of the software.

• Customer Satisfaction: Thorough testing helps in delivering a high-quality product that


meets or exceeds customer expectations. By detecting and fixing issues before deployment,
testing reduces the likelihood of customer dissatisfaction due to software failures or errors.

• Compliance and Regulatory Requirements: Many industries have strict compliance and
regulatory requirements that software must meet. Testing ensures that the software
complies with these requirements, reducing the risk of legal or financial penalties for non-
compliance.

• Cost Savings: While investing in testing may seem like an additional expense, it often
results in cost savings in the long run. By identifying and fixing defects early, testing
reduces the cost of fixing issues later in the development lifecycle or after deployment.

• Enhanced Security: Security testing helps in identifying vulnerabilities and weaknesses in


the software that could be exploited by malicious actors. By uncovering security issues
early, testing helps in building more secure software and protecting sensitive data.

• Continuous Improvement: Testing is not a one-time activity but an ongoing process


throughout the software development lifecycle.


7.2. Test Plan


A test plan documents the strategy that will be used to verify and ensure that a
product or system meets its design specifications and other requirements. A test
plan is usually prepared by, or with significant input from, test engineers.

Table 7.1. Test Plan

Sr. No. | Test Item (Module/Webpage/Function etc.) | Planned Date | Name of Tester
1. | Webpages | 03/05/2024 | Mr. Patil Urvesh Madhukar
2. | Stock Prediction Module | 06/05/2024 | Mr. Patil Urvesh Madhukar
3. | Crypto Prediction Module | 08/05/2024 | Mr. Beldar Faizan Khan Arif Khan
4. | Display Result Module | 10/05/2024 | Mr. Beldar Faizan Khan Arif Khan

7.3. Test Case Design


A test case in software engineering is a set of conditions or variables under which a tester
will determine whether an application or software system is working correctly or not. It
may take many test cases to determine that a software program or system is functioning
correctly. Test cases are often referred to as test scripts, particularly when written. Written
test cases are usually collected into test suites.
Table 7.2. Test Case#1 Login page (Manual Testing)

Sr. No. | Test Item | Purpose | Input(s) | Output(s) | Validation(s)
1. | Login Page | To make the user log in to the system | User Id, Password | Navigate to user account | Allow valid users; restrict invalid users
2. | Enter the correct username and password and click on the submit button | To make the user log in to the system | User Id, Password | Accept | Valid users


Table 7.3. Test Case#2 Signup page (Manual Testing)

Sr. No. | Test Item | Purpose | Input(s) | Output(s) | Validation(s)
1. | Enter a number in the first name / last name field | Registration | Number | Invalid-info notification occurs | Invalid
2. | Enter an invalid email id format in the email id field | Registration | test@admin | Error occurs on invalid email | Invalid
3. | Enter characters in the username / last name field | Registration | Character | Accept | Valid

Table 7.4. Test Case#3 Ticker page (Manual Testing)

Sr. No. | Test Item | Purpose | Input(s) | Output(s) | Validation(s)
1. | Entering valid ticker values in the Ticker Predictor form | Enter an appropriate ticker and number of days | e.g. INFY.NS, 30 | Redirect to Results page | Valid
2. | Entering an invalid ticker value in the Ticker Predictor form | Enter an inappropriate ticker and number of days | e.g. INFY,CS, 30 | Redirect to Invalid Ticker page | Invalid

7.4. Test Results


Table 7.5. Test Results (Manual Testing)

Sr. No. | Test Item | Errors | Bugs | Remark
1. | Login Page | None | None | Pass
2. | Registration Page | None | None | Pass
3. | Ticker Search Page | None | None | Pass
4. | Result Page | None | None | Pass

Table 7.6. Test Case 1 Login (Automatic Testing)

Test Item: Login Page

Sr. No. | Command | Target | Value
1. | Open | /user/login/ |
2. | Type | field1 = username | [email protected]
3. | Type | field2 = password | Admin
4. | clickAndWait | button1 = login | link = userAccount

Test Result: Pass
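The commands in Table 7.6 follow the Selenium IDE style (open, type, clickAndWait). An equivalent automated check written with Selenium WebDriver in Python might look like the sketch below; the base URL, element locators, and credentials are placeholders, not values taken from the project.

# Hypothetical Selenium WebDriver version of the login test in Table 7.6.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("http://localhost:8000/user/login/")        # "Open /user/login/"
    driver.find_element(By.NAME, "username").send_keys("user@example.com")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.NAME, "login").click()          # "clickAndWait button1 = login"
    assert "userAccount" in driver.page_source             # expected link after login
    print("Test Result: Pass")
finally:
    driver.quit()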

Table 7.7. Test Case 2 Registration (Automatic Testing)

Test Case #2

Test Item: Registration Page

Sr. No. | Command | Target | Value
1. | Open | /user/signup/ |
2. | Type | field1 = First_name | John
3. | Type | field2 = Last_name | Doe
4. | Type | field3 = Username | Admin
5. | Type | field4 = password | Admin@123
6. | Type | field5 = confirm pass | Admin@123
7. | clickAndWait | button1 = Submit Details |

Test Result: Pass

Table 7.8. Test Case 3 Ticker Search (Automatic Testing)

Test Case #3

Test Item: Ticker Prediction Page

Sr. No. | Command | Target | Value
1. | Open | /search/ |
2. | Type | field1 = Ticker symbol | INFY.NS
3. | Type | field2 = Number of Days | 30
4. | clickAndWait | button1 = Predict Ticker |

Test Result: Pass

Table 7.9. Test Case 4 Result (Automatic Testing)

Test Case #4

Test Item: Result Page

Sr. No. | Command | Target | Value
1. | Open | /result/ |
2. | Stock | field1 = Company Overview | Show Value
3. | Crypto | field2 = Currency Overview | Show Value
4. | Display | field3 = Buy / Sell |

Test Result: Pass


8. PROJECT COST AND EFFORT

8.1. Estimation Technique: Detailed COCOMO


For the initial estimation of our project, we used the first stage of COCOMO, i.e., Basic
COCOMO. Now that the work is completed, we have all the necessary and actual
information required for the cost calculation, hence here we use Detailed COCOMO.
Detailed COCOMO incorporates all characteristics of the intermediate version with an
assessment of the cost drivers' impact on each step (analysis, design, etc.) of the software
engineering process.
The detailed model uses different effort multipliers for each cost driver attribute.
These phase-sensitive effort multipliers are used to determine the amount of effort
required to complete each phase. In Detailed COCOMO, the whole software is divided into
different modules, COCOMO is applied to each module to estimate its effort, and the
efforts are then summed.
Detailed COCOMO incorporates a set of "cost drivers" that include subjective
assessment of product, hardware, personnel, and project attributes. The 17 cost drivers
are multiplicative factors that determine the effort required to complete our
software project. Each of the 17 attributes receives a rating on a six-point scale that ranges
from "very low" to "extra high" (in importance or value).
Table 8.1 Cost Drivers for Detailed COCOMO

Personnel Factors
Analyst Capability (ACAP) Platform Experience (PLEX)
Applications Experience (APEX) Language and Tool Experience (LTEX)
Programmer Capability (PCAP) Personnel Continuity (PCON)
Project Factors
Use of Software Tools (TOOL) Development Schedule (SCED)
Multisite Development (SITE)
Platform Factors
Execution Time Constraint (TIME) Database Size (DATA)
Main Storage Constraint (STOR) Product Complexity (CPLX)
Platform Volatility (PVOL) Required Reusability (RUSE)
Documentation Match to Lifecycle Needs
Required Software Reliability (RELY)
(DOCU)


After assigning rating to each of the cost drivers the ratings are multiplied together to yield
Effort Adjustment Factor (EAF).
The Detailed COCOMO formulas take the form:
Effort, E = a × (KLoC)^b × EAF person-months
Duration, D = c × (E)^d months
Persons, P = E / D persons
where E is the effort applied in person-months, KLoC is the estimated number of thousands
of delivered lines of code for the project, EAF is the Effort Adjustment Factor, D is the total
time required to develop the system in months, and P is the number of persons required to
develop the system.
The coefficients a, c and the exponents b, d are given in the following table.
Table 8.2. Coefficient/Exponent Values of Detailed COCOMO

Project Type a b c d
Organic 3.2 1.05 2.5 0.38
Semi-Detached 3.0 1.12 2.5 0.35
Embedded 2.8 1.20 2.5 0.32

8.2. Effort and Cost Calculation

8.2.1. Project Size

Table 8.3. Final Project Size

Software Module LOC


Login/signup 200
Homepage 2000
Listing page 1300
Machine learning algorithm 300
Results page 1200
Total Lines of Code (LOC) 5000


8.2.2. Cost Drivers Selection

Table 8.4. Cost Drivers Selection

Ratings
Cost
Drivers Very Very Extra
Low Usual High
Low High High
Personnel Factors

Analyst Capability(ACAP) 1.46 1.19 1.00 0.86 0.71 ---

Applications Experience(APEX) 1.29 1.13 1.00 0.91 0.82 ---

Programmer Capability(PCAP) 1.42 1.17 1.00 0.86 0.70 ---

Platform Experience (PLEX) 1.21 1.10 1.00 0.90 --- ---

Language and Tool Experience(LTEX) 1.14 1.07 1.00 0.95 --- --

Personnel Continuity(PCON) 1.29 1.12 1.00 0.90 0.81 ---

Project Factors

Use of Software Tools(TOOL) 1.24 1.10 1.00 0.91 0.83 ---

Multisite Development(SITE) 1.24 1.10 1.00 0.91 0.82 ---

Development Schedule(SCED) 1.23 1.08 1.00 1.04 1.10 ---

Platform Factors

Execution Time Constraint(TIME) --- --- 1.00 1.11 1.30 1.66

Main Storage Constraint(STOR) --- --- 1.00 1.06 1.21 1.56

Product Factors

Platform Volatility(PVOL) --- 0.87 1.00 1.15 1.30 ---

Required Software Reliability(RELY) 0.75 0.88 1.00 1.15 1.40 ---

Database Size (DATA) --- 0.94 1.00 1.08 1.16 ---

Product Complexity(CPLX) 0.70 0.85 1.00 1.15 1.30 1.65

Required Reusability(RUSE) --- 0.95 1.00 1.07 1.15 1.24

Documentation Match to Life cycle Needs(DOCU) 0.81 0.91 1.00 1.11 1.23 ---
1.19*1.13*1.00*0.90*0.95*1.12*0.91*1.24*1.04*1.00
Effort Adjustment Factor (EAF)
*1.00*0.87*1.40*1.00*1.15*0.95*1.00 = 2.01


8.2.3. Effort Calculation

The system falls into the Semi-Detached category. The values of a and b for the
Semi-Detached mode are a = 3.0 and b = 1.12.
Total LOC (approx.) of the project is 5000 LOC = 5.00 KLOC
Effort (E) = a × (KLoC)^b × EAF
E = 3.0 × (5.00)^1.12 × 2.01
E = 36.57 ≈ 36 Person Months

8.2.4. Duration Calculation

The values of c and d for the Semi-Detached mode are c = 2.5 and d = 0.35.

Duration (D) = c × (E)^d
D = 2.5 × (36.57)^0.35
D = 8.81 ≈ 8 Months

8.2.5. Person Required

Person Required = Effort Applied (E) / Development Time (D)


= 36/8
= 4.5≈4 Persons
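The Detailed COCOMO figures above can likewise be reproduced with a short Python sketch; it simply multiplies the ratings chosen in Table 8.4 to obtain the EAF and applies the Semi-Detached coefficients. It is given only for illustration.

# Illustrative sketch: Detailed COCOMO arithmetic of Section 8.2.
from math import prod

# Cost-driver ratings selected in Table 8.4; their product is the EAF (~2.01).
ratings = [1.19, 1.13, 1.00, 0.90, 0.95, 1.12, 0.91, 1.24, 1.04,
           1.00, 1.00, 0.87, 1.40, 1.00, 1.15, 0.95, 1.00]
eaf = prod(ratings)

kloc = 5.0                                  # 5000 LOC
effort = 3.0 * (kloc ** 1.12) * eaf         # Semi-Detached: a = 3.0, b = 1.12
duration = 2.5 * (effort ** 0.35)           # Semi-Detached: c = 2.5, d = 0.35
persons = effort / duration
print(f"EAF = {eaf:.2f}, E = {effort:.2f} PM, D = {duration:.2f} months, P = {persons:.2f}")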

8.2.6. Total Cost

Each team member has charged ₹900/- per month, with ₹1000/- spent per month on other
resources & miscellaneous purposes; in addition, a deployment server was purchased for
₹1000/-. Thus,
Total Cost of System = ((Person Charges × Person Required) + Resource Charges) ×
Duration [+ Hardware Cost]
= ((900 × 4) + 1000) × 8 + 1000 = ₹37,800/-


8.3. Calculation Summary


Table 8.5. Summary of different calculations.

Calculation Value
Size of the Project 5000 LoC
Effort Required 36 Person Months
Duration Required 8 Months
Person Required 4
Total Cost ₹37,800/-


9. CONCLUSION

In this project, we use a machine learning algorithm, Multiple Linear Regression, trained on
historical data sets of companies to predict the future value of stocks and cryptocurrencies.
The project, utilizing the Y-finance (yfinance) API for market data, aims to forecast future
values of stocks and cryptocurrencies accurately. Compared to purely manual analysis,
Multiple Linear Regression's ability to combine several input features into a single
prediction sets it apart and supports consistent predictive performance.


10. FUTURE SCOPE

In the future, for better accuracy, the models will be trained on a wider variety of data sets,
and other algorithms such as CNNs and hybrid models combining Multiple Linear Regression
with CNNs will be used to create more precise predictions. The Finaci AI project will enhance
predictive accuracy with advanced techniques, expand to global markets and diverse assets,
and offer personalized insights, real-time processing, seamless platform integration, and
educational resources. Adding commodities and index modules will provide comprehensive
market analysis for strategic investment decisions.
