
Volume 8, Issue 5, May – 2023 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Business Failure Prediction through Deep Learning


Ketan Bagade, Dr. Nikita Kulkarni, Dr. Vajid Khan
Computer Department, KJ College of Engineering Management and Research, Pune

Abstract:- In the course of carrying out company operations, complications often arise as a result of turbulent business operating circumstances and unforeseen abnormalities. In most cases, a number of difficulties combine to cause a prolonged decline in the project's perceived usefulness, or collapse owing to a depletion of financial resources. Preemptive evaluation of a company's failure may help anticipate potential challenges and mitigate their negative effects by methodically planning, preparing, and carrying out a business failure prediction. For an accurate forecast of the collapse of a company, it is important to perform a prediction analysis of the activities of the firm in order to detect potential problems. Machine learning and deep learning methods can be used effectively to identify these issues. This methodology is realized through K-Nearest Neighbor clustering and entropy estimation, in conjunction with Long Short-Term Memory and decision making.

Keywords:- Business Failure, K Nearest Neighbors, Entropy Estimation, Long Short Term Memory, Decision Making.

I. INTRODUCTION

As the economy of the world evolves, insolvency forecasting, the practice of examining a company's current financial status and potential for growth through the company's own accounting transactions, is playing a more significant role in the economic project lifecycle. Although ensemble approaches have been demonstrated to be an effective method for reducing premature business failure prediction error, the vast majority of early business failure prediction algorithms ignore the severely unbalanced distribution of observations in business failure databases.

In this day and age, where innovation is the norm, having accurate business information to assist a firm in making choices about its future initiatives is essential. The processes and ideas that have a positive impact on the business decisions made by an organization are collectively referred to as predictive analytics, which can be defined as the use of fact-based technology to aid decision-making. The processes of technology and design can turn raw, unpredictable data into coherent data that can be used to its full potential. By making use of these insights, a company has a better chance of developing new goals, achieving organizational success, gaining analytical knowledge, and making choices that are in the best interest of the organization moving forward.

Early warning of impending corporate collapse is a crucial component of financial risk avoidance management. An efficient proactive detection system for financial concerns can give a company's management early signals that might prevent the company from filing for bankruptcy. The implementation of financial risk warnings is vital for boosting the efficacy of investment plans and ensuring the economic viability of the organization. The number of foreclosures filed by corporations has a substantial bearing on the economic health of a country and may also be used to anticipate the onset of a financial crisis. Because of the significant correlation between widespread financial distress among businesses and overall productivity, financial analysts are increasingly aware of the significance of controlling and preventing the risk of bankruptcy.

Predictive analytics is positioned to play a large part in virtually all kinds of companies, both now and in the foreseeable future. It requires solid decision-making based on statistics, which is vital for businesses in all sectors. Not only does it boost the efficiency and output of business organizations, it also reduces costs and lessens the risk of legal exposure. Among many other significant benefits, it improves customer retention and acquisition, and it drives up revenue. Predictive analytics is used to anticipate potential movements of the market, and machine learning is one of the instruments used to put it into practice, for example through predictive maintenance for a particular company.

According to Lauren N. Singelmann et al. [1], machine learning may provide appropriate and relevant knowledge when combined with the infrastructure for development participation and corporate strategy in education.

IJISRT23MAY722 www.ijisrt.com 3248


A computerized intelligence-based classification system was created so that it could respond to the main investigation. Students' projects were classified using this system to place them in the appropriate proposed theoretical groups. A machine learning categorization was expected to increase the task's homogeneity and maintain stronger consistency than a group of human evaluators. Multiple linguistic and quantitative categorizations were developed and evaluated, both internally and against one another, to assess the degree to which the proposed technique may lead to improvements.

As stated by Pedro Rico et al. [2], the paper's principal objective is to provide a technique for anticipating the pending sequence of actions and their descriptors in a dynamic and expandable setting related to a particular implementation of continuous improvement. The first part of the structure serves to foresee the sequence of actions, while the second part provides time details for both; each is a "major element", the basic building block. Both stages and the procedure for improving the forecasting models have been detailed for adaptation to large-scale data systems, so that interpretation proceeds more smoothly. Additionally, two case studies were used to evaluate the application's Apache Flink-based architecture. Using this method, it was discovered that the architecture can administer and enhance systematic design prediction.

For the purpose of gauging the efficiency of genuine outcome-forecasting preventative maintenance techniques, Jongchan Kim et al. [3] reported a technique they created. The three key measurements included are the effectiveness integrity compared to the threshold criteria for segmentation, the generation uniformity across descriptor categories, and the thermal efficiency across the online component. The software's goal is to supplement the standard focus on total performance measures acquired from a classification algorithm with a more nuanced approach to assessing the efficacy of prediction techniques; among the metrics used to judge the success of this approach may be recall or precision. The research recommends using the notion of optimal solutions and a ranking mechanism for situations to compare several categorization and bucketing methodology variations before applying the approach to practical systems.

This research article's literature review is found in the second section. Section 3 describes the suggested strategy, while Section 4 thoroughly evaluates the findings obtained. The study is concluded in Section 5, including the scope of future improvements.

II. LITERATURE SURVEY

Qingwen Li et al. [4] narrate that local government may get technological help with macroeconomic evaluation and development thanks to an economic forecasting method. GDP contains indicators of both local and international economic expansion, in addition to properly performed improvement, which makes it crucial. According to the results of this research, a GDP prediction system relying on deep reinforcement training should be developed. The approach of GDP projection given in this paper offers technical support for the development of reasonable economic planning and the upkeep of prosperous development. By using signature modeling techniques such as distinctive separation and categorization decision, the generator's modeling capacity might be enhanced even further. Possible benefits include improved constituent reliability and optimized computational information.

Among the main focuses of study in the area of knowledge discovery over the last two decades has been the development of new methods for measuring the accuracy of predictions, as stated by Alfonso E. Márquez-Chamorro et al. [5]. This is because such resources are fundamental for prescriptive and interpretative knowledge extraction, whose goals are to maintain guidance and encouragement during the implementation of the program. In many other respects, improving methods of predictive evaluation has been a major focus of studies into knowledge discovery. Such recommendations have focused on creating tools that help improve generalization ability, whether by using a more effective teaching technique or by giving capabilities to incorporate underlying information in combination with timing, location, and behavior characteristics. Calculations may be made more confidently by using one of these methods. Nevertheless, there is no approach in the existing literature on predictive observation that supports the consumer in locating the proper metadata for the projection of a certain part of quality assurance, possibly because not enough effort has been put into the study of the issue.

Yu Liu et al. [6] introduced LSTM-based resource forecasting for banking branches, and the resulting forecasting method was satisfactorily implemented in a laboratory setting. With this prediction method, a bank branch's everyday liquidity ratio is equivalent to its monthly buffer need. In order to estimate the future monthly buffer demand, they built a long short-term memory (LSTM) system containing 5 convolutional nodes and used it to retrieve the deposit account rules of the banking branches. Based on the results of experiments with actual data sets, they found that the LSTM forecast technique outperformed both its forerunner, the ARIMA probabilistic model, and its main competitor, the average prognosis technique. The proposed methodology may still be improved upon, even though the date attribute represents the most crucial consideration.

Qiao Li et al. [7] state that the absence of inter-data management (which includes gathering) and the shortage of powerful computing tools are the main obstacles to tackling broader societal computational concerns. Each of these issues complicates efforts to develop answers to widespread issues in social computations. The IoT is an enormous network that has shown its ability to effectively handle information as well as its connections. Furthermore, it has been discovered that supervised learning may be employed efficiently as a virtual environment for a broad variety of classic research areas.

Putting the two together seems like a practical way to enhance many established norms in many different businesses. Inspired by this perspective, the study presents the interdisciplinary approach of IoT and creates a unique architecture to solve the first problem. The predicament may be addressed with the incorporation of a hybrid GNN/CNN artificial neural method within this framework; collectively, these neurons could form the cerebral system.

The targeted pricing approach increases efficiency of operations whilst decreasing redundancies and process failures, as shown by the research of Muhammad Adnan Khan et al. [8]. This means that the corporation is lacking the product lines that it obtained goods for owing to projections. Extremely precise forecasting aids in the formulation of a robust advertising strategy, the raising of utilized investment, the lowering of operating costs related to the supply chain, and the raising of customer delight. The study examined the benefits of rule-based programming and the capacity to forecast moving averages. Forecast calculations demonstrated that DeepAR techniques are effective with a high degree of precision and are competitive with one another, which is why DeepAR programs can make such a high proportion of on-target recommendations: the estimated proportion differences are low. As additional data is added, the model's predictions come closer to the mark. The new research may have consequences for inventory tracking if its findings hold; therefore, the current status of stock value rationalization may be seen as a fresh beginning.

Concerned with the challenges of forecasting team effort and team effectiveness, Catherine Sandoval et al. [9] turned to subjective linguistic categorization in their study. Incorporating relevant post knowledge into the prediction method was thought to have the ability to enhance classifier performance for the entire project, even though it required enhancing the functionality of the supervised learning or the level of sophistication of the classifications. The study's theories were put to the test via a series of trials in which team activity and quality assessments were integrated into a multilayered decision-making architecture or a unified Convolutional Neural Network construction employing twofold data augmentation. In each of these examples, the hypothesis was supported by an improvement in classification results between phases of team effectiveness and group burden.

Jin Eun Yoo et al. [10] suggest using batch normalization as a suitable machine learning approach for authentic evaluation. Whenever combined with the gathered information, standardization provides a chance to explore undiscovered connections between the numerous aspects that are associated with the development of learners in virtual classrooms. The study's normalization not only provided an easily digestible prediction system, but also showed predictive performance on a level with Random Forest. Multiple opportunities to make use of big data in schooling are emerging as development on normalization in classroom assessment advances. It is possible that standardisation and other machine learning methods are unlikely to provide comparable outcomes, largely because an SVM classifier may be compromised by data fragmentation that occurs during the testing and calibration processes.

Dinh Lam Pham et al. [11] state that engagement in corporate virtual communities and precise incident modeling are both required for fully comprehending daily finances and controlling personnel successfully. LSTM represents one of the most effective strategies for achieving this goal at the present time, though there are others. The study addresses the challenge of the phase after the acquisition of predicted future occurrences in operational processes by compiling a number of strategies for forecasting the next important information connected to business confidence forecasting observation. The study set out to answer a question about the next step in corporate operations after the acquisition of information pertaining to anticipated future occurrences. The authors provided an overview of the approach and strategy for forecasting multiple corporate online social changes, along with details about the forthcoming event, using a multidimensional, multi-step LSTM network. To the best of their knowledge, the approach used in this research is the very first to deal with the prediction of an institution's social media channel using data gathered from either a query processing log or documentary evidence of past predictions.

For predicting a company's performance, Mohamed Gihan Ali et al. [12] suggested a hybrid recommendation technique. The researchers first use Mutual Information Guided Morphological Operations to determine the most significant features that may be used to distinguish between failed and lucrative initial coin offerings across two imbalanced datasets. The number of attributes chosen decreases, while the same level of accuracy is retained, when knowledge acquisition is utilized to promote adaptation in segmentation. The researchers compared the suggested categorization method to many different types of unevenly weighted criteria to show that it was successful. The classification outcomes showed that the suggested method outperformed other classifiers by a large margin.

III. PROPOSED SYSTEM

Fig 1: System Overview

The proposed methodology for business failure prediction has been realized using Long Short Term Memory, as depicted in the figure above.

Step 1: Dataset Collection – The presented approach initiates with the dataset being provided as an input. The dataset for business failure prediction is downloaded from the URL https://www.kaggle.com/datasets/fedesoriano/company-bankruptcy-prediction. This dataset is collected for a number of different businesses along with the various parameters for the same. The attributes of the dataset are extensive and provide an in-depth insight into the functioning of the businesses. This is useful in determining the factors that can be crucial in the success and failure of the companies.

The attributes include the class label, Operating Gross Margin, Realized Sales Gross Margin, Operating Profit Rate, Operating Expense Rate, Revenue Per Share, Operating Profit Growth Rate, Net Value Growth Rate, Cash Reinvestment Ratio, Debt Ratio, Total Asset Turnover, Degree of Financial Leverage, Liability to Equity, etc. These attributes are useful in determining the state of the company and provide excellent information for predicting business failure with reasonable accuracy. The collected data is in the form of a spreadsheet that will be read in the next step of the procedure.

Step 2: Dataset Reading – Once the spreadsheet for the data is created, it can be utilized as an input by the system. The spreadsheet is read by the system through the use of the pandas library. The data in the spreadsheet is read in string format, which cannot be utilized effectively in this approach; therefore, the data is converted into float type. This data is also plotted graphically to visualize the pattern of the recorded data. The dataset is now in a format that can be interfaced by the system to perform the various functions and machine learning implementations in the subsequent steps.

Step 3: Dataset Preprocessing – The dataset needs to be preprocessed before being utilized in the methodology, as preprocessing removes inconsistent and improper data from the dataset, which in turn improves the performance of the system. Correlation is used to achieve the preprocessing of the dataset as depicted in fig. 20. The pandas DataFrame.corr() function is used to find the correlation of the columns in the dataframe in a pairwise manner.

Step 4: Dataset Segmentation – The preprocessed dataset is provided as an input to this step, where the dataset is segmented into training and testing samples. This is done to ensure that ample data is provided to the machine learning implementations, as the amount of data is directly proportional to the accuracy of the model deployed on that data. Therefore, a large amount of data is useful in improving the accuracy of the presented approach.

Step 5: Scaling and Dataset Reshaping – The segmented dataset is provided as an input to this step of the implementation. In this step the dataset is scaled, or normalized, using the MinMaxScaler function of the sklearn library. This function rescales the data in such a manner that the resultant values are in the range [0, 1]. After rescaling, the data is reshaped according to the machine learning algorithm being utilized.

For the purpose of implementing the LSTM approach, the training and testing data is reshaped into three dimensions. The three dimensions of the LSTM input are the features, time steps and sample size. The features are 10, the time step is 1 and the sample size is 6706 rows.

Step 6: K-Nearest Neighbor Classification – At this stage of the procedure, the approach takes in both the user input and the dataset that has already been preprocessed in the earlier phases. The separation between the input data and the individual cells of the preprocessed collection is estimated with the help of these data. Equation 1, provided below, is used to obtain the distance.

ED = √(Σ (ATi − ATj)²) ________ (1)

Where,
ED = Euclidean Distance
ATi = Attribute at index i
ATj = Attribute at index j

After the distances between all of the rows have been computed and appended to the relevant rows, the complete list is sorted into increasing order using the bubble sort method. The k value is set to 2, which culminates in two different clusters once this function is finished. The first cluster, which holds the relevant information, is passed on to the subsequent stage in order to estimate the entropy using Shannon's information gain.

Step 7: Entropy Estimation – It is necessary to perform an analysis of the information gain scores of the clusters and the properties of the produced dataset. The clusters produced in the previous stage are used as the input for this step of the procedure. The Shannon information gain method is used in the computation of entropy for the attribute qualities.

An accurate score may be determined by examination of the clusters that were completed previously. The Shannon information gain calculation is then applied to this score in order to determine the information gain. When the entropy estimates have been acquired, they are recorded in the form of a list and passed on to the subsequent step so that they may be evaluated further.

Step 8: LSTM – Long Short Term Memory – The LSTM approach is utilized through the implementation of the Keras LSTM module. The Keras library allows for the realization of the machine learning concepts with greater ease and effective control over the layers.

IJISRT23MAY722 www.ijisrt.com 3251
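Equation 1 and the neighbour ranking of Step 6 can be sketched as follows. The tiny tuples are illustrative data, and sorted() stands in for the bubble sort the text describes.

```python
import math

def euclidean_distance(row_i, row_j):
    """Equation 1: ED = sqrt(sum((ATi - ATj)^2)) over the attributes."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(row_i, row_j)))

def k_nearest(query, rows, k=2):
    """Rank every preprocessed row by its distance to the query and keep
    the k closest (the paper sorts with bubble sort; sorted() is used
    here for brevity)."""
    ranked = sorted(rows, key=lambda r: euclidean_distance(query, r))
    return ranked[:k]

# Illustrative two-attribute rows, not the Kaggle data.
rows = [(0.1, 0.2), (0.9, 0.8), (0.15, 0.25), (0.5, 0.5)]
print(k_nearest((0.1, 0.2), rows, k=2))  # → [(0.1, 0.2), (0.15, 0.25)]
```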


Volume 8, Issue 5, May – 2023 International Journal of Innovative Science and Research Technology
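The entropy estimate of Step 7 rests on Shannon's formula, which can be sketched as below; the cluster labels here are illustrative.

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Shannon entropy H = -sum(p * log2(p)) over the label distribution,
    the quantity underlying the information-gain score in Step 7."""
    total = len(labels)
    counts = Counter(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Illustrative cluster labels (0 = healthy, 1 = failed).
cluster = [0, 0, 1, 1]
print(shannon_entropy(cluster))  # → 1.0 (an even split is maximally uncertain)
```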
ISSN No:-2456-2165
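The Keras configuration described in Step 8 (a sequential model, one LSTM layer with 10 units, tanh activations, a Dense output, the adam optimizer and mean squared error loss) can be sketched roughly as follows. The random arrays merely stand in for the reshaped training data, and the epoch count is shortened here; the paper fits with batch size 30 over 150 epochs.

```python
import numpy as np
from tensorflow import keras

# Sequential model: inputs of shape (time steps = 1, features = 10),
# one LSTM layer with 10 units, then a tanh Dense output layer.
model = keras.Sequential([
    keras.Input(shape=(1, 10)),
    keras.layers.LSTM(10, activation="tanh"),
    keras.layers.Dense(1, activation="tanh"),
])
model.compile(optimizer="adam", loss="mean_squared_error")

# Random stand-ins for the reshaped training data from Step 5.
X = np.random.rand(60, 1, 10)
y = np.random.rand(60, 1)

# The paper trains with batch_size=30 for 150 epochs; two epochs
# keep this sketch fast while exercising the same call.
model.fit(X, y, batch_size=30, epochs=2, verbose=0)
```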
The LSTM is initialized as a sequential model, and a single LSTM layer is added with 10 kernels and the reshaped input achieved in the previous steps.

The units value is designated as 10 for the execution of our approach. The activation function is the tanh function, and the adam optimizer is utilized to compile the LSTM module. The batch size for the execution is designated as 30, with 150 epochs to fit the network. A Dense layer is also added, which utilizes tanh as the activation function. The model is compiled with the adam optimizer, and the mean squared error is used as the accuracy metric.

The mathematical model for the proposed methodology is depicted below.

Mathematical Model
Let S = { } be the Business Failure Prediction System.
Identify Input as D = { D1, D2, D3, ..., Dn }
Where D = Dataset
S = { D }
Identify BP as Output, i.e. Business Prediction
S = { I, BP }
Identify Process P
S = { I, P, BP }
P = { P, KNN, LSTM, DM }
Where
P = Preprocessing
KNN = K-Nearest Neighbor
LSTM = Long Short Term Memory
DM = Decision Making
So the complete system for the Business Prediction System can be given as
S = { I, P, KNN, LSTM, DM, BP }

IV. RESULTS AND DISCUSSIONS

The research framework for Business Failure Prediction using machine learning has been developed with the Spyder IDE. The Python programming language was chosen to serve as the primary language for the software. The laptop used for the development has a standard configuration, consisting of a 1 terabyte (TB) hard drive, an Intel i5 processor, and 16 gigabytes (GB) of RAM.

This strategy has been put through a rigorous evaluation in order to ensure that an accurate assessment of the functioning of the suggested technique has been made. The concepts of precision and recall were used in the investigation of the assessment criteria.

 Performance Evaluation based on Precision and Recall

Precision and recall are two incredibly helpful techniques to measure how exactly a given module in the paradigm is implemented. Both of these metrics can be found in the context of our methodology. The component's precision, which encompasses its dependability throughout a broad range, is what defines the component's relative validity.

The ratio of the number of accurate Business Failure Predictions collected to the total number of trials that were carried out was used to determine this method's precision parameter. On the other hand, the recall criterion is a complement to the precision measurement and helps in evaluating the total reliability of the LSTM constituent, because the precision measurement is not sufficient by itself.

In this approach, the recall is calculated by comparing the number of accurate Business Failure Forecasts to the sum of the accurate forecasts and those that were missed. The following formulae provide a quantitative expansion of this point.

Precision and Recall can be depicted as below:

 A = The number of accurate Business Failure Predictions
 B = The number of inaccurate Business Failure Predictions
 C = The number of accurate Business Failure Predictions not made

So, precision and recall can be defined as

Precision = (A / (A + B)) * 100
Recall = (A / (A + C)) * 100

The experimental results obtained by utilizing the aforementioned formulae are shown below in Table 1. These statistical measurements are also given a graphical depiction, which is shown in figure 2.

Table 1: Precision and Recall Measurement Table
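Using the counts defined in this section, A (accurate predictions), B (inaccurate predictions) and C (accurate predictions not made), the two scores can be computed as below. The sample counts are illustrative only, not the measurements reported in Table 1.

```python
def precision_pct(a, b):
    """Precision = A / (A + B) * 100: accurate predictions over all predictions made."""
    return a / (a + b) * 100

def recall_pct(a, c):
    """Recall = A / (A + C) * 100: accurate predictions over all that should have been made."""
    return a / (a + c) * 100

# Illustrative counts, not the paper's measurements.
A, B, C = 92, 10, 8
print(round(precision_pct(A, B), 2), round(recall_pct(A, C), 2))  # → 90.2 92.0
```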

Fig 2: Comparison of Precision and Recall

The functionality of the LSTM and its ability to make accurate predictions based on the input data are shown on the graph over a wide range of trial counts. The remarkable dependability of the method is shown by its precision and recall rates, which come in at 90.19 percent and 91.83 percent, respectively. These statistics are remarkably substantial for the first execution of such a procedure, and the accomplishment achieved as a consequence is satisfying.

V. CONCLUSION AND FUTURE SCOPE

The research framework for Business Failure Prediction using machine learning has been elaborated in detail in this research paper. The proposed approach initiates with the business information dataset that is collected from a number of different companies. This data consists of various attributes that are useful in the realization of business failure prediction. The dataset is first effectively preprocessed to eliminate any redundant aspects, which need to be removed to increase the effectiveness of the approach. The preprocessed dataset is then subjected to the KNN model to achieve the clusters of the data. The clusters are then utilized for the entropy estimation using the Shannon information gain. Once the entropy values are achieved, the clusters along with the entropy values are provided to the Long Short Term Memory module, which performs the prediction based on the input values. The LSTM outputs an effective probability score for the business failure prediction that needs to be classified before being presented to the user. The decision making approach classifies the predictions and displays the output of the business failure prediction to the user. The approach has been tested extensively for its performance through the use of precision and recall, which achieved 90.19 percent and 91.83 percent respectively.

For future development, the proposed model can be implemented in the banking domain, and also in stock market trading, to analyze the possibilities of business failures.

REFERENCES

[1]. L. N. Singelmann and D. L. Ewert, "Leveraging the Innovation-Based Learning Framework to Predict and Understand Student Success in Innovation," in IEEE Access, vol. 10, pp. 36123-36139, 2022, doi: 10.1109/ACCESS.2022.3163744.
[2]. P. Rico, F. Cuadrado, J. C. Dueñas, J. Andión and H. A. Parada G., "Business Process Event Prediction Through Scalable Online Learning," in IEEE Access, vol. 9, pp. 136313-136333, 2021, doi: 10.1109/ACCESS.2021.3117147.
[3]. J. Kim and M. Comuzzi, "Stability Metrics for Enhancing the Evaluation of Outcome-Based Business Process Predictive Monitoring," in IEEE Access, vol. 9, pp. 133461-133471, 2021, doi: 10.1109/ACCESS.2021.3115759.
[4]. Q. Li, C. Yu and G. Yan, "A New Multipredictor Ensemble Decision Framework Based on Deep Reinforcement Learning for Regional GDP Prediction," in IEEE Access, vol. 10, pp. 45266-45279, 2022, doi: 10.1109/ACCESS.2022.3170905.
[5]. A. E. Márquez-Chamorro, K. Revoredo, M. Resinas, A. Del-Río-Ortega, F. M. Santoro and A. Ruiz-Cortés, "Context-Aware Process Performance Indicator Prediction," in IEEE Access, vol. 8, pp. 222050-222063, 2020, doi: 10.1109/ACCESS.2020.3044670.
[6]. Y. Liu, S. Dong, M. Lu and J. Wang, "LSTM based reserve prediction for bank outlets," in Tsinghua Science and Technology, vol. 24, no. 1, pp. 77-85, Feb. 2019, doi: 10.26599/TST.2018.9010007.
[7]. Q. Li, Y. Song, B. Du, Y. Shen and Y. Tian, "Deep Neural Network-Embedded Internet of Social Computing Things for Sustainability Prediction," in IEEE Access, vol. 8, pp. 60737-60746, 2020, doi: 10.1109/ACCESS.2020.2982986.
[8]. M. A. Khan et al., "Effective Demand Forecasting Model Using Business Intelligence Empowered With Machine Learning," in IEEE Access, vol. 8, pp. 116013-116023, 2020, doi: 10.1109/ACCESS.2020.3003790.
[9]. C. Sandoval, M. N. Stolar, S. G. Hosking, D. Jia and M. Lech, "Real-Time Team Performance and Workload Prediction From Voice Communications," in IEEE Access, vol. 10, pp. 78484-78492, 2022, doi: 10.1109/ACCESS.2022.3193694.
[10]. J. E. Yoo, M. Rho and Y. Lee, "Online Students' Learning Behaviors and Academic Success: An Analysis of LMS Log Data From Flipped Classrooms via Regularization," in IEEE Access, vol. 10, pp. 10740-10753, 2022, doi: 10.1109/ACCESS.2022.3144625.
[11]. D.-L. Pham, H. Ahn, K.-S. Kim and K. P. Kim, "Process-Aware Enterprise Social Network Prediction and Experiment Using LSTM Neural Network Models," in IEEE Access, vol. 9, pp. 57922-57940, 2021, doi: 10.1109/ACCESS.2021.3071789.
[12]. M. G. Ali, I. I. Gomaa and S. M. Darwish, "An Intelligent Model for Success Prediction of Initial Coin Offerings," in IEEE Access, vol. 10, pp. 58589-58602, 2022, doi: 10.1109/ACCESS.2022.3178369.
