Human Recognition Techniques Through Machine Learning
Contents
CHAPTER I: INTRODUCTION
1.1 Introduction
CHAPTER II: LITERATURE REVIEW
2.1 Related Works
REFERENCES
CHAPTER I
INTRODUCTION
1.1 Introduction
Human activity recognition (HAR) aims to monitor the movements of a person and can be achieved with the help of machine learning techniques. Many applications have been developed to identify, detect, and classify human activity. In the recognition process, threshold-based algorithms are quick and simple, but only machine learning algorithms provide reliable results. The dynamics of human behaviour are observed by a variety of sensors; low-cost commercial smartphones, for example, can serve as sensors to measure the activities of individuals. This research assesses the effectiveness of various machine learning classification algorithms. Several studies have examined human activities in intelligent environments. The model in this work was built from the "Human Activity Recognition" dataset in the UCI online repository. The aim is to study the accuracy of machine learning algorithms and to draw effective conclusions from their use on real-world datasets. Many existing solutions cannot run on commodity mobile devices in real time [5], as they are too computationally heavy; their use is therefore limited to platforms with powerful processors. This work proposes a solution that detects and recognizes the gestures of an individual or a group of people using different modules and packages.
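To make the contrast between the two families of algorithms concrete, the following minimal sketch compares a hand-set threshold rule against a simple learned classifier (a one-feature nearest-centroid model) on synthetic accelerometer windows. The data, the 0.2 threshold, and the class labels are all illustrative assumptions, not values from this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic accelerometer magnitudes: "walking" is high-variance, "sitting" is low-variance.
walking = rng.normal(loc=1.0, scale=0.4, size=(50, 20))
sitting = rng.normal(loc=1.0, scale=0.05, size=(50, 20))
X = np.vstack([walking, sitting])            # 100 windows x 20 samples each
y = np.array([1] * 50 + [0] * 50)            # 1 = walking, 0 = sitting

def threshold_classifier(window, thresh=0.2):
    """Quick-and-simple rule: label 'walking' if the signal variance exceeds a threshold."""
    return int(window.std() > thresh)

def nearest_centroid_fit(X, y):
    """A minimal learned classifier: one centroid (mean feature value) per class."""
    feats = X.std(axis=1, keepdims=True)     # single feature: per-window standard deviation
    return {c: feats[y == c].mean() for c in np.unique(y)}

def nearest_centroid_predict(model, window):
    f = window.std()
    return min(model, key=lambda c: abs(model[c] - f))

model = nearest_centroid_fit(X, y)
thr_acc = np.mean([threshold_classifier(w) == t for w, t in zip(X, y)])
ml_acc = np.mean([nearest_centroid_predict(model, w) == t for w, t in zip(X, y)])
```

On well-separated data both approaches succeed, but the learned model adapts its decision boundary to the data instead of relying on a hand-tuned constant.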
We implement the gesture recognition system using image processing and robust motion detection algorithms. All movements of the legs and hands are tracked and extracted with the help of sensors, and the right and left hand or leg are distinguished from each other. The person wears coloured bands so that otherwise identical body parts can be identified and differentiated easily.
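The colour-band idea can be sketched as a simple per-pixel colour threshold. This is an illustrative NumPy-only version, assuming RGB frames and a hypothetical red band; a real system would typically threshold in a more lighting-robust colour space such as HSV.

```python
import numpy as np

def color_band_mask(frame, lower, upper):
    """Boolean mask of pixels whose RGB values fall inside the band's colour range."""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    return np.all((frame >= lower) & (frame <= upper), axis=-1)

# Tiny synthetic 4x4 RGB frame: a red band patch on a gray background.
frame = np.full((4, 4, 3), 128, dtype=np.uint8)
frame[1:3, 1:3] = [220, 30, 30]              # the "red band" worn on one wrist

mask = color_band_mask(frame, lower=[180, 0, 0], upper=[255, 80, 80])
ys, xs = np.nonzero(mask)
centroid = (ys.mean(), xs.mean())            # band position used to tell right from left
```

The centroid of each band's mask gives a per-frame position for that body part, which is what allows identical limbs to be told apart.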
A motion history is maintained for each hand and leg movement and is used to extract the moving parts from video frames. If object detection fails, the system checks for self-occlusion. Finally, classification algorithms are applied to recognize actions such as stabbing, hitting with a bottle, boxing, or clapping. Skeletal landmark detection is used to locate the joints and bones and to check them for movement and activity.
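Once skeletal landmarks are available, a basic derived quantity is the angle at a joint, which changes as a limb moves. The sketch below computes that angle from three landmark coordinates; the 2D points are hypothetical stand-ins for detector output.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by landmarks a-b-c, e.g. shoulder-elbow-wrist."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical 2D landmark coordinates (pixels): a straight arm, then a bent arm.
straight = joint_angle((0, 0), (1, 0), (2, 0))   # shoulder, elbow, wrist in a line
bent = joint_angle((0, 0), (1, 0), (1, 1))       # wrist lifted: right angle at elbow
```

Tracking such angles over time yields a compact signal per joint that downstream classifiers can use to distinguish actions like boxing from clapping.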
HAR systems have several major applications, including human-computer interaction, remote monitoring, the military, health care, games, sport, and security. Assistance systems can also perform a wide variety of tasks, such as monitoring humans' special activities, monitoring sleep disorders, recognizing rehabilitation behaviours, and detecting falls [1]. Although physical activity is important for physical and mental development, many health problems, such as diabetes and heart attacks, arise from a lack of it. Physical activity recognition is currently divided into two categories: (a) vision-based/visual and (b) non-visual. Vision-based approaches work only at short range and are sensitive to the lighting conditions of the environment. Wearable devices that use many different sensors for human activity recognition are referred to as non-vision approaches [2]. Collecting large amounts of data for HAR is difficult, and using public datasets is also hard because of the time and resources needed to find appropriate data, prepare it as input for further processing, and address privacy issues. Time-sequence classification is a major challenge for HAR. Sensor data can be fed into a machine learning model to predict a person's movement; traditional pipelines rely on signal processing methods that require expertise to design features from the raw data.
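The standard first step in this time-sequence setting is to segment the continuous sensor stream into fixed-length, overlapping windows, each of which becomes one classification example. A minimal sketch, with illustrative window and step sizes:

```python
import numpy as np

def sliding_windows(signal, size, step):
    """Segment a 1-D sensor stream into fixed-length, possibly overlapping windows."""
    starts = range(0, len(signal) - size + 1, step)
    return np.stack([signal[s:s + size] for s in starts])

stream = np.arange(10.0)                          # stand-in for one accelerometer axis
wins = sliding_windows(stream, size=4, step=2)    # step = size/2 gives 50% overlap
```

Each row of `wins` is then labelled with the activity performed during that interval and passed to a feature extractor or classifier.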
At present, deep learning methods such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have achieved state-of-the-art results and enable automatic learning of features from the raw sensor data. HAR is broadly classified into sensor-based, vision-based, and multimodal categories [3].
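The building block behind that automatic feature learning is the convolution filter. A single hand-written 1-D filter plus pooling, sketched below in NumPy, shows the kind of feature a CNN would instead learn from data; the difference filter and the toy signal are illustrative assumptions.

```python
import numpy as np

def conv1d_valid(x, kernel):
    """'Valid' 1-D convolution: one filter sliding over a raw sensor window."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def relu(v):
    return np.maximum(v, 0.0)

# A difference filter responds to abrupt changes (e.g. a footstep impact spike).
window = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
edge_filter = np.array([-1.0, 1.0])
feature_map = relu(conv1d_valid(window, edge_filter))
pooled = feature_map.max()                   # global max pooling -> one feature value
```

A CNN stacks many such learned filters and pooling stages, so the hand-designed feature engineering step described above is replaced by training.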
CHAPTER II
LITERATURE REVIEW
2.1 Related Works
The previous methods of gesture recognition that we took into account are briefly discussed in this section. Despite significant progress, gesture recognition still faces new challenges arising from different input modalities and their restrictions.
To date, gesture and action recognition have been addressed with classical machine learning methods such as the hidden Markov model (HMM) [3], the support vector machine (SVM) [1], and bag-of-features [2]. These approaches classify gestures from hand-engineered features. Hussein et al. [1] used the covariance matrix of 3D skeleton joint locations to derive a discriminative descriptor for action classification. Dardas [2] used a bag-of-words vector, built from scale-invariant feature transform (SIFT) features, to compute GLE gesture classes. Lee [3] proposed a new HMM approach that uses likelihood threshold estimation for the input pattern. Conventional machine learning methods have been used to recognize gestures, but they have significant limitations: the model parameters are sensitive to noise and depend on experience.
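The covariance descriptor used by Hussein et al. can be sketched in a few lines: flatten each frame's 3D joint coordinates and take the covariance across frames, yielding a fixed-size descriptor regardless of sequence length. The sequence below is random stand-in data, not skeleton output from any real dataset.

```python
import numpy as np

def covariance_descriptor(seq):
    """Covariance of 3-D joint coordinates over time (seq: frames x joints x 3).

    Flattening each frame and taking the covariance across frames yields a
    fixed-size, duration-independent descriptor of the action.
    """
    frames = seq.reshape(seq.shape[0], -1)   # frames x (joints * 3)
    return np.cov(frames, rowvar=False)

rng = np.random.default_rng(1)
seq = rng.normal(size=(30, 5, 3))            # 30 frames, 5 joints, xyz each
desc = covariance_descriptor(seq)
```

Because the descriptor size depends only on the number of joints, sequences of different lengths can be compared directly, which is what makes it usable as input to a standard classifier.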
Motion detection and segmentation have received notable attention. Several works [1,3,4] targeted video conferencing and were primarily concerned with segmenting moving objects as a whole rather than segmenting their individual parts. A few algorithms have been designed to detect and classify human motion [2]; they use neural networks to detect and track a person's movements. Unfortunately, such algorithms require heavy computing resources and do not provide real-time results.
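At the opposite end of the cost spectrum from those heavy methods sits plain frame differencing, one of the simplest motion detection techniques. The sketch below is an illustrative NumPy version over tiny synthetic grayscale frames; the threshold of 20 is an arbitrary assumption.

```python
import numpy as np

def motion_mask(prev, curr, thresh=20):
    """Frame differencing: mark pixels whose grayscale change exceeds a threshold."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return diff > thresh

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[0, 0] = 200                             # one pixel where something moved
mask = motion_mask(prev, curr)
```

Because it involves only a subtraction and a comparison per pixel, frame differencing runs in real time even on modest hardware, at the cost of sensitivity to lighting changes and camera motion.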
Recent advances in deep learning have produced gesture recognition techniques with impressive performance gains. Deep learning-based methods have become popular because of their ability to automatically extract spatiotemporal features from raw video input. The convolutional neural network (CNN) was first designed to extract spatial features from static images, but it has been extended to handle different input types and to extract various features. Several approaches [4,5,7,8] have used CNNs with multi-channel input for video classification. To extract motion information for gesture classification, Feichtenhofer [4] implemented a two-stream network with separate ConvNets for RGB images and optical flow images. Kopuklu [10] described data-level fusion as a means of combining the RGB and optical flow modalities, paired with static images, to extract action features.
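A common way to combine the two streams is late fusion: each stream produces class scores independently and the resulting probabilities are averaged. A minimal sketch, with made-up logits standing in for the outputs of the RGB and optical flow networks:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def late_fusion(rgb_logits, flow_logits):
    """Average the per-stream class probabilities, as in two-stream late fusion."""
    return 0.5 * (softmax(rgb_logits) + softmax(flow_logits))

rgb = np.array([2.0, 0.5, 0.1])              # appearance stream favours class 0
flow = np.array([0.2, 2.5, 0.1])             # motion stream strongly favours class 1
fused = late_fusion(rgb, flow)
pred = int(np.argmax(fused))
```

Here the more confident motion stream dominates the fused decision, illustrating how the two modalities can correct each other; data-level fusion, by contrast, merges the modalities at the input before any network sees them.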
CNNs and RNNs can also be combined to classify action videos. Donahue et al. [8] used long-term recurrent convolutional networks (LRCN), in which a CNN extracts spatial features from the images and recurrent networks then model the temporal relationships between those features.
Another approach [3] used SVM classifiers to classify gestures based on spatiotemporal features extracted by a 3D CNN. Molchanov [10] proposed using 3D CNNs as feature extractors to recognize gestures online with recurrent 3D convolutional networks (R3DCNN). Recognizing human gestures from human pose has also been successful: RNN and LSTM techniques for capturing temporal details in the joints of the human skeleton are gaining popularity for gesture classification [5-7], although RGB data introduces additional challenges. Other research combines multiple deep learning models or modalities to improve gesture classification performance.
REFERENCES
1. Hussain, Z.; Sheng, M.; Zhang, W.E. Different approaches for human activity
recognition: A survey. arXiv 2019, arXiv:1906.05074.
2. Poppe, R. A survey on vision-based human action recognition. Image Vis. Comput.
2010, 28, 976–990.
3. Fereidoonian, F.; Firouzi, F.; Farahani, B. Human activity recognition: From sensors to applications. In Proceedings of the International Conference on Omni-layer Intelligent Systems (COINS), Barcelona, Spain, 31 August–2 September 2020; pp. 1–8.
4. Yang, W.; Liu, X.; Zhang, L.; Yang, L.T. Big data real-time processing based on storm. In Proceedings of the 12th IEEE International Conference on Trust, Security and Privacy in Computing and Communications, Melbourne, VIC, Australia, 16–18 July 2016; pp. 3601784–3601787.
5. Ashraf, I.; Zikria, B.Y.; Hur, S.; Bashir, K.A.; Alhussain, T.; Park, Y. Localizing pedestrians in indoor environments using magnetic field data with term frequency paradigm and deep neural networks. Int. J. Mach. Learn. Cybern. 2021.
6. Gope, P.; Hwang, T. BSN-Care: A secure IoT-based modern healthcare system using body sensor network. IEEE Sens. J. 2015, 16, 1368–1376.