AUTOMATIC AGE AND GENDER DETECTION
BHIMAVARAM-534202
AUGUST-2022
CERTIFICATE
This is to certify that the project report entitled "AUTOMATIC AGE AND
GENDER DETECTION", submitted by Ms. Chatla Mounika to D.N.R. College,
P.G. Courses (Autonomous), affiliated to Adikavi Nannaya University,
Rajahmahendravaram, Andhra Pradesh, India, is a record of bonafide project work carried
out by her under my supervision and guidance, and is worthy of consideration for the
award of the degree of Master of Computer Applications.
Date:
DECLARATION
I certify that
a. The work contained in this report is original and has been done by me under the guidance
of my supervisor(s).
b. The work has not been submitted to any other Institute for any degree or diploma.
c. I have followed the guidelines provided by the university in preparing the report.
d. I have conformed to the norms and guidelines given in the Ethical Code of Conduct of the
Institute.
e. Whenever I have used materials (data, theoretical analysis, figures, and text) from other
sources, I have given due credit to them by citing them in the text of the report and giving
their details in the references. Further, I have taken permission from the copyright owners of
the sources, whenever necessary.
(CH.MOUNIKA)
Regd.No: 2141014
ACKNOWLEDGEMENT
I take this opportunity to express my gratitude to everyone who supported the
project work "Automatic Age And Gender Detection". I express my thanks
to Mr. K. RAMBABU, MCA, M.Tech, Asst. Professor and HOD, Department of Computer
Applications, D.N.R College, P.G. Courses, for his guidance since the project started. I feel that it
is a great privilege to have completed my project under his guidance.
I owe my gratitude to Mr. K. Rambabu, our beloved Head of the Department,
Department of Computer Applications, D.N.R College, P.G. Courses, for being a source of
inspiration and encouraging me towards the successful completion of the project work.
I am thankful to our Honorable Principal, D.N.R College, P.G. Courses (A),
Sri. Dr. B.S. Santha Kumari, who has shown keen interest and encouraged me by providing
all the facilities to complete my project successfully.
I wish to express my sincere thanks to our Management for providing all the facilities
and infrastructure to complete my project successfully, and I wish to express my sincere
thanks to all the Teaching and Non-Teaching staff of the Department of Computer Applications,
D.N.R College, P.G. Courses.
I am very thankful to all my friends and family members who gave me good
cooperation and suggestions throughout this project and helped me complete it successfully.
CH.MOUNIKA
Regd.No: 2141016
AUTOMATIC AGE AND GENDER DETECTION
ABSTRACT
Automatic age and gender classification has become relevant to an increasing amount of
applications, particularly since the rise of social platforms and social media. Nevertheless,
performance of existing methods on real-world images is still significantly lacking, especially
when compared to the tremendous leaps in performance recently reported for the related task
of face recognition. In this paper we show that by learning representations through the use of
deep convolutional neural network (CNN) models (age_net.caffemodel and
gender_net.caffemodel), a significant increase in performance can be obtained on these tasks.
To this end, we propose a simple convolutional net architecture that can be used even when
the amount of learning data is limited.
INDEX
S.NO TITLES
ABSTRACT
1. INTRODUCTION
2. LITERATURE SURVEY
4. IMPLEMENTATION
4.1 MODULES
4.2 SOFTWARE ENVIRONMENT
4.2.1 PYTHON
4.2.2 WHAT IS PYTHON
5.3 SCREENSHOTS
1. INTRODUCTION
A human face provides a lot of information about age, gender, mood, etc. It is affected by
many dynamic factors that change over a period of time, such as aging, hair style and
expressions. Gender and age are considered important biometric attributes for human
identification. Biometric recognition is the method of gathering information about a person's
physiological and behavioural characteristics for human identification and verification
(security models). Biometrics consists of soft biometrics (age, gender, ethnicity, height and
facial measurements) and hard biometrics (physical, behavioural and biological). Soft-
biometric attributes like skin, hair colour, distance between eye and nose, and face shape
can be used to accelerate data traversal, or to classify unlabelled subjects into various
gender and age classes. Furthermore, with the widespread use of computers, biometric
identification is in demand in areas such as home automation and healthcare. Recently, it has
become possible to automatically detect physical presence and confirm one's identity
through pattern recognition, computer vision and image analysis.
One of the biometric attributes considered is aging. Aging is caused by many factors,
such as DNA changes, metabolism changes, UV rays from the Sun, variation in facial tissues and
restructuring of facial bones. Face recognition systems are adversely affected by the
aging of the face. This opens a huge new research area to be probed in the field of
computer vision. Age estimation is carried out extensively to find patterns and variations,
as well as to find the best possible characteristics to consider. Another attribute is gender.
Automatic gender classification is important for many applications like surveillance,
targeted advertisements, etc. It is done to differentiate between male and female based on
the features of humans. This literature elaborates descriptive details and comparisons
performed by authors on various aspects like age, gender, and race, as well as different
methods for feature extraction, classification and evaluation. This helps enthusiastic
researchers to engage with deep learning approaches to classifying age and gender from
human facial images.
LITERATURE SURVEY
2. LITERATURE SURVEY
Gender Classification:
Early works on gender classification applied unsupervised methods [9,10], using Adaptive
Multi-Gradient (AMG) [9] and Multi-Gradient Directional (MGD) [10] features. Since 2012,
traditional machine learning methods in general, and Support Vector Machines (SVMs) in
particular [11–23], have become most popular. Besides SVMs, models such as Decision
Trees and their ensembles (Random Forests or AdaBoost) [11,13,21,24], shallow Artificial
Neural Networks [11,12,22,25], Regressions [20], Naïve Bayes [21], K-nearest neighbors
[11,26], Fuzzy Rule-Based Classification [16], and Discriminant Analysis [21] were applied.
We also observed that attention has been paid lately to ensemble approaches [20], where
several different classifiers are combined to create a master model. The majority of the
aforementioned models were applied to textural features [9,11–13,15–18,25] or a combination of
textural and shape features [14,22,23,27–31]. The best accuracy rates, between 77%
and 82%, were achieved by SVM classifiers with textural features [12,16,17,27]. Deep
models based on Convolutional Neural Networks started to appear in gender classification
works around 2018. Deep neural networks were applied as feature extractors [21], and also as
end-to-end pipelines, including both feature selection and classification layers [8,32,33]. The
main advantage of deep networks is their ability to learn features automatically without
manual engineering. In addition, CNNs have been shown to be on par with, or even to outperform,
other classifiers on the gender classification task [8,33,34]. Due to their benefits in terms of
performance and usability, deep networks have recently emerged as a leader in various
computer vision applications, including handwriting analysis.
Age Classification:
In contrast to the gender classification task, not many works have reported on automatic age
classification, and in most of them age was only one of many demographic features
identified from handwriting documents. Bouadjenek et al. [15] applied an SVM classifier on
two gradient features for gender, handedness and age range prediction. Three SVM
predictors, each applied on a specific data feature, were subsequently combined in [16,35] to
identify a writer's gender, age range and handedness. Emran et al. [36] investigated different
classifiers (K-Nearest Neighbors, Random Forests and SVM) using various visual
appearance features for the prediction of a writer's age, gender and handedness. Only a few
works developed models solely for age prediction. Upadhyay and Singh [37] studied the
estimation of age through handwriting characteristics in females and found that characteristics
such as slant, alignment, spacing, hesitation marks, tremor and speed are really
valuable and helpful for age determination. Zouaoui et al. [38] investigated the co-training
approach for age range prediction from handwriting analysis. The authors proposed several
descriptors for feature generation and applied an SVM predictor for classification. Basavaraja
et al. [39] proposed a new unsupervised method for age estimation using handwriting analysis
with Hu invariant moments, disconnectedness features and k-means clustering. In [40], the
efficacy of using the dynamic features generated by users of smartphones and tablets to
automatically identify their age group was examined. The study with the KNN classifier
provides evidence that it is possible to detect user age groups based on the words they write
with their fingers on touchscreens. Research in [41] applied SVM and Random Forests to
automatically classify people as adults or children based on their handwritten data, collected
using a pen tablet. The best accuracy (up to 81%) was achieved by the SVM classifier with
textural features [16], leaving much room for performance improvement in age prediction
from handwriting. As can be seen, all works utilized feature engineering in conjunction with
conventional classifiers; deep learning algorithms for age classification had not been used in
any of this research.
SYSTEM ANALYSIS AND DESIGN
3.1 Existing System
Gaussian Mixture Models (GMM) were used to represent the distribution of facial patches.
GMM were later used again for representing the distribution of local facial measurements, but
robust descriptors were used instead of pixel patches. Finally, instead of GMM, Hidden
Markov Model super-vectors were used for representing face patch distributions.
SVM classifiers were also used, applied directly to image intensities. Rather than SVM,
AdaBoost was used for the same purpose, here again applied to image intensities. Finally,
viewpoint-invariant age and gender classification was proposed.
One of the first applications of convolutional neural networks (CNN) is perhaps the LeNet-5
network described by LeCun et al. for optical character recognition. Compared to modern deep
CNN, their network was relatively modest due to the limited computational resources of the
time and the algorithmic challenges of training bigger networks. Though much potential lay in
deeper CNN architectures (networks with more neuron layers), only recently have they become
prevalent, following the dramatic increase in computational power (due to Graphical
Processing Units), the amount of training data readily available on the Internet, and the
development of more effective methods for training such complex models. One recent and
notable example is the use of deep CNN for image classification on the challenging
ImageNet benchmark.
Advantages
For age classification, we measure and compare both the accuracy when the algorithm gives
the exact age-group classification and the accuracy when the algorithm is off by one adjacent
age group (i.e., the subject belongs to the group immediately older or immediately younger
than the predicted group). This follows others who have done so in the past, and reflects the
uncertainty inherent to the task: facial features often change very little between the oldest faces
in one age class and the youngest faces of the subsequent class.
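As an illustration, the exact and one-off accuracies described above can be computed along the following lines (a minimal sketch; the ordinal label encoding and example values are assumptions for illustration, not taken from the project):

# Sketch: exact and one-off (adjacent age-group) accuracy.
# Age groups are encoded as ordinal indices, e.g. 0 -> (0-2), 1 -> (4-6), ...
def age_group_accuracy(true_groups, predicted_groups):
    exact = sum(t == p for t, p in zip(true_groups, predicted_groups))
    one_off = sum(abs(t - p) <= 1 for t, p in zip(true_groups, predicted_groups))
    n = len(true_groups)
    return exact / n, one_off / n

# Example: two exact hits, one adjacent miss, one distant miss.
exact_acc, one_off_acc = age_group_accuracy([2, 3, 5, 1], [2, 3, 4, 6])
print(exact_acc, one_off_acc)  # 0.5 0.75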
3.3 FEASIBILITY STUDY
Preliminary investigation examines project feasibility: the likelihood that the system will be
useful to the organization. The main objective of the feasibility study is to test the technical,
operational, and economic feasibility of adding new modules and debugging the old running
system. Any system is feasible if there are unlimited resources and infinite time. There are
three aspects in the feasibility study portion of the preliminary investigation:
Technical Feasibility
Operational Feasibility
Economic Feasibility
The technical issues usually raised during the feasibility stage of the
investigation include the following:
Does the necessary technology exist to do what is suggested?
Does the proposed equipment have the technical capacity to hold the
data required to use the new system?
Will the proposed system provide adequate responses to inquiries,
regardless of the number or location of users?
Can the system be upgraded if developed?
Are there technical guarantees of accuracy, reliability, ease of access and data
security?
The system guarantees accuracy, reliability, ease of access
and security. The software and hardware requirements for the development of this project
are not many and are already available in-house at NIC or are available free as
open source. The work for the project is done with the current equipment and
existing software technology. Necessary bandwidth exists for providing fast
feedback to the users irrespective of the number of users using the system.
Proposed projects are beneficial only if they can be turned into information
systems that will meet the organization's operating requirements. Operational
feasibility aspects of the project are to be taken as an important part of the project
implementation. Some of the important issues raised to test the operational
feasibility of a project include the following:
A system that can be developed technically, and that will be used if installed, must still be a
good investment for the organization. In the economic feasibility study, the development cost
of creating the system is evaluated against the ultimate benefit derived from the new system.
Financial benefits must equal or exceed the costs. The system is economically feasible: it does
not require any additional hardware or software. Since the interface for this system is developed
using the existing resources and technologies available at NIC, there is nominal expenditure,
and economic feasibility is certain.
FUNCTIONAL REQUIREMENTS
Outputs from computer systems are required primarily to communicate the results of
processing to users. They are also used to provide a permanent copy of the results for
later consultation. The various types of outputs in general are:
External outputs, whose destination is outside the organization.
Internal outputs, whose destination is within the organization and which are the
user's main interface with the computer.
Operational outputs, whose use is purely within the computer department.
Interface outputs, which involve the user in communicating directly.
Understanding the user's preferences, expertise level and business
requirements through a friendly questionnaire.
Input data can be in four different forms: relational DB, text files, .xls and
.xml files. For testing and demo purposes, data from any
domain can be chosen. User-B can provide business data as input.
INPUT STAGES
Data recording
Data transcription
Data conversion
Data verification
Data control
Data transmission
Data validation
INPUT TYPES
It is necessary to determine the various types of inputs. Inputs can be categorized as follows:
NON-FUNCTIONAL REQUIREMENTS
1. Secure access of confidential data (user’s details). SSL can be used.
2. 24 X 7 availability.
PERFORMANCE REQUIREMENTS
Performance is measured in terms of the output provided by the application. Requirement
specification plays an important part in the analysis of a system. Only when the requirement
specifications are properly given is it possible to design a system that will fit into the required
environment. It rests largely with the users of the existing system to give the
requirement specifications, because they are the people who will finally use the system. This is
because the requirements have to be known during the initial stages, so that the system can be
designed according to those requirements. It is very difficult to change a system once it has
been designed; on the other hand, designing a system that does not cater to the
requirements of the user is of no use. The requirement specification for any system can be
broadly stated as given below:
The system should be better than the existing system.
The existing system is completely dependent on the user to perform all the duties.
Hardware requirements
RAM: 4 GB or higher
Processor: Intel i3 or above
Hard Disk: 500 GB minimum
Software Requirements
OS: Windows or Linux
Python: version 2.7.x or above
IDE: PyCharm required
Setuptools and pip to be installed for Python 3.6 and above
Language: Python scripting
Software Development Life Cycle
There are various software development approaches defined and designed that are
employed during the development process of software; these approaches are also referred to as
"Software Development Process Models". Each process model follows a particular life cycle
in order to ensure success in the process of software development.
Requirements
Business requirements are gathered in this phase. This phase is the main focus of the project
managers and stakeholders. Meetings with managers, stakeholders and users are held in
order to determine the requirements. Who is going to use the system? How will they use the
system? What data should be input into the system? What data should be output by the
system? These are the general questions that get answered during the requirements gathering
phase. This produces a large list of functionality that the system should provide, which
describes the functions the system should perform, the business logic that processes data, what
data is stored and used by the system, and how the user interface should work. The overall
result describes the system as a whole and how it performs, not how it is actually going to do it.
Design
The software system design is produced from the results of the requirements phase.
Architects have the ball in their court during this phase, and this is the phase in which their
focus lies. This is where the details of how the system will work are produced. Architecture,
including hardware and software, communication, and software design (UML is produced here)
are all part of the deliverables of the design phase.
Implementation
Code is produced from the deliverables of the design phase during implementation, and this
is the longest phase of the software development life cycle. For a developer, this is the main
focus of the life cycle because this is where the code is produced. Implementation may overlap
with both the design and testing phases. Many tools exist (CASE tools) to automate
the production of code using information gathered and produced during the design phase.
Testing
During testing, the implementation is tested against the requirements to make sure that the
product is actually solving the needs addressed and gathered during the requirements phase.
Unit tests and system/acceptance tests are done during this phase. Unit tests act on a specific
component of the system, while system tests act on the system as a whole.
For flexibility of use, the interface has been developed with a graphics concept in mind,
accessed through a browser interface. The GUIs at the top level have been categorized as
follows: the Administrative User Interface and the Operational (Generic) User Interface.
The administrative user interface concentrates on information that is practically part of
the organizational activities and which needs proper authentication for data collection.
The interface helps the administration with all the transactional states like data insertion, data
deletion, and data updating, along with executive data search capabilities.
A Data Flow Diagram (DFD) is a graphical tool used to describe and analyze the movement of
data through a system, manual or automated, including the processes, stores of data, and delays
in the system. Data Flow Diagrams are the central tool and the basis from which other
components are developed. The transformation of data from input to output, through processes,
may be described logically and independently of the physical components associated with the
system. The DFD is also known as a data flow graph or a bubble chart. DFDs model the
proposed system and clearly show the requirements on which the new system should be built.
Later, during design activity, they are taken as the basis for drawing the system's structure
charts. The basic notations used to create a DFD are as follows:
1. Process: people, procedures, or devices that use or produce (transform)
data; the physical component is not identified.
2. Data Store: where data is stored or referenced by a process in the system.
3.6 UML DIAGRAMS
Software design sits at the technical kernel of the software engineering process and is applied
regardless of the development paradigm and area of application. Design is the first step in the
development phase for any engineered product or system. The designer's goal is to produce a
model or representation of an entity that will later be built. Once system
requirements have been specified and analyzed, system design is the first of the three technical
activities (design, code and test) required to build and verify software.
The importance of design can be stated with a single word: "Quality". Design is the place where
quality is fostered in software development. Design provides us with representations of software
that can be assessed for quality. Design is the only way that we can accurately translate a
customer's view into a finished software product or system. Software design serves as a
foundation for all the software engineering steps that follow. Without a strong design we risk
building an unstable system, one that will be difficult to test and whose quality cannot be
assessed until the last stage. The purpose of the design phase is to plan a solution to the problem
specified by the requirement document. This phase is the first step in moving from the problem
domain to the solution domain. In other words, starting with what is needed, design takes us
toward how to satisfy the needs. The design of a system is perhaps the most critical factor
affecting the quality of the software; it has a major impact on the later phases, particularly
testing and maintenance. The output of this phase is the design document. This document is
similar to a blueprint for the solution and is used later during implementation, testing and
maintenance. The design activity is often divided into two separate phases: System Design and
Detailed Design.
System Design, also called top-level design, aims to identify the modules that should be in the
system, the specifications of these modules, and how they interact with each other to produce
the desired results. At the end of system design, all the major data structures, file formats,
output formats, and the major modules in the system and their specifications are decided.
During Detailed Design, the internal logic of each of the modules specified in system design
is decided. During this phase, the details of the data of a module are usually specified in a
high-level design description language, which is independent of the target language in which
the software will eventually be implemented.
Active class
Active classes initiate and control the flow of activity, while passive classes store data and
serve other classes. Illustrate active classes with a thicker border.
Associations
Associations represent static relationships between classes. Place association names above,
on, or below the association line. Use a filled arrow to indicate the direction of the
relationship. Place roles near the end of an association. Roles represent the way the two
classes see each other.
Note: It's uncommon to name both the association and the class roles.
Composition is a special type of aggregation that denotes a strong ownership between Class
A, the whole, and Class B, its part. Illustrate composition with a filled diamond. Use a hollow
diamond to represent a simple aggregation relationship, in which the "whole" class plays a
more important role than the "part" class, but the two classes are not dependent on each other.
The diamond end in both a composition and aggregation relationship points toward the
"whole" class or the aggregate.
CLASS DIAGRAM
FIG 3.7.1: Class Diagram
Use case diagrams model the functionality of a system using actors and use cases. Use cases
are services or functions provided by the system to its users.
System
Draw your system's boundaries using a rectangle that contains use cases. Place actors outside
the system's boundaries.
Use Case
Draw use cases using ovals. Label the ovals with verbs that represent the system's functions.
Actors
Relationships
Illustrate relationships between an actor and a use case with a simple line. For relationships
among use cases, use arrows labeled either "uses" or "extends." A "uses" relationship
indicates that one use case is needed by another in order to perform a task. An "extends"
relationship indicates alternative options under a certain use case.
USE CASE DIAGRAM
3.7.3 ACTIVITY DIAGRAM
Activity diagrams are graphical representations of workflows of stepwise activities and
actions with support for choice, iteration and concurrency. In the Unified Modeling
Language, activity diagrams can be used to describe the business and operational step-by-step
workflows of components in a system. An activity diagram shows the overall flow of control.
FIG 3.7.3: Activity Diagram
FIG 3.7.4: Sequence Diagram
3.8 INPUT DESIGN
The input design is the link between the information system and the user. It comprises
developing specifications and procedures for data preparation, and those steps necessary to
put transaction data into a usable form for processing; this can be achieved by instructing the
computer to read data from a written or printed document, or by having people
key the data directly into the system. The design of input focuses on controlling the
amount of input required, controlling errors, avoiding delay, avoiding extra steps and
keeping the process simple. The input is designed in such a way that it provides security and
ease of use while retaining privacy. Input design considered the following things.
OBJECTIVES
Input design is the process of converting a user-oriented description of the input into a
computer-based system. This design is important to avoid errors in the data input process and
to show the correct direction to the management for getting correct information from the
computerized system. It is achieved by creating user-friendly screens for data entry that
handle large volumes of data. The goal of designing input is to make data entry easier and
free from errors. The data entry screen is designed in such a way that all the data
manipulations can be performed. It also provides record viewing facilities. When data is
entered, it is checked for validity. Data can be entered with the help of screens, and appropriate
messages are provided as and when needed, so that the user is never left confused. Thus,
the objective of input design is to create an input layout that is easy to follow.
3.9 OUTPUT DESIGN
1. A quality output is one which meets the requirements of the end user and presents the
information clearly. In any system, the results of processing are communicated to the users
and to other systems through outputs. In output design, it is determined how the
information is to be displayed for immediate need, as well as the hard copy output. It is the
most important and direct source of information for the user. Efficient and intelligent output
design improves the system's relationship with the user and helps in decision-making.
Designing computer output should proceed in an organized, well thought out manner; the
right output must be developed while ensuring that each output element is designed so that
people will find the system easy to use effectively. When analysts design computer
output, they should identify the specific output that is needed to meet the requirements and
select methods for presenting information.
2. Create documents, reports, or other formats that contain information produced by the
system.
Output design
1. The user provides a face image as input to the application.
2. The application displays the detected age group and gender for each face on the
output screen.
IMPLEMENTATION
4.1 MODULES
Image data
Pre-processing
Image segmentation
Feature extraction
Data training and testing
Deep learning algorithm
Detection
Dataset collection
Data Cleaning
When combining multiple data sources, there are many opportunities for data to be
duplicated or mislabeled.
Data cleaning, or data scrubbing, is the process of fixing incorrect, incomplete,
duplicate or otherwise erroneous data in a data set.
It involves identifying data errors and then changing, updating or removing data to
correct them (see the sketch below).
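A minimal pandas sketch of these cleaning steps (the file names here are placeholders, not part of the project):

import pandas as pd

# Load the dataset (hypothetical file name).
df = pd.read_csv("data.csv")

# Remove exact duplicate rows introduced when merging sources.
df = df.drop_duplicates()

# Drop records that still contain null values.
df = df.dropna()

# Persist the cleaned data for the training stage.
df.to_csv("data_clean.csv", index=False)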
Feature Extraction:
Model training
● Plan and simplify. In the beginning we must think about how the computer sees
the images.
● Collect. For all the tasks, try to get the most variable and diverse training dataset.
● Sort and upload. Once your images are ready, it is time to sort them.
● Train the network on the training data.
● Test the network on the test data (a split-and-evaluate sketch follows this list).
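The last two bullets can be illustrated as follows (a hedged sketch; the feature matrix, labels and classifier are placeholders rather than the project's actual pipeline):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder features (100 face images x 512 features) and gender labels.
X = np.random.rand(100, 512)
y = np.random.randint(0, 2, size=100)  # 0 = male, 1 = female

# Hold out 20% of the data for testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # train on the training data
print("Test accuracy:", clf.score(X_test, y_test))             # test on the test data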
Testing model:
● In this module we test the trained deep learning model using the test dataset.
Performance Evaluation
● In this module, we evaluate the performance of the trained deep learning model using
performance evaluation criteria such as F1 score, accuracy and classification error (a
short metric sketch follows this list).
● To evaluate object detection models like R-CNN and YOLO, the mean average
precision (mAP) is used. The mAP compares the ground-truth bounding box to the
detected box and returns a score; the higher the score, the more accurate the model is
in its detections.
● Model evaluation is the process of using different evaluation metrics to understand a
machine learning model's performance, as well as its strengths and weaknesses.
● Model evaluation is important to assess the efficacy of a model during initial research
phases, and it also plays a role in model monitoring.
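The F1 score, accuracy and classification error mentioned above can be computed with scikit-learn (a minimal sketch; the label vectors are made-up examples):

from sklearn.metrics import accuracy_score, f1_score

# Hypothetical ground-truth and predicted gender labels (0 = male, 1 = female).
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

print("Accuracy:", accuracy_score(y_true, y_pred))                  # fraction of correct predictions
print("F1 score:", f1_score(y_true, y_pred))                        # harmonic mean of precision and recall
print("Classification error:", 1 - accuracy_score(y_true, y_pred))  # 1 - accuracy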
Detection
● First, we take an image as input.
● Then we divide the image into various regions.
● We will then consider each region as a separate image.
● Pass all these regions (images) to the CNN and classify them into various classes.
4.2.1 PYTHON
Python is used in many application domains, for example:
Machine Learning
GUI Applications (like Kivy, Tkinter, PyQt etc.)
Web frameworks like Django (used by YouTube, Instagram, Dropbox)
Image processing (like OpenCV, Pillow)
Web scraping (like Scrapy, Beautiful Soup, Selenium)
Test frameworks
Multimedia
Python was developed by Guido van Rossum in the late eighties and early nineties at the
National Research Institute for Mathematics and Computer Science in the Netherlands.
Python is derived from many other languages, including ABC, Modula-3, C, C++, Algol-68,
Smalltalk, Unix shell and other scripting languages.
Python is copyrighted; like Perl, Python source code is available under an
open-source license (the Python Software Foundation License).
Python is now maintained by a core development team, although Guido
van Rossum still holds a vital role in directing its progress.
Python Features
Python's features include –
Easy-to-learn − Python has few keywords, a simple structure, and a clearly defined
syntax. This allows the student to pick up the language quickly.
Easy-to-read − Python code is more clearly defined and visible to the eyes.
Extendable − You can add low-level modules to the Python interpreter. These
modules enable programmers to add to or customize their tools to be more efficient.
Databases − Python provides interfaces to all major commercial databases.
GUI Programming − Python supports GUI applications that can be created and
ported to many system calls, libraries and windowing systems, such as Windows MFC,
Macintosh, and the X Window system of Unix.
Scalable − Python provides a better structure and support for large programs than
shell scripting.
Advantages of Python Over Other Languages
1. Less Coding
Almost all tasks done in Python require less code than the same tasks done in
other languages. Python also has awesome standard library support, so you don't have to
search for any third-party libraries to get your job done. This is the reason that many people
suggest learning Python to beginners.
2. Affordable
Python is free, therefore individuals, small companies or big organizations can leverage the
freely available resources to build applications. Python is popular and widely used, so it gives
you better community support.
The 2019 GitHub annual survey showed us that Python has overtaken Java in the most
popular programming language category.
Python code can run on any machine whether it is Linux, Mac or Windows. Programmers
need to learn different languages for different jobs but with Python, you can professionally
build web apps, perform data analysis and machine learning, automate things, do web
scraping and also build games and powerful visualizations. It is an all-rounder programming
language.
Python, a versatile programming language, doesn't come pre-installed on your computer.
Python was first released in 1991, and today it is a very popular high-level
programming language. Its design philosophy emphasizes code readability, with its notable
use of significant whitespace. The object-oriented approach and language constructs provided
by Python enable programmers to write both clear and logical code for projects. This software
does not come pre-packaged with Windows.
There have been several updates to Python over the years. The question is how to
install Python? It might be confusing for a beginner who is willing to start learning Python,
but this tutorial will solve your query. The latest version of Python is
version 3.7.4; in other words, it is Python 3.
Note: Python version 3.7.4 cannot be used on Windows XP or earlier devices. Before you
start with the installation process of Python, you first need to know your system
requirements. Based on your system type, i.e. operating system and processor, you must
download the matching Python version. My system type is a Windows 64-bit operating
system, so the steps below are to install Python version 3.7.4 (Python 3) on a Windows device.
The steps for installing Python on Windows 10, 8 and 7 are divided into 4 parts to help
understand better.
Pandas
Pandas is a Python library used as a data analysis and data manipulation tool. It is
used to read and load datasets. It is fast and flexible when working with data.
Keras
Keras is a high-level neural network application programming interface (API). It is able
to run on top of TensorFlow. Keras is mainly used when implementing deep learning
algorithms such as CNN and RNN because of its user-friendliness, modularity, and easy
extensibility. It runs on both CPU and GPU. Keras with a TensorFlow backend makes an
excellent choice for training neural network architectures.
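For illustration, a small Keras CNN of the kind described could be defined along these lines (a sketch only; this is not the architecture of the pre-trained caffemodels used in this project):

from tensorflow import keras
from tensorflow.keras import layers

# Minimal CNN for binary gender classification on 64x64 RGB face crops.
model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # one output: probability of class 1
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()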
MySQL
MySQL is a database which is used for storage purposes. In this application, MySQL is used
for storing the user details, namely user name, password, email id and phone number.
While entering the application, the user needs to register by providing these credentials,
which are stored in the database. Thereafter, the user needs to log in by giving a username
and password. The application validates the login against the registered information, and the
user is then moved to the next window.
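A registration insert and login check of the kind described might look like this (a hedged sketch; the table and column names are assumptions, and mysql-connector-python is only one of several client libraries that could be used):

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root",
                               password="secret", database="agegender")
cur = conn.cursor()

# Store registration details; parameterized queries avoid SQL injection.
cur.execute("INSERT INTO users (username, password, email, phone) VALUES (%s, %s, %s, %s)",
            ("mounika", "hashed_pw", "user@example.com", "9999999999"))
conn.commit()

# Validate a login attempt against the registered information.
cur.execute("SELECT 1 FROM users WHERE username=%s AND password=%s",
            ("mounika", "hashed_pw"))
print("Login OK" if cur.fetchone() else "Invalid credentials")
conn.close()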
OpenCV:
● OpenCV (Open Source Computer Vision Library) is an open source computer vision
and machine learning software library. OpenCV was built to provide a common
infrastructure for computer vision applications and to accelerate the use of machine
perception in commercial products.
● OpenCV (Open Source Computer Vision Library) is a library of programming
functions mainly aimed at real-time computer vision. Originally developed by Intel, it
was later supported by Willow Garage then Itseez (which was later acquired by Intel).
● OpenCV is a great tool for image processing and performing computer vision tasks. It
is an open-source library that can be used to perform tasks like face detection,
object tracking, landmark detection, and much more.
● Some of these functions are really common and are used in almost every computer
vision task (a small example follows this list).
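A minimal OpenCV example of the common operations mentioned above (the file name face.jpg is a placeholder, and the Haar cascade shown is an alternative to the DNN face detector used in the project code):

import cv2

# Read an image from disk and convert it to grayscale.
img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces with the bundled Haar cascade.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a rectangle around each detected face and show the result.
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow("Faces", img)
cv2.waitKey(0)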
● The project delivers a final product as a website that accepts a picture as input data
and then reports the detected age and gender.
● The website converts the information right away, but cannot store or reproduce the
same information again; it simply acts as an end-to-end volatile conversion
interface.
4.3.1 PROJECT CODE
import cv2
import argparse

# Detect faces with the OpenCV DNN face detector and draw boxes around them.
def highlightFace(net, frame, conf_threshold=0.7):
    frameOpencvDnn = frame.copy()
    frameHeight = frameOpencvDnn.shape[0]
    frameWidth = frameOpencvDnn.shape[1]
    blob = cv2.dnn.blobFromImage(frameOpencvDnn, 1.0, (300, 300), [104, 117, 123], True, False)
    net.setInput(blob)
    detections = net.forward()
    faceBoxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > conf_threshold:
            x1 = int(detections[0, 0, i, 3] * frameWidth)
            y1 = int(detections[0, 0, i, 4] * frameHeight)
            x2 = int(detections[0, 0, i, 5] * frameWidth)
            y2 = int(detections[0, 0, i, 6] * frameHeight)
            faceBoxes.append([x1, y1, x2, y2])
            cv2.rectangle(frameOpencvDnn, (x1, y1), (x2, y2), (0, 255, 0),
                          int(round(frameHeight / 150)), 8)
    return frameOpencvDnn, faceBoxes

parser = argparse.ArgumentParser()
parser.add_argument('--image')
args = parser.parse_args()

faceProto = "opencv_face_detector.pbtxt"
faceModel = "opencv_face_detector_uint8.pb"
ageProto = "age_deploy.prototxt"
ageModel = "age_net.caffemodel"
genderProto = "gender_deploy.prototxt"
genderModel = "gender_net.caffemodel"

MODEL_MEAN_VALUES = (78.4263377603, 87.7689143744, 114.895847746)
ageList = ['(0-2)', '(4-6)', '(8-12)', '(15-20)', '(25-32)', '(38-43)', '(48-53)', '(60-100)']
genderList = ['Male', 'Female']

# Load the face detector and the pre-trained age and gender Caffe models.
faceNet = cv2.dnn.readNet(faceModel, faceProto)
ageNet = cv2.dnn.readNet(ageModel, ageProto)
genderNet = cv2.dnn.readNet(genderModel, genderProto)

# Read from the given image, or from the webcam if no image is supplied.
video = cv2.VideoCapture(args.image if args.image else 0)
padding = 20
while cv2.waitKey(1) < 0:
    hasFrame, frame = video.read()
    if not hasFrame:
        cv2.waitKey()
        break
    resultImg, faceBoxes = highlightFace(faceNet, frame)
    if not faceBoxes:
        print("No face detected")
    for faceBox in faceBoxes:
        # Crop the face with a small padding around the detected box.
        face = frame[max(0, faceBox[1] - padding):min(faceBox[3] + padding, frame.shape[0] - 1),
                     max(0, faceBox[0] - padding):min(faceBox[2] + padding, frame.shape[1] - 1)]
        blob = cv2.dnn.blobFromImage(face, 1.0, (227, 227), MODEL_MEAN_VALUES, swapRB=False)
        genderNet.setInput(blob)
        genderPreds = genderNet.forward()
        gender = genderList[genderPreds[0].argmax()]
        print(f'Gender: {gender}')
        ageNet.setInput(blob)
        agePreds = ageNet.forward()
        age = ageList[agePreds[0].argmax()]
        print(f'Age: {age[1:-1]} years')
        cv2.putText(resultImg, f'{gender}, {age}', (faceBox[0], faceBox[1] - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2, cv2.LINE_AA)
    cv2.imshow("Detecting age and gender", resultImg)
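Assuming the script above is saved as detect.py and the four model files sit in the same folder, it can be run on a saved photograph with, for example, python detect.py --image face1.jpg (the file name is a placeholder); invoked with no --image argument, it opens the default webcam and annotates live frames until a key is pressed.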
SYSTEM TESTING AND SCREENSHOTS
5.1 SYSTEM TESTING
Introduction to Decision Tree Algorithm
The Decision Tree algorithm belongs to the family of supervised learning algorithms. Unlike
some other supervised learning algorithms, the decision tree algorithm can be used for solving
both regression and classification problems.
STRATEGIC APPROACH
The software engineering process can be viewed as a spiral. Initially, system engineering
defines the role of software and leads to software requirement analysis, where the information
domain, functions, behavior, performance, constraints and validation criteria for software are
established. Moving inward along the spiral, we come to design and finally to coding. To
develop computer software, we spiral in along streamlines that decrease the level of
abstraction on each turn. A strategy for software testing may also be viewed in the context of
the spiral. Unit testing begins at the vertex of the spiral and concentrates on each unit of the
software as implemented in source code. Testing progresses by moving outward along the
spiral to integration testing, where the focus is on the design and the construction of the
software architecture. Taking another turn outward on the spiral, we encounter validation
testing, where requirements established as part of software requirements analysis are validated
against the software that has been constructed. Finally, we arrive at system testing.
5.1.1 UNIT TESTING
Unit testing focuses verification effort on the smallest unit of software design, the module.
The unit testing we performed is white-box oriented, and for some modules the steps were
conducted in parallel.
To follow the concept of white-box testing, we tested each form we created
independently, to verify that data flow is correct, that all conditions are exercised to check
their validity, and that all loops are executed on their boundaries.
The established technique of flow graphs with cyclomatic complexity was used to derive test
cases for all the functions. The main steps in deriving test cases were:
Use the design of the code and draw the corresponding flow graph.
Determine the cyclomatic complexity of the resultant flow graph, using the
formula V(G) = E - N + 2, where E is the number of flow graph edges and
N is the number of flow graph nodes, or
V(G) = P + 1, where P is the number of predicate (decision) nodes.
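As a small worked example of the V(G) = P + 1 formula (the function below is illustrative and not part of the project code):

# Flow graph: one loop (for) plus one decision (if) -> 2 predicate nodes.
def count_adults(ages):
    count = 0
    for age in ages:   # predicate node 1
        if age >= 18:  # predicate node 2
            count += 1
    return count

# V(G) = P + 1 = 2 + 1 = 3, so three independent paths need test cases:
# an empty list, a list with only minors, and a list containing an adult.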
In this part of the testing, each of the conditions was tested for both its true and false aspects,
and all the resulting paths were tested, so that each path that may be generated by a particular
condition is traced to uncover any possible errors.
This type of testing selects the paths of the program according to the locations of
definitions and uses of variables. This kind of testing was used only when some local
variable was declared. The definition-use chain method was used in this type of
testing. It was particularly useful in nested statements.
In this type of testing all the loops are tested to all the limits possible. The following
exercise was adopted for all loops:
All the loops were tested at their limits, just above them and just below them.
All the loops were skipped at least once.
For nested loops, the innermost loop was tested first, working outwards.
For concatenated loops, the values of dependent loops were set with the help of the
connected loop. Unstructured loops were resolved into nested loops
or concatenated loops and tested as above.
TEST CASES
S.No | Test case | Expected behaviour | Result
1 | Check if the dataset is taken as input | If taken, process the next step; else throw an error | Passed
2 | Check for null values in the dataset | If null values exist, drop the null records | Passed
3 | Check for feature extraction and save the file in .pkl format | If the file is not saved, throw an error | Passed
5.2 SYSTEM SECURITY
The protection of computer-based resources, including hardware, software, data, procedures
and people, against unauthorized use or natural disaster is known as System Security.
Security
Integrity
Privacy
Confidentiality
SYSTEM SECURITY refers to the technical innovations and procedures applied to the
hardware and operating systems to protect against deliberate or accidental damage from a
defined threat.
DATA SECURITY is the protection of data from loss, disclosure, modification and
destruction.
PRIVACY defines the rights of the user or organizations to determine what information they
are willing to share with or accept from others and how the organization can be protected
against unwelcome, unfair or excessive dissemination of information about it.
5.3 SCREENSHOTS
5.3.1 MALE (4-6)
5.3.2 MALE (15-20)
5.3.3 BACK OUTPUT SCREEN
CONCLUSION & FUTURE SCOPE
6.1 Conclusion:
Research on age and gender estimation has been divided into two main groups: one is
to devise appropriate features that reflect age and gender correctly, while the other is to
use deep CNNs, which automatically learn the features from massive training data. In this
paper, we have proposed a method to get the benefits of both approaches by encouraging the
CNN to use appropriate hand-crafted features. We believe that the advantage of our scheme is
to let the network focus on useful features, which improves the performance as demonstrated
in the experiments.
The overall study of the contributions made to gender classification and age estimation can be
used to solve real-time application problems. In this paper, most of the research work discussed
is based on Convolutional Neural Networks; eleven different types of neural networks have
been discussed, along with the MAE and accuracy obtained by the models. Additionally, in
some works feature extraction and classification are carried out using just a single feature
extractor or a single classifier, while in various other works fusion is adopted for the
classification process or for feature extraction. As a future direction, good results for gender
recognition and age estimation can be obtained using transfer learning strategies, with further
gains in accuracy. Combinations of fusion methods and attribute datasets may be what is on
the horizon for the development of ethnicity estimation; affective behaviour analysis and
numerous additional demographic features could also be evaluated with neural network
classifiers.
REFERENCES & BIBLIOGRAPHY
REFERENCES
[1] Philip Smith and Cuixian Chen, Transfer Learning with Deep CNNs for Gender Recognition
and Age Estimation, IEEE International Conference on Big Data, 2018.
[2] Ke Zhang, Liru Guo, Miao Sun, Xingfang Yuan, Tony X. Han, Zhenbing Zhao and
Baogang Li, Age Group and Gender Estimation in the Wild with Deep RoR Architecture,
IEEE Access, Volume 5 (special section on CCCV 2017), 2017.
[3] Sepidehsadat Hosseini, Seok Hee Lee, Hyuk Jin Kwon, Hyung Il Koo and Nam Ik Cho,
Age and Gender Classification Using Wide Convolutional Neural Network and Gabor Filter,
Institute for Information and Communications Technology Promotion (IITP), 2018.
[4] Jia-Hong Lee, Yi-Ming Chan, Ting-Yen Chen and Chu-Song Chen, Joint Estimation of
Age and Gender from Unconstrained Face Images Using Lightweight Multi-task CNN for
Mobile Applications, IEEE Conference on Multimedia Information Processing and Retrieval,
2018.
[5] Gil Levi and Tal Hassner, Age and Gender Classification Using Convolutional Neural
Networks, Intelligence Advanced Research Projects Activity (IARPA), 2015.
[6] Nisha Srinivas, Harleen Atwal, Derek C. Rose, Gayathri Mahalingam, Karl Ricanek Jr. and
David S. Bolme, Age, Gender, and Fine-Grained Ethnicity Prediction Using Convolutional
Neural Networks for the East Asian Face Dataset, 12th International Conference on Automatic
Face & Gesture Recognition, 2017.
[7] M. Uricar, R. Timofte, R. Rothe, J. Matas and L. Van Gool, Structured Output SVM
Prediction of Apparent Age, Gender and Smile from Deep Features, Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition Workshops, 2016.
[8] M. Fatih Aydogdu and M. Fatih Demirci, Age Classification Using an Optimized CNN
Architecture, Association for Computing Machinery, 2017.
[9] ByungIn Yoo, Youngjun Kwak, Youngsung Kim, Changkyu Choi and Junmo Kim, Deep
Facial Age Estimation Using Conditional Multitask Learning with Weak Label Expansion,
IEEE Signal Processing Letters, Vol. 25, No. 6, 2018.
[10] Abhijit Das, Antitza Dantcheva and Francois Bremond, Mitigating Bias in Gender, Age
and Ethnicity Classification: A Multi-Task Convolution Neural Network Approach, European
Conference on Computer Vision (ECCV), 2019.
[11] Marco Del Coco, Pierluigi Carcagni, Marco Leo, Paolo Spagnolo, Pier Luigi Mazzeo
and Cosimo Distante, Multi-branch CNN for Multi-Scale Age Estimation, Springer ICIAP
2017, Part II, pp. 234-244, 2017.
[12] F. Dornaika, I. Arganda-Carreras and C. Belver, Age Estimation in Facial Images Through
Transfer Learning, Machine Vision and Applications, 2018.
[13] Jin Huang, Bin Li, Jia Zhu and Jian Chen, Age Classification with Deep Learning Face
Representation, Multimedia Tools and Applications, Springer Science+Business Media New
York, 2017.
[14] Jian Lin, Tianyue Zheng, Yanbing Liao and Weihong Deng, CNN-Based Age
Classification via Transfer Learning, Springer International Publishing AG, 2017.