Unit 5

Generative Adversarial Networks (GANs) are a type of neural network introduced in 2014 that consist of a generator and a discriminator, which work against each other to create realistic data. The generator produces synthetic data while the discriminator evaluates its authenticity, and through adversarial learning, both networks improve over time. GANs have various applications, including image synthesis, data augmentation, and high-resolution image enhancement, and come in different types such as Vanilla GANs, Conditional GANs, and Deep Convolutional GANs.


Generative Adversarial Network (GAN)

Generative Adversarial Networks (GANs) were introduced by Ian Goodfellow and his
colleagues in 2014. GANs are a class of neural networks that autonomously learn
patterns in the input data to generate new examples resembling the original dataset.
GAN’s architecture consists of two neural networks:
Generator: creates synthetic data from random noise to produce data so realistic that
the discriminator cannot distinguish it from real data.
Discriminator: acts as a critic, evaluating whether the data it receives is real or fake.

 Generator Model
 The generator is a deep neural network that takes random noise as input to generate
realistic data samples (e.g., images or text). It learns the underlying data distribution by
adjusting its parameters through backpropagation.
 The generator’s objective is to produce samples that the discriminator classifies as real. The
loss function is:
 J_G = -\frac{1}{m} \sum_{i=1}^{m} \log D(G(z_i))

 Where,
 J_G measures how well the generator is fooling the discriminator.
 \log D(G(z_i)) is the log probability that the discriminator assigns to a generated sample being real.
 The generator aims to minimize this loss, encouraging the production of samples that the
discriminator classifies as real (D(G(z_i)) close to 1).
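As an illustration, here is a minimal PyTorch sketch of this generator loss; the generator, discriminator, and tensor names are placeholders, and the notes do not prescribe a particular implementation:

```python
import torch

def generator_loss(generator, discriminator, z):
    # z: a batch of m noise vectors sampled from the prior distribution
    fake = generator(z)                    # G(z): synthetic samples
    d_fake = discriminator(fake)           # D(G(z)): probability assigned to "real"
    # J_G = -(1/m) * sum_i log D(G(z_i)); minimizing it pushes D(G(z)) toward 1
    return -torch.log(d_fake + 1e-8).mean()
```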
 Discriminator Model
 The discriminator acts as a binary classifier, distinguishing between real and generated data. It
learns to improve its classification ability through training, refining its parameters to detect fake
samples more accurately.
 When dealing with image data, the discriminator often employs convolutional layers or other
relevant architectures suited to the data type. These layers help extract features and enhance the
model’s ability to differentiate between real and generated samples.
 The discriminator minimizes the negative log-likelihood of correctly classifying both generated and real
samples. This loss incentivizes the discriminator to categorize generated samples as fake
and real samples as real, according to the following equation:

 J_D = -\frac{1}{m} \sum_{i=1}^{m} \log D(x_i) - \frac{1}{m} \sum_{i=1}^{m} \log(1 - D(G(z_i)))
 J_D assesses the discriminator’s ability to discern between generated and real samples.
 \log D(x_i) is the log likelihood that the discriminator correctly classifies real data as real.
 \log(1 - D(G(z_i))) is the log likelihood that the discriminator correctly classifies generated
samples as fake.
 By minimizing this loss, the discriminator becomes more effective at distinguishing between real and
generated samples
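A matching sketch of the discriminator loss, under the same assumptions (placeholder model objects, discriminator outputs interpreted as probabilities):

```python
import torch

def discriminator_loss(discriminator, real, fake):
    # real: a batch of real samples x_i; fake: generator outputs G(z_i)
    d_real = discriminator(real)              # D(x_i)
    d_fake = discriminator(fake.detach())     # D(G(z_i)); generator is not updated here
    # J_D = -(1/m) * [ sum_i log D(x_i) + sum_i log(1 - D(G(z_i))) ]
    loss_real = -torch.log(d_real + 1e-8).mean()
    loss_fake = -torch.log(1.0 - d_fake + 1e-8).mean()
    return loss_real + loss_fake
```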
 MinMax Loss
 GANs follow a minimax optimization where the generator and discriminator are adversaries:
 \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
Where,
 G is the generator network and D is the discriminator network.
 x represents actual data samples drawn from the true data distribution p_{data}(x).
 z represents random noise sampled from a prior distribution p_z(z) (usually a normal or uniform
distribution).
 D(x) represents the discriminator’s likelihood of correctly identifying actual data as real.
 D(G(z)) is the likelihood that the discriminator will identify generated data coming from the generator
as authentic.
 The generator aims to minimize the loss, while the discriminator tries to maximize its classification
accuracy.

 1. Generator’s First Move
 G takes a random noise vector as input. This noise vector contains random values and acts as the
starting point for G’s creation process. Using its internal layers and learned patterns, G
transforms the noise vector into a new data sample, like a generated image.
 2. Discriminator’s Turn
 D receives two kinds of inputs:
 Real data samples from the training dataset.
 The data samples generated by G in the previous step.
 D’s job is to analyze each input and determine whether it’s real data or something G cooked up.
It outputs a probability score between 0 and 1. A score of 1 indicates the data is likely real, and
0 suggests it’s fake.
 3. Adversarial Learning
 If the discriminator correctly classifies real data as real and fake data as fake, it strengthens its
ability slightly.
 If the generator successfully fools the discriminator, it receives a positive update, while the
discriminator is penalized.
 4. Generator’s Improvement
 Every time the discriminator misclassifies fake data as real, the generator learns and improves.
Over multiple iterations, the generator produces more convincing synthetic samples.
 5. Discriminator’s Adaptation
 The discriminator continuously refines its ability to distinguish real from fake data. This ongoing
duel between the generator and discriminator enhances the overall model’s learning process.
 6. Training Progression
 As training continues, the generator becomes highly proficient at producing realistic data.
 Eventually, the discriminator struggles to distinguish real from fake, indicating that the GAN has
reached a well-trained state.
 At this point, the generator can be used to generate high-quality synthetic data for various
applications.
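To tie these steps together, here is a condensed sketch of one possible training step in PyTorch using the equivalent binary cross-entropy formulation; the toy MLP architectures, optimizers, and dimensions are assumptions for illustration only:

```python
import torch
import torch.nn as nn

# Assumed toy MLP generator/discriminator, purely for illustration
noise_dim, data_dim = 64, 784
G = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    m = real_batch.size(0)
    # 1. Generator's move: transform random noise into fake samples
    z = torch.randn(m, noise_dim)
    fake = G(z)

    # 2-3. Discriminator's turn: score real and fake batches, then update D
    opt_d.zero_grad()
    loss_d = bce(D(real_batch), torch.ones(m, 1)) + bce(D(fake.detach()), torch.zeros(m, 1))
    loss_d.backward()
    opt_d.step()

    # 4. Generator's improvement: update G so that D classifies its fakes as real
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(m, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```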
Types of GANs
 1. Vanilla GAN
 Vanilla GAN is the simplest type of GAN. It consists of:
 A generator and a discriminator, both are built using multi-layer perceptrons (MLPs).
 The model optimizes its mathematical formulation using stochastic gradient descent (SGD).
 While Vanilla GANs serve as the foundation for more advanced GAN models, they often struggle with
issues like mode collapse and unstable training.
 2. Conditional GAN (CGAN)
 Conditional GANs (CGANs) introduce an additional conditional parameter to guide the generation
process. Instead of generating data randomly, CGANs allow the model to produce specific types of
outputs.
 Working of CGANs:
 A conditional variable (y) is fed into both the generator and the discriminator.
 This ensures that the generator creates data corresponding to the given condition (e.g., generating
images of specific objects).
 The discriminator also receives the labels to help distinguish between real and fake data.
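A minimal sketch of how such conditioning might look in PyTorch, assuming the label y is embedded and concatenated with the noise vector (all names and sizes are illustrative):

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=64, num_classes=10, data_dim=784):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 256), nn.ReLU(),
            nn.Linear(256, data_dim), nn.Tanh())

    def forward(self, z, y):
        # Condition the generator by concatenating noise with the label embedding
        return self.net(torch.cat([z, self.label_emb(y)], dim=1))

# Usage: generate 16 samples conditioned on class 3
# g = ConditionalGenerator()
# samples = g(torch.randn(16, 64), torch.full((16,), 3, dtype=torch.long))
```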
 3. Deep Convolutional GAN (DCGAN)
 Deep Convolutional GANs (DCGANs) are among the most popular and widely used types of GANs,
particularly for image generation.
 What Makes DCGAN Special?
 Uses Convolutional Neural Networks (CNNs) instead of simple multi-layer perceptrons (MLPs).
 Max pooling layers are replaced with strided convolutions, making the model more efficient.
 Fully connected layers are removed, allowing for better spatial understanding of images.
 DCGANs have been highly successful in generating high-quality images, making them a go-to choice for
deep learning researchers.
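A sketch of a DCGAN-style generator reflecting these design choices (strided transposed convolutions, batch normalization, no pooling or fully connected layers); the exact layer sizes are assumptions:

```python
import torch
import torch.nn as nn

# DCGAN-style generator: noise (nz x 1 x 1) -> 64x64 image, built only from
# strided transposed convolutions, batch norm, and ReLU (no pooling / FC layers)
def dcgan_generator(nz=100, ngf=64, nc=3):
    return nn.Sequential(
        nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False), nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
        nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
        nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
        nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf), nn.ReLU(True),
        nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False), nn.Tanh())

# img = dcgan_generator()(torch.randn(1, 100, 1, 1))   # -> shape (1, 3, 64, 64)
```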
 4. Laplacian Pyramid GAN (LAPGAN)
 Laplacian Pyramid GAN (LAPGAN) is designed to generate ultra-high-quality images by leveraging a
multi-resolution approach.
 Working of LAPGAN:
 Uses multiple generator-discriminator pairs at different levels of the Laplacian pyramid.
 Images are first downsampled at each layer of the pyramid and upscaled again using Conditional GANs
(CGANs).
 This process allows the image to gradually refine details, reducing noise and improving clarity.
 5. Super-Resolution GAN (SRGAN)
 Super-Resolution GAN (SRGAN) is specifically designed to increase the resolution of low-quality
images while preserving details.
 Working of SRGAN:
 Uses a deep neural network combined with an adversarial loss function.
 Enhances low-resolution images by adding finer details, making them appear sharper and more
realistic.
 Helps reduce common image upscaling errors, such as blurriness and pixelation.
Libraries and tools for GAN

 Developing adversarial examples in Generative Adversarial Networks (GANs)
typically involves using deep learning frameworks, adversarial attack
libraries, and specialized tools for model evaluation. Here are the main
libraries and tools used for this purpose:
 1. Deep Learning Frameworks
 These are essential for building and training GANs:
 TensorFlow – Used for implementing GAN architectures, adversarial training,
and gradient-based adversarial attacks.
 PyTorch – Popular due to its dynamic computation graph, ease of use, and
extensive support for adversarial learning.
 Keras – A high-level API built on TensorFlow, useful for rapid prototyping of
GAN models.
 2. Adversarial Attack Libraries
 These provide ready-to-use adversarial attack implementations:
 Foolbox – A Python library that offers various adversarial attack methods on deep neural
networks, including white-box and black-box attacks.
 CleverHans – Developed by Google Brain, it includes adversarial attack techniques and defenses
for TensorFlow-based models.
 AdverTorch – A PyTorch-based toolkit providing adversarial attacks and defenses, useful for
research in adversarial ML.
 3. GAN-Specific Libraries
 For designing and experimenting with different GAN architectures:
 TorchGAN – A PyTorch-based framework with prebuilt GAN models, loss functions, and training
utilities.
 TF-GAN – TensorFlow’s dedicated library for GANs, including support for adversarial training and
evaluation.
 4. Optimization and Robustness Tools
 ART (Adversarial Robustness Toolbox) – Developed by IBM, it provides tools to evaluate and
defend against adversarial attacks.
 AutoAttack – An ensemble-based adversarial attack framework that ensures strong adversarial
robustness evaluations.
 PGD (Projected Gradient Descent) – Often implemented directly within PyTorch or TensorFlow for
adversarial training (a minimal sketch follows below).
 5. Visualization and Debugging
 Matplotlib / Seaborn – Useful for visualizing adversarial examples and their effects.
 TensorBoard – Helps track GAN training progress and inspect adversarial example generation.
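For concreteness, here is a hedged sketch of PGD written directly in PyTorch; the model, inputs, and hyperparameters are placeholders, and libraries such as Foolbox, CleverHans, or ART provide tested equivalents:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Projected Gradient Descent: iteratively perturb x within an L-infinity ball of radius eps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)        # loss the attacker wants to increase
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Signed gradient ascent step, then project back into the eps-ball and valid pixel range
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```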
 Application Of Generative Adversarial Networks (GANs)
 Image Synthesis & Generation: GANs generate realistic images, avatars, and high-
resolution visuals by learning patterns from training data. They are widely used in
art, gaming, and AI-driven design.
 Image-to-Image Translation: GANs can transform images between domains while
preserving key features. Examples include converting day images to night,
sketches to realistic images, or changing artistic styles.
 Text-to-Image Synthesis: GANs create visuals from textual descriptions, enabling
applications in AI-generated art, automated design, and content creation.
 Data Augmentation: GANs generate synthetic data to improve machine learning
models, making them more robust and generalizable, especially in fields with
limited labeled data.
 High-Resolution Image Enhancement: GANs upscale low-resolution images,
improving clarity for applications like medical imaging, satellite imagery, and
video enhancement.
 Advantages of GAN
 The advantages of the GANs are as follows:
 Synthetic data generation: GANs can generate new, synthetic data that
resembles some known data distribution, which can be useful for data
augmentation, anomaly detection, or creative applications.
 High-quality results: GANs can produce high-quality, photorealistic results in
image synthesis, video synthesis, music synthesis, and other tasks.
 Unsupervised learning: GANs can be trained without labeled data, making
them suitable for unsupervised learning tasks, where labeled data is scarce or
difficult to obtain.
 Versatility: GANs can be applied to a wide range of tasks, including image
synthesis, text-to-image synthesis, image-to-image translation, anomaly
detection, data augmentation, and others.
Deep Neural Networks
 A deep neural network (DNN) is an ANN with multiple hidden layers between
the input and output layers. Similar to shallow ANNs, DNNs can model
complex non-linear relationships.
 The main purpose of a neural network is to receive a set of inputs, perform
progressively complex calculations on them, and produce output to solve real-
world problems like classification. We restrict ourselves here to feedforward
neural networks.
 Neural network consists of layers of interconnected nodes, or neurons, that
collaborate to process input data. In a fully connected deep neural network,
data flows through multiple layers, where each neuron performs nonlinear
transformations, allowing the model to learn intricate representations of the
data.
 In a deep neural network, the input layer receives data, which passes
through hidden layers that transform the data using nonlinear functions. The
final output layer generates the model’s prediction.
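A minimal PyTorch sketch of such a fully connected deep network (the layer sizes are chosen purely for illustration):

```python
import torch.nn as nn

# A deep feedforward network: input -> nonlinear hidden layers -> output layer
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # hidden layer 1
    nn.Linear(256, 128), nn.ReLU(),   # hidden layer 2
    nn.Linear(128, 64),  nn.ReLU(),   # hidden layer 3
    nn.Linear(64, 10),                # output layer: scores for 10 classes
)
```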
 Types of neural networks
 1. Feedforward neural networks (FNNs) are the simplest type of ANN, where data flows in
one direction from input to output. It is used for basic tasks like classification.
 2. Convolutional Neural Networks (CNNs) are specialized for processing grid-like data, such
as images. CNNs use convolutional layers to detect spatial hierarchies, making them ideal for
computer vision tasks.
 3. Recurrent Neural Networks (RNNs) are able to process sequential data, such as time
series and natural language. RNNs have loops to retain information over time, enabling
applications like language modeling and speech recognition. Variants like LSTMs and GRUs
address vanishing gradient issues.
 4. Generative Adversarial Networks (GANs) consist of two networks—a generator and a
discriminator—that compete to create realistic data. GANs are widely used for image
generation, style transfer, and data augmentation.
 5. Autoencoders are unsupervised networks that learn efficient data encodings. They
compress input data into a latent representation and reconstruct it, useful for dimensionality
reduction and anomaly detection.
 6. Transformer Networks have revolutionized NLP with self-attention mechanisms.
Transformers excel at tasks like translation, text generation, and sentiment analysis,
powering models like GPT and BERT.
Attacks against deep neural networks
 Generative Model-Inversion Attacks (GMIA) against Deep Neural Networks
(DNNs) are a class of adversarial attacks designed to extract sensitive
information about the training data from a target model. These attacks
leverage generative models to reconstruct input data that closely resembles
original training samples, posing significant privacy risks in machine learning.
Key Concepts of GMIA

•Model Inversion Attacks (MIA): Traditional model inversion attacks attempt to recover input
features from a trained model by exploiting its learned representations.
•Generative Model-Based Inversion: Instead of direct optimization, generative adversarial
networks (GANs) or Variational Autoencoders (VAEs) are used to synthesize high-quality images
or data resembling the training set.
•Black-Box vs. White-Box Attacks:
•White-box GMIA: The attacker has full access to the target model’s parameters, making
inversion easier.
•Black-box GMIA: The attacker only queries the model and learns from its outputs, requiring
more sophisticated generative techniques.
•Optimization & Gradient Exploitation: Many attacks use gradient-based techniques or leverage
publicly available data distributions to refine reconstructions.
 Threats & Implications
 Privacy Breaches: Can reveal personal information from models trained on
sensitive datasets (e.g., medical, facial recognition).
 Data Leakage: Organizations deploying DNNs risk exposing proprietary or
confidential training data.
 Membership Inference & Reconstruction: Attackers can determine whether a
particular sample was in the training set and reconstruct approximations of
original samples.
 Defense Mechanisms
 Differential Privacy: Injecting noise into model training to prevent precise
reconstruction.
 Adversarial Training: Enhancing model robustness against inversion attempts.
 Output Regularization: Limiting the model’s ability to reveal fine-grained
information.
 Enforcing Data Minimization: Restricting the amount of sensitive data used during
model training.
Attacks against deep neural networks (DNNs) via model substitution
 Model Substitution Attacks involve an adversary creating a surrogate model that
mimics a target model's behavior. This surrogate model can then be used to craft
adversarial examples, extract private information, or reproduce the model's
functionality without authorization. These attacks are particularly effective
against black-box machine learning services, such as cloud-based AI models.
 How Model Substitution Attacks Work
 Querying the Target Model: The attacker sends multiple queries to the target
DNN to collect input-output pairs.
 Training a Substitute Model: Using the collected data, the attacker trains a
surrogate model that approximates the decision boundaries of the target model.
 Launching an Attack: The substitute model is used for:
 Adversarial Example Transfer: Crafting adversarial inputs that mislead both the
substitute and the target model.
 Model Extraction: Reconstructing a copy of the target model.
 Privacy Attacks: Extracting sensitive information from the training set.
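A simplified sketch of the first two steps (querying a black-box target and fitting a substitute on the collected labels); target_model, the query inputs, and the surrogate architecture are assumptions for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_substitute(target_model, query_inputs, num_classes=10, epochs=5):
    # Step 1: query the black-box target and record its predicted labels (input-output pairs)
    with torch.no_grad():
        target_labels = target_model(query_inputs).argmax(dim=1)

    # Step 2: fit a surrogate model that mimics the target's decision boundaries
    substitute = nn.Sequential(nn.Linear(query_inputs.size(1), 128), nn.ReLU(),
                               nn.Linear(128, num_classes))
    opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(substitute(query_inputs), target_labels)
        loss.backward()
        opt.step()
    # Adversarial examples crafted against this substitute often transfer to the target model
    return substitute
```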
 Types of Model Substitution Attacks
 a. Adversarial Transfer Attacks
 Attackers generate adversarial examples using the substitute model and transfer them to the target
model.
 Works because DNNs often learn similar decision boundaries.
 Common methods:
 Jacobian-Based Data Augmentation (Papernot et al., 2017) – Improves the surrogate model's accuracy with
synthetic queries.
 Black-Box Query-Based Attacks – Iteratively refining adversarial inputs using the substitute model.
 b. Model Extraction Attacks
 The attacker queries the target model and reconstructs a replica of its decision function.
 Goal: Steal the functionality of proprietary machine learning models.
 Types:
 Exact Extraction: Aims to fully replicate the target model’s weights and architecture.
 Approximate Extraction: Captures only the decision boundaries, sufficient for launching adversarial attacks.
 Example: Stealing cloud-based ML APIs (e.g., Google AutoML, OpenAI GPT models).
 c. Privacy Attacks (Membership & Property Inference)
 Once a substitute model is trained, the attacker can infer:
 Membership Inference: Whether a sample was part of the target model’s training
set.
 Property Inference: Identifying characteristics of the training data (e.g.,
demographic biases).
 Example: Using a surrogate model to analyze a medical AI system and infer if a
person’s data was used in training.
 Defense Mechanisms Against Model Substitution Attacks
 a. Query Rate Limiting
 Restrict the number of queries a user can make to the target model to prevent
large-scale data collection.
 b. Adversarial Example Detection
 Use adversarial detection models to identify inputs crafted via surrogate models.
 c. Differential Privacy
 Introduce noise to the model’s outputs to prevent precise extraction of decision
boundaries.
 d. API Watermarking & Monitoring
 Inject unique responses into model outputs to detect unauthorized copying
attempts.
 e. Distillation-Based Defenses
 Train models using defensive distillation to smooth decision boundaries, making
adversarial transfer less effective.
 Model substitution attacks pose a serious threat to DNN-based services,
enabling adversaries to steal models, craft adversarial examples, and extract
sensitive information. Implementing robust security measures such as query
monitoring, differential privacy, and adversarial training can mitigate these
risks.
Intrusion detection systems
 Intrusion is when an attacker gets unauthorized access to a device, network,
or system. Cyber criminals use advanced techniques to sneak into
organizations without being detected.
 Intrusion Detection System (IDS) observes network traffic for malicious
transactions and sends immediate alerts when it is observed. It is software
that checks a network or system for malicious activities or policy violations.
Each illegal activity or violation is often recorded centrally using a
SIEM system or reported to an administrator. An IDS monitors a network or
system for malicious activity and protects a computer network from
unauthorized access from users, including perhaps insiders. The intrusion
detector learning task is to build a predictive model (i.e. a classifier) capable
of distinguishing between ‘bad connections’ (intrusion/attacks) and ‘good
(normal) connections’.
 Common Methods of Intrusion
 Address Spoofing: Hiding the source of an attack by using fake or unsecured
proxy servers making it hard to identify the attacker.
 Fragmentation: Sending data in small pieces to slip past detection systems.
 Pattern Evasion: Changing attack methods to avoid detection by IDS systems
that look for specific patterns.
 Coordinated Attack: Using multiple attackers or ports to scan a network,
confusing the IDS and making it hard to see what is happening.
 Working of Intrusion Detection System(IDS)
 An IDS (Intrusion Detection System) monitors the traffic on a computer
network to detect any suspicious activity.
 It analyzes the data flowing through the network to look for patterns and
signs of abnormal behavior.
 The IDS compares the network activity to a set of predefined rules and
patterns to identify any activity that might indicate an attack or intrusion.
 If the IDS detects something that matches one of these rules or patterns, it
sends an alert to the system administrator.
 The system administrator can then investigate the alert and take action to
prevent any damage or further intrusion.
 Classification of Intrusion Detection System (IDS)
 Intrusion Detection Systems are classified into the following types:
 Network Intrusion Detection System (NIDS): Network intrusion detection systems
(NIDS) are set up at a planned point within the network to examine traffic from all
devices on the network. It performs an observation of passing traffic on the entire
subnet and matches the traffic that is passed on the subnets to the collection of
known attacks. Once an attack is identified or abnormal behavior is observed, the
alert can be sent to the administrator. An example of a NIDS is installing it on the
subnet where firewalls are located in order to see if someone is trying to crack
the firewall.
 Host Intrusion Detection System (HIDS): Host intrusion detection systems (HIDS)
run on independent hosts or devices on the network. A HIDS monitors the incoming
and outgoing packets from the device only and will alert the administrator if
suspicious or malicious activity is detected. It takes a snapshot of existing system
files and compares it with the previous snapshot. If the analytical system files were
edited or deleted, an alert is sent to the administrator to investigate. An example
of HIDS usage can be seen on mission-critical machines, which are not expected to
change their layout.
 Hybrid Intrusion Detection System: Hybrid intrusion detection system is made by the
combination of two or more approaches to the intrusion detection system. In the hybrid
intrusion detection system, the host agent or system data is combined with network
information to develop a complete view of the network system. The hybrid intrusion
detection system is more effective in comparison to the other intrusion detection system.
Prelude is an example of Hybrid IDS.
 Application Protocol-Based Intrusion Detection System (APIDS): An application Protocol-
based Intrusion Detection System (APIDS) is a system or agent that generally resides within a
group of servers. It identifies the intrusions by monitoring and interpreting the
communication on application-specific protocols. For example, this would monitor the SQL
protocol explicitly to the middleware as it transacts with the database in the web server.
 Protocol-Based Intrusion Detection System (PIDS): It comprises a system or agent that
consistently resides at the front end of a server, controlling and interpreting the
protocol between a user/device and the server. It tries to secure the web server by
regularly monitoring the HTTPS protocol stream and accepting the related HTTP protocol.
Because the HTTPS stream is decrypted only just before it reaches the web presentation
layer, the system needs to reside at this interface in order to inspect the traffic.
 Signature-Based Detection: Signature-based detection checks network packets for known
patterns linked to specific threats. A signature-based IDS compares packets to a database of
attack signatures and raises an alert if a match is found. Regular updates are needed to
detect new threats, but unknown attacks without signatures can bypass this system
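As a toy illustration of signature matching, the following sketch scans packet payloads against a hypothetical signature database (real IDSs such as Snort use much richer rule languages):

```python
import re

# Hypothetical signature database: attack name -> byte pattern
SIGNATURES = {
    "sql_injection": re.compile(rb"union\s+select", re.IGNORECASE),
    "path_traversal": re.compile(rb"\.\./\.\./"),
}

def inspect_packet(payload: bytes):
    # Return the names of all known attack signatures found in the packet payload
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

# Example: inspect_packet(b"GET /?id=1 UNION SELECT password FROM users") -> ["sql_injection"]
```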
 Intrusion Detection System Evasion Techniques
 Fragmentation: Dividing a packet into smaller packets called fragments; this
process is known as fragmentation. It makes it very difficult to identify an
intrusion because the malware signature is split across multiple fragments.
 Packet Encoding: Encoding packets using methods like Base64 or hexadecimal
can hide malicious content from signature-based IDS.
 Traffic Obfuscation: By making message more complicated to interpret,
obfuscation can be utilised to hide an attack and avoid detection.
 Encryption: Several security features such as data integrity, confidentiality,
and data privacy, are provided by encryption. Unfortunately, security features
are used by malware developers to hide attacks and avoid detection.
 Detection Method of IDS
 Signature-Based Method: Signature-based IDS detects the attacks on the
basis of the specific patterns such as the number of bytes or a number of 1s
or the number of 0s in the network traffic. It also detects on the basis of the
already known malicious instruction sequence that is used by the malware.
The detected patterns in the IDS are known as signatures. Signature-based IDS
can easily detect the attacks whose pattern (signature) already exists in the
system but it is quite difficult to detect new malware attacks as their pattern
(signature) is not known.
 Anomaly-Based Method: Anomaly-based IDS was introduced to detect
unknown malware attacks, as new malware is developed rapidly. In anomaly-
based IDS, machine learning is used to create a model of trustworthy activity;
incoming traffic is compared with that model and is declared suspicious if it
does not fit the model. The machine learning-based method generalizes better
than signature-based IDS, as these models can be trained according to the
applications and hardware configurations.
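A small sketch of the anomaly-based idea using scikit-learn's IsolationForest; the connection features and values are made up for illustration, and any model of normal activity could be substituted:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
# Hypothetical per-connection features: [duration, bytes_sent, bytes_received, failed_logins]
normal_traffic = rng.normal(loc=[1.0, 500, 800, 0], scale=[0.5, 100, 200, 0.1], size=(1000, 4))

# Build a model of "trustworthy" activity from normal traffic only
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# New connections that do not fit the learned model are flagged as suspicious (-1)
new_connections = np.array([[1.2, 480, 790, 0.0], [30.0, 50000, 10, 15.0]])
print(model.predict(new_connections))   # e.g. [ 1 -1] -> the second connection looks anomalous
```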
 Why Are Intrusion Detection Systems (IDS) Important?
 An Intrusion Detection System (IDS) adds extra protection to your cybersecurity setup, making
it very important. It works with your other security tools to catch threats that get past your
main defenses. So, if your main system misses something, the IDS will alert you to the threat.
 Placement of IDS
 The most common and effective position for an IDS is behind the firewall.
The ‘behind-the-firewall’ placement gives the IDS high visibility of incoming network
traffic, while it does not receive traffic between users on the internal network.
 In cases where the IDS is positioned beyond a network’s firewall, the purpose is to defend
against noise from the internet or against attacks such as port scans and network mappers.
An IDS in this position would monitor layers 4 through 7 of the OSI model and would use a
signature-based detection method. Showing the number of attempted breaches instead of the
actual breaches that made it through the firewall is considered better, as it reduces the amount
of false positives. It also takes less time to discover successful attacks against the network.
 An advanced IDS incorporated with a firewall can be used to intercept complex attacks
entering the network. Features of advanced IDS include multiple security contexts in the
routing level and bridging mode. All of this in turn potentially reduces cost and operational
complexity.
 Another choice for IDS placement is within the network. This choice reveals attacks or
suspicious activity within the network. Ignoring security inside a network is
detrimental, as it may allow insiders to create security risks, or allow an attacker who has
broken into the system to roam around freely.
 Benefits of IDS
 Detects Malicious Activity: IDS can detect any suspicious activities and alert the system
administrator before any significant damage is done.
 Improves Network Performance: IDS can identify any performance issues on the network, which
can be addressed to improve network performance.
 Compliance Requirements: IDS can help in meeting compliance requirements by monitoring
network activity and generating reports.
 Provides Insights: IDS generates valuable insights into network traffic, which can be used to identify
any weaknesses and improve network security.
 Disadvantages of IDS
 False Alarms: IDS can generate false positives, alerting on harmless activities and causing
unnecessary concern.
 Resource Intensive: It can use a lot of system resources, potentially slowing down network
performance.
 Requires Maintenance: Regular updates and tuning are needed to keep the IDS effective, which can
be time-consuming.
 Doesn’t Prevent Attacks: IDS detects and alerts but doesn’t stop attacks, so additional measures
are still needed.
 Complex to Manage: Setting up and managing an IDS can be complex and may require specialized
knowledge.
Facial recognition
 Modern technology continues to deliver innovations that make everyday life
simpler. Face recognition has over time proven to be one of the least intrusive
and fastest forms of biometric verification. The software uses deep learning
algorithms to compare a live captured image to the stored face print to verify
one’s identity. Image processing and machine learning are the backbones of
this technology. Face recognition has received substantial attention from
researchers due to its use in various security applications such as airports,
criminal detection, face tracking, forensics, etc. Compared to other biometric
traits like palm print, iris, and fingerprint, face biometrics can be captured
non-intrusively.
Face recognition
 Face recognition using Artificial Intelligence(AI) is a computer
vision technology that is used to identify a person or object from an image or
video. It uses a combination of techniques including deep learning, computer
vision algorithms, and Image processing. These technologies are used to
enable a system to detect, recognize, and verify faces in digital images or
videos.
 The technology has become increasingly popular in a wide variety of
applications such as unlocking a smartphone, unlocking doors, passport
authentication, security systems, medical applications, and so on. There are
even models that can detect emotions from facial expressions.
 Difference between Face recognition & Face detection
 Face recognition is the process of identifying a person from an image or video
feed, while face detection is the process of detecting a face in an image or
video feed. In face recognition, someone’s face is recognized and
differentiated based on their facial features. It involves more advanced
processing techniques that establish a person’s identity through feature point
extraction and comparison algorithms, and it can be used for applications such
as automated attendance systems or security checks. Face detection, in
contrast, is a much simpler process and can be used for applications such as
image tagging or altering the angle of a photo based on the face detected. It
is the initial step in the face recognition process and simply identifies that a
face is present in an image or video feed.
 Image Processing and Machine learning
 Image processing by computers involves the process of Computer Vision. It deals with a high-level understanding of digital
images or videos. The requirement is to automate tasks that the human visual systems can do. So, a computer should be able to
recognize objects such as the face of a human being or a lamppost, or even a statue.
 Image reading:
 The computer reads any image as a range of values between 0 and 255. For any color image, there are 3 primary colors – red,
green, and blue. A matrix is formed for every primary color, and these matrices combine to provide the pixel value for the
individual R, G, and B colors. Each element of the matrices provides data about the intensity of the brightness of the pixel.
 OpenCV is a Python library designed to solve computer vision problems. OpenCV was originally developed in 1999 by
Intel and later supported by Willow Garage.
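A brief OpenCV sketch of reading an image as per-channel intensity matrices (the file path is a placeholder):

```python
import cv2

img = cv2.imread("face.jpg")             # placeholder path; returns an H x W x 3 array in BGR order
if img is not None:
    print(img.shape, img.dtype)           # e.g. (480, 640, 3) uint8 -> intensities in the range 0-255
    blue, green, red = cv2.split(img)     # one intensity matrix per primary colour
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
```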
 Machine learning
 Every machine learning algorithm takes a dataset as input and learns from that data; in other words, the model is learned
from the provided input and output data. It identifies the patterns in the data and produces the desired model. For
instance, to identify whose face is present in a given image, multiple things can be looked at as a pattern:
 Height/width of the face.
 Height and width may not be reliable since the image could be rescaled to a smaller face or grid. However, even after
rescaling, what remains unchanged are the ratios – the ratio of the height of the face to the width of the face won’t change.
 Color of the face.
 Width of other parts of the face like lips, nose, etc.
 Machine Learning does two major functions in face recognition technology. These are given below:
 Deriving the feature vector: it is difficult to manually list down all of the features because there are
just so many. A Machine Learning algorithm can intelligently label out many of such features. For
instance, a complex feature could be the ratio of the height of the nose and the width of the
forehead.
 Matching algorithms: Once the feature vectors have been obtained, a Machine Learning algorithm
needs to match a new image with the set of feature vectors present in the corpus.
 Face Recognition Operations
 The technology system may vary when it comes to facial recognition. Different software applies
different methods and means to achieve face recognition. The stepwise method is as follows:
 Face Detection: To begin with, the camera will detect and recognize a face. The face is best
detected when the person is looking directly at the camera, as this makes facial recognition easier.
With advancements in technology, faces can now be detected even with slight variations in
posture relative to the camera.
 Face Analysis: Then the photo of the face is captured and analyzed. Most facial recognition relies
on 2D images rather than 3D because it is more convenient to match to the database. Facial
recognition software will analyze the distance between your eyes or the shape of your cheekbones.
 Image to Data Conversion: Now it is converted to a mathematical formula and these facial features become numbers. This
numerical code is known as a face print. The way every person has a unique fingerprint, in the same way, they have unique
face prints.
 Match Finding: Then the code is compared against a database of other face prints. This database has photos with
identification that can be compared. The technology then identifies a match for your exact features in the provided database.
It returns with the match and attached information such as name and address or it depends on the information saved in the
database of an individual.
 Implementations
 Steps:
 Import the necessary packages
 Load the known face images and make the face embedding of known image
 Launch the live camera
 Record the images from the live camera frame by frame
 Make the face detection using the face_recognition face_locations function
 Make the rectangle around the faces
 Make the face encoding for the faces captured by the camera
 If the faces match, label and display the person’s image; otherwise continue
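A condensed sketch that follows these steps using the face_recognition and OpenCV libraries; the image path and the person's name are placeholders:

```python
import cv2
import face_recognition

# Load a known face image and compute its embedding
known_image = face_recognition.load_image_file("known_person.jpg")   # placeholder path
known_encoding = face_recognition.face_encodings(known_image)[0]

video = cv2.VideoCapture(0)               # launch the live camera
while True:
    ret, frame = video.read()             # record frames one by one
    if not ret:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    locations = face_recognition.face_locations(rgb)            # detect faces in the frame
    encodings = face_recognition.face_encodings(rgb, locations) # encode the detected faces
    for (top, right, bottom, left), enc in zip(locations, encodings):
        match = face_recognition.compare_faces([known_encoding], enc)[0]
        label = "known_person" if match else "unknown"
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)   # box around the face
        cv2.putText(frame, label, (left, top - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)
    cv2.imshow("Face recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
video.release()
cv2.destroyAllWindows()
```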
