SIPP - Avoid Video

How to prevent cheating in a face recognition application

Once more: I have a Flutter application that identifies faces using a face recognition plugin.

The method I'm using is face recognition plus a wink check. The application was working fine, until some users started faking it with a local video file recorded beforehand. So the user is not doing it themselves; they ask another person for help and play that pre-recorded local video to fool my application. How do I avoid that, and how can my application detect that this is a recorded video, not a real face?

To prevent users from tricking your Flutter face recognition application using pre-recorded
video files, you'll need to implement techniques to distinguish between live video and pre-
recorded content. Here are some approaches to achieve this:

1. Liveness Detection:

 Blink Detection: You already have wink detection. Enhance this by adding more
complex liveness detection techniques like blinking patterns, head movements, or
asking the user to perform specific actions (e.g., smile, nod, or turn their head). You
can request random actions that would be difficult for a pre-recorded video to
replicate on demand.
 Face Distortion Detection: Analyze subtle changes in lighting or reflections on the
face, which may not occur naturally in a pre-recorded video.

2. Anti-Spoofing Techniques:

 3D Depth Sensing (if available): If your device has depth sensors (such as those
found in some modern smartphones), use them to detect the depth of the face to
ensure it's a real 3D object rather than a 2D video.
 Image Texture Analysis: Use machine learning models to detect texture differences
between a live face and a video image. Pre-recorded video frames may have artifacts
or unnatural patterns that can be detected through analysis.
 Motion Detection: Use motion patterns of the face in real-time. For example, ask the
user to move their head in random directions, which would be hard to replicate with a
pre-recorded video.

3. Challenge-Response Mechanism:

 Randomized Challenges: Implement a challenge-response mechanism where your
app asks the user to perform random actions (e.g., blink twice, move left or right,
smile). The app will verify the user's actions in real-time. Pre-recorded videos would
struggle to meet the unpredictability of these requests (a minimal Dart sketch follows
this list).
 Multi-Angle Face Recognition: Require the user to move their head at different
angles. Pre-recorded videos often show faces from a single angle, so a multi-angle
approach can help detect attempts to deceive the app.
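
Below is a minimal Dart sketch of the randomized-challenge idea: pick a short, random sequence of actions, prompt for them one at a time, and reject the session if any requested action is not observed within a time limit. The Challenge names and the waitForAction callback are illustrative placeholders, not part of any particular plugin.

dart
import 'dart:async';
import 'dart:math';

enum Challenge { blink, smile, turnLeft, turnRight, nod }

// Pick a random, unpredictable sequence of actions for this session.
List<Challenge> buildChallenges({int count = 3}) {
  final rng = Random.secure();
  return List.generate(
      count, (_) => Challenge.values[rng.nextInt(Challenge.values.length)]);
}

// Run the challenges one by one; waitForAction is a placeholder that should
// complete with true when your face-detection pipeline observes the action.
Future<bool> runChallenges(
    Future<bool> Function(Challenge action, Duration timeout) waitForAction) async {
  for (final challenge in buildChallenges()) {
    print('Please perform: $challenge');
    final ok = await waitForAction(challenge, const Duration(seconds: 5));
    if (!ok) return false; // missed or wrong action: treat as a spoofing attempt
  }
  return true; // all randomly chosen actions were performed in time
}

Because the sequence changes on every session, a fixed pre-recorded clip cannot anticipate it.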

4. Analyze Frame Consistency:

 Frame Rate & Timing Analysis: Analyze frame consistency and timing to check if
the video has irregular patterns that are uncommon in real-time streaming (e.g., slight
jitter or uneven frame intervals could suggest playback).
 Check for Looping: Use algorithms to detect if the same sequences of frames appear
multiple times (i.e., looping video).
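
As a rough illustration of the looping check above, the Dart sketch below fingerprints each frame's luminance (Y) plane and flags frames whose fingerprint has already been seen recently; a live camera feed essentially never repeats a frame byte-for-byte. Note this only catches injected video files whose decoded frames recur exactly; footage re-captured from a screen picks up sensor noise, so treat it as one heuristic among several. The sampling step and window size are illustrative values.

dart
import 'dart:collection';
import 'package:camera/camera.dart';

final Queue<int> _recentFingerprints = Queue<int>();
const int _window = 150; // roughly the last few seconds of frames

// Cheap fingerprint: fold a sample of the Y-plane bytes into one hash value.
int _fingerprint(CameraImage image) {
  int hash = 17;
  final bytes = image.planes[0].bytes;
  for (int i = 0; i < bytes.length; i += 64) { // sample every 64th byte
    hash = (hash * 31 + bytes[i]) & 0x7fffffff;
  }
  return hash;
}

bool looksLikeLoopedPlayback(CameraImage image) {
  final fp = _fingerprint(image);
  final repeated = _recentFingerprints.contains(fp);
  _recentFingerprints.addLast(fp);
  if (_recentFingerprints.length > _window) {
    _recentFingerprints.removeFirst();
  }
  return repeated;
}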

5. Device-Level Authentication:

 Trusted Execution Environment (TEE): Some devices have secure environments
where camera input can be verified to be from the actual camera and not from a fake
source. For example, integrating with hardware security features like Secure Enclave
(on iPhones) or other trusted platforms may help verify the video is coming directly
from the camera.

Combining Multiple Methods:

For better security, combine multiple approaches such as liveness detection, motion
challenges, and device-level security. This will make it significantly harder for users to
bypass the checks using pre-recorded video.

Let me know if you'd like help implementing any of these techniques in your Flutter app!

Face Distortion Detection: "Analyze subtle changes in lighting or reflections on the face, which may not occur naturally in a pre-recorded video." Can you tell me how to achieve that using Flutter?

To achieve Face Distortion Detection in Flutter, specifically by analyzing subtle changes in
lighting or reflections on the face, you would need to process the video frames in real-time
and look for natural variances that wouldn't typically appear in pre-recorded video.

Here’s a general approach to implementing it in Flutter:

Approach:

1. Capture Video Frames in Real-Time
o Use the camera plugin (like camera in Flutter) to capture video frames in real-
time from the device's front camera.
2. Analyze Lighting and Reflection Changes
o Process each frame to detect lighting changes on the face.
o Subtle lighting variations occur naturally due to environmental light or the
user's movement, and analyzing these can help distinguish between real faces
and pre-recorded video.
3. Frame-to-Frame Comparison
o Compare consecutive frames to detect dynamic changes in light and
reflections. A real face will show continuous slight changes, while pre-
recorded videos will likely have consistent lighting patterns.

Example Implementation:

Step 1: Setup Camera in Flutter

Use the camera package to capture video frames:


yaml
dependencies:
  camera: ^0.9.4+5

dart
import 'package:camera/camera.dart';
import 'package:flutter/material.dart';

class FaceDetectionApp extends StatefulWidget {
  @override
  _FaceDetectionAppState createState() => _FaceDetectionAppState();
}

class _FaceDetectionAppState extends State<FaceDetectionApp> {
  CameraController? controller;

  @override
  void initState() {
    super.initState();
    initializeCamera();
  }

  Future<void> initializeCamera() async {
    final cameras = await availableCameras();
    // Pick the front camera (fall back to the first available camera)
    final camera = cameras.firstWhere(
      (c) => c.lensDirection == CameraLensDirection.front,
      orElse: () => cameras.first,
    );

    controller = CameraController(
      camera,
      ResolutionPreset.medium,
      enableAudio: false,
    );

    await controller!.initialize();
    controller!.startImageStream((CameraImage image) {
      // Here you can process the video frames for lighting analysis
      analyzeLighting(image);
    });
    setState(() {}); // rebuild once the camera preview is ready
  }

  void analyzeLighting(CameraImage image) {
    // Placeholder for your analysis function
    // Process the image to detect lighting changes or reflections
  }

  @override
  void dispose() {
    controller?.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: controller?.value.isInitialized == true
          ? CameraPreview(controller!)
          : Center(child: CircularProgressIndicator()),
    );
  }
}
Step 2: Analyzing Lighting/Reflection

Once you have the video frames, you need to convert the image data into something that can
be analyzed for lighting/reflection changes. One common way to do this is by comparing the
brightness (luminance) of each pixel across consecutive frames.

You can analyze pixel data to see how light intensity changes across different regions of the
face:

 Convert the camera image into grayscale.
 Calculate the average luminance for different sections of the face (you might already
have facial landmarks detected using a face detection plugin).
 Compare the luminance values across frames to detect subtle lighting changes.

Example of Luminance Calculation:

dart
double previousLuminance = 0; // average luminance of the previous frame

void analyzeLighting(CameraImage image) {
  // Camera frames are usually YUV420; plane 0 is the Y (luminance) channel
  final yPlane = image.planes[0];

  // Process pixel data to compute the average luminance
  int totalLuminance = 0;
  for (int i = 0; i < yPlane.bytes.length; i++) {
    totalLuminance += yPlane.bytes[i];
  }
  double avgLuminance = totalLuminance / yPlane.bytes.length;

  // Compare luminance with the previous frame
  if (detectSignificantChangeInLuminance(avgLuminance)) {
    print('Possible real-time interaction detected');
  } else {
    print('Potential pre-recorded video');
  }
  previousLuminance = avgLuminance;
}

bool detectSignificantChangeInLuminance(double currentLuminance) {
  // Compare with the previous luminance value, e.g. flag a change of more than 10%
  if (previousLuminance == 0) return true; // first frame, nothing to compare yet
  return (previousLuminance - currentLuminance).abs() > (previousLuminance * 0.1);
}

Step 3: Detecting Real-Time Changes

You can fine-tune the threshold to decide when the lighting changes are significant enough to
assume the face is real. If you see constant lighting with minimal changes, it's likely that the
face is from a pre-recorded video.

You could also look into additional reflections by analyzing specific regions (like the
forehead or nose), where lighting might reflect more noticeably, and check for changes in
these areas.
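
If you want to focus on specific regions like that, here is a minimal Dart sketch, assuming a face-detection plugin already gives you a bounding box in image coordinates: it averages the Y-plane (luminance) values inside a rectangle, taking the plane's row stride into account. The function name and parameters are illustrative.

dart
import 'package:camera/camera.dart';

// Average luminance inside a rectangular region of a YUV420 frame.
double regionLuminance(CameraImage image, int left, int top, int width, int height) {
  final yPlane = image.planes[0];
  final rowStride = yPlane.bytesPerRow; // a row may be padded beyond the image width
  int total = 0;
  int count = 0;
  for (int y = top; y < top + height && y < image.height; y++) {
    for (int x = left; x < left + width && x < image.width; x++) {
      total += yPlane.bytes[y * rowStride + x];
      count++;
    }
  }
  return count == 0 ? 0 : total / count;
}

Comparing this value across frames for the forehead or nose region gives you a per-region version of the average-luminance check above.
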
Tools and Libraries:

 OpenCV: You can consider using OpenCV for Flutter (through platforms like
opencv in Dart via FFI) to implement more sophisticated image processing, such as
edge detection, reflection analysis, and more.
 ML Models: Use TensorFlow Lite or another ML model to train on the differences
between live and pre-recorded videos, though this requires more complex training and
dataset generation.

This is a relatively basic approach to analyzing lighting changes. For more accuracy, you may
want to combine this with other detection methods (like liveness detection with facial
landmarks or 3D analysis).

Yes, I'd prefer OpenCV... can you tell me the detailed steps to achieve that? I have no knowledge
about it.

To use OpenCV for Face Distortion Detection in Flutter, you’ll need to leverage image
processing techniques to detect lighting and reflection changes on the face. Below is a
detailed, step-by-step breakdown of how you can achieve this using OpenCV.

Steps to Implement Face Distortion Detection with OpenCV in Flutter

1. Set Up OpenCV in Flutter

Since OpenCV is a native C++ library, you’ll need to integrate it into your Flutter app using
platform channels. This allows you to call native OpenCV code from Flutter.

Steps:

 Install OpenCV in your native project: First, you need to set up OpenCV in your
Android (Java/Kotlin) and iOS (Objective-C/Swift) native projects.
 Use Platform Channels: You’ll use Flutter’s platform channels to communicate
between Flutter (Dart) and the native code (where OpenCV is implemented).
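
As a minimal sketch of the Dart side of that wiring (the channel and method names are illustrative and must match whatever you register in your Android/iOS code):

dart
import 'package:flutter/services.dart';

// The native side must register a handler for the same channel name and
// implement a 'processFrame' method; both names here are placeholders.
const MethodChannel platform = MethodChannel('face_liveness/opencv');

// Example call: send a frame payload to the native handler and read back a boolean.
Future<bool> processFrameNative(Map<String, dynamic> payload) async {
  final result = await platform.invokeMethod<bool>('processFrame', payload);
  return result ?? false;
}

This platform object is what the analyzeFrame example further below uses to hand frames to the native OpenCV code.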

2. Capture Frames with the Camera Plugin

Use Flutter’s camera plugin to capture video frames in real-time. You can pass these frames
to the native layer for OpenCV processing.

dart
cameraController.startImageStream((CameraImage image) {
  // Pass image data to platform channels for OpenCV processing
});

3. Convert Camera Frames for OpenCV

Convert the captured frames into a format that OpenCV can process. Camera frames are
usually in the YUV format, so you'll need to convert them to RGB or grayscale (which
OpenCV typically uses).
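
As a small sketch of that conversion in Dart: for YUV420 frames the first plane (Y) is already an 8-bit grayscale image, so you can pass its bytes plus the frame dimensions to the native layer without a full RGB conversion. The map keys below are illustrative; note that if bytesPerRow is larger than the width, the rows contain padding that the native side must account for.

dart
import 'package:camera/camera.dart';

Map<String, dynamic> grayscalePayload(CameraImage image) {
  final yPlane = image.planes[0]; // luminance channel of a YUV420 frame
  return <String, dynamic>{
    'imageData': yPlane.bytes,
    'width': image.width,
    'height': image.height,
    'rowStride': yPlane.bytesPerRow,
  };
}
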
4. Implement Native OpenCV Code

Once the frames are in the native layer, you can use OpenCV to process them.

 Lighting Detection: Analyze the brightness of the face in different regions by
calculating the intensity of light in those areas.
 Reflection Detection: Detect regions of the face where reflections might occur, such
as the forehead, nose, or cheeks. Look for dynamic changes in those regions as the
person moves.

Here's an outline of what you'll do in the native code:

 Grayscale Conversion: Convert the image to grayscale to focus on luminance
(brightness) values.

cpp
cv::cvtColor(inputImage, grayscaleImage, cv::COLOR_BGR2GRAY);

 Region Analysis: Segment the face into different regions (e.g., forehead, eyes, nose,
etc.) and compute the average brightness in these regions.

cpp
cv::Rect forehead = cv::Rect(x, y, width, height);
cv::Mat foreheadRegion = grayscaleImage(forehead);
double averageBrightness = cv::mean(foreheadRegion)[0];

 Frame Comparison: Compare the brightness of these regions across frames. If there
are no significant changes over time, it's likely a pre-recorded video.

cpp
double brightnessDifference = abs(previousBrightness - currentBrightness);
if (brightnessDifference < threshold) {
    // Potential pre-recorded video
}

5. Return Results to Flutter

After processing the frames in native code, return the results (e.g., whether the frame passed
the lighting/reflection test) back to Flutter using platform channels.

6. Challenge-Response Mechanism

You can combine this with a challenge-response mechanism to detect liveness. For example,
ask the user to move their head or perform an action (like winking), and analyze how lighting
and reflections change across frames.

Complex Steps Breakdown


1. Integrating OpenCV with Flutter (Native Layer):
o Android (Java/Kotlin): Download the OpenCV SDK and configure your
build.gradle to include OpenCV.
o iOS (Objective-C/Swift): Add OpenCV as a dependency in your Xcode
project.
o Platform Channels: Define methods to call the native OpenCV code from
Dart.
2. Processing Video Frames:
o Frame Conversion: Convert camera frames (YUV) to a suitable format (RGB
or grayscale) for OpenCV.
o Lighting Detection: Calculate the brightness of different regions of the face
and analyze changes over time.
o Reflection Detection: Focus on reflective areas of the face, such as the nose
or forehead, and track how reflections shift based on movement.
3. Compare Frames for Liveness:
o Implement a mechanism to analyze consecutive frames to see if there are
continuous subtle lighting changes, which are typical for live interactions.
4. Return Results and Take Action:
o Use platform channels to send results (e.g., whether it’s likely a real face or a
pre-recorded video) back to the Flutter app, and take appropriate action (e.g.,
reject if it’s a pre-recorded video).

Example Code Structure

1. Flutter Dart Code:

dart
import 'package:camera/camera.dart';
import 'package:flutter/services.dart';

// 'platform' is the MethodChannel declared in step 1; its name must match the
// handler registered on the native side.
Future<void> analyzeFrame(CameraImage image) async {
  // Send the luminance (Y) plane and its dimensions to the native OpenCV code
  final bool isReal = await platform.invokeMethod('processFrame', <String, dynamic>{
    'imageData': image.planes[0].bytes,
    'width': image.width,
    'height': image.height,
  });

  if (isReal) {
    print('Real face detected');
  } else {
    print('Pre-recorded video detected');
  }
}

2. Android Native Code (Java/Kotlin):

java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class FaceDetection {
    private double previousBrightness = -1;

    // imageData holds the raw grayscale (Y-plane) bytes sent from Flutter,
    // assumed to be width * height bytes with no row padding.
    public boolean processFrame(byte[] imageData, int width, int height) {
        Mat gray = new Mat(height, width, CvType.CV_8UC1);
        gray.put(0, 0, imageData);

        // Analyze brightness and compare it with the previous frame
        double averageBrightness = Core.mean(gray).val[0];
        boolean changedSignificantly = previousBrightness >= 0
                && Math.abs(averageBrightness - previousBrightness) > 5.0; // tune this threshold
        previousBrightness = averageBrightness;
        return changedSignificantly;
    }
}

Considerations:

 Performance: Processing video frames in real-time can be resource-intensive. Make
sure to optimize your OpenCV code for speed.
 Cross-Platform: Ensure you implement similar native code for both Android and
iOS.
 Testing: Test thoroughly in various lighting conditions to ensure accuracy.

By using OpenCV, you can create a more robust face detection system that can analyze
lighting and reflections in real-time and prevent pre-recorded video attacks. Let me know if
you'd like specific guidance on any of these steps!

Could you tell me how to do that with OpenCV on the server side, live? I mean sending live frames,
frame by frame, to the server to check whether this is a real face or not... any approach is acceptable.

To implement server-side liveness detection with OpenCV, you would send live frames from
the client (your Flutter app) to the server where OpenCV processes them to determine
whether the face is real or from a pre-recorded video.

Here’s an outline of how to achieve this:

Workflow Overview:

1. Client-Side (Flutter):
o Capture live video frames using the camera.
o Send the captured frames to the server over HTTP or WebSocket.
2. Server-Side (Python + OpenCV):
o Receive frames on the server.
o Use OpenCV to process the frames for liveness detection.
o Return a result indicating whether the face is real or a pre-recorded video.
3. Result Handling:
o The server responds with the detection result, and the Flutter app takes
appropriate actions based on the response (e.g., allow or block access).

Step-by-Step Implementation

1. Client-Side: Capture and Send Frames

On the client side (Flutter), you will use the camera plugin to capture video frames and send
them to the server for processing.

Flutter Code to Send Frames:


dart
import 'dart:convert';
import 'package:camera/camera.dart';
import 'package:flutter/material.dart';
import 'package:http/http.dart' as http;

class LiveFaceCheck extends StatefulWidget {
  @override
  _LiveFaceCheckState createState() => _LiveFaceCheckState();
}

class _LiveFaceCheckState extends State<LiveFaceCheck> {
  CameraController? _controller;

  @override
  void initState() {
    super.initState();
    initializeCamera();
  }

  Future<void> initializeCamera() async {
    final cameras = await availableCameras();
    _controller = CameraController(cameras[0], ResolutionPreset.medium);
    await _controller!.initialize();
    _controller!.startImageStream((CameraImage image) {
      // Convert the image to base64 and send it to the server
      sendFrameToServer(image);
    });
    setState(() {}); // rebuild once the camera preview is ready
  }

  Future<void> sendFrameToServer(CameraImage image) async {
    // Convert the CameraImage's Y (luminance) plane to bytes, then base64.
    // Note: for the server-side cv2.imdecode example below to work, you would
    // send an encoded image (e.g. JPEG) rather than raw plane bytes.
    List<int> bytes = image.planes[0].bytes;
    String base64Image = base64Encode(bytes);

    // Send the frame to the server
    final response = await http.post(
      Uri.parse('http://your-server-url.com/verify-face'),
      headers: {"Content-Type": "application/json"},
      body: json.encode({
        "image": base64Image,
      }),
    );

    if (response.statusCode == 200) {
      final result = json.decode(response.body);
      if (result['isReal']) {
        print('Real face detected');
      } else {
        print('Fake face detected');
      }
    }
  }

  @override
  void dispose() {
    _controller?.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: _controller?.value.isInitialized == true
          ? CameraPreview(_controller!)
          : Center(child: CircularProgressIndicator()),
    );
  }
}

2. Server-Side: Processing Frames with OpenCV

For the server-side, you can use a Python-based server (e.g., Flask, FastAPI, or Django) to
receive the frames and process them using OpenCV.

Python Server with OpenCV:

bash
pip install flask opencv-python numpy

Flask Server Example:

python
from flask import Flask, request, jsonify
import cv2
import numpy as np
import base64

app = Flask(__name__)

# Brightness of the previous frame, used for a simple frame-to-frame comparison
previous_brightness = None

def analyze_frame(image_data):
    # Decode the image data (the client must send an encoded image, e.g. JPEG)
    np_arr = np.frombuffer(image_data, np.uint8)
    frame = cv2.imdecode(np_arr, cv2.IMREAD_COLOR)
    if frame is None:
        return False  # could not decode the frame

    # Perform liveness detection here
    # Example: detect the face, analyze lighting, and detect movements

    # Convert the frame to grayscale for analysis
    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Perform face detection (optional)
    # face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    # faces = face_cascade.detectMultiScale(gray_frame, 1.1, 4)

    # Analyze lighting changes or reflections
    average_brightness = np.mean(gray_frame)

    # Check for significant brightness changes over frames (basic example)
    brightness_change = detect_brightness_change(average_brightness)

    # Example of a basic liveness check based on brightness change
    if brightness_change:
        return True   # Real face detected
    return False      # Fake face detected

def detect_brightness_change(current_brightness):
    # Compare the current frame's brightness to the previous frame's brightness.
    # For simplicity this is a threshold-based check; in practice you would track
    # brightness over a longer window of frames.
    global previous_brightness
    THRESHOLD = 10.0
    if previous_brightness is None:
        previous_brightness = current_brightness
        return False
    brightness_difference = abs(current_brightness - previous_brightness)
    previous_brightness = current_brightness
    return brightness_difference > THRESHOLD

@app.route('/verify-face', methods=['POST'])
def verify_face():
    data = request.json
    image_data = base64.b64decode(data['image'])

    is_real = analyze_frame(image_data)
    return jsonify({"isReal": is_real})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

3. Liveness Detection (Advanced OpenCV Techniques)

You can enhance the liveness detection further by using advanced techniques such as:

 Blink Detection: Detect blinking over multiple frames. A lack of blinking might
indicate a pre-recorded video.
 Motion Detection: Track subtle movements of the face (e.g., slight head movement).
Pre-recorded videos often have static or repetitive movements.
 Lighting Changes: Track lighting and reflection changes on different parts of the
face across consecutive frames.

Here’s an example of more advanced processing:

python
def analyze_frame(image_data):
    frame = cv2.imdecode(np.frombuffer(image_data, np.uint8), cv2.IMREAD_COLOR)
    if frame is None:
        return False  # could not decode the frame

    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect faces in the frame
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades +
                                         'haarcascade_frontalface_default.xml')
    faces = face_cascade.detectMultiScale(gray_frame, 1.1, 4)

    if len(faces) == 0:
        return False  # No face detected, possibly a fake video

    for (x, y, w, h) in faces:
        # Extract the face region and analyze it
        face_region = gray_frame[y:y+h, x:x+w]

        # Analyze brightness in the face region
        average_brightness = np.mean(face_region)
        # Analyze dynamic reflection changes, blinking, etc.
        if detect_brightness_change(average_brightness):
            return True  # Real face detected

    return False  # Fake or pre-recorded face detected

4. WebSocket for Real-Time Analysis (Optional)

For real-time analysis, you could switch to WebSocket for continuous frame streaming
between the client and server instead of sending individual frames over HTTP.

Using WebSocket for Real-Time Streaming:

 Client: Continuously send frames to the server via WebSocket.
 Server: Process each incoming frame in real-time and respond with the liveness
detection result.

Considerations

 Performance: Sending multiple frames over the network can be resource-intensive,
so optimize your code to reduce bandwidth (e.g., compressing frames before sending).
 Latency: Keep in mind that network latency could affect real-time performance,
especially with large video frames.
 Security: Ensure your server and communication are secure, especially if you're
transmitting sensitive biometric data.

This approach provides a scalable solution where the computational load of liveness detection
is handled server-side, while the client (Flutter) is responsible for capturing and sending
frames. Let me know if you need help setting up any part of this process!

Yes, tell me how to achieve that with WebSockets. I'd prefer to use this... a detailed tutorial, please.

To achieve real-time liveness detection using WebSockets, you can implement a system
where the client (Flutter app) continuously streams video frames to the server, and the server
processes these frames in real-time using OpenCV. The server then sends back a response
indicating whether the face is real or a pre-recorded video.

Here’s a comprehensive guide on how to achieve this:

Steps Overview:

1. Client-Side (Flutter):
o Set up the camera to capture frames.
o Stream frames to the server using WebSockets.
2. Server-Side (Python + OpenCV):
o Set up a WebSocket server to receive the video frames.
o Process each frame using OpenCV for liveness detection.
o Send the detection results back to the client in real-time.
3. Real-Time Interaction:
o Continuously stream frames and receive responses.
Step-by-Step Guide

1. Set Up WebSocket Communication

1.1. Client-Side (Flutter) - Using WebSocket

You will use the camera plugin to capture video frames and web_socket_channel for
WebSocket communication.

Add the dependencies to pubspec.yaml:

yaml
dependencies:
  flutter:
    sdk: flutter
  camera: ^0.10.0+4
  web_socket_channel: ^2.1.0

Flutter WebSocket Client Example:

dart
import 'dart:convert';
import 'dart:typed_data';
import 'package:flutter/material.dart';
import 'package:camera/camera.dart';
import 'package:web_socket_channel/io.dart';

class LiveFaceCheck extends StatefulWidget {
  @override
  _LiveFaceCheckState createState() => _LiveFaceCheckState();
}

class _LiveFaceCheckState extends State<LiveFaceCheck> {
  CameraController? _controller;
  IOWebSocketChannel? _channel;

  @override
  void initState() {
    super.initState();
    initializeCamera();
    connectWebSocket();
  }

  Future<void> initializeCamera() async {
    final cameras = await availableCameras();
    _controller = CameraController(cameras[0], ResolutionPreset.medium);
    await _controller!.initialize();

    _controller!.startImageStream((CameraImage image) {
      sendFrameToServer(image);
    });
    setState(() {}); // rebuild once the camera preview is ready
  }

  void connectWebSocket() {
    // Connect to the WebSocket server
    _channel = IOWebSocketChannel.connect('ws://your-server-url.com:5000');
  }

  void sendFrameToServer(CameraImage image) {
    // Convert the CameraImage's Y (luminance) plane to raw bytes.
    // Note: for the server-side cv2.imdecode example, send an encoded image
    // (e.g. JPEG) rather than raw plane bytes.
    List<int> bytes = image.planes[0].bytes;

    // Convert to a Base64 string (you can optimize by sending raw bytes instead)
    String base64Image = base64Encode(Uint8List.fromList(bytes));

    // Send the frame via WebSocket
    _channel?.sink.add(json.encode({
      "type": "frame",
      "data": base64Image,
    }));
  }

  @override
  void dispose() {
    _controller?.dispose();
    _channel?.sink.close();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: _controller?.value.isInitialized == true
          ? CameraPreview(_controller!)
          : Center(child: CircularProgressIndicator()),
    );
  }
}

1.2. Server-Side (Python) - WebSocket and OpenCV

To handle WebSocket connections on the server side, you can use the websockets library.
The server will receive frames from the client, decode them, and process them using
OpenCV.

Install necessary libraries:

bash
pip install websockets opencv-python numpy

Server Example Using WebSocket and OpenCV:

python
import asyncio
import base64
import json

import cv2
import numpy as np
import websockets

# Example function to analyze frames
def analyze_frame(image_data):
    # Decode the image from raw bytes (the client should send an encoded image,
    # e.g. JPEG, for imdecode to succeed)
    np_arr = np.frombuffer(image_data, np.uint8)
    frame = cv2.imdecode(np_arr, cv2.IMREAD_COLOR)
    if frame is None:
        return False  # could not decode the frame

    # Process the frame with OpenCV (example: check brightness)
    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    average_brightness = np.mean(gray_frame)

    # Basic liveness detection based on brightness (example)
    if average_brightness < 80:  # Arbitrary threshold
        return False  # Could be fake or bad lighting
    return True  # Real face detected

# WebSocket handler for receiving frames
async def handle_connection(websocket, path):
    try:
        async for message in websocket:
            data = json.loads(message)

            if data["type"] == "frame":
                # Decode the base64 image
                image_data = base64.b64decode(data["data"])

                # Analyze the frame for liveness detection
                is_real = analyze_frame(image_data)

                # Send the result back to the client
                await websocket.send(json.dumps({
                    "type": "result",
                    "isReal": is_real
                }))
    except websockets.exceptions.ConnectionClosed as e:
        print("Client disconnected", e)

# WebSocket server entry point
async def main():
    async with websockets.serve(handle_connection, "0.0.0.0", 5000):
        await asyncio.Future()  # Run forever

if __name__ == "__main__":
    asyncio.run(main())

1.3. Test the WebSocket Connection

You can start the Python server and run the Flutter client. Ensure that the client sends frames,
and the server processes them with OpenCV in real-time.

2. Processing Frames in Real-Time

2.1. OpenCV Processing Techniques

In the analyze_frame function on the server-side, you will implement OpenCV techniques
for liveness detection:
 Lighting/Reflection Analysis: Analyze brightness changes in different regions of the
face.
 Blink Detection: Detect eye blinking by analyzing consecutive frames for rapid eye
closure and opening.
 Head Movement Detection: Track head movements to ensure the face is dynamic.

Example of Blink Detection:

python
import cv2
import dlib
import numpy as np

# Pre-trained facial landmark detector
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_blink(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)

    for face in faces:
        landmarks = predictor(gray, face)
        left_eye_ratio = calculate_eye_ratio(landmarks, [36, 37, 38, 39, 40, 41])
        right_eye_ratio = calculate_eye_ratio(landmarks, [42, 43, 44, 45, 46, 47])

        # If the eye aspect ratio drops below a threshold, a blink is detected
        if left_eye_ratio < 0.2 and right_eye_ratio < 0.2:
            return True  # Blink detected
    return False

def calculate_eye_ratio(landmarks, eye_points):
    # Eye aspect ratio: (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|)
    def point(i):
        p = landmarks.part(eye_points[i])
        return np.array([p.x, p.y])

    vertical_1 = np.linalg.norm(point(1) - point(5))
    vertical_2 = np.linalg.norm(point(2) - point(4))
    horizontal = np.linalg.norm(point(0) - point(3))
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

2.2. Performance Considerations

 Frame Rate: Ensure the frame rate is not too high (e.g., 10-15 FPS is sufficient) to
reduce server load.
 Data Compression: Compress images before sending them to the server to save
bandwidth.
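
As a minimal Dart sketch of the frame-rate point, assuming frames arrive through startImageStream: send at most about 10 frames per second instead of every frame. The 100 ms interval is an illustrative value you would tune.

dart
import 'package:camera/camera.dart';

DateTime _lastSent = DateTime.fromMillisecondsSinceEpoch(0);

void onFrame(CameraImage image) {
  final now = DateTime.now();
  if (now.difference(_lastSent) < const Duration(milliseconds: 100)) {
    return; // skip this frame to cap the upload rate (~10 FPS)
  }
  _lastSent = now;
  sendFrameToServer(image); // reuse the existing WebSocket sender
}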

3. Handling Results in Real-Time

3.1. Client-Side Response Handling

When the server responds with the liveness detection result, you can handle it on the client-
side to take action based on the result.

Example (Flutter):

dart
_channel!.stream.listen((message) {
  final result = json.decode(message);
  if (result['type'] == 'result') {
    if (result['isReal']) {
      print('Real face detected');
    } else {
      print('Fake face detected');
    }
  }
});

3.2. Continuous Frame Streaming

The Flutter client will continuously send frames, and the server will respond in real-time.
You can use this setup to maintain a constant check for liveness.
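
To make that constant check robust, one option (a sketch, not from the original text) is to require several consecutive "real" verdicts before unlocking, so a single noisy frame cannot pass or fail the session on its own. The threshold of 5 is an illustrative value.

dart
int _consecutiveReal = 0;
const int _requiredStreak = 5;

void handleResult(Map<String, dynamic> result) {
  if (result['type'] != 'result') return;
  if (result['isReal'] == true) {
    _consecutiveReal++;
    if (_consecutiveReal >= _requiredStreak) {
      print('Liveness confirmed, unlocking');
    }
  } else {
    _consecutiveReal = 0; // any failed frame resets the streak
  }
}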

Complete Workflow Recap

1. Client (Flutter):
o Captures video frames using the camera.
o Streams each frame to the server via WebSocket.
2. Server (Python + OpenCV):
o Receives each frame, decodes it, and processes it with OpenCV for liveness
detection.
o Detects liveness by checking for blinking, head movements, and lighting
variations.
o Sends the liveness result back to the client.
3. Client Response:
o Based on the server’s response, the client either allows or denies access.

This approach provides a robust and scalable real-time liveness detection system using
WebSockets and OpenCV. The system can be optimized further for performance and
security. Let me know if you need help with any specific part of this setup!
