
Server Process Redundancy and Race Conditions v1.0

The document discusses race conditions that can occur in server processes due to redundant requests and stale data. It provides examples of how normal requests are processed sequentially but concurrent requests could result in incorrect data processing. Potential solutions proposed include throttling requests, adding where clauses to database updates to check for stale data, locking tables during updates, and queueing processes to ensure sequential execution.

Uploaded by Rudi Chandra II

Server Process Redundancy and Race Conditions
WITH POSSIBLE SOLUTIONS
Background
Most servers are multi-process/multi-threaded so they can serve multiple concurrent users.
Many large systems are complicated and need a lot of time to process a single request.
One request usually needs multiple queries to the database.
Processes often treat data held in memory (RAM) as their primary, trusted data, even though it may be stale.
The backend (PHP, Java, etc.) calculates and processes the data, then stores the result in the database afterwards.
More than one request at a time can result in a Race Condition, since multiple processes use the same (stale) data and process it simultaneously.
Processing data on the backend side can therefore produce wrong calculations, because the data on the backend is not the latest data.
public function redeemVoucher(Request $request) {
    $voucher = Voucher::find($request->get("voucher_id"));
    $user = Auth::user();

    // NOTE: the point balance is read here and trusted for the rest of
    // the request. A concurrent request reading at the same moment sees
    // the same (soon-to-be-stale) value — this is the race window.
    $current_user_point = $user->point;
    $points_needed = $voucher->price_point;

    // Check if the current user's points cover the redeem cost
    if ($current_user_point >= $points_needed) {
        // Decrease the user's points
        $user_point_after = $current_user_point - $points_needed;

        $user->point = $user_point_after;
        $user->save();

        // Log the user's voucher redeem
        $user_voucher_log = new UserVoucherLog();
        $user_voucher_log->voucher_id = $voucher->id;
        $user_voucher_log->user_id = $user->id;
        $user_voucher_log->save();

        // Log the wallet transaction of that user
        $user_wallet_log = new UserWalletLog();
        $user_wallet_log->user_id = $user->id;
        $user_wallet_log->start_point = $current_user_point;
        $user_wallet_log->end_point = $user_point_after;
        $user_wallet_log->change_point = $points_needed * -1;
        $user_wallet_log->save();

        return ApiResponse::success($voucher, "Voucher Redeem Success");
    } else {
        return ApiResponse::error(__("Insufficient Point."));
    }
}
Illustration
(Normal Case)
Illustration
(Race Condition Case)
Example Case
(Normal)

Request 1:
- User has 1.000.000 credit
- User spends 500.000 credit
- Client sends request to server
- Server gets user credit from DB
- DB returns 1.000.000 as user credit
- Server checks if credit >= 500.000 (true)
- Server calculates the remaining credit (1.000.000 – 500.000 = 500.000)
- Server updates the user's credit to 500.000
- Server returns success message to client

Request 2 (after Request 1 is done):
- User has 500.000 credit
- User spends 500.000 credit
- Client sends request to server
- Server gets user credit from DB
- DB returns 500.000 as user credit
- Server checks if credit >= 500.000 (true)
- Server calculates the remaining credit (500.000 – 500.000 = 0)
- Server updates the user's credit to 0
- Server returns success message to client
Example Case
(Race Condition)

Request 1:
- User has 1.000.000 credit
- User spends 500.000 credit
- Client sends request to server
- Server gets user credit from DB
- DB returns 1.000.000 as user credit
- Server checks if credit >= 500.000 (true)
- Server calculates the remaining credit (1.000.000 – 500.000 = 500.000)
- Server updates the user's credit to 500.000
- Server returns success message to client

Request 2 (concurrent with Request 1):
- User has 1.000.000 credit
- User spends 500.000 credit
- Client sends request to server
- Server gets user credit from DB
- DB returns 1.000.000 as user credit
- Server checks if credit >= 500.000 (true)
- Server calculates the remaining credit (1.000.000 – 500.000 = 500.000)
- Server updates the user's credit to 500.000
- Server returns success message to client

Result: the user spent 500.000 credit twice, but the stored balance ends at 500.000 instead of 0, because both requests calculated from the same stale 1.000.000.
The Core Problem
Since many projects use the server engine (PHP, Java, etc.) to cache data and then run all the necessary logic on the server side (in the programming language), concurrent requests cause the data held by each process to fall out of sync with the database.
The more complicated the system/process, the more likely the data will fall out of sync. Some processes take a long time from retrieving the data, through processing it, to storing the updated version; any request that arrives during that window can leave the data out of sync.
Possible Solutions
1. Throttle requests
2. Update … Where …
3. Lock Table (or row)
4. Queue Process
Throttle Requests
Throttle Requests (cont’d)
In general, throttling is used to block multiple requests from one IP address within a short period of time (e.g. to stop bots or data crawlers).
Throttling can be implemented on multiple layers (network layer, application layer, etc.).
Since the problem is concurrent requests, we can use throttling to block the concurrent requests and let only one request pass through and be processed.
If we allow only 1 request every x seconds, we mitigate the Race Condition, since every request except one is blocked.
Throttling will not completely prevent Race Conditions, but it is the easiest and fastest solution to implement.
Laravel Throttle
https://laravel.com/docs/7.x/middleware#assigning-middleware-to-routes
We can use the Laravel throttle middleware to throttle requests. We can specify the hash key in the middleware to block requests based on whatever criterion we want.
E.g. using the session token/user ID as the hash key, so every user/session can only hit the endpoint once per x seconds.
The throttle middleware uses the configured cache driver to store the last access time. A fast driver such as an in-memory store or Redis is strongly advised.
Laravel Throttle (cont’d)
$app->group(['middleware' => 'throttle_session_token:1,0.5'], function () use ($app) {
$app->post('transactionDigitalProduct/confirm',
"TransactionDigitalProductController@createTransactionPayment");
});

This throttles requests to transactionDigitalProduct/confirm so that it accepts only 1 request every 0.5 minute (30 seconds).
Since the Laravel throttle is a middleware, we can choose the controllers/endpoints the throttle applies to by adding it to the route group.
We can also create our own throttle middleware by extending the base throttle middleware and attaching it in bootstrap/app.php.
Laravel Throttle (cont’d)
class RateLimitSessionTokenMiddleware extends \Illuminate\Routing\Middleware\ThrottleRequests {

    ...

    protected function resolveRequestSignature($request)
    {
        if ($request->hasHeader('authorization-bearer')) {
            return sha1($request->header('authorization-bearer'));
        } else {
            return sha1($request->ip());
        }
    }
}

We can specify whatever hash key or keyword we want. In this case, we use the session token (from the authorization-bearer header) as the key that determines whether to accept or block the request.
We can also separate each endpoint by modifying the resolveRequestSignature() function.
Update … Where …
Update … Where … (cont’d)
This case occurs when we fetch a row by some field and then need to process that row.
With multiple concurrent requests, every process gets the same row of that table and tries to process it, so the same row can be processed multiple times even though it should only be processed once.
This race condition can create multiple activity logs from one row and cause conflicts between multiple activities on one row.
In this case, each request fetches one PENDING transaction, but because the requests are concurrent, all of them get the same row (ID 101 in this example).
Because the row is still in PENDING status, each request runs the same logic on that same row. The result is a duplicate activity log for each request, even though the row should only be processed once.
Update … Where … (cont’d)
Update … Where … (cont’d)
By adding one additional WHERE condition when updating the status, we make sure the row is still in PENDING status at the moment we change it to PROCESSED.
This prevents multiple requests from processing the same row, because the database will only update the status to PROCESSED if the status is still PENDING.
When another request tries to update the status to PROCESSED but the current status is no longer PENDING, the update affects zero rows and the server rejects that request.
This solution works when a row in PENDING status already exists and we only need to change its status each time the row is processed. If no row exists before the process runs, this solution cannot be applied.
This solution is very effective and not too hard to implement, but we may have to change the logic or the sequence of our queries to implement it.
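The pattern above can be sketched with Laravel's query builder. This is a minimal sketch, not the deck's original code: the transactions table, its status column, and the processTransaction() helper are assumptions for illustration.

```php
// Atomically claim the row: the UPDATE only succeeds if the status
// is still PENDING, so only one concurrent request wins.
$claimed = DB::table('transactions')
    ->where('id', $transactionId)
    ->where('status', 'PENDING')   // the extra WHERE guard
    ->update(['status' => 'PROCESSED']);

if ($claimed === 1) {
    // We own the row now; safe to run the rest of the logic once.
    processTransaction($transactionId);
} else {
    // Another request already changed the status; reject this one.
    return ApiResponse::error(__("Transaction already processed."));
}
```

The key detail is that update() returns the number of affected rows, which tells us whether this request won the race.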
Lock Table (or row)
Again, this solution is only applicable when we need to update some field in some table in our database.
By locking the table (or row), we are guaranteed to read the latest data from the database and avoid out-of-sync data.
However, there is no single best practice for locking, and this solution is not effective for brand-new transactions, since a row that does not exist yet cannot be locked.
Needs further research.
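Row-level locking can be sketched with Laravel's lockForUpdate(), which issues a SELECT ... FOR UPDATE inside a transaction. This is a hedged sketch applied to the earlier voucher example, not code from the deck.

```php
DB::transaction(function () use ($pointsNeeded) {
    // SELECT ... FOR UPDATE: any other transaction that tries to read
    // this row with lockForUpdate() blocks here until we commit, so the
    // balance we read is guaranteed to still be current when we write.
    $user = User::where('id', Auth::id())->lockForUpdate()->first();

    if ($user->point < $pointsNeeded) {
        throw new \RuntimeException(__("Insufficient Point."));
    }

    $user->point -= $pointsNeeded;
    $user->save();
}); // the row lock is released when the transaction commits
```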
Queue Process
Queue Process (cont’d)
By queueing every potentially conflicting process, we can eliminate race conditions and conflicts entirely. Since a single queue worker runs the jobs, and it runs them sequentially, requests can never conflict.
To achieve this, every process that updates the contested fields must first be put into the queue, and the queue worker then does the jobs one by one.
Hence this solution fits best in early-stage development; it is much harder to retrofit as an optimization late in the development life cycle.
This solution also makes the process non-realtime: the client/front end has to show a waiting page, wait for the queue worker to do the job, and reload periodically.
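A minimal Laravel queued-job sketch of this idea follows. The RedeemVoucherJob class and its contents are assumptions for illustration; the point is that with a single worker, jobs dispatched to the same queue run strictly one at a time.

```php
class RedeemVoucherJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public $userId;
    public $voucherId;

    public function __construct($userId, $voucherId)
    {
        $this->userId = $userId;
        $this->voucherId = $voucherId;
    }

    public function handle()
    {
        // Runs inside the queue worker. With one worker per queue,
        // redeems are processed sequentially, so no two redeems can
        // ever read the same stale balance.
        $user = User::find($this->userId);
        $voucher = Voucher::find($this->voucherId);

        if ($user->point >= $voucher->price_point) {
            $user->point -= $voucher->price_point;
            $user->save();
        }
    }
}

// In the controller: enqueue the work instead of processing inline.
RedeemVoucherJob::dispatch(Auth::id(), $request->get('voucher_id'));
```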
Queue Process (cont’d)
Ranking of Solutions

Best practice:
1. Queue Process
2. Update … Where …
3. Locking Table / Row
4. Throttle

Easiest / fastest dev time:
1. Throttle
2. Update … Where …
3. Locking Table / Row
4. Queue Process

Most feasible on a fresh project:
1. Queue Process
2. Update … Where …
3. Throttle
4. Locking Table / Row

Most feasible on a running project:
1. Update … Where …
2. Throttle
3. Queue Process
4. Locking Table / Row
Summary
A Race Condition caused by concurrent requests is an edge case that can happen by mistake or on purpose.
The probability of this edge case is slim if the system is not complicated and each process is fast. For heavy processes (calling 3rd-party APIs, or running many heavy queries), the probability of a Race Condition depends on how long it takes from reading the data to updating it in the database.
A queued job is the best practice for preventing Race Conditions, but it noticeably reduces the responsiveness of the application/system, since the process is no longer real-time (at best semi-real-time).
Throttling is not considered a good solution, but it will eliminate most Race Conditions if done right.
So, yeah…
Thank You
