Java Multithreading for Senior Engineering Interviews Part I
Here’s how you can implement both scenarios with a smaller range:
Single-Threaded Summation:
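A minimal sketch, assuming a range of 1 to 1,000,000 (the range and class name are illustrative):

public class SingleThreadedSum {
    public static void main(String[] args) {
        long sum = 0; // long, so the running total cannot overflow for this range
        for (long i = 1; i <= 1_000_000; i++) {
            sum += i;
        }
        System.out.println("Sum: " + sum);
    }
}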
Multi-Threaded Summation:
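A sketch of the same range split between two worker threads (the thread count is illustrative):

public class MultiThreadedSum {
    public static void main(String[] args) throws InterruptedException {
        long[] partial = new long[2]; // one slot per thread avoids a shared counter
        Thread first = new Thread(() -> {
            for (long i = 1; i <= 500_000; i++) partial[0] += i;
        });
        Thread second = new Thread(() -> {
            for (long i = 500_001; i <= 1_000_000; i++) partial[1] += i;
        });
        first.start();
        second.start();
        first.join();  // wait for both halves before combining
        second.join();
        System.out.println("Sum: " + (partial[0] + partial[1]));
    }
}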
Key Points:
Partial Sums: Each worker thread accumulates its share of the range into its own
variable, and the partial results are combined once all threads finish.
Overflow Consideration: Ensure that the chosen range does not cause
the sum to exceed the maximum value of the long data type to avoid
overflow.
1. Bugs are usually very hard to find; some may only rear their head in
production environments
Usually, there would be some state associated with the process that is shared
among all the threads and in turn each thread would have some state private
to itself.
There are several constructs offered by various programming languages to
guard and discipline access to global state that is shared by multiple
threads.
By adding the synchronized keyword, we ensure that only one thread can
execute the increment() method at a time, preventing race conditions and
ensuring accurate results.
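A minimal sketch of such a class (the class name is illustrative):

public class Counter {
    private int count = 0;

    // Only one thread at a time can execute this method on a given instance
    public synchronized void increment() {
        count++;
    }

    public synchronized int getCount() {
        return count;
    }
}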
Another way to achieve thread safety is by using atomic variables from the
java.util.concurrent.atomic package:
import java.util.concurrent.atomic.AtomicInteger;
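// A sketch of the same counter built on AtomicInteger (class name is illustrative)
public class AtomicCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public void increment() {
        count.incrementAndGet(); // atomic read-modify-write, no explicit lock
    }

    public int getCount() {
        return count.get();
    }
}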
The AtomicInteger class provides thread-safe operations without the need for
explicit synchronization, often resulting in better performance in highly
concurrent scenarios.
When designing classes intended for use in multi-threaded environments,
it’s crucial to ensure thread safety to prevent unpredictable behavior and
data inconsistencies. This can be achieved through synchronization
mechanisms or by utilizing thread-safe classes provided by the Java API.
For more detailed information on thread safety and how to achieve it in Java,
you can refer to resources like GeeksforGeeks.
Concurrency vs Parallelism
To clarify the concept, we’ll borrow a juggler from a circus to use as an analogy.
Consider the juggler to be a machine and the balls he juggles as processes.
Serial Execution
The analogy for serial execution is a circus juggler who can only juggle one
ball at a time. Definitely not very entertaining!
Concurrency
A concurrent program is one that can be decomposed into constituent parts and
each part can be executed out of order or in partial order without affecting the
final outcome.
Going back to our circus analogy, a concurrent juggler is one who can juggle
several balls at the same time. However, at any one point in time, he can only
have a single ball in his hand while the rest are in flight. Each ball gets a time
slice during which it lands in the juggler’s hand and then is thrown back up.
A concurrent system is in a similar sense juggling several processes at the
same time.
Parallelism
A parallel system is one which necessarily has the ability to execute multiple
programs at the same time.
Operating systems use one of two broad models to schedule tasks:
1. Preemptive Multitasking
2. Cooperative Multitasking
Preemptive Multitasking
In preemptive multitasking, the operating system's scheduler decides which
thread runs and for how long. A thread or program once taken off of the CPU
by the scheduler can't determine when it will get on the CPU next.
Cooperative Multitasking
In cooperative multitasking, a program or thread voluntarily yields control of
the CPU when it is idle or logically blocked, rather than being forced off by
the scheduler. Closely related is asynchronous execution: async execution can
invoke a method and move onto the next line of code without waiting for the
invoked function to complete or receive its result.
Usually, such methods return an entity sometimes called a future or promise
that is a representation of an in-progress computation. The program can
query for the status of the computation via the returned future or promise
and retrieve the result once completed.
Another pattern is to pass a callback function to the asynchronous function
call, which is invoked with the results when the asynchronous function is
done processing. The example below uses a CompletableFuture to read a file
asynchronously while the main thread carries on.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
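public class AsyncFileRead { // class name and file path are illustrative
    public static void main(String[] args) {
        // Kick off the read asynchronously; the main thread continues immediately
        CompletableFuture<Void> fileContentFuture = CompletableFuture.runAsync(() -> {
            try {
                String content = new String(Files.readAllBytes(Paths.get("example.txt")));
                System.out.println("File content: " + content);
            } catch (IOException e) {
                e.printStackTrace();
            }
        });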
        // Keep the main thread alive until the asynchronous operation completes
        try {
            fileContentFuture.get();
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
    }
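}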
Waiting for Completion: The get method is called to ensure the main
thread waits for the asynchronous operation to complete before exiting.
In a real-world application, especially in a server or GUI context, you
might not need this, as the application would continue running, allowing
the asynchronous tasks to complete naturally.
CPU Bound vs I/O Bound
CPU bound programs spend most of their time performing computations, so
their speed is limited by the processor. I/O bound programs are the opposite
of CPU bound programs. Such programs spend most of their time waiting for
input or output operations to complete while the CPU sits idle. I/O
operations can consist of operations that read or write from main memory,
disk, or network interfaces.
Throughput vs Latency
If you are an Instagram user, you could define throughput as the number of
images your phone or browser downloads per unit of time.
The time it takes for a web browser to download Instagram images from the
internet is the latency for downloading the images.
Race conditions happen when threads run through critical sections without
thread synchronization. The threads “race” through the critical section to
write or read shared resources and depending on the order in which threads
finish the “race”, the program output changes.
Scenario:
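A sketch of the scenario described below (loop bounds and names are illustrative):

public class RaceConditionDemo {
    private static int sharedVariable = 0;

    public static void main(String[] args) {
        // Modifier Thread: increments the shared variable in a tight loop
        Thread modifierThread = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                sharedVariable++;
            }
        });

        // Printer Thread: prints only values it observes to be divisible by 5
        Thread printerThread = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                if (sharedVariable % 5 == 0) {
                    // The modifier may run between this check and the print below
                    System.out.println(sharedVariable);
                }
            }
        });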
        modifierThread.start();
        printerThread.start();
        try {
            modifierThread.join();
            printerThread.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
Expected Outcome:
Due to the race condition, the Printer Thread may sometimes print values of
sharedVariable that aren't divisible by 5. This occurs because the Modifier
Thread may increment the variable between the check (sharedVariable % 5
== 0) and the print statement. For example:
1. Check Passes: The Printer Thread evaluates sharedVariable % 5 == 0 while
the value is 10, so the check succeeds.
2. Context Switch: Before the Printer Thread executes the print statement,
the Modifier Thread increments sharedVariable (now 11).
3. Printer Thread resumes and prints the value 11, which isn't divisible by 5.
Mitigation:
Make the check and the print atomic, for example by performing both inside a
synchronized block on a shared lock object, so the Modifier Thread cannot run
between them.
Note:
Synchronization removes the race but does not control scheduling; which
multiples of 5 get printed still depends on thread timing.
Deadlocks occur when two or more threads aren’t able to make any progress
because the resource required by the first thread is held by the second and
the resource required by the second thread is held by the first.
An application thread can also experience starvation, when it never gets CPU
time or access to shared resources.
A semaphore can potentially act as a mutex if the number of permits it can
give out is set to 1. However, the most important difference between the two
is that in case of a mutex the same thread must call acquire and subsequent
release on the mutex, whereas in case of a binary semaphore, different threads
can call acquire and release on the semaphore.
As an example, say we have a consumer thread that checks for the size of the
buffer, finds it empty and invokes wait() on a condition variable. The
predicate in this example would be the size of the buffer.
The order of signaling the condition variable and releasing the mutex can be
interchanged, but generally, the preference is to signal first and then release
the mutex.
For one, a different thread could get scheduled and change the predicate
back to false before the signaled thread gets a chance to execute, therefore
the signaled thread must check the predicate again, once it acquires the
monitor.
The idiomatic and correct usage of a monitor dictates that the predicate
always be tested for in a while loop.
We can now realize that a monitor is made up of a mutex and one or more
condition variables.
Practically, in Java each object is a monitor and implicitly has a lock and is a
condition variable too. You can think of a monitor as a mutex with a wait set.
Monitors allow threads to exercise mutual exclusion as well as cooperation
by allowing them to wait and signal on conditions.
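A sketch of a producer and a consumer cooperating through a queue's monitor
(the class name and capacity are illustrative):

import java.util.LinkedList;
import java.util.Queue;

public class MonitorProducerConsumer {
    private static final int CAPACITY = 5;
    private static final Queue<Integer> queue = new LinkedList<>();

    public static void main(String[] args) {
        Thread producerThread = new Thread(() -> {
            for (int i = 1; i <= 10; i++) {
                synchronized (queue) {
                    try {
                        while (queue.size() == CAPACITY) {
                            queue.wait(); // buffer full: release the monitor and wait
                        }
                        queue.add(i);
                        System.out.println("Produced: " + i);
                        queue.notifyAll(); // wake any waiting consumer
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
        });

        Thread consumerThread = new Thread(() -> {
            for (int i = 1; i <= 10; i++) {
                synchronized (queue) {
                    try {
                        while (queue.isEmpty()) {
                            queue.wait(); // buffer empty: release the monitor and wait
                        }
                        int item = queue.poll();
                        System.out.println("Consumed: " + item);
                        queue.notifyAll(); // wake any waiting producer
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
        });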
        producerThread.start();
        consumerThread.start();
    }
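}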
while Loop for Condition Check: Both producer and consumer use a
while loop to check their respective conditions ( queue.size() == CAPACITY
for producer and queue.isEmpty() for consumer). This ensures that upon
being notified, the thread rechecks the condition, accounting for any
changes made by other threads before it reacquired the lock.
Semaphore vs Monitor
A monitor is made up of a mutex and a condition variable. One can think of a
mutex as a subset of a monitor.
Amdahl's Law
The law specifies the cap on the maximum speedup that can be achieved
when parallelizing the execution of a program.
If you have a poultry farm where a hundred hens lay eggs each day, then no
matter how many people you hire to process the laid eggs, you still need to
wait an entire day for the 100 eggs to be laid. Increasing the number of
workers on the farm can’t shorten the time it takes for a hen to lay an egg.
Similarly, software programs consist of parts which can’t be sped up even if
the number of processors is increased. These parts of the program must
execute serially and aren’t amenable to parallelism.
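In its usual form, stated here for reference, the law gives the theoretical
speedup S(N) on N processors for a program whose fraction P can be
parallelized:

S(N) = 1 / ((1 − P) + P / N)

With 10% serial execution, P = 0.9, so S(N) approaches 1 / (1 − 0.9) = 10 as
N grows.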
As you can see the theoretical maximum speed-up for our program with 10%
serial execution will be 10. We can’t speed-up our program execution more
than 10 times compared to when we run the same program on a single CPU
or thread. To achieve greater speed-ups than 10 we must optimize or
parallelize the serially executed portion of the code.
There are other factors such as the memory architecture, cache misses,
network and disk I/O, etc. that can affect the execution time of a program,
and the actual speed-up might be less than the calculated one. Amdahl's law
works on a problem of fixed size. However, as computing resources improve,
algorithms run on larger and larger datasets. As the dataset size grows, the
parallelizable portion of the program grows faster than the serial portion,
and a more realistic assessment of performance is given by Gustafson's law.
Amdahl’s Law suggests that the maximum speedup is limited by the serial
portion of the program. As more processors are added, the impact of the
non-parallelizable section becomes the bottleneck, leading to diminishing
returns.
Gustafson’s Law:
S = N − (N − 1) × (1 − P)
Where:
S is the scaled speedup,
N is the number of processors, and
P is the proportion of the program that can be parallelized.
Gustafson’s Law indicates that with larger problem sizes, the parallel portion
dominates, allowing for near-linear speedup with the addition of more
processors. This perspective is particularly relevant in modern computing,
where increasing dataset sizes benefit significantly from parallel processing
capabilities.
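As a worked example, with N = 10 processors and a parallel fraction P = 0.9,
the scaled speedup is S = 10 − (10 − 1) × (1 − 0.9) = 9.1, close to linear in
the number of processors.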
Key Differences:
Amdahl's Law assumes a fixed problem size and asks how much faster it can
be solved with more processors; Gustafson's Law assumes the problem size
scales with the available processors and asks how much more work can be
done in the same time.
Practical Implications:
Use Amdahl's Law to temper expectations when speeding up a fixed workload,
and Gustafson's Law when workloads, such as growing datasets, scale with
the hardware.
Moore’s Law
Gordon Moore, co-founder of Intel, observed that the number of transistors
that can be packed into a given unit of space doubles about every two years,
and in turn the processing power of computers doubles while the cost halves.
Moore’s law is more of an observation than a law grounded in formal
scientific research. It states that the number of transistors per square inch
on a chip will double every two years. This exponential growth has been
going on since the 70’s and is only now starting to slow down.
The increase in clock speeds of processors has slowed down much faster
than the increase in the number of transistors that can be placed on a
microchip. If we plot clock speeds, we find that the exponential growth
stopped after 2003 and the trend line flattened out. The clock speed
(proportional to the difference between supply voltage and threshold voltage)
cannot keep increasing because the supply voltage is already down to an
extent where it cannot be decreased further to get dramatic gains in clock
speed. In the ten years from 2000 to 2009, clock speed went from 1.3 GHz to
2.8 GHz, merely doubling in a decade rather than increasing 32 times as
expected by Moore's law.
Each object in Java has an entity associated with it called the "monitor lock"
or just monitor. Think of it as an exclusive lock. Once a thread gets hold of
the monitor of an object, it has exclusive access to all the methods marked as
synchronized. Any other thread that invokes a synchronized method on the
object will block until the first thread releases the monitor, which is
equivalent to the first thread exiting the synchronized method.
With the use of the synchronized keyword, Java forces you to implicitly
acquire and release the monitor lock for the object within the same method!
One can't explicitly acquire and release the monitor in different methods.
This has an important ramification: the same thread will acquire and release
the monitor! In contrast, if we used a semaphore, we could acquire/release it
in different methods or from different threads.
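A related pitfall is reassigning the object used for synchronization. A sketch
of the problem described below (class and field names are illustrative):

public class ReassignmentPitfall {
    private static Object flag = new Object(); // reassigned later: the source of the bug

    public static void main(String[] args) {
        Thread thread1 = new Thread(() -> {
            synchronized (flag) {            // locks the ORIGINAL flag object
                try {
                    Thread.sleep(1000);      // while asleep, Thread 2 reassigns flag
                    flag.wait();             // waits on the NEW object without holding
                                             // its monitor: IllegalMonitorStateException
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        Thread thread2 = new Thread(() -> {
            try {
                Thread.sleep(500);           // let Thread 1 acquire the lock first
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            flag = new Object();             // reassigns the shared reference
            synchronized (flag) {
                flag.notifyAll();            // would throw the same exception if called
                                             // without holding the NEW object's monitor
            }
        });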
        thread1.start();
        thread2.start();
    }
}
Thread 1:
Synchronizes on the original flag object, sleeps while holding its monitor,
and then calls flag.wait().
Thread 2:
Sleeps for 0.5 seconds to allow Thread 1 to acquire the lock first, then
reassigns flag to a new object and notifies on it.
Potential Issues:
Thread 1 may throw an IllegalMonitorStateException
because it's invoking wait() on the new flag object without holding its
monitor. Similarly, Thread 2's call to flag.notifyAll() may throw the
same exception if it doesn't hold the monitor for the new flag object.
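The corrected version keeps a single, final lock object that is never
reassigned (a minimal sketch):

public class CorrectedSignaling {
    private static final Object lock = new Object(); // final: cannot be reassigned

    public static void main(String[] args) {
        Thread thread1 = new Thread(() -> {
            synchronized (lock) {
                try {
                    System.out.println("Thread 1 waiting...");
                    lock.wait();
                    System.out.println("Thread 1 resumed.");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        Thread thread2 = new Thread(() -> {
            try {
                Thread.sleep(500); // give Thread 1 time to start waiting
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            synchronized (lock) {
                lock.notifyAll();
            }
        });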
        thread1.start();
        thread2.start();
    }
}
Thread 1 waits on the lock object, and Thread 2 notifies waiting threads
on the same lock object.
This ensures proper synchronization without the risk of
IllegalMonitorStateException .
Reassigning objects used for synchronization can lead to complex bugs and
exceptions in multithreaded Java applications. By maintaining consistent
references and following best practices, such issues can be avoided, leading
to more robust and reliable code.
Like the wait method, notify() can only be called by the thread which owns
the monitor for the object on which notify() is being called; otherwise an
IllegalMonitorStateException is thrown. The notify method will awaken one of
the threads in the associated wait queue, i.e., the threads waiting on the
object's monitor. However, this thread will not be scheduled for execution
immediately and will compete with other active threads that are trying to
synchronize on the same object. The thread which executed notify will also
need to give up the object's monitor before any one of the competing threads
can acquire the monitor and proceed forward.
This method is the same as the notify() one except that it wakes up all the
threads that are waiting on the object’s monitor.
Interrupting Threads
You'll often come across InterruptedException being thrown from functions.
When a thread wait()-s or sleep()-s, one way for it to give up
waiting/sleeping is to be interrupted. If a thread is interrupted while
waiting/sleeping, it'll wake up and immediately throw an InterruptedException.
The Thread class exposes the interrupt() method, which can be used to
interrupt a thread that is blocked in a sleep() or wait() call. Note that
invoking the interrupt method only sets a flag that is polled by sleep or
wait to learn that the current thread has been interrupted and that an
InterruptedException should be thrown.
In Java, threads can be interrupted to signal that they should stop their
current activity and handle the interruption appropriately. When a thread is
in a sleeping state via Thread.sleep() , it can be interrupted by another
thread, causing it to throw an InterruptedException .
The following example demonstrates a thread that is initially set to sleep for
one hour but is interrupted by the main thread shortly after starting.
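public class InterruptDemo { // class and variable names are illustrative
    public static void main(String[] args) {
        // Thread that would sleep for one hour unless interrupted
        Thread sleepingThread = new Thread(() -> {
            try {
                System.out.println("Sleeping thread: going to sleep for an hour.");
                Thread.sleep(3600000);
            } catch (InterruptedException e) {
                System.out.println("Sleeping thread: interrupted while sleeping!");
            }
        });
        sleepingThread.start();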
        // Main thread sleeps for 2 seconds before interrupting the sleeping thread
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            System.out.println("Main thread interrupted during sleep.");
            Thread.currentThread().interrupt();
        }
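        // Interrupt the sleeping thread, causing its sleep() to throw
        sleepingThread.interrupt();
    }
}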
Thread Creation:
Within the thread’s run method, it attempts to sleep for one hour
( 3600000 milliseconds).
The main thread sleeps for 2 seconds ( 2000 milliseconds) to allow the
sleepingThread to enter its sleep state.
After waking up, the main thread interrupts the sleepingThread by calling
its interrupt() method.
Handling Interruption:
When the sleeping thread is interrupted, its Thread.sleep() call throws an
InterruptedException, allowing the thread to exit the sleeping state early
and handle the interruption.
Key Points:
Interruption is cooperative: calling interrupt() only requests that a thread
stop what it is doing; the interrupted thread decides how to respond.
Practical Considerations:
Resource Management: Ensure that any resources held by the thread are
properly released when an interruption occurs to prevent resource leaks.
Avoiding Infinite Sleep: While the example uses a long sleep duration, in
real-world applications, it’s advisable to use shorter sleep intervals and
check for interruption more frequently to remain responsive to
interruption requests.
1. Instantiate a ReentrantLock :
Awaiting a Condition:
lock.lock();
try {
    while (!someCondition) {
        condition1.await();
    }
    // Perform actions when the condition is met
} finally {
    lock.unlock();
}
Signaling a Condition:
lock.lock();
try {
    // Update the condition state
    condition1.signal(); // or condition1.signalAll();
} finally {
    lock.unlock();
}
Here’s how you can implement this using ReentrantLock and Condition
variables:
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
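// A sketch of the bounded buffer described below; the class name, capacity,
// and item counts are illustrative.
public class BoundedBuffer {
    private static final int CAPACITY = 5;
    private static final Queue<Integer> buffer = new LinkedList<>();
    private static final Lock lock = new ReentrantLock();
    private static final Condition notFull = lock.newCondition();  // producer waits
    private static final Condition notEmpty = lock.newCondition(); // consumer waits

    public static void main(String[] args) {
        Thread producer = new Thread(() -> {
            for (int i = 1; i <= 10; i++) {
                lock.lock();
                try {
                    while (buffer.size() == CAPACITY) {
                        notFull.await(); // buffer full: wait for the consumer
                    }
                    buffer.add(i);
                    System.out.println("Produced: " + i);
                    notEmpty.signal(); // wake the consumer
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    lock.unlock();
                }
            }
        });

        Thread consumer = new Thread(() -> {
            for (int i = 1; i <= 10; i++) {
                lock.lock();
                try {
                    while (buffer.isEmpty()) {
                        notEmpty.await(); // buffer empty: wait for the producer
                    }
                    int item = buffer.poll();
                    System.out.println("Consumed: " + item);
                    notFull.signal(); // wake the producer
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    lock.unlock();
                }
            }
        });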
        producer.start();
        consumer.start();
    }
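}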
notFull : Indicates that the buffer is not full, allowing the producer to add
items.
notEmpty : Indicates that the buffer is not empty, allowing the consumer
to retrieve items.
Producer Thread:
Before adding an item to the buffer, it acquires the lock and checks if the
buffer is full. If full, it waits on the notFull condition.
Once space is available, it adds the item, prints a message, and signals
the notEmpty condition to notify the consumer.
Consumer Thread:
Before retrieving an item, it acquires the lock and checks if the buffer is
empty. If empty, it waits on the notEmpty condition.
Key Points:
lock.lock();
try {
    while (!conditionMet) {
        condition.await();
    }
    // Proceed when condition is met
} finally {
    lock.unlock();
}
signal() : Wakes up one waiting thread. Use this when only one thread
needs to proceed.
signalAll() : Wakes up all waiting threads. This is useful when multiple
threads might be waiting for the same condition, and any can proceed.
Best Practices:
Always Release Locks: Ensure that locks are released in a finally block
to prevent deadlocks, even if exceptions occur.
lock.lock();
try {
    // Critical section
} finally {
    lock.unlock();
}
Minimize Lock Scope: Keep the code within the locked section as brief as
possible to reduce contention and improve performance.
Avoid Locking Unrelated Code: Only protect shared mutable state with
locks. Locking code that doesn’t access shared resources can lead to
unnecessary performance bottlenecks.
Missed Signals
A missed signal happens when a signal is sent by a thread before the other
thread starts waiting on a condition. This is exemplified by the following
code snippet. Missed signals are caused by using the wrong concurrency
constructs. In the example below, a condition variable is used to coordinate
between the signaller and the waiter thread. The condition is signaled at a
time when no thread is waiting on it causing a missed signal.
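A sketch of such a snippet (class and field names are illustrative):

public class MissedSignalDemo {
    private static final Object lock = new Object();
    private static boolean isSignalled = false;

    public static void main(String[] args) throws InterruptedException {
        Thread signaller = new Thread(() -> {
            synchronized (lock) {
                isSignalled = true;
                lock.notify(); // no one is waiting yet: this signal is lost
                System.out.println("Signal sent.");
            }
        });

        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                try {
                    // Bug: waits unconditionally instead of first checking
                    // isSignalled in a loop; if the signal already fired,
                    // this blocks indefinitely
                    lock.wait();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                if (isSignalled) {
                    System.out.println("Waiter proceeding.");
                }
            }
        });

        signaller.start();
        Thread.sleep(100); // let the signaller run first, making the miss likely
        waiter.start();
    }
}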
Shared Lock Object: Both Waiter and Signaller threads synchronize on the
same lock object to coordinate their actions.
Signaller Thread:
Acquires the lock, sets the isSignalled flag, and calls lock.notify().
Waiter Thread:
Acquires the lock and calls lock.wait() before consulting the isSignalled flag.
Once notified and the condition is met, it proceeds with its execution.
Issue:
In this setup, if the Signaller thread runs and sends the signal before the
Waiter thread starts waiting, the Waiter will miss the signal. When the
Waiter eventually calls lock.wait() , it will remain blocked indefinitely
because the signal was already sent, and there's no mechanism to re-send it.
Solution:
To prevent missed signals, ensure that the waiting thread is ready to receive
the signal before the signal is sent. This can be achieved by starting the
Waiter thread before the Signaller thread. Additionally, using higher-level
concurrency constructs like java.util.concurrent classes can help manage
such scenarios more effectively.
import java.util.concurrent.CountDownLatch;
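public class MissedSignalFix { // sketch; class name is illustrative
    // A count of 1 records a single signal, even if it arrives before await()
    private static final CountDownLatch latch = new CountDownLatch(1);

    public static void main(String[] args) {
        Thread waiter = new Thread(() -> {
            try {
                latch.await(); // returns immediately if countDown() already ran
                System.out.println("Waiter received the signal.");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread signaller = new Thread(() -> {
            System.out.println("Signaller sending the signal.");
            latch.countDown(); // the latch remembers this signal
        });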
        waiter.start();
        signaller.start();
    }
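}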
Signaller Thread:
Calls latch.countDown(), decrementing the count to zero and releasing the waiting
thread.
By using CountDownLatch , we ensure that the Waiter does not miss the signal,
regardless of the order in which the threads are started. This approach
provides a more robust solution to the missed signal problem.
Semaphore in Java
Java's semaphore can be release()-d or acquire()-d for signalling amongst
threads. However, the important callout when using semaphores is to make
sure that the permits acquired equal the permits returned. Take a look at
the following example, where a runtime exception causes a deadlock.
Consider the following Java program where a thread acquires a permit from
a semaphore but encounters a runtime exception before releasing the
permit. This scenario can lead to a deadlock, as the permit is never returned,
preventing other threads from acquiring it.
import java.util.concurrent.Semaphore;

public class SemaphoreDeadlockDemo { // sketch; names and failure trigger are illustrative
    // A single permit: acts like a lock that must be returned
    private static final Semaphore semaphore = new Semaphore(1);

    public static void main(String[] args) {
        Runnable task = () -> {
            try {
                semaphore.acquire();
                System.out.println(Thread.currentThread().getName() + " acquired a permit.");
                // Simulated work; a RuntimeException here would leak the permit
                // forever if release() were not placed in a finally block
                if (Thread.currentThread().getName().equals("Thread-1")) {
                    throw new RuntimeException("Failure while holding the permit");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                System.out.println(Thread.currentThread().getName() + " was interrupted.");
            } finally {
                // Ensure the permit is released even if an exception occurs
                System.out.println(Thread.currentThread().getName() + " releasing permit.");
                semaphore.release();
            }
        };

        Thread thread1 = new Thread(task, "Thread-1");
        Thread thread2 = new Thread(task, "Thread-2");
        thread1.start();
        thread2.start();
    }
}
Task Runnable:
Each thread acquires a permit, performs work that may throw a runtime
exception, and returns the permit in a finally block so it can never be
leaked.
Key Points:
Keep in mind that fair semaphores may have lower throughput due to
increased overhead.
Spurious Wakeups
Spurious means fake or false. A spurious wakeup means a thread is woken up
even though no signal has been received. Spurious wakeups are a reality and
are one of the reasons why the pattern for waiting on a condition variable
happens in a while loop, as discussed in earlier chapters. There are technical
reasons beyond our current scope as to why spurious wakeups happen, but
for the curious: on POSIX-based operating systems, when a process is
signaled, all of its waiting threads are woken up.
Consider the following Java program where a Producer thread adds items to
a shared queue, and a Consumer thread removes items from it. Both threads
use a condition variable to coordinate their actions. The Consumer thread
waits for items to be available in the queue before attempting to consume
them. To handle potential spurious wakeups, the consumer checks the
condition in a loop.
import java.util.LinkedList;
import java.util.Queue;
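// A sketch of the example described below; the class name, capacity, and
// item counts are illustrative.
public class SpuriousWakeupDemo {
    private static final int MAX_CAPACITY = 5;
    private static final Queue<Integer> queue = new LinkedList<>();

    public static void main(String[] args) {
        Thread producer = new Thread(() -> {
            for (int i = 1; i <= 10; i++) {
                synchronized (queue) {
                    try {
                        // Recheck in a loop: guards against spurious wakeups
                        while (queue.size() == MAX_CAPACITY) {
                            queue.wait();
                        }
                        queue.add(i);
                        System.out.println("Produced: " + i);
                        queue.notifyAll();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
        });

        Thread consumer = new Thread(() -> {
            for (int i = 1; i <= 10; i++) {
                synchronized (queue) {
                    try {
                        // Recheck in a loop: a spurious wakeup sends us back to wait()
                        while (queue.isEmpty()) {
                            queue.wait();
                        }
                        int item = queue.poll();
                        System.out.println("Consumed: " + item);
                        queue.notifyAll();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
        });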
        producer.start();
        consumer.start();
    }
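}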
Producer Thread:
Acquires the lock on the queue and checks if the queue has reached its
MAX_CAPACITY .
If the queue is full, it enters a loop and calls queue.wait() , releasing the
lock and waiting until notified.
Upon being notified, it rechecks the condition to ensure the queue is not
full before adding a new item.
Consumer Thread:
Acquires the lock on the queue and checks if the queue is empty. If it is,
the consumer waits in a loop on queue.wait(); once notified, it rechecks the
condition, removes an item, and notifies the producer.
Key Points:
Using a Loop for Condition Checking: Both the producer and consumer
threads use a while loop to check their respective conditions
( queue.size() == MAX_CAPACITY for the producer and queue.isEmpty() for
the consumer). This loop ensures that after waking up, the thread re-
evaluates the condition before proceeding. This approach handles
spurious wakeups gracefully, as the thread will go back to waiting if the
condition is not met.
Lock Fairness
When locks get acquired by threads, there's no guarantee of the order in
which threads are granted access to a lock. A thread requesting lock access
more frequently may acquire the lock an unfairly greater number of times
than other threads. Java locks can be turned into fair locks by passing in
the fairness constructor parameter. However, fair locks exhibit lower
throughput and are slower compared to their unfair counterparts.
Unfair Lock (Default Behavior): In this mode, threads can acquire the
lock in a non-deterministic order. A thread attempting to acquire the lock
may “jump the queue,” obtaining the lock even if other threads have been
waiting longer. This approach can lead to higher throughput but may
cause thread starvation, where some threads are perpetually delayed.
Fair Lock: When fairness is set to true , the lock favors granting access to
the longest-waiting thread, ensuring a first-come, first-served order.
While this prevents thread starvation and promotes equitable access, it
can result in reduced performance due to the overhead of managing the
queue of waiting threads.
Fair Lock:
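ReentrantLock fairLock = new ReentrantLock(true); // fairness flag set to true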
Example Demonstration:
The following example illustrates the behavior of fair and unfair locks.
Multiple threads attempt to acquire a shared lock and perform a simple task.
The program compares the order in which threads acquire the lock under
both fairness policies.
import java.util.concurrent.locks.ReentrantLock;
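// The driver below is a sketch; the thread count and pauses are illustrative,
// chosen to match the description that follows the code.
public class FairLockDemo {

    public static void main(String[] args) throws InterruptedException {
        System.out.println("=== Unfair lock (default) ===");
        runTest(new ReentrantLock());      // non-fair by default
        Thread.sleep(1000);                // let the first batch finish
        System.out.println("=== Fair lock ===");
        runTest(new ReentrantLock(true));  // fairness enabled
    }

    // Starts several workers with a slight stagger to increase contention
    private static void runTest(ReentrantLock lock) throws InterruptedException {
        for (int i = 0; i < 5; i++) {
            new Thread(new Worker(lock), "Worker-" + i).start();
            Thread.sleep(10);
        }
    }

    static class Worker implements Runnable {
        private final ReentrantLock lock;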
        Worker(ReentrantLock lock) {
            this.lock = lock;
        }

        @Override
        public void run() {
            System.out.println(Thread.currentThread().getName() + " attempting to acquire the lock");
            lock.lock();
            try {
                System.out.println(Thread.currentThread().getName() + " acquired the lock");
                // Simulate work
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                System.out.println(Thread.currentThread().getName() + " releasing the lock");
                lock.unlock();
            }
        }
    }
}
Lock Initialization:
Two ReentrantLock instances are created: one with the default (unfair)
policy and one with the fairness flag set to true.
Test Execution:
The runTest method initiates multiple threads that attempt to acquire the
provided lock. A slight delay ( Thread.sleep(10) ) between thread starts
increases contention, highlighting the difference between fair and unfair
locking.
Worker Threads:
Each Worker thread tries to acquire the lock, simulates work by sleeping
for 50 milliseconds, and then releases the lock. The console output shows
the order in which threads acquire and release the lock.
Observations:
With the unfair lock, the output may show threads acquiring the lock out
of the order they were started, demonstrating the lack of a strict ordering
policy.
With the fair lock, threads are more likely to acquire the lock in the order
they were started, adhering to the first-come, first-served principle.
Performance Considerations:
While fair locks prevent thread starvation by ensuring orderly access, they
introduce additional overhead due to the management of the waiting queue.
This can lead to lower throughput compared to unfair locks, especially under
high contention. Therefore, the choice between fair and unfair locks should
be guided by the specific requirements of your application, balancing the
need for fairness against performance implications.
Thread Pools
Imagine an application that creates threads to undertake short-lived tasks.
The application would incur a performance penalty for first creating
hundreds of threads and then tearing down the allocated resources for each
thread at the end of its life. The general way programming frameworks
solve this problem is by creating a pool of threads, which are handed out to
execute each concurrent task, and once completed, the thread is returned to
the pool.
Java offers thread pools via its Executor Framework. The framework includes
classes such as the ThreadPoolExecutor for creating thread pools.
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
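// The driver below is a sketch; pool sizes, the work queue, and the task
// count are illustrative, matching the configuration described after the code.
import java.util.concurrent.LinkedBlockingQueue;

public class ThreadPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2,                     // corePoolSize: threads kept even when idle
                4,                     // maximumPoolSize
                30, TimeUnit.SECONDS,  // keepAliveTime for excess idle threads
                new LinkedBlockingQueue<>());

        // Submit more tasks than core threads to exercise the pool
        for (int i = 1; i <= 6; i++) {
            executor.execute(new Task("Task-" + i));
        }

        // Stop accepting new tasks; already-submitted tasks run to completion
        executor.shutdown();
    }
}

class Task implements Runnable {
    private final String name;

    Task(String name) {
        this.name = name;
    }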
    @Override
    public void run() {
        System.out.println(name + " is being executed by " + Thread.currentThread().getName());
        try {
            // Simulate a task taking time
            Thread.sleep((long) (Math.random() * 1000));
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println(name + " has completed execution.");
    }
}
ThreadPoolExecutor Configuration:
corePoolSize : The number of threads to keep in the pool, even if they are
idle.
keepAliveTime and unit : The time that excess idle threads will wait for
new tasks before terminating.
Submitting Tasks:
Tasks are submitted to the pool with execute(); each task runs on an
available worker thread or waits in the queue until one frees up.
Task Execution:
Each Task prints its name and the thread executing it, simulates work by
sleeping for a random time, and then prints a completion message.
Shutting Down the Executor:
Calling shutdown() stops the pool from accepting new tasks while allowing
already-submitted tasks to run to completion.
Considerations:
Thread Pool Sizing: Choosing the appropriate size for the thread pool is
crucial. A pool that is too large can lead to resource exhaustion, while a
pool that is too small may cause underutilization of system resources.
The optimal size depends on factors such as the nature of the tasks and
the system's capabilities.
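Beyond which thread runs when, the JIT compiler and CPU may also reorder
independent reads and writes. The following sketch (class name and iteration
count are illustrative) races two threads repeatedly; observing both x and y
equal to zero is evidence of reordering:

public class ReorderingDemo {
    // Shared, unsynchronized fields: candidates for reordering
    private static int x, y, a, b;

    public static void main(String[] args) throws InterruptedException {
        int reorderingCount = 0;
        for (int i = 0; i < 100_000; i++) {
            x = 0; y = 0; a = 0; b = 0;

            Thread thread1 = new Thread(() -> {
                a = 1;
                x = b; // may be reordered before the write to a
            });
            Thread thread2 = new Thread(() -> {
                b = 1;
                y = a; // may be reordered before the write to b
            });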
            thread1.start();
            thread2.start();
            thread1.join();
            thread2.join();

            if (x == 0 && y == 0) {
                reorderingCount++;
                System.out.println("Reordering occurred at iteration: " + i);
            }
        }
        System.out.println("Total reorderings observed: " + reorderingCount);
    }
}
Key Takeaways:
Java Memory Model (JMM): The JMM defines the interaction between
threads and memory, specifying how and when changes made by one
thread become visible to others.
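A minimal sketch of the visibility problem described below (class and thread
names are illustrative):

public class VisibilityDemo {
    // Plain field: the worker may cache its value and never see an update
    private static boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy-wait; without synchronization this loop may never exit
            }
            System.out.println("Worker thread stopped.");
        });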
        worker.start();
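        Thread.sleep(1000);
        running = false; // without volatile, this write may remain invisible
        worker.join();   // may block forever in the non-volatile version
    }
}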
Shared Variable: The running variable is shared between the main thread
and the worker thread.
Main Thread: After sleeping for 1 second, the main thread sets running to false.
Issue:
Without proper synchronization, the worker thread may not see the updated
value of running set by the main thread. This happens because the worker
thread might be reading a cached copy of running that remains true
indefinitely.
To ensure that the worker thread sees the updated value of running , declare
running as volatile :
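private static volatile boolean running = true; // writes become visible to all threads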
Key Points:
When to Use volatile : Use volatile for variables that are accessed by
multiple threads without other forms of synchronization, and when the
variable's state is independent (i.e., not involved in compound actions
like check-then-act).
The Java language specification (JLS) mandates the JVM to maintain within-
thread as-if-serial semantics. What this means is that, as long as the result of
the program is exactly the same if it were to be executed in a strictly
sequential environment (think single thread, single processor) then the JVM
is free to undertake any optimizations it may deem necessary. Over the
years, much of the performance improvements have come from these clever
optimizations as clock rates for processors become harder to increase.
However, when data is shared between threads, these very optimizations can
result in concurrency errors and the developer needs to inform the JVM
through synchronization constructs of when data is being shared.
https://github.com/yugal-nandurkar || https://www.linkedin.com/in/yugal-nandurkar/ || https://medium.com/@microteam93