Forking Vs Threading: What Is Fork/Forking
So, finally, after a long time I have been able to figure out the difference between forking and threading :)
While surfing around I have seen lots of threads and questions about forking vs. threading, and plenty of queries about which one should be used in an application. So I wrote this post to clarify the difference between the two, so that you can decide what to use in your application or scripts.
What is Fork/Forking:
A fork creates a new process that looks exactly like the old (parent) process, but it is still a different process, with a different process ID and its own memory. The parent process creates a separate address space for the child. Both parent and child share the same code segment, but they execute independently of each other.
The simplest example of forking is running a command in a Unix/Linux shell: each time a user issues a command, the shell forks a child process in which the task is carried out.
When a fork system call is issued, the OS creates a copy of all the pages belonging to the parent process and loads them into a separate memory location for the child. In certain cases this copy is unnecessary: with the 'exec' family of system calls there is no need to copy the parent's pages, because execv replaces the address space of the calling process outright. (In practice, modern kernels use copy-on-write, so pages are only physically copied when either process modifies them.)
The child process has its own unique process ID.
The child process gets its own copies of the parent's file descriptors.
File locks set by the parent process are not inherited by the child process.
Any semaphores that are open in the parent process are also open in the child process.
The child process gets its own copies of the parent's message queue descriptors.
The child has its own address space and memory.
Some applications that use forking are: telnetd (FreeBSD), vsftpd, proftpd, Apache 1.3, Apache 2, thttpd, and PostgreSQL.
Pitfalls in Fork:
In fork, every new process gets its own memory/address space, hence longer startup and shutdown times.
If you fork, you have two independent processes which need to talk to each other in some way; this inter-process communication is costly.
If the parent exits before the forked child, the child becomes an orphan process, and a child that exits before the parent waits for it lingers as a zombie. This is all much easier with threads: you can end, suspend and resume threads from the parent easily, and if the parent exits suddenly its threads are ended automatically.
Insufficient storage or swap space can cause the fork system call to fail.
Threads are light-weight processes (LWPs). Traditionally, a thread is just CPU state (registers and some other minimal state), with the containing process holding the rest (data, stack, I/O, signals). Threads require less overhead than forking or spawning a new process, because the system does not initialize a new virtual memory space and environment for them. Threads are most effective on multiprocessor systems, where the flow of work can be scheduled onto another processor to gain speed through parallel or distributed processing, but gains are also found on uniprocessor systems by exploiting latency in I/O and other operations that would otherwise stall execution.
Threads in the same process share:
Process instructions
Most data
Open files (descriptors)
Signals and signal handlers
Current working directory
User and group IDs
Each thread has its own unique:
Thread ID
Set of registers and stack pointer
Stack for local variables and return addresses
Signal mask
Priority
Return value: errno
Pitfalls in threads:
Race conditions: The big loss with threads is that there is no natural protection against multiple threads working on the same data at the same time without knowing that others are changing it. This is called a race condition. While the code may appear on screen in the order you wish it to execute, threads are scheduled by the operating system and may run in any order. It cannot be assumed that threads execute in the order they are created, and they may run at different speeds. When threads race to completion they may produce unexpected results. Mutexes and joins must be used to achieve a predictable execution order and outcome.
Thread-safe code: Threaded routines must call functions which are "thread safe", meaning there are no static or global variables that other threads may clobber or read while assuming single-threaded operation. If static or global variables are used, then mutexes must be applied, or the functions must be rewritten to avoid them. In C, local variables are allocated on the stack, so any function that does not use static data or other shared resources is thread-safe. Thread-unsafe functions may be used by only one thread at a time in a program, and that uniqueness must be ensured. Many non-reentrant functions return a pointer to static data; this can be avoided by returning dynamically allocated data or by using caller-provided storage. A classic non-thread-safe (and non-reentrant) function is strtok; its thread-safe, reentrant counterpart is strtok_r.
Advantages of threads:
Threads share the same memory space, so sharing data between them is very fast; in other words, inter-thread communication is cheap.
If properly designed and implemented, threads give you more speed because there is no process-level context switching in a multi-threaded application.
Threads are very fast to start and terminate.
Some applications that use threading are: MySQL (3.23 and later), Firebird, and Apache 2.
FAQs:
Q: Which is better, forking or threading?
Ans: That depends on a lot of factors. Forking is more heavy-weight than threading and has higher startup and shutdown costs. Inter-process communication (IPC) is also harder and slower than inter-thread communication; threads really win the race when it comes to communication. Conversely, if a thread crashes it takes down all the other threads in the process, and if a thread has a buffer overrun it opens up a security hole in all of the threads. Threads were conceived as lightweight processes that share the same address space with the parent process and need only a reduced context switch, which makes switching between them more efficient.
Q: How do forking and threading compare in performance?
Ans: That depends entirely on what you are looking for. Still, to answer: on a contemporary Linux (2.6.x) there is not much difference between the cost of a context switch between processes and between threads (the MMU address-space switch is the only extra work for a process). There is, however, the issue of the shared address space: a faulty pointer in one thread can corrupt memory of the parent process or of another thread within the same address space.
Q: Which parts of a program should be threaded?
Ans: If you are a programmer and would like to take advantage of multithreading, the natural question is which parts of the program should or should not be threaded. Here are a few rules of thumb (if you say "yes" to these, have fun!):
Are there groups of lengthy operations that don’t necessarily depend on other processing (like
painting a window, printing a document, responding to a mouse-click, calculating a spreadsheet
column, signal handling, etc.)?
Will there be few locks on data (the amount of shared data is identifiable and “small”)?
Are you prepared to worry about locking (mutually excluding data regions from other threads), deadlocks (a condition where two contexts of execution each hold a lock the other is trying to get) and race conditions (a nasty, intractable class of bug where data is not locked properly and gets corrupted through interleaved reads and writes)?
Could the task be broken into various “responsibilities”? E.g. Could one thread handle the signals,
another handle GUI stuff, etc.?
Conclusions:
1. Whether to use threading or forking depends entirely on the requirements of your application.
2. Threads are more powerful than events, but that power is not always needed.
3. Threads are much harder to program correctly than forking, so they are best left to experienced developers.
4. Use threads mostly for performance-critical applications.
References:
1. http://en.wikipedia.org/wiki/Fork_(operating_system)
2. http://tldp.org/FAQ/Threads-FAQ/Comparison.html
3. http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html
4. http://linas.org/linux/threads-faq.html
1. areader says:
Nice and clear, indeed! Well done. Many a geek has tried to explain the difference between forks and
threads, and lost us poor readers in jargon-land.
(You have some wayward apostrophes: “it’s own” should be “its own.” Possessive, not a contraction
of “it is.” “FAQs” not “FAQ’s”. Plural, again not a contraction. I know, English is harder than
threading and forking )
2. Napster says:
@areader: Thanks for the inputs. Surely English is way tougher than forking/threading :)
Using threads will not make your program go faster. Use threads when you need asynchronous activities done, not for speed.
See Tridge's software engineering talk "threads are evil" for details.
Jeremy.
o Napster says:
@Jeremy:
Everyone has different views on everything. Threads do help us achieve performance, and I gave some pretty good reasons for that in the post.
Threading is much better than forking in terms of speed and resources, but at the cost of much more complexity, even beyond the obvious (like race conditions).
If, in a perfect world, everything were programmed using threading, our systems would probably run 20% faster using 20% less memory.
But humans tend to take the path of least resistance, which means letting the machine do all the forking and such, to avoid sweating for hours or paying more money to developers who know how to make it all work without creating security holes and crashes.
5. renoX says:
So to a developer my advice would be: use processes, they are more robust.
If your benchmarks show that you're using too much memory or copying too much memory, use shared memory.
If your benchmarks show that it's still not good enough, *then and only then* use threads.
A good example of incorrect usage of threads is Firefox: Chrome has shown that processes allow much better resource management, security, etc., so Firefox is changing its architecture to use processes now.
The debate between heavyweight and lightweight processes is much more subtle, and does not boil down to "are you an expert developer who needs to create high-performance software? If so, use threads."
As with any programming decision, there are many factors to consider, and also other options outside
of forking / threading.
For example, although you mention running on multiple cores, what about running on multiple
machines? Without specialized libraries and compilers, your software won’t scale to a distributed
environment if you assume threading inside a single process is the right model for you.
Moreover, there seems to be an implicit focus on monolithic software or “the kitchen sink”
development. On the other hand, in the views of many developers, forking processes or creating
threads is rarely needed. Instead, many developers prefer creating small, reusable components which
can be streamed together or communicate via sockets. This allows the user to "inject" the concurrency configuration which suits her, rather than relying on some pre-defined multi-processing / multi-threading model dictated by the developer.
Finally, there are alternatives to the process and thread models. Some programming languages, for
example, use event models to achieve concurrency. These models do not always scale to multiple
cores as easily as threading or multi-processing. But they usually make dealing with asynchronous
occurrences (such as interaction with users or devices) much simpler to program.
While I have the utmost respect for Mr. Allison, I disagree with his statement “Use threads when you
need asynchronous activities done.” Personally, I would much rather use an event model! :) “On
event X do Y.” Or “after time T do Z.” And so on.
But then I must confess a love for much-maligned / outmoded languages (HTML/JavaScript, Tcl). :)
Johann Tienhaara
8. Kevin says:
While I do not know much about forking, I have used threading in my application.
The application was an MP4 player. After parsing the .mp4 file, I had an audio component and a
video component. To play these in sync, I used one thread for the audio, one for the video and a third
one to keep them in sync.
I suppose you use threading when you want to perform independent tasks in parallel and also want
these tasks to communicate with each other.
9. Twylite says:
Forking (especially with copy-on-write) is not supported on Windows. Cygwin has managed to
create a slow & kludgy emulation of fork().
Threads are supported on POSIX and Win32 platforms, i.e. Windows, *nix, and Mac OS X.
Many languages that run in VMs (e.g. Java, .NET, a number of "scripting" languages) have significantly better support for threading than for forking.
1) Your comment that locks aren't held across forks is wrong. The answer depends on the locking mechanism, and may depend on the OS; the behavior of such locks isn't necessarily portable.
2) You missed what I think is the most interesting question: what happens if a threaded application forks (say, to launch an external command)? Can a thread running in the new child interfere with a thread in the parent by repeating some external change?
o Napster says:
Regarding point 1, that's right; I just tried to keep this generic rather than specific.
Point 2 is something I really missed, but I never got time to update my blog. I will try to update it as soon as I can with that info.
Thanks, Mike.
Note that threaded code is *not* always faster than forking. If your code is threaded, then everything is shared, and you have to deal with locking and unlocking everything.
If you fork, then you can choose what gets shared, and you don't have to deal with locking/unlocking anything else.
Locking/unlocking can get expensive, especially if there is contention. You might be better off with one global lock, but then you aren't really getting concurrent operation, so processes would be better. An example of this is the Python GIL: someone actually patched Python to use fine-grained locking instead of the GIL, and the interpreter ran over 4 times slower.
Traditionally, a thread was just CPU state (registers and some other minimal state), with the process containing the rest (data, stack, I/O, signals). This lends itself to very fast switching, but it causes basic problems (e.g. what do "fork()" or "execve()" calls mean when executed by a thread?).
Consider Linux threads a superset of this functionality: they can still switch fast and share process parts, but they can also identify which parts get shared, and they have no problems with execve() calls.
There are four flags that determine the level of sharing: CLONE_VM, CLONE_FS, CLONE_FILES and CLONE_SIGHAND.
There has been a lot of talk about "clone()". The (low-level) system call clone() is an extension of fork(); in fact, clone() with no sharing flags behaves like fork(). But with the flags above, any combination of the VM, filesystem, open files, signal handlers and process ID may be shared.