Java’s memory management is primarily handled by the JVM through a combination of automatic memory
allocation, garbage collection, and memory optimization strategies.
# Garbage Collection:
- Java uses automatic garbage collection to reclaim the memory occupied by objects that are
no longer reachable or in use.
- The JVM periodically runs the garbage collector to identify and remove objects that are no longer
referenced by any active part of the program.
# Memory Leaks:
- While Java manages memory automatically, improper handling of resources (like not closing I/O
streams) can lead to memory leaks.
- Memory leaks occur when objects are unintentionally kept in memory due to lingering
references, preventing the garbage collector from reclaiming them.
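A minimal sketch of the second point, with hypothetical class and field names: a static collection keeps every added object reachable, so the garbage collector can never reclaim it.
class LeakExample {
    // The static list lives as long as the class is loaded, so every object
    // added here stays reachable and is never garbage collected.
    private static final java.util.List<byte[]> CACHE = new java.util.ArrayList<>();

    static void handleRequest() {
        byte[] buffer = new byte[1024 * 1024];
        CACHE.add(buffer); // lingering reference: the buffer is never removed
    }
}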
** Note: Though the method area is logically a part of the heap, it may or may not be garbage collected, even
though garbage collection is compulsory in the heap area.
* Native method stacks: Also called C stacks, native method stacks are not written in the Java language.
This memory is allocated for each thread when it is created, and it can be of fixed or dynamic size. It
usually stores methods written in native languages such as C and C++. Methods compiled by the JIT
compiler are also stored in this stack area.
* Program counter (PC) registers: Each JVM thread that carries out the task of a specific method has a
program counter register associated with it. For a non-native method, the PC register stores the
address of the JVM instruction currently being executed, whereas for a native method the value of the
program counter is undefined. The PC register is capable of storing the return address or a native pointer
on some specific platforms.
A stack is created at the same time as its thread and is used to store data and partial
results that are needed while returning values from methods and performing dynamic linking. Stacks can
be of either fixed or dynamic size, and the size of a stack can be chosen independently when it is created. The
memory for a stack need not be contiguous.
1. It grows and shrinks as new methods are called and returned, respectively.
2. Variables inside the stack exist only as long as the method that created them is running.
3. It’s automatically allocated and deallocated when the method finishes execution.
1. Young Generation – This is where all new objects are allocated and aged. A garbage collection run
occurs when this area fills up; this collection is called “Minor GC”. The young generation is further
divided into three parts – the Eden space and two survivor spaces.
- Most newly created objects are allocated in the Eden space. When the Eden space fills up
with objects, a Minor GC is performed and all surviving objects are moved to one of the
survivor spaces.
- Minor GC also checks the occupied survivor space and moves its survivors to the other survivor space, so at
any time one of the survivor spaces is always empty.
- Objects that survive many cycles of Minor GC are moved to the old generation
memory space. This is usually done by setting a threshold on the age of young-generation objects
before they are promoted to the old generation.
2. Old or Tenured Generation – This is where long-surviving objects are stored. When objects are
stored in the young generation, a threshold for the object’s age is set, and when that threshold is
reached, the object is moved to the old generation. Garbage collection is usually performed when
this memory gets full; this collection is called Major GC and usually takes longer.
3. Permanent Generation – This consists of JVM metadata for the runtime classes and application
methods. PermGen is populated by JVM at runtime based on the classes used by the application.
PermGen also contains Java SE library classes and methods.
In older versions of Java (up to Java 8), this region was used to store class metadata, constants,
and interned strings. In Java 8 and later, class metadata is stored in the native memory area known as
Metaspace.
4. Heap Size:
- The heap size can be configured using JVM command-line options (-Xms and -Xmx) to specify
the initial and maximum heap size.
- The JVM manages the heap size dynamically based on the available system resources and the
application's memory requirements.
We can always manipulate the size of heap memory as per our requirement.
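For illustration, a program can be started with explicit limits, e.g. java -Xms256m -Xmx1024m MyApp (the class name MyApp is hypothetical), and the resulting limits can be inspected from code. A minimal sketch:
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Values reflect the -Xms/-Xmx settings and the current allocation.
        System.out.println("Max heap  : " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("Total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("Free heap : " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}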
1. It’s accessed via complex memory management techniques that include the Young Generation,
Old or Tenured Generation, and Permanent Generation.
4. This memory, in contrast to the stack, isn’t automatically deallocated. It needs the Garbage Collector to
free up unused objects so as to keep memory usage efficient.
5. Unlike the stack, the heap isn’t thread-safe and needs to be guarded by properly synchronizing the
code.
2. The garbage collection process causes the rest of the processes or threads to be paused and is thus
costly in nature. Such pauses are unacceptable for many clients, but they can be reduced by applying several
garbage collector algorithms. This process of choosing and tuning algorithms is often termed Garbage
Collector tuning and is important for improving the performance of a program.
3. Another solution is generational garbage collection, which adds an age field to the objects that
are assigned memory. As more and more objects are created, the list of garbage grows, thereby
increasing the garbage collection time. On the basis of how many collection cycles the objects have survived,
objects are grouped and allocated an ‘age’ accordingly. This way the garbage collection work gets
distributed.
4. Most modern JVM garbage collectors are generational, and hence close to optimal in this respect.
** Note: System.gc() and Runtime.gc() are methods that explicitly request garbage collection from the JVM,
but calling them does not ensure garbage collection, as the final decision rests with the JVM.
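A minimal sketch of such a request (the JVM may ignore it entirely):
public class GcRequest {
    public static void main(String[] args) {
        Object data = new byte[10 * 1024 * 1024];
        data = null;               // the array becomes unreachable
        System.gc();               // only a request; collection is not guaranteed
        Runtime.getRuntime().gc(); // equivalent request via the Runtime instance
    }
}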
Knowing how a program and its data are stored and organized is essential, as it helps when the programmer
intends to write code that is optimized in terms of resources and their consumption. It also helps in finding
memory leaks or inconsistencies, and in debugging memory-related errors. However, the memory
management concept is extremely vast, so one should study it as much as possible to improve one’s
knowledge of it.
* Advantages of Heap Memory:
- Flexibility: Heap memory allows for dynamic allocation and de-allocation of memory, enabling the
creation and destruction of objects as needed.
- Automatic Management: The JVM's garbage collector automates memory management by reclaiming
unused memory and preventing memory leaks.
- Supports Dynamic Data Structures: Heap memory supports dynamic data structures like linked lists,
trees, and dynamic arrays where memory allocation is not known at compile time.
- Tune Garbage Collection: Understand garbage collection algorithms and tune JVM parameters to
optimize garbage collection performance based on the application's memory requirements.
- Avoid Memory Leaks: Be mindful of retaining references to objects longer than necessary to prevent
memory leaks and excessive heap memory consumption.
In summary, heap memory plays a crucial role in Java's memory management model, providing a
flexible and efficient mechanism for allocating and managing memory dynamically during program
execution. Understanding heap memory and its management is essential for writing efficient and
scalable Java applications.
1. Automatic Memory Management: Java provides automatic memory management through its garbage
collector, which frees developers from the burden of manual memory allocation and deallocation.
However, understanding how the memory management works is still important to write efficient and
high-performing Java applications.
2. Preventing Memory Leaks: Even though Java has automatic memory management, it is still possible to
create memory leaks if objects are not properly referenced. Knowing how memory management works
helps developers identify and prevent such memory leaks.
3. Optimizing Performance: Understanding Java's memory structure (heap, stack, method area, etc.) and
the garbage collection process allows developers to optimize their code and application design to reduce
the load on the garbage collector, leading to better performance.
4. Debugging Memory-Related Issues: When Java applications encounter issues related to memory, such as
OutOfMemoryError, understanding memory management is crucial for effectively debugging and
resolving these problems.
5. Tuning the JVM: Advanced Java developers may need to tune the JVM's memory management
parameters, such as heap size, garbage collection algorithm, and more, to achieve the desired
performance characteristics of their applications. This requires a deep understanding of Java's memory
management.
6. Writing Low-Latency Applications: In the context of low-latency systems, where performance is critical, a
deep understanding of Java's memory management is essential to minimize the impact of garbage
collection and other memory-related operations.
7. Efficient Resource Utilization: Proper memory management ensures that the available memory resources
are utilized efficiently, preventing unnecessary memory consumption and wastage.
8. Scalability and Concurrency: Understanding memory management is crucial when building scalable and
concurrent Java applications, as it helps developers manage shared memory access and avoid race
conditions or other concurrency-related issues.
9. Compliance with Memory Constraints: In certain environments, such as embedded systems or mobile
devices, Java applications may need to operate within strict memory constraints. Knowing how to
manage memory effectively is crucial in such scenarios.
10. Interoperability with Native Code: When Java applications need to interact with native code (e.g.,
through the Java Native Interface, or JNI), understanding memory management becomes essential to
ensure proper data exchange and avoid memory-related bugs.
11. Predictable Behavior: Comprehending Java's memory management model helps developers write more
predictable and deterministic code, as they can anticipate and control the behavior of their applications
concerning memory usage and garbage collection.
12. Reduced Cognitive Overhead: By understanding memory management, Java developers can focus more
on the core functionality of their applications, rather than spending time debugging and troubleshooting
memory-related issues.
13. Compliance with Best Practices: Adhering to memory management best practices, such as proper object
lifecycle management and efficient memory allocation, is essential for writing high-quality, maintainable,
and secure Java code.
14. Improved Testability and Observability: Knowledge of memory management aids in designing testable
and observable Java applications, as developers can better understand and control the memory-related
aspects of their systems.
Overall, a deep understanding of Java's memory management is a crucial skill for Java developers, as it enables
them to write efficient, scalable, and robust applications that make the most of the language's automatic
memory management capabilities.
5. String Concatenation: String concatenation in Java can be performed using the + operator or the
concat() method. Concatenation of compile-time constant strings is folded by the compiler, while
concatenation involving variables is typically compiled to use StringBuilder under the hood (or an
invokedynamic-based strategy since Java 9) for efficiency; a short sketch follows after this list.
6. Unicode Support: Java String class supports Unicode characters, allowing representation of text
in different languages and scripts.
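A small sketch of the concatenation options described in point 5 above (values are arbitrary):
public class ConcatDemo {
    public static void main(String[] args) {
        String a = "Geeks";
        String b = "forGeeks";
        String s1 = a + b;                  // compiled to an optimized concatenation
        String s2 = a.concat(b);            // explicit concat() call
        String s3 = new StringBuilder(a)    // roughly what the compiler does for '+'
                        .append(b)
                        .toString();
        System.out.println(s1 + " " + s2 + " " + s3);
    }
}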
1. String(byte[] byte_arr): Construct a new String by decoding the byte array. It uses the platform’s
default character set for decoding.
Example:
2. String(byte[] byte_arr, Charset char_set): Construct a new String by decoding the byte array. It uses
the char_set for decoding.
Example:
Charset cs = Charset.defaultCharset();
3. String(byte[] byte_arr, String char_set_name): Construct a new String by decoding the byte array. It
uses char_set_name for decoding. It looks similar to the constructor above, but it takes a String
containing the charset name as a parameter, while the constructor above takes a Charset object.
Example:
4. String(byte[] byte_arr, int start_index, int length): Construct a new string from the bytes array
depending on the start_index(Starting location) and length(number of characters from starting location).
Example:
5. String(byte[] byte_arr, int start_index, int length, Charset char_set): Construct a new string from the
bytes array depending on the start_index (starting location) and length (number of characters from the
starting location). Uses char_set for decoding.
Example:
Charset cs = Charset.defaultCharset();
6. String(byte[] byte_arr, int start_index, int length, String char_set_name): Construct a new string
from the bytes array depending on the start_index(Starting location) and length(number of characters
from starting location).Uses char_set_name for decoding.
Example:
7. String(char[] char_arr): Allocates a new String from the given Character array
Example:
8. String(char[] char_array, int start_index, int count): Allocates a String from a given character array but
choose count characters from the start_index.
Example:
9. String(int[] uni_code_points, int offset, int count): Allocates a String from a uni_code_array but choose
count characters from the start_index.
Example:
10. String(StringBuffer s_buffer): Allocates a new string from the string in s_buffer
Example:
11. String(StringBuilder s_builder): Allocates a new string from the string in s_builder
Example:
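The “Example:” placeholders above are empty in this copy; the sketch below, using arbitrary sample data, shows several of the listed constructors in one place:
import java.nio.charset.StandardCharsets;

public class StringConstructors {
    public static void main(String[] args) throws Exception {
        byte[] bytes = { 71, 101, 101, 107, 115 };          // "Geeks" in ASCII
        char[] chars = { 'G', 'e', 'e', 'k', 's' };

        String s1 = new String(bytes);                      // platform default charset
        String s2 = new String(bytes, StandardCharsets.UTF_8); // Charset overload
        String s3 = new String(bytes, "UTF-8");             // charset-name overload
        String s4 = new String(bytes, 1, 3);                // "eek": offset 1, length 3
        String s5 = new String(chars);                      // from a char array
        String s6 = new String(chars, 1, 3);                // "eek" from the char array
        String s7 = new String(new StringBuffer("Geeks"));  // from a StringBuffer
        String s8 = new String(new StringBuilder("Geeks")); // from a StringBuilder

        System.out.println(s1 + " " + s2 + " " + s3 + " " + s4 + " "
                + s5 + " " + s6 + " " + s7 + " " + s8);
    }
}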
"GeeksforGeeks".length(); // returns 13
3. String substring (int i): Return the substring from the ith index character to end.
4. String substring (int i, int j): Returns the substring from i to j-1 index.
5. String concat(String str): Concatenates specified string to the end of this string.
String s1 = "Geeks";
String s2 = "forGeeks";
6. int indexOf (String s): Returns the index within the string of the first occurrence of the specified string.
If String s is not present in input string then -1 is returned as the default value.
1. String s = "Learn Share Learn";
7. int indexOf (String s, int i): Returns the index within the string of the first occurrence of the specified
string, starting the search at index i.
8. int lastIndexOf(String s): Returns the index within the string of the last occurrence of the specified
string. If String s is not present in the input string then -1 is returned as the default value.
9. boolean equals( Object otherObj): Compares this string to the specified object.
10. boolean equalsIgnoreCase (String anotherString): Compares string to another string, ignoring case
considerations.
// strings to be compared
12. int compareToIgnoreCase(String anotherString): Compares two strings lexicographically, ignoring case
considerations.
// strings to be compared
Note: In this case, it will not consider case of a letter (it will ignore whether it is uppercase or
lowercase).
13. String toLowerCase(): Converts all the characters in the String to lower case.
14. String toUpperCase(): Converts all the characters in the String to upper case.
15. String trim(): Returns the copy of the String, by removing whitespaces at both ends. It does not affect
whitespaces in the middle.
16. String replace(char oldChar, char newChar): Returns a new string resulting from replacing all
occurrences of oldChar in this string with newChar.
String s1 = "feeksforfeeks";
17. boolean contains(CharSequence s): Returns true if the string contains the given sequence of characters.
String s1="geeksforgeeks";
String s2="geeks";
String s1="geeksforgeeks";
char []ch=s1.toCharArray(); // returns [ 'g', 'e' , 'e' , 'k' , 's' , 'f', 'o', 'r' , 'g' , 'e' , 'e' , 'k' ,'s' ]
19. boolean startsWith(String prefix): Returns true if the string starts with the given prefix.
String s1="geeksforgeeks";
String s2="geeks";
Ans: As most string objects are stored in the String Constant Pool (SCP), a single string object may be referenced
by many references. If any one of those references could change the object, it would affect all the other
references as well; to prevent this, Java has made the String class immutable.
The key differences between final and immutable are:
- Final: Relates to variables, methods, or classes that cannot be changed or overridden after initialization
or declaration.
- Immutable: Relates to objects whose state cannot be modified after construction, typically achieved by
making all fields final and ensuring no mutation methods are provided.
While final emphasizes immutability at the variable or method level, "immutable" describes the characteristic of
an entire object's state being unchangeable once created. Immutable objects provide benefits such as thread
safety, simplification of code, and improved reliability in concurrent environments.
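A minimal sketch of an immutable class (the name and fields are hypothetical), illustrating the pattern described above: all fields final, no setters, and the class declared final so it cannot be subclassed.
final class Money {
    private final String currency;  // final field, set once in the constructor
    private final long amount;

    Money(String currency, long amount) {
        this.currency = currency;
        this.amount = amount;
    }

    // Only accessors are provided; there is no way to mutate the state.
    String getCurrency() { return currency; }
    long getAmount() { return amount; }

    // "Modification" returns a new object instead of changing this one.
    Money add(long delta) { return new Money(currency, amount + delta); }
}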
StringBuffer and StringBuilder in Java:
StringBuffer Class: StringBuffer is a peer class of String that provides much of the functionality of
strings. The string represents fixed-length, immutable character sequences while StringBuffer represents
growable and writable character sequences. StringBuffer may have characters and substrings inserted in
the middle or appended to the end. It will automatically grow to make room for such additions and often
has more characters preallocated than are actually needed, to allow room for growth. In order to create a
string buffer, an object of the StringBuffer class needs to be created; i.e., if we wish to create a new string
buffer named str, then:
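For instance (a minimal sketch):
StringBuffer str = new StringBuffer();                  // empty buffer, default capacity 16
StringBuffer str2 = new StringBuffer("GeeksforGeeks");  // buffer initialized with a string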
StringBuilder Class: Similar to StringBuffer, the StringBuilder in Java represents a mutable sequence of
characters. Since the String Class in Java creates an immutable sequence of characters, the StringBuilder
class provides an alternative to String Class, as it creates a mutable sequence of characters. The function
of StringBuilder is very much similar to the StringBuffer class, as both of them provide an alternative to
String Class by making a mutable sequence of characters. Similar to StringBuffer, in order to create a new
string with the name str, we need to create an object of StringBuilder, (i.e.):
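For instance (a minimal sketch):
StringBuilder str = new StringBuilder();                  // empty builder
StringBuilder str2 = new StringBuilder("GeeksforGeeks");  // builder initialized with a string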
|          | String                                  | StringBuilder                                    | StringBuffer                                 |
|----------|-----------------------------------------|--------------------------------------------------|----------------------------------------------|
| Usage    | This is used when we want immutability. | This is used when thread safety is not required. | This is used when thread safety is required. |
Methods in the StringBuffer class: The methods in the StringBuffer and StringBuilder classes are almost
identical to each other; they include the following:
- capacity(): the total allocated capacity can be found with the capacity() method.
- charAt(): returns the char value in this sequence at the specified index.
- delete(): deletes a sequence of characters from the invoking object.
Similarly, there are many methods in these (can refer to: https://www.geeksforgeeks.org/stringbuffer-
class-in-java/)
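A short sketch exercising the methods listed above, with expected output noted in comments:
public class BufferDemo {
    public static void main(String[] args) {
        StringBuffer sb = new StringBuffer("GeeksforGeeks");
        System.out.println(sb.capacity()); // 29: 13 characters + 16 default extra
        System.out.println(sb.charAt(0));  // 'G'
        sb.delete(5, 8);                   // removes "for"
        System.out.println(sb);            // "GeeksGeeks"
        sb.append("!");
        sb.insert(0, ">> ");
        System.out.println(sb);            // ">> GeeksGeeks!"
    }
}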
# HashCode:
In Java, the hashCode() method is used to generate a hash code for an object. The hash code is an
integer value that is used to support hash tables such as those provided by HashMap, HashSet, and Hashtable.
Understanding how hashCode() works and how to override it properly is crucial for ensuring the correct behavior
of hash-based collections.
• The hashCode() method must be consistent with the equals() method. This means that if two objects are
equal according to the equals() method, they must have the same hash code.
• The converse is not necessarily true: two objects having the same hash code do not have to be equal
according to the equals() method (though this will lead to hash collisions).
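A minimal sketch of a class (a hypothetical Point) that keeps equals() and hashCode() consistent, as required above:
import java.util.Objects;

final class Point {
    private final int x;
    private final int y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        // Equal points produce equal hash codes, keeping the contract.
        return Objects.hash(x, y);
    }
}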
class Javatpoint{
    private int i;
    Javatpoint(){}
    Javatpoint(int i){
        this.i=i;
    }
    public int getValue(){
        return i;
    }
    public void setValue(int i){
        this.i=i;
    }
    @Override
    public String toString() {
        return Integer.toString(i);
    }
}
class TestJavatpoint{
    public static void main(String[] args){
        Javatpoint j=new Javatpoint(40);
        System.out.println(j); // prints 40, via the overridden toString()
    }
}
• Representing primitive data types as objects: Wrapper classes provide a way to use primitive data types
as objects, which is necessary in certain situations where only objects are accepted, such as in
collections like ArrayList and HashMap.
• Storing null values: Primitive data types in Java cannot store null values, but wrapper classes can be set
to null, which can be useful in representing missing or unknown values.
• Method parameters: Wrapper classes can be used as method parameters, allowing methods to accept
objects instead of primitive data types.
• Autoboxing and unboxing: Wrapper classes provide automatic conversions between primitive data types
and their corresponding wrapper objects through a feature called autoboxing and unboxing.
• Converting between primitive data types and strings: Wrapper classes provide methods for converting
primitive values to and from strings, which can be useful for reading data from a file or user input.
• Comparing values: Wrapper classes provide methods for comparing values, such as equals() and
compareTo(), which can be useful for sorting and searching.
• Performing mathematical operations: Wrapper classes provide methods for performing mathematical
operations, such as addition, subtraction, and multiplication, on primitive values.
• Provide an Object representation of primitive data types: Wrapper classes provide an object
representation of primitive data types, which allows developers to use primitives as objects in
their code.
• Facilitate type conversion: Wrapper classes can be used to convert primitive data types to and
from String representation, making it easy to store primitives in collections or pass them as
method arguments.
• Help in implementing Autoboxing and Unboxing: Java provides a feature called Autoboxing and
Unboxing, which automatically converts between primitive and wrapper class objects,
simplifying the code and reducing the number of explicit type conversions required.
• Performance Overhead: The process of converting primitive data types to and from Wrapper
classes can result in performance overhead, especially in large-scale applications where this
conversion takes place frequently.
• Increased Memory Usage: Wrapper classes consume more memory than primitive data types,
as they are objects and contain additional information like type information, methods, etc.
• Immutable: Wrapper classes are immutable, meaning their values cannot be changed once they
are created. This can be limiting in certain situations where it is necessary to modify the value of
a primitive data type.
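A small sketch of the autoboxing/unboxing and conversion behaviour described in the bullets above:
import java.util.ArrayList;
import java.util.List;

public class WrapperDemo {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        list.add(5);                    // autoboxing: int -> Integer
        int first = list.get(0);        // unboxing: Integer -> int

        Integer maybe = null;           // wrappers can hold null, primitives cannot
        int parsed = Integer.parseInt("42");     // String -> primitive
        String text = Integer.toString(parsed);  // primitive -> String

        System.out.println(first + " " + maybe + " " + parsed + " " + text);
    }
}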
* Arrays in Java => In Java, all arrays are dynamically allocated. Arrays may be stored in contiguous
memory [consecutive memory locations]. Since arrays are objects in Java, we can find their length using the
object property length. This is different from C/C++, where we find length using sizeof. A Java array variable can
also be declared like other variables with [] after the data type. The variables in the array are ordered, and each
has an index beginning with 0. Java array can also be used as a static field, a local variable, or a method
parameter. An array can contain primitives (int, char, etc.) and object (or non-primitive) references of a class
depending on the definition of the array. In the case of primitive data types, the actual values might be stored in
contiguous memory locations (JVM does not guarantee this behavior). In the case of class objects, the actual
objects are stored in a heap segment.
Array Literal in Java => In a situation where the size of the array and variables of the array are already known,
array literals can be used.
// Declaring array literal
int[] intArray = new int[]{ 1,2,3,4,5,6,7,8,9,10 };
The length of this array determines the length of the created array. There is no need to write the new int[] part in
the latest versions of Java.
Accessing Java Array Elements using for Loop => Each element in the array is accessed via its index. The index
begins with 0 and ends at (total array size)-1. All the elements of array can be accessed using Java for Loop.
// accessing the elements of the specified array
for (int i = 0; i < arr.length; i++){
System.out.println("Element at index " + i + " : "+ arr[i]);
}
*Exceptions in Java:
Exception Handling in Java is one of the effective means to handle runtime errors so that the regular flow
of the application can be preserved. Java Exception Handling is a mechanism to handle runtime errors such as
ClassNotFoundException, IOException, SQLException, RemoteException, etc.
Errors represent irrecoverable conditions such as Java virtual machine (JVM) running out of memory, memory
leaks, stack overflow errors, library incompatibility, infinite recursion, etc. Errors are usually beyond the control of
the programmer, and we should not try to handle errors.
Error: An Error indicates a serious problem that a reasonable application should not try to catch.
Exception: Exception indicates conditions that a reasonable application might try to catch.
Exception Hierarchy:
All exception and error types are subclasses of the class Throwable, which is the base class of the hierarchy.
One branch is headed by Exception. This class is used for exceptional conditions that user programs should
catch; NullPointerException is an example of such an exception. Another branch, Error, is used by the Java
run-time system (JVM) to indicate errors having to do with the run-time environment itself (JRE);
StackOverflowError is an example of such an error.
Types of Exceptions:
Java defines several types of exceptions that relate to its various class libraries. Java also allows users to
define their own exceptions.
Built-in exceptions are the exceptions that are available in Java libraries. These exceptions are suitable to
explain certain error situations.
Checked Exceptions: Checked exceptions are called compile-time exceptions because these exceptions are
checked at compile-time by the compiler.
Unchecked Exceptions: The unchecked exceptions are just opposite to the checked exceptions. The compiler will
not check these exceptions at compile time. In simple words, if a program throws an unchecked exception, and
even if we didn’t handle or declare it, the program would not give a compilation error.
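A small sketch of the distinction, with FileReader/IOException illustrating a checked exception and integer division by zero an unchecked one:
import java.io.FileReader;
import java.io.IOException;

public class CheckedVsUnchecked {
    // Checked: the compiler forces us to handle or declare IOException.
    static void readFile(String path) throws IOException {
        try (FileReader reader = new FileReader(path)) {
            System.out.println(reader.read());
        }
    }

    public static void main(String[] args) {
        // Unchecked: this compiles fine, but throws ArithmeticException at runtime.
        int denominator = 0;
        System.out.println(10 / denominator);
    }
}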
Sometimes, the built-in exceptions in Java are not able to describe a certain situation. In such cases, users can
also create exceptions, which are called ‘user-defined Exceptions’.
1. printStackTrace(): This method prints exception information in the format of the Name of the exception:
description of the exception, stack trace.
2. toString(): The toString() method prints exception information in the format of the
Name of the exception: description of the exception.
3. getMessage(): The getMessage() method prints only the description of the exception.
How Does JVM Handle an Exception?
Default Exception Handling: Whenever inside a method, if an exception has occurred, the method creates an
Object known as an Exception Object and hands it off to the run-time system(JVM). The exception object contains
the name and description of the exception and the current state of the program where the exception has occurred.
Creating the Exception Object and handling it in the run-time system is called throwing an Exception. There might
be a list of the methods that had been called to get to the method where an exception occurred. This ordered list
of methods is called Call Stack. Now the following procedure will happen.
• The run-time system searches the call stack to find the method that contains a block of code that can
handle the occurred exception. The block of the code is called an Exception handler.
• The run-time system starts searching from the method in which the exception occurred and proceeds
through the call stack in the reverse order in which methods were called.
• If it finds an appropriate handler, then it passes the occurred exception to it. An appropriate handler
means the type of exception object thrown matches the type of exception object it can handle.
• If the run-time system searches all the methods on the call stack and cannot find an appropriate
handler, then it hands over the Exception Object to the default exception handler, which is part of the
run-time system. This handler prints the exception information in the following format and terminates
the program abnormally.
Look at the below diagram to understand the flow of the call stack.
Java Exception Keywords:
| Keyword | Description |
|---------|-------------|
| try     | Used to specify a block where we should place exception code. The try block cannot be used alone; it must be followed by either catch or finally. |
| catch   | Used to handle the exception. It must be preceded by a try block, which means we can't use a catch block alone. It can be followed by a finally block later. |
| finally | Used to execute the necessary code of the program. It is executed whether an exception is handled or not. |
| throw   | Used to throw an exception explicitly. |
| throws  | Used to declare exceptions. It specifies that an exception may occur in the method. It doesn't throw an exception and is always used with the method signature. |
Syntax of try-catch block:
try{
}catch(Exception_class_Name ref){}
Syntax of try-finally block:
try{
}finally{}
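A minimal runnable sketch combining the keywords above:
public class TryCatchFinallyDemo {
    public static void main(String[] args) {
        try {
            int[] numbers = new int[2];
            numbers[5] = 1;                      // throws ArrayIndexOutOfBoundsException
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("Handled: " + e.getMessage());
        } finally {
            System.out.println("finally always runs"); // executed either way
        }
        System.out.println("normal flow continues");
    }
}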
The JVM first checks whether the exception is handled or not. If the exception is not handled, the JVM provides
a default exception handler that performs the following tasks:
- Prints out the exception description.
- Prints the stack trace (the hierarchy of method calls where the exception occurred).
- Causes the program to terminate.
But if the application programmer handles the exception, the normal flow of the application is maintained,
i.e., the rest of the code is executed.
ArithmeticException:
It is thrown when an exceptional arithmetic condition occurs, for example an integer division by zero.
NullPointerException:
It is a runtime exception. It is thrown when a reference variable holds a null value and the
program tries to use that null reference as if it referred to an object.
ArrayIndexOutOfBoundsException:
It occurs when we access an array with an invalid index. This means that the index value is either
negative or greater than or equal to the array’s length.
NumberFormatException:
It is a type of unchecked exception that occurs when we are trying to convert a string to an int or other
numeric value. This exception is thrown in cases when it is not possible to convert a string to other numeric
types.
InputMismatchException:
It occurs when an input provided by the user is incorrect. The type of incorrect input can be out of range
or incorrect data type.
IllegalStateException:
It is a run time exception that occurs when a method of a code is triggered or invoked at the wrong time.
This exception is used to give a signal that the method is invoked at the wrong time.
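A short sketch that triggers and handles a few of the runtime exceptions described above:
public class CommonExceptionsDemo {
    public static void main(String[] args) {
        try {
            System.out.println(10 / 0);                  // ArithmeticException
        } catch (ArithmeticException e) {
            System.out.println("Arithmetic: " + e.getMessage());
        }
        try {
            String s = null;
            System.out.println(s.length());              // NullPointerException
        } catch (NullPointerException e) {
            System.out.println("Null reference used");
        }
        try {
            System.out.println(Integer.parseInt("abc")); // NumberFormatException
        } catch (NumberFormatException e) {
            System.out.println("Not a number: " + e.getMessage());
        }
    }
}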
| Basis of Comparison | Checked Exceptions | Unchecked Exceptions |
|---------------------|--------------------|----------------------|
| Compilation         | These exceptions are detected during the compilation of Java programs. | These exceptions occur at runtime of Java programs. |
| Compiler Checking   | Checked exceptions are checked by the Java compiler. | Unchecked exceptions are not checked by the Java compiler. |
| Exception Handling  | These exceptions can be handled during compilation time. | These exceptions cannot be handled at compilation time. |
| Examples            | IOException, FileNotFoundException, InterruptedException | ArithmeticException, InputMismatchException, NullPointerException |
Case 3: Return statement in try block and at end of method, but a statement after the return:
When you try to execute the preceding program, you will get an unreachable-code error. This
is because any statement placed after a return statement results in the compile-time error “Unreachable
code”.
Case 4: Return statement in try block and at end of method, but an exception occurred in the try block:
In the preceding code, an exception occurred in the try block and the control of execution was
transferred to the catch block to handle it. Because of the exception, the return statement in the try block
did not execute, and the return statement defined at the end of the method returned the value 20 to the
calling method.
Case 8: Return statement in catch block but exception occurred in try block.
Case 9: Return statement in try block and finally block:
In the preceding code, the finally block overrides the value returned by the try block. Therefore, this
returns the value 50, because the value returned by try has been overridden by finally (see the sketch below).
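Since the code for this case is not included in this copy, the sketch below reconstructs the Case 9 idea under that assumption; the values 10 and 50 are illustrative:
public class FinallyReturnDemo {
    @SuppressWarnings("finally")
    static int value() {
        try {
            return 10;   // evaluated, but the result is discarded
        } finally {
            return 50;   // this return wins; the method yields 50
        }
    }

    public static void main(String[] args) {
        System.out.println(value()); // prints 50
    }
}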
Case 11: Return statement in catch and finally blocks, but a statement after the finally block:
Here we'll get a compile-time error: Unreachable code.
JVM automatically throws system-generated exceptions; all such exceptions are called implicit
exceptions. If we want to throw an exception manually or explicitly, Java provides the throw keyword for this.
throw in Java is a keyword that is used to throw a built-in exception or a custom exception explicitly or
manually. Using the throw keyword, we can throw either checked or unchecked exceptions in Java programming.
When an exception is thrown in the try block, the throw keyword transfers the control of execution to the caller
by throwing an exception object.
Only one exception object can be thrown with the throw keyword at a time. The throw keyword can
be used inside a method or a static block, provided that exception handling is present.
throw exception_name;
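A minimal sketch of throwing an exception object explicitly with throw (the validation rule is illustrative):
public class ThrowDemo {
    static void validateAge(int age) {
        if (age < 18) {
            // Create the exception object and hand it to the runtime explicitly.
            throw new IllegalArgumentException("Not eligible: age below 18");
        }
        System.out.println("Eligible");
    }

    public static void main(String[] args) {
        try {
            validateAge(15);
        } catch (IllegalArgumentException e) {
            System.out.println("Caught: " + e.getMessage());
        }
    }
}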
Control flow of try-catch block with Java throw Statement:
When a throw statement is encountered in a program, the flow of execution of subsequent statements in the
try block stops immediately and a matching catch block is searched for. The nearest try block is inspected
to see if it has a catch block that matches the type of the exception. If a corresponding catch block is found, it is
executed; otherwise control is transferred to the next enclosing try statement, and so on up the call stack. If no
matching catch block is found, the JVM transfers the control of execution to the default exception handler,
which stops the normal flow of the program and displays the error message on the output screen.
1. The keyword throw is used to throw an exception explicitly, while the throws clause is used to declare an
exception.
2. throw is followed by an instance of an exception class, while throws is followed by the name(s) of exception
class(es).
3. We use the throw keyword inside a method body to throw an exception, while the throws clause is used in the
method signature.
4. With throw keyword, we cannot throw more than one exception at a time, while we can declare multiple
exceptions with throws.
User-defined exceptions in Java are those exceptions that are created by a programmer (or user) to meet
the specific requirements of the application. That’s why it is also known as user-defined exception. It is useful when
we want to properly handle the cases that are highly specific and unique to different applications.
For example:
1. In a banking application, if a customer’s age is lower than 18 years, the program throws a custom
exception indicating that the customer “needs to open a joint account”.
2. Voting age in India: If a person’s age entered is less than 18 years, the program throws “invalid age” as
a custom exception.
Actually, there are mainly two drawbacks of predefined exception handling mechanism. They are:
• Predefined exceptions of Java always generate the exception report in a predefined format.
• After generating the exception report, it immediately terminates the execution of the program.
The user-defined exception handling mechanism does not have the above two drawbacks. It can
generate the error message report in any special format that we prefer, and after the exception has been
handled, execution of the program can continue.
Step 1: User-defined exceptions can be created simply by extending the Exception class. This is done as:
class OwnException extends Exception
Step 2: Define a default (no-argument) constructor:
OwnException(){ }
Step 3: If you want to store exception details, define a parameterized constructor with a String parameter, call
the superclass (Exception) constructor from it, and pass the variable “str” along. This can be done as follows:
OwnException(String str)
{
    super(str); // Call the superclass (Exception) constructor and store the message "str" in it.
}
Step 4: In the last step, we need to create an object of the user-defined exception class and throw it using the
throw clause:
throw obj;
or, create and throw it in a single statement:
throw new OwnException("message");
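Putting the steps together, a minimal sketch (the class and message names are illustrative):
class OwnException extends Exception {
    OwnException() { }

    OwnException(String str) {
        super(str);   // store the detail message in the superclass
    }
}

public class UserDefinedDemo {
    static void checkVotingAge(int age) throws OwnException {
        if (age < 18) {
            throw new OwnException("invalid age: must be 18 or older to vote");
        }
        System.out.println("Eligible to vote");
    }

    public static void main(String[] args) {
        try {
            checkVotingAge(16);
        } catch (OwnException e) {
            System.out.println("Caught: " + e.getMessage());
        }
    }
}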
For example, let us assume that a, b, and c are objects of three different exception types A, B, and C, respectively.
The object a of type A causes an exception of type B to occur and an object of B type also causes an exception of
C type.
This process is called chaining of exceptions in Java, and the exceptions involved in it are called
chained exceptions. This feature helps the programmer to know when and where the actual cause of an
exception lies.
This form of constructor creates a new Throwable object with the specified cause. It takes only one
parameter, Throwable causeExc, which represents the exception that caused the current exception. If causeExc
is null, the detail message is null; otherwise the detail message is set to the string representation of the cause,
which contains the name of the cause’s class and its detail information.
1. toString(): This method returns an exception followed by a description of the exception. The general syntax is
as follows:
3. printStackTrace(): This method displays stack trace. It returns nothing. The general syntax is given below.
4. getCause(): The getCause() method returns the exception that caused the occurrence of current exception. If
there is no caused exception then null is returned. The syntax is as follows:
5. initCause(): The initCause() method joins “causeExc” with the invoking exception and returns a reference to
the exception.
6. fillInStackTrace(): This method returns a Throwable object that contains a completed stack trace. The object
can be rethrown. The basic syntax for fillInStackTrace() method is as below:
7. getStackTrace(): The getStackTrace() method returns an array that contains each element on the stack trace.
The element at index 0 represents the top of call stack and the last element represents the bottom of call stack.
The general syntax for this method is as follows:
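A minimal sketch of exception chaining using the cause-related methods listed above:
public class ChainedDemo {
    public static void main(String[] args) {
        try {
            try {
                throw new NullPointerException("original cause");
            } catch (NullPointerException cause) {
                // Wrap the low-level exception in a higher-level one.
                throw new RuntimeException("wrapper exception", cause);
            }
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());            // wrapper exception
            System.out.println(e.getCause().getMessage()); // original cause
        }
    }
}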
Errors in Java:
Errors in Java occur when a programmer violates the rules of Java programming language. It might be due
to programmer’s typing mistakes while developing a program. It may produce incorrect output or may terminate
the execution of the program abnormally. For example, if you use the right parenthesis in a Java program where a
right brace is needed, you have made a syntax error. You have violated the rules of Java language. Therefore, it is
important to detect and fix properly all errors occurring in a program so that the program will not terminate during
execution.
Types of Errors in Java Programming:
In Java, as in other programming languages, when we write a program for the first time, it usually contains errors.
We mainly divide these errors into three types: compile-time errors, runtime errors, and logical errors.
A program that contains runtime errors may produce wrong results due to wrong logic or may terminate
abnormally. These runtime errors are usually known as exceptions. For example, if a user inputs a value of string
type in a program, but the computer is expecting an integer value, a runtime error will be generated.
The most common runtime errors in Java programming language are as follows:
When such errors are encountered in a program, Java generates an error message and terminates the
program abnormally. To handle these kinds of errors during the runtime, we use exception handling technique in
Java program.
For example, a programmer wants to print even numbers from an array, but he uses the division (/) operator
instead of the modulus (%) operator to get the remainder of each number, because of which he gets wrong
results. This is a logical error.
You can pass any object as a resource that implements java.lang.AutoCloseable, which includes all objects
which implement java.io.Closeable. By this, now we don’t need to add an extra finally block for just passing the
closing statements of the resources. The resources will be closed as soon as the try-catch block is executed.
Syntax of try-with-resources:
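The original syntax block is not included in this copy; a minimal sketch, assuming a hypothetical data.txt file:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class TryWithResourcesDemo {
    public static void main(String[] args) {
        // The reader is closed automatically when the try block exits,
        // whether it finishes normally or with an exception.
        try (BufferedReader reader = new BufferedReader(new FileReader("data.txt"))) {
            System.out.println(reader.readLine());
        } catch (IOException e) {
            System.out.println("I/O problem: " + e.getMessage());
        }
    }
}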
Syntax of multi-catch block:
try {
// code
}
catch (ExceptionType1 | ExceptionType2 ex) {
// catch block
}
Important Points:
1. If all the exceptions belong to the same class hierarchy, we could simply catch the base exception type.
However, to handle each exception differently, they need to be caught separately in their own catch blocks.
2. A single catch block can handle more than one type of exception. However, a base (or ancestor) class and its
subclass (or descendant) exceptions cannot be caught in one multi-catch statement. For example (see the sketch below):
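A sketch of the restriction described in point 2: the commented-out multi-catch does not compile, because ArithmeticException is already a subclass of Exception.
public class MultiCatchRestriction {
    public static void main(String[] args) {
        try {
            System.out.println(10 / 0);
        // } catch (Exception | ArithmeticException e) {
        //     // compile-time error: the alternatives are related by subclassing
        } catch (Exception e) {               // catching the base type alone is enough
            System.out.println("Handled: " + e);
        }
    }
}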
Points to remember
1. If a parent class has implemented Serializable interface then child class doesn’t need to
implement it but vice-versa is not true.
2. Only non-static data members are saved via Serialization process.
3. Static data members and transient data members are not saved via Serialization process. So, if you
don’t want to save value of a non-static data member then make it transient.
4. Constructor of object is never called when an object is deserialized.
5. Associated objects must be implementing Serializable interface.
6. Serializable is a marker interface — it does not consist of any methods or data members. If a java class
implements a Serializable interface it gets certain capabilities. It is also important to note that objects
of a class can only be serialized if the class implements the Serializable interface.
Example :
The ObjectOutputStream contains the method writeObject() which is used for serializing an object and the
ObjectInputStream contains the method readObject() used for deserializing the byte stream.
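The example referenced above is not included in this copy; a minimal sketch of writeObject()/readObject(), with a hypothetical Employee class and emp.ser file name:
import java.io.*;

class Employee implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;
    transient String password;   // transient: not saved during serialization

    Employee(String name, String password) {
        this.name = name;
        this.password = password;
    }
}

public class SerializationDemo {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        Employee e = new Employee("Asha", "secret");

        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("emp.ser"))) {
            out.writeObject(e);   // serialize the object to a byte stream
        }

        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("emp.ser"))) {
            Employee copy = (Employee) in.readObject();   // deserialize
            System.out.println(copy.name + " / " + copy.password); // "Asha / null"
        }
    }
}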
SerialVersionUID:
The Serialization runtime associates a version number with each Serializable class called a
SerialVersionUID, which is used during Deserialization to verify that sender and receiver of a serialized object
have loaded classes for that object which are compatible with respect to serialization. If the receiver has loaded
a class for the object that has different UID than that of corresponding sender’s class, the Deserialization will
result in an InvalidClassException.
A Serializable class can declare its own UID explicitly by declaring a field named serialVersionUID. It must be
static, final, and of type long, i.e.:
ANY-ACCESS-MODIFIER static final long serialVersionUID = 42L;
If a serializable class doesn’t explicitly declare a serialVersionUID, then the serialization runtime will calculate a
default one for that class based on various aspects of the class, as described in the Java Object Serialization
Specification. However, it is strongly recommended that all serializable classes explicitly declare a
serialVersionUID value, since its computation is highly sensitive to class details that may vary between compiler
implementations; any change in the class, or use of a different id, may affect the serialized data. It is also
recommended to use the private modifier for the UID, since it is not useful as an inherited member.
serialver: The serialver tool that comes with the JDK is used to get the serialVersionUID number for Java classes.
In the case of transient variables: A variable defined with the transient keyword is not serialized during the
serialization process. Such a variable is initialized with its default value during deserialization (e.g., null for
objects, 0 for int).
In the case of static variables: A variable defined with the static keyword is not serialized during the serialization
process. Such a variable is loaded with the current value defined in the class during deserialization.
Transient vs Final:
Final variables participate in serialization directly by their values, because the compiler assigns the value to a
final variable at compile time. Hence, declaring a final variable as transient has no effect.
example:
thread.start();
The run method contains the code that the thread will execute.
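The example above appears truncated in this copy; a minimal sketch of the two usual ways to create and start a thread (class names are illustrative):
class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("Running in: " + Thread.currentThread().getName());
    }
}

public class ThreadDemo {
    public static void main(String[] args) {
        Thread thread = new MyThread();
        thread.start();   // schedules the thread; the JVM calls run() on it

        // Alternative: pass a Runnable (here a lambda) to the Thread constructor.
        Thread worker = new Thread(() -> System.out.println("Runnable task"));
        worker.start();
    }
}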
Though, in this case, tasks look like they are running simultaneously, essentially they MAY not be. They
take advantage of the CPU time-slicing feature of the operating system, where each task runs part of its work and
then goes to the waiting state. When the first task is waiting, the CPU is assigned to the second task to complete
its part of the work.
The operating system, based on the priority of the tasks, assigns the CPU and other computing resources (e.g.,
memory) turn by turn to all tasks and gives them a chance to complete. To the end user, it seems that all tasks are
running in parallel. This is how concurrency helps multiple tasks get done faster.
2. What is Parallelism?
Parallelism does not require two tasks to exist. It, literally, physically runs parts of tasks OR multiple tasks,
at the same time using the multi-core infrastructure of the CPU, by assigning one core to each task or sub-task.
Generally, in the case of parallelism, a task is split into subtasks across multiple CPU cores. These subtasks
are computed in parallel and each of them represents a partial solution for the given task. By joining these partial
solutions, we obtain the final solution. Ideally, solving a task in parallel should result in less wall-clock time than in
the case of solving the same task sequentially.
In a nutshell, in parallelism, at least two threads are running at the same time which means that
parallelism can solve a single task faster. Parallelism requires the hardware with multiple processing units,
essentially. In a single-core CPU, we may get concurrency but NOT parallelism.
• Concurrency is when two tasks can start, run, and complete in overlapping time periods. Parallelism is
when tasks literally run at the same time, eg. on a multi-core processor.
• Concurrency is the composition of independently executing processes, while parallelism is the
simultaneous execution of (possibly related) computations.
• Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once.
• An application can be concurrent but not parallel, which means that it processes more than one task at
the same time, but no two tasks are executed at the same time instant.
• An application can be parallel but not concurrent, which means that it processes multiple sub-tasks of a
task in a multi-core CPU at the same time.
• An application can be neither parallel nor concurrent, which means that it processes all tasks one at a
time, sequentially.
• An application can be both parallel and concurrent, which means that it processes multiple tasks
concurrently in a multi-core CPU at the same time.
• Typically, we measure parallelism efficiency in latency (the amount of time needed to complete the task),
while the efficiency of concurrency is measured in throughput (the number of tasks that we can solve).
Multithreading in Java:
Multithreading is a Java feature that allows concurrent execution of two or more parts of a program for
maximum utilization of CPU. Each part of such program is called a thread. So, threads are light-weight processes
within a process.
• New State
• Runnable State
• Blocked State
• Waiting State
• Timed Waiting State
• Terminated State
The diagram shown below represents various states of a thread at any instant in time.
Life Cycle of a Thread
• New Thread: When a new thread is created, it is in the new state. The thread has not yet started to run
when the thread is in this state. When a thread lies in the new state, its code is yet to be run and hasn’t
started to execute.
• Runnable State: A thread that is ready to run is moved to the runnable state. In this state, a thread might
actually be running or it might be ready to run at any instant of time. It is the responsibility of the thread
scheduler to give the thread time to run.
A multi-threaded program allocates a fixed amount of time to each individual thread. Each and every
thread runs for a short while and then pauses and relinquishes the CPU to another thread so that other
threads can get a chance to run. When this happens, all such threads that are ready to run, waiting for
the CPU and the currently running thread lie in a runnable state.
• Blocked: The thread will be in blocked state when it is trying to acquire a lock but currently the lock is
acquired by the other thread. The thread will move from the blocked state to runnable state when it
acquires the lock.
• Waiting state: The thread will be in the waiting state when it calls the wait() method or the join() method. It will
move to the runnable state when another thread notifies it or the other thread terminates.
• Timed Waiting: A thread lies in a timed waiting state when it calls a method with a time-out parameter.
A thread lies in this state until the timeout is completed or until a notification is received. For example,
when a thread calls sleep or a conditional wait, it is moved to a timed waiting state.
• Terminated State: A thread terminates because of either of the following reasons:
Because it exits normally. This happens when the code of the thread has been entirely executed by
the program.
Because there occurred some unusual erroneous event, like a segmentation fault or an unhandled
exception.
1. New: Thread state for a thread that has not yet started.
public static final Thread.State NEW
2. Runnable: Thread state for a runnable thread. A thread in the runnable state is executing in the Java virtual
machine but it may be waiting for other resources from the operating system such as a processor.
3. Blocked: Thread state for a thread blocked waiting for a monitor lock. A thread in the blocked state is waiting
for a monitor lock to enter a synchronized block/method or reenter a synchronized block/method after calling
Object.wait().
4. Waiting: Thread state for a waiting thread. A thread is in the waiting state due to calling one of the following
methods:
Object.wait with no timeout
Thread.join with no timeout
LockSupport.park
5. Timed Waiting: Thread state for a waiting thread with a specified waiting time. A thread is in the timed waiting
state due to calling one of the following methods with a specified positive waiting time:
Thread.sleep
Object.wait with timeout
Thread.join with timeout
LockSupport.parkNanos
LockSupport.parkUntil
6. Terminated: Thread state for a terminated thread. The thread has completed execution.
Thread Scheduler:
If more than one thread is waiting for a chance to run, the Thread Scheduler determines which thread
should be executed. The exact algorithm followed by the thread scheduler is not specified and varies from JVM to
JVM. Hence, in multithreading we can’t guarantee the exact order of execution; every time we run the code, we
may get different outputs.
Round-Robin Scheduling:
In this algorithm, each thread is given a fixed time slice (quantum) to execute. When the time slice expires, the
scheduler interrupts the thread and moves it to the back of the queue. The next thread in line gets a chance to
run.
Priority-Based Scheduling:
Threads are assigned priority levels, and the scheduler selects the highest-priority thread to execute. This
approach can lead to priority inversion, where lower-priority threads block higher-priority ones, impacting
overall system performance. To mitigate this, techniques like priority inheritance and priority ceiling protocols
are employed.
Shortest Job Next (SJN) Scheduling:
Also known as Shortest Job First (SJF) or shortest-remaining-time scheduling, this algorithm selects the thread
with the smallest execution time remaining. SJN aims to minimize average waiting time and turnaround time, but
it requires knowledge of thread execution times, which can be challenging to estimate accurately.
First-Come, First-Served (FCFS) Scheduling:
Threads are executed in the order they arrive in the ready queue. While simple to implement, FCFS may lead to
the “convoy effect,” where a long-running thread prevents shorter tasks from executing, causing inefficiencies.
Multilevel Queue Scheduling:
Threads are divided into multiple queues based on priority, and each queue may have its own scheduling
algorithm. This approach provides differentiation between threads of varying importance, such as interactive and
background tasks.
Completely Fair Scheduler (CFS):
Found in the Linux kernel, CFS allocates CPU time based on the concept of fairness among threads. It attempts to
distribute CPU time proportionally among threads, ensuring that each thread gets its fair share over time.
Deadline-Based Scheduling:
Primarily used in real-time systems, this algorithm assigns deadlines to threads and schedules them to meet
their respective deadlines. Threads with tighter deadlines are given priority.
Multicore Scheduling:
With the advent of multi-core processors, thread scheduling extends to distributing threads across multiple
cores. Algorithms focus on load balancing, minimizing contention, and optimizing cache utilization.
User-Level Thread Scheduling:
User-level threading (ULT) schedulers operate at the application level rather than within the operating system
kernel. In this approach, the application itself manages its own threads and scheduling, bypassing the kernel’s
thread management. User-level threads provide a level of control and flexibility for application developers, but
they also come with trade-offs in terms of efficiency and system interaction.
Time-Sliced Scheduling:
Time-sliced scheduling, also known as time-sharing or round-robin scheduling, involves dividing CPU time into
fixed intervals called time slices or quanta. Each thread is allocated a time slice during which it can execute on
the CPU. Once a thread’s time slice expires, it is preemptively moved out of the CPU, and another thread is given
a chance to execute. The preempted thread is placed at the end of the scheduling queue and will receive
another time slice when its turn comes up again. Time-sliced scheduling ensures that all threads get a fair share
of CPU time, preventing any single thread from monopolizing the CPU for extended periods. This approach is
particularly effective for scenarios where responsiveness and fairness are essential, such as interactive
multitasking environments.
Preemptive Scheduling:
Preemptive scheduling takes the concept of time slicing further by allowing the scheduler to forcibly interrupt a
running thread and switch to another thread. This interruption is known as preemption. In preemptive
scheduling, threads can be preempted at any time, even before their time slice expires, based on certain events
or priorities. Preemptive scheduling introduces finer control over thread execution and enables the operating
system to respond quickly to high-priority tasks or events. It is especially useful in scenarios where real-time
responsiveness, priority-based execution, and resource allocation are critical.
Thread States:
Threads typically go through different states during their lifecycle, including “running,” “ready,” “blocked,” and
“terminated.” The scheduler manages transitions between these states based on thread behaviour and external
events.
Ready Queue:
Threads that are ready to execute but are waiting for CPU time are placed in a queue known as the “ready
queue.” The scheduler maintains this queue, which holds threads with varying priorities or characteristics.
Scheduling Criteria:
The scheduler uses various criteria to make decisions about which thread to run next. These criteria can include
thread priorities, execution history, expected CPU burst times, and any special requirements (e.g., real-time
constraints).
Scheduling Algorithms:
Different scheduling algorithms dictate how threads are selected from the ready queue for execution. Common
algorithms include:
1. Round-Robin Scheduling: Threads are given a fixed time slice (quantum) to execute, and they rotate in and out
of the CPU in a circular fashion.
2. Priority-Based Scheduling: Threads are assigned priorities, and the scheduler runs the highest-priority thread
that is ready to execute.
3. Shortest Job Next (SJN) Scheduling: The thread with the shortest estimated execution time is selected next.
4. Multilevel Queue Scheduling: Threads are divided into priority-based queues, and the scheduler chooses
threads from different queues based on their priorities.
5. Preemption: Many modern schedulers use preemption, which allows the scheduler to forcibly interrupt a
running thread to give another thread a chance to execute. Preemption is essential for enforcing priority-based
scheduling and ensuring that high-priority threads get CPU time when needed.
Context Switching:
When the scheduler decides to switch from one thread to another, it performs a context switch. During a context
switch, the current thread’s state is saved, and the state of the next thread to be executed is loaded. This
involves updating registers, memory mappings, and other relevant information.
Interrupt Handling:
The scheduler interacts with hardware interrupts and timers to handle events such as I/O completion, timeouts,
or hardware interrupts. These events can trigger context switches and affect thread execution.
Resource Management:
The scheduler manages various system resources that threads might contend for, such as memory, I/O devices,
and synchronization primitives like locks and semaphores.
Dynamic Adjustment:
Some schedulers dynamically adjust priorities or time slices based on factors like thread behavior, past execution
history, or system load. This adaptive approach helps optimize performance under varying conditions.
Real-Time Scheduling:
In real-time systems, where meeting deadlines is critical, the scheduler ensures that threads with strict timing
constraints are given priority to execute within their specified time frames.
1. yield() Method:
Suppose there are three threads t1, t2, and t3. Thread t1 gets the processor and starts its execution, while
threads t2 and t3 are in the Ready/Runnable state. The completion time for thread t1 is 5 hours and the
completion time for t2 is 5 minutes. Since t1 will complete its execution only after 5 hours, t2 has to wait 5 hours
just to finish a 5-minute job. In such scenarios, where one thread is taking too much time to complete its
execution, we need a way to pause the execution of a thread in between if something important is pending.
yield() helps us do this.
The yield() basically means that the thread is not doing anything particularly important and if any other
threads or processes need to be run, they should run. Otherwise, the current thread will continue to run.
• Whenever a thread calls the java.lang.Thread.yield() method, it gives a hint to the thread scheduler that it is ready to pause its execution. The thread scheduler is free to ignore this hint.
• When a thread executes the yield method, the thread scheduler checks whether there is any thread with the same or higher priority. If it finds one, it moves the current thread to the Ready/Runnable state and gives the processor to that thread; if not, the current thread keeps executing.
• If several threads with the same priority are waiting for the processor after a yield, we cannot predict which thread will get the chance to execute first.
• The thread which executes the yield method moves from the Running state to the Runnable state.
• Once a thread pauses its execution, we cannot predict when it will get a chance again; it depends on the thread scheduler (see the sketch after this list).
• The underlying platform must provide support for preemptive scheduling if we are using the yield method.
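A minimal sketch of yield(); the class and method bodies here are illustrative, not from the original text:
public class YieldDemo extends Thread {
    @Override
    public void run() {
        for (int i = 0; i < 3; i++) {
            System.out.println(Thread.currentThread().getName() + " in control");
            // Hint to the scheduler that this thread is willing to pause;
            // the scheduler is free to ignore the hint.
            Thread.yield();
        }
    }

    public static void main(String[] args) {
        new YieldDemo().start();
        for (int i = 0; i < 3; i++) {
            System.out.println(Thread.currentThread().getName() + " in control");
        }
    }
}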
2. sleep() Method:
This method causes the currently executing thread to sleep for the specified number of milliseconds,
subject to the precision and accuracy of system timers and schedulers.
Syntax:
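The commonly used forms of the method are shown below; the one-second pause at the end is just an illustrative call.
public static void sleep(long millis) throws InterruptedException
public static void sleep(long millis, int nanos) throws InterruptedException

// Example: pause the current thread for roughly one second.
Thread.sleep(1000);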
• Based on the requirement, we can make a thread sleep for a specified period of time.
• sleep() causes the thread to stop executing for the given amount of time; if no other thread or process needs to run, the CPU will be idle (and probably enter a power-saving mode).
3. join() Method
The join() method of a Thread instance is used to join the start of one thread’s execution to the end of another thread’s execution, so that a thread does not continue running until another thread ends. If join() is called on a Thread instance, the currently running thread will block until that Thread instance has finished executing. The overload join(long millis) waits at most the given number of milliseconds for the thread to die; a timeout of 0 means to wait forever.
Syntax:
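The commonly used forms are shown below; t2 is an illustrative reference to another Thread.
public final void join() throws InterruptedException              // wait until the thread dies
public final void join(long millis) throws InterruptedException   // wait at most millis milliseconds

// Example: the calling thread waits for t2 to finish.
t2.join();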
• If an executing thread t1 calls join() on t2 (i.e., t2.join()), t1 immediately enters the waiting state until t2 completes its execution.
• Giving a timeout to join() makes the waiting thread resume after the specified timeout even if the other thread has not finished.
Comparison of yield(), join(), sleep() Methods
Property             yield()   join()   sleep()
Is it overloaded?    NO        YES      YES
Is it final?         NO        YES      NO
Synchronization in Java:
Multi-threaded programs may often come to a situation where multiple threads try to access the same
resources and finally produce erroneous and unforeseen results.
Java synchronization is used to ensure that only one thread can access the shared resource at a given point in time.
Java provides a way of creating threads and synchronizing their tasks using synchronized blocks.
A synchronized block in Java is synchronized on some object. All synchronized blocks that synchronize on the same object can have only one thread executing inside them at a time. All other threads attempting to enter the synchronized block are blocked until the thread inside the synchronized block exits the block.
Syntax:
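A sketch of the general form; lockObject is an illustrative name for the object being locked on:
synchronized (lockObject) {
    // Only one thread at a time can execute this block for a given lockObject;
    // other threads block here until the owner exits the block.
}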
1. Process Synchronization
2. Thread Synchronization
Process Synchronization is a technique used to coordinate the execution of multiple processes. It ensures that the
shared resources are safe and in order.
Thread Synchronization is used to coordinate and order the execution of threads in a multi-threaded program. The two types of thread synchronization are mentioned below:
• Mutual exclusion
• Cooperation (inter-thread communication in Java)
Mutual Exclusion:
Mutual exclusion keeps threads from interfering with one another while sharing data. It can be achieved in three ways, mentioned below:
• Synchronized method.
• Synchronized block.
• Static synchronization.
The process of testing a condition repeatedly till it becomes true is known as polling. Polling is usually
implemented with the help of loops to check whether a particular condition is true or not. If it is true, a certain
action is taken. This wastes many CPU cycles and makes the implementation inefficient.
For example, in a classic queuing problem where one thread is producing data, and the other is consuming it.
To avoid polling, Java uses three methods, namely wait(), notify(), and notifyAll(). All these methods belong to the Object class and are final, so all classes have them. They must be used within a synchronized block only.
• wait(): It tells the calling thread to give up the lock and go to sleep until some other thread enters the same
monitor and calls notify().
• notify(): It wakes up one single thread that called wait() on the same object. It should be noted that calling notify() does not give up a lock on a resource.
• notifyAll(): It wakes up all the threads that called wait() on the same object.
Producer-Consumer Problem:
It is also known as bounded-buffer problem. Producer and Consumer are two separate processes. Both processes
share a common buffer or queue. The producer continuously produces certain data and pushes it onto the buffer,
whereas the consumer consumes those data from the buffer.
• Both producer and consumer may try to update the queue at the same time. This could lead to data loss
or inconsistencies.
• Producers might be slower than consumers. In such cases, the consumer would process elements fast
and wait.
• In some cases, the consumer can be slower than the producer. This situation leads to a queue overflow
issue.
• In real scenarios, we may have multiple producers, multiple consumers, or both. This may cause the
same message to be processed by different consumers.
• The producer’s job is to generate data, put it into the buffer, and start again.
• At the same time, the consumer is consuming the data (i.e. removing it from the buffer), one piece at a
time.
In this problem, we need two threads: Thread t1 (which produces the data) and Thread t2 (which consumes the data). However, the two threads should not operate on the buffer simultaneously. A minimal wait()/notify() sketch follows.
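The sketch below illustrates the producer–consumer idea with wait() and notifyAll(); the class, buffer capacity, and values are illustrative assumptions, not from the original text:
import java.util.LinkedList;
import java.util.Queue;

public class ProducerConsumer {
    private static final int CAPACITY = 5;
    private final Queue<Integer> buffer = new LinkedList<>();

    public synchronized void produce(int value) throws InterruptedException {
        while (buffer.size() == CAPACITY) {
            wait();                 // buffer full: give up the lock and wait
        }
        buffer.add(value);
        notifyAll();                // wake up any waiting consumer
    }

    public synchronized int consume() throws InterruptedException {
        while (buffer.isEmpty()) {
            wait();                 // buffer empty: give up the lock and wait
        }
        int value = buffer.remove();
        notifyAll();                // wake up any waiting producer
        return value;
    }

    public static void main(String[] args) {
        ProducerConsumer pc = new ProducerConsumer();
        Thread t1 = new Thread(() -> {          // producer
            try {
                for (int i = 1; i <= 10; i++) pc.produce(i);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread t2 = new Thread(() -> {          // consumer
            try {
                for (int i = 1; i <= 10; i++) System.out.println("Consumed " + pc.consume());
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        t1.start();
        t2.start();
    }
}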
To put it simply, daemon threads serve user threads by handling background tasks and have no role other
than supporting the main execution.
Some examples of daemon threads in Java include garbage collection (gc) and finalizer. These threads work
silently in the background, performing tasks that support the main execution without interfering with the user’s
operations.
1. No Preventing JVM Exit: Daemon threads cannot prevent the JVM from exiting when all user threads
finish their execution. If all user threads complete their tasks, the JVM terminates itself, regardless of
whether any daemon threads are running.
2. Automatic Termination: When the last user thread finishes, the JVM terminates any remaining daemon threads abruptly and then shuts down. It does not check whether a daemon thread is actively doing work; the thread is terminated regardless.
3. Low Priority: Daemon threads are typically given low priority, since their only purpose is to serve user threads.
By default, the main thread is always a non-daemon thread. However, for all other threads, their daemon
nature is inherited from their parent thread. If the parent thread is a daemon, the child thread is also a daemon,
and if the parent thread is a non-daemon, the child thread is also a non-daemon.
Note: Whenever the last non-daemon thread terminates, all the daemon threads will be terminated
automatically.
1. void setDaemon(boolean on): Marks this thread as either a daemon thread or a user thread. It must be called before the thread is started.
Parameters: on – if true, marks this thread as a daemon thread.
Exceptions: IllegalThreadStateException – if this thread is alive; SecurityException – if the current thread cannot modify this thread.
2. boolean isDaemon():
This method is used to check that the current thread is a daemon. It returns true if the thread is Daemon. Else, it
returns false.
Returns: This method returns true if this thread is a daemon thread; false otherwise
• JVM exit: When only daemon threads remain in a process, the JVM exits. This makes sense because when only daemon threads are running, there is no user thread left for a daemon thread to provide a service to.
• Usage: Daemon threads are primarily used to provide background support to user threads. They handle
tasks that support the main execution without interfering with the user’s operations.
Understanding daemon threads is essential for Java developers to effectively manage thread behavior and
optimize application performance.
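A small sketch of setDaemon()/isDaemon(); the class name and printed messages are illustrative:
public class DaemonDemo {
    public static void main(String[] args) {
        Thread daemon = new Thread(() -> {
            while (true) {
                System.out.println("daemon working, isDaemon = "
                        + Thread.currentThread().isDaemon());
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        daemon.setDaemon(true);   // must be called before start(), otherwise IllegalThreadStateException
        daemon.start();

        // The JVM exits when the main (user) thread finishes,
        // even though the daemon thread is still looping.
        System.out.println("main thread finished");
    }
}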
Constructors:
1. public ThreadGroup (String name): Constructs a new thread group. The parent of this new group is the
thread group of the currently running thread.
Throws: SecurityException - if the current thread cannot create a thread in the specified thread group.
2. public ThreadGroup (ThreadGroup parent, String name): Creates a new thread group. The parent of this
new group is the specified thread group.
SecurityException - if the current thread cannot create a thread in the specified ThreadGroup.
Methods:
1. int activeCount(): This method returns an estimate of the number of active threads in this thread group and its subgroups.
2. int activeGroupCount(): This method returns an estimate of the number of active groups in this thread
group.
3. void checkAccess(): Causes the security manager to verify that the invoking thread may access and/ or
change the group on which checkAccess() is called.
4. void destroy(): Destroys the thread group and any child groups on which it is called.
5. int enumerate(Thread group[]): The threads that comprise the invoking thread group are put into the group array.
6. int enumerate(Thread[] group, boolean recurse): The threads that comprise the invoking thread group are put into the group array. If recurse is true, then threads in all subgroups of this thread group are also put into group.
7. int enumerate(ThreadGroup[] group): The subgroups of the invoking thread group are put into the group array.
8. int enumerate(ThreadGroup[] group, boolean all): The subgroups of the invoking thread group are put
into the group array. If all is true, then all subgroups of the subgroups(and so on) are also put into group.
9. int getMaxPriority(): Returns the maximum priority setting for the group.
10. String getName(): This method returns the name of the group
11. ThreadGroup getParent(): Returns null if the invoking ThreadGroup object has no parent. Otherwise, it
returns the parent of the invoking object.
12. void interrupt(): Invokes the interrupt() methods of all threads in the group.
13. boolean isDaemon(): Tests if this thread group is a daemon thread group. A daemon thread group is
automatically destroyed when its last thread is stopped or its last thread group is destroyed.
14. boolean isDestroyed(): This method tests if this thread group has been destroyed.
15. void list(): Displays information about the group.
16. boolean parentOf(ThreadGroup group): This method tests if this thread group is either the thread group argument or one of its ancestor thread groups.
17. void setDaemon(boolean isDaemon): This method changes the daemon status of this thread group. A
daemon thread group is automatically destroyed when its last thread is stopped or its last thread group
is destroyed.
18. void setMaxPriority(int priority): Sets the maximum priority of the invoking group to priority.
19. String toString(): This method returns a string representation of this Thread group.
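A brief sketch of creating a ThreadGroup and querying it with some of the methods above; the group and thread names are illustrative:
public class ThreadGroupDemo {
    public static void main(String[] args) {
        ThreadGroup group = new ThreadGroup("workers");

        Runnable task = () -> {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        Thread t1 = new Thread(group, task, "worker-1");
        Thread t2 = new Thread(group, task, "worker-2");
        t1.start();
        t2.start();

        System.out.println("Group name: " + group.getName());
        System.out.println("Active threads (estimate): " + group.activeCount());

        Thread[] threads = new Thread[group.activeCount()];
        group.enumerate(threads);          // copy the group's threads into the array
        for (Thread t : threads) {
            if (t != null) {
                System.out.println("Enumerated: " + t.getName());
            }
        }
    }
}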
Lock Interface:
A java.util.concurrent.locks.Lock is a thread synchronization mechanism just like synchronized blocks.
A Lock is, however, more flexible and more sophisticated than a synchronized block. Since Lock is an interface, you
need to use one of its implementations to use a Lock in your applications. ReentrantLock is one such
implementation of Lock interface.
Here is a simple use of the Lock interface (a sketch follows this description):
First a Lock is created. Then its lock() method is called. Now the Lock instance is locked. Any other thread calling lock() will be blocked until the thread that locked the lock calls unlock(). Finally unlock() is called, and the Lock is now unlocked so other threads can lock it.
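A minimal sketch of the lock()/unlock() pattern described above, using ReentrantLock; the counter class is illustrative:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockCounter {
    private final Lock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();               // block until the lock is available
        try {
            count++;               // critical section
        } finally {
            lock.unlock();         // always release the lock in finally
        }
    }

    public int getCount() {
        return count;
    }
}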
As the name says, ReentrantLock allows a thread to enter the lock on a resource more than once. When the thread first enters the lock, a hold count is set to one. Before unlocking, the thread can re-enter the lock again, and each time the hold count is incremented by one. For every unlock request, the hold count is decremented by one, and when the hold count reaches 0, the resource is unlocked.
Reentrant locks also offer a fairness parameter, by which the lock abides by the order of lock requests, i.e. after a thread unlocks the resource, the lock goes to the thread which has been waiting for the longest time. This fairness mode is set up by passing true to the constructor of the lock.
ReentrantLock() Methods:
• lock(): Call to the lock() method increments the hold count by 1 and gives the lock to the thread if the
shared resource is initially free.
• unlock(): Call to the unlock() method decrements the hold count by 1. When this count reaches zero, the
resource is released.
• tryLock(): If the resource is not held by any other thread, then the call to tryLock() returns true and the hold count is incremented by one. If the resource is not free, the method returns false and the thread does not block; it simply moves on (see the sketch after this list).
• tryLock(long timeout, TimeUnit unit): As per the method, the thread waits for a certain time period as
defined by arguments of the method to acquire the lock on the resource before exiting.
• lockInterruptibly(): This method acquires the lock if the resource is free, while allowing the thread to be interrupted while it is waiting to acquire the lock. That is, if the current thread is waiting for the lock and some other thread interrupts it, the current thread throws InterruptedException and returns immediately without acquiring the lock.
• getHoldCount(): This method returns the count of the number of locks held on the resource.
• isHeldByCurrentThread(): This method returns true if the lock on the resource is held by the current
thread.
• hasQueuedThread(): This Method Queries whether the given thread is waiting to acquire this lock.
• newCondition(): Returns a Condition instance for use with this Lock instance.
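A short sketch of tryLock() and getHoldCount(), as mentioned in the list above; the class name and timeout are illustrative:
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock LOCK = new ReentrantLock(true);   // fair lock

    public static void main(String[] args) throws InterruptedException {
        // Attempt to acquire the lock, waiting at most 500 ms.
        if (LOCK.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("Lock acquired, hold count = " + LOCK.getHoldCount());
            } finally {
                LOCK.unlock();
            }
        } else {
            System.out.println("Could not acquire the lock, doing something else");
        }
    }
}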
Important Points:
1. Forgetting to call the unlock() method in a finally block can lead to bugs in the program. Ensure that the lock is released before the thread exits.
2. The fairness parameter used to construct the lock object decreases the throughput of the program.
The ReentrantLock is a good replacement for synchronization, offering many features not provided by synchronized. However, the existence of these benefits is not a good enough reason to always prefer ReentrantLock to synchronized. Instead, make the decision on the basis of whether you need the flexibility offered by a ReentrantLock.
• Java provides the Executor framework, which is centered around the Executor interface, its sub-interface ExecutorService, and the class ThreadPoolExecutor, which implements both of these interfaces. By using the executor, one only has to implement the Runnable objects and send them to the executor to execute.
• They allow you to take advantage of threading, but focus on the tasks that you want the thread to
perform, instead of thread mechanics.
• To use thread pools, we first create an object of ExecutorService and pass a set of tasks to it. The ThreadPoolExecutor class allows you to set the core and maximum pool size. The runnables that are run by a particular thread are executed sequentially.
Tuning Thread Pool:
The optimum size of the thread pool depends on the number of processors available and the nature of the tasks. On an N-processor system, for a queue of purely computational tasks, a maximum thread pool size of N or N+1 will achieve maximum efficiency. But tasks may wait for I/O, and in such a case we take into account the ratio of waiting time (W) to service time (S) for a request, resulting in a maximum pool size of N*(1 + W/S) for maximum efficiency. For example, with N = 4 processors and tasks that spend as much time waiting as computing (W/S = 1), the optimal pool size is about 4 * (1 + 1) = 8 threads.
The thread pool is a useful tool for organizing server applications. It is quite straightforward in concept, but there are several issues to watch for when implementing and using one, such as deadlock and resource thrashing. Using the executor service makes it easier to implement.
Submit() Method:
1. Purpose: The submit() method is used to submit a task for execution in the thread pool. It can accept a Runnable or a Callable task (there is also an overload that takes a Runnable together with a result value).
2. Return Type:
• When a Runnable is submitted, it returns a Future<?> object that can be used to check if the task
is complete and to retrieve the result (which will be null for Runnable tasks).
• When a Callable is submitted, it returns a Future<T> object that represents the result of the
computation.
3. Usage:
• The submit() method allows you to submit tasks that can be executed asynchronously. The tasks
are queued and executed by the available threads in the pool.
• It provides a way to handle exceptions thrown during task execution, as you can check the status
of the Future object.
Shutdown() Method:
1. Purpose: The shutdown() method is used to initiate an orderly shutdown of the thread pool. It prevents
new tasks from being submitted and allows previously submitted tasks to complete.
2. Behavior:
• After calling shutdown(), the thread pool will not accept any new tasks. However, it will continue
to execute all previously submitted tasks.
• Once all tasks have completed, the thread pool will terminate.
3. Usage:
• It is important to call shutdown() when you are done using the thread pool to free up resources
and avoid memory leaks.
• If you want to immediately stop all actively executing tasks, you can use shutdownNow(), which
attempts to stop all actively executing tasks and returns a list of the tasks that were waiting to be
executed.
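A compact sketch of submit() and shutdown() with a fixed thread pool; the pool size and task bodies are illustrative:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);   // 4 worker threads

        for (int i = 1; i <= 10; i++) {
            final int taskId = i;
            pool.submit(() -> System.out.println(
                    "Task " + taskId + " executed by " + Thread.currentThread().getName()));
        }

        pool.shutdown();                           // stop accepting new tasks
        if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
            pool.shutdownNow();                    // force-stop anything still running
        }
    }
}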
Callable and Future in Java:
There are two ways of creating threads – one by extending the Thread class and the other by creating a thread with a Runnable. However, one feature lacking in Runnable is that we cannot make a thread return a result when it terminates, i.e. when run() completes. To support this feature, the Callable interface is present in Java.
Future:
When the call() method completes, answer must be stored in an object known to the main thread, so that
the main thread can know about the result that the thread returned. How will the program store and obtain this
result later? For this, a Future object can be used. Think of a Future as an object that holds the result – it may not
hold it right now, but it will do so in the future (once the Callable returns). Thus, a Future is basically one way the
main thread can keep track of the progress and result from other threads.
Runnable vs Callable:
• Runnable is a part of the java.lang package since Java 1.0; Callable is a part of the java.util.concurrent package since Java 1.5.
• Runnable cannot return the result of a computation; Callable can return the result of the parallel processing of a task.
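A minimal Callable/Future sketch; the computation and names are illustrative:
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        Callable<Integer> task = () -> {
            int sum = 0;
            for (int i = 1; i <= 100; i++) sum += i;    // some computation that returns a result
            return sum;
        };

        Future<Integer> future = executor.submit(task); // Future holds the pending result
        System.out.println("Result: " + future.get());  // get() blocks until the result is ready

        executor.shutdown();
    }
}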
In Java, the Collection interface (java.util.Collection) and Map interface (java.util.Map) are the two main
“root” interfaces of Java collection classes.
A framework is a set of classes and interfaces which provide a ready-made architecture. In order to
implement a new feature or a class, there is no need to define a framework. However, an optimal object-oriented
design always includes a framework with a collection of classes such that all the classes perform the same kind of
task.
Before the Collection Framework (i.e., before JDK 1.2) was introduced, the standard ways of grouping Java objects were Arrays, Vectors, and Hashtables. These collections had no common interface. Therefore, although the main aim of all the collections is the same, the implementation of each was defined independently, with no correlation among them. It was also very difficult for users to remember all the different methods, syntax, and constructors of every collection class.
Since the lack of a collection framework gave rise to the above set of disadvantages, the following are the
advantages of the collection framework.
1. Consistent API: The API has a basic set of interfaces like Collection, Set, List, and Map; all the classes (ArrayList, LinkedList, Vector, etc.) that implement these interfaces share a common set of methods.
2. Reduces programming effort: A programmer doesn’t have to worry about the design of the Collection but
rather he can focus on its best use in his program. Therefore, the basic concept of Object-oriented
programming (i.e.) abstraction has been successfully implemented.
3. Increases program speed and quality: Increases performance by providing high-performance
implementations of useful data structures and algorithms because in this case, the programmer need not
think of the best implementation of a specific data structure. He can simply use the best implementation
to drastically boost the performance of his algorithm/program.
Hierarchy of the Collection Framework in Java:
(The hierarchy diagram of the Collection Framework is not reproduced here.) The framework consists of a set of core interfaces and the classes that implement them, described below.
To keep the number of core collection interfaces manageable, the Java platform doesn’t provide separate
interfaces for each variant of each collection type. If an unsupported operation is invoked, a collection
implementation throws an UnsupportedOperationException.
1. Collection interface: This is the root of the collection hierarchy. A collection represents a group of objects known
as its elements. The Java platform doesn’t provide any direct implementations of this interface.
The interface has methods to tell you how many elements are in the collection (size, isEmpty), to check
whether a given object is in the collection (contains), to add and remove an element from the collection (add,
remove), and to provide an iterator over the collection (iterator). Collection interface also provides bulk operations
methods that work on the entire collection – containsAll, addAll, removeAll, retainAll, clear. The toArray methods
are provided as a bridge between collections and older APIs that expect arrays on input.
2. Iterator Interface: Iterator interface provides methods to iterate over the elements of the Collection. We can get
the instance of iterator using iterator() method. Iterator takes the place of Enumeration in the Java Collections
Framework. Iterators allow the caller to remove elements from the underlying collection during the iteration.
Iterators in collection classes implement Iterator Design Pattern.
3. Set Interface: Set is a collection that cannot contain duplicate elements. This interface models the mathematical
set abstraction and is used to represent sets, such as the deck of cards. The Java platform contains three general-
purpose Set implementations: HashSet, TreeSet, and LinkedHashSet. Set interface doesn’t allow random-access to
an element in the Collection. You can use iterator or foreach loop to traverse the elements of a Set.
4. List Interface: List is an ordered collection and can contain duplicate elements. You can access any element from
its index. List is more like array with dynamic length. List is one of the most used Collection
type. ArrayList and LinkedList are implementation classes of List interface.
List interface provides useful methods to add an element at a specific index, remove/replace element
based on the index and to get a sub-list using the index.
Collections class provide some useful algorithm for List – sort, shuffle, reverse, binarySearch etc.
5. Queue Interface: Queue is a collection used to hold multiple elements prior to processing. Besides basic
Collection operations, a Queue provides additional insertion, extraction, and inspection operations.
Queues typically, but do not necessarily, order elements in a FIFO (first-in-first-out) manner. Among the
exceptions are priority queues, which order elements according to a supplied comparator or the elements’ natural
ordering. Whatever the ordering used, the head of the queue is the element that would be removed by a call to
remove or poll. In a FIFO queue, all new elements are inserted at the tail of the queue.
6. Deque Interface: A linear collection that supports element insertion and removal at both ends. The name
deque is short for “double-ended queue” and is usually pronounced “deck”. Most Deque implementations place
no fixed limits on the number of elements they may contain, but this interface supports capacity-restricted deques
as well as those with no fixed size limit.
This interface defines methods to access the elements at both ends of the deque. Methods are provided
to insert, remove, and examine the element.
7. Map Interface: Java Map is an object that maps keys to values. A map cannot contain duplicate keys: Each key
can map to at most one value. The Java platform contains three general-purpose Map implementations: HashMap,
TreeMap, and LinkedHashMap.
The basic operations of Map are put, get, containsKey, containsValue, size, and isEmpty.
8. ListIterator Interface: An iterator for lists that allows the programmer to traverse the list in either direction,
modify the list during iteration, and obtain the iterator’s current position in the list.
Java ListIterator has no current element; its cursor position always lies between the element that would
be returned by a call to previous() and the element that would be returned by a call to next().
9. SortedSet Interface: SortedSet is a Set that maintains its elements in ascending order. Several additional
operations are provided to take advantage of the ordering. Sorted sets are used for naturally ordered sets, such as
word lists and membership rolls.
10. SortedMap Interface: A map that maintains its mappings in ascending key order. This is the Map analog of
SortedSet. Sorted maps are used for naturally ordered collections of key/value pairs, such as dictionaries and
telephone directories.
1. HashSet Class:
Java HashSet is the basic implementation of the Set interface, backed by a HashMap. It makes no guarantees about the iteration order of the set and permits the null element.
This class offers constant time performance for basic operations (add, remove, contains and size),
assuming the hash function disperses the elements properly among the buckets. We can set the initial capacity
and load factor for this collection. The load factor is a measure of how full the hash map is allowed to get before
its capacity is automatically increased.
2. TreeSet Class:
A NavigableSet implementation based on a TreeMap. The elements are ordered using their natural
ordering, or by a Comparator provided at set creation time, depending on which constructor is used.
This implementation provides guaranteed log(n) time cost for the basic operations (add, remove, and contains).
Note that the ordering maintained by a set (whether or not an explicit comparator is provided) must be
consistent with equals if it is to correctly implement the Set interface. (See Comparable or Comparator for a
precise definition of consistent with equals.) This is so because the Set interface is defined in terms of the equals
operation, but a TreeSet instance performs all element comparisons using its compareTo (or compare) method,
so two elements that are deemed equal by this method are, from the standpoint of the set, equal.
3. ArrayList Class:
Java ArrayList is the resizable-array implementation of the List interface. Implements all optional list
operations, and permits all elements, including null. In addition to implementing the List interface, this class
provides methods to manipulate the size of the array that is used internally to store the list. (This class is roughly
equivalent to Vector, except that it is unsynchronized.)
The size, isEmpty, get, set, iterator, and list iterator operations run in constant time. The add operation
runs in amortized constant time, that is, adding n elements requires O(n) time. All of the other operations run in
linear time (roughly speaking). The constant factor is low compared to that for the LinkedList implementation.
4. LinkedList Class:
Doubly-linked list implementation of the List and Deque interfaces. Implements all optional list
operations, and permits all elements (including null).
All of the operations perform as expected for a doubly-linked list. Operations that index into the list will
traverse the list from the start or the end, whichever is closer to the specified index.
5. HashMap Class:
Hash table based implementation of the Map interface. This implementation provides all of the optional
map operations and permits null values and the null key. HashMap class is roughly equivalent to Hashtable,
except that it is unsynchronized and permits null. This class makes no guarantees for the order of the map.
This implementation provides constant-time performance for the basic operations (get and put). It
provides constructors to set initial capacity and load factor for the collection.
1. HashMap uses its static inner class Node<K,V> for storing the entries into the map.
2. HashMap allows at most one null key and multiple null values.
3. The HashMap class does not preserve the order of insertion of entries into the map.
4. HashMap has multiple buckets or bins which contain a head reference to a singly linked list. That means there are as many linked lists as there are buckets. Initially, it has a bucket array size of 16, which grows to 32 when the number of entries in the map crosses 75% of the capacity. (That means after the 12th entry, i.e., on inserting the 13th entry, the bucket array size becomes 32.)
5. HashMap is almost similar to Hashtable except that it’s unsynchronized and allows at max one null key
and multiple null values.
6. HashMap uses hashCode() and equals() methods on keys for the get and put operations. So HashMap
key objects should provide a good implementation of these methods.
7. That’s why the Wrapper classes like Integer and String classes are a good choice for keys for HashMap as
they are immutable and their object state won’t change over the course of the execution of the program.
HashMap stores the data in the form of key-value pairs. Each key-value pair is stored in an object of Entry<K,
V> class. Entry<K, V> class is the static inner class of HashMap which is defined like below.
static class Entry<K,V> implements Map.Entry<K,V> {   // simplified, pre-Java 8 form
    final K key;
    V value;
    Entry<K,V> next;
    int hash;
    // constructor and accessor methods omitted
}
As you can see, this inner class has four fields: key, value, next, and hash.
key: It stores the key of an element, and it is final.
value: It stores the value mapped to the key.
next: It holds the pointer to the next key-value pair. This attribute makes the key-value pairs stored as a linked list.
hash: It stores the hash code of the key.
These Entry objects are stored in an array called table[]. This array initially has a size of 16. In the pre-Java 8 implementation it is declared roughly as:
transient Entry<K,V>[] table;
• To summarize the whole HashMap structure, each key-value pair is stored in an object of Entry<K,
V> class. This class has an attribute called next which holds the pointer to next key-value pair. This makes
the key-value pairs stored as a linked list. All these Entry<K, V> objects are stored in an array
called table[].
What Is Hashing?
The whole HashMap data structure is based on the principle of hashing. Hashing is nothing but a function or algorithm which, when applied to any object/variable, returns an integer value representing that object/variable. This integer value is called the hash code. A hash function is said to be good if it returns the same hash code each time it is called on the same object. Two different objects can have the same hash code.
Whenever you insert a new key-value pair using the put() method, HashMap does not blindly allocate a slot in the table[] array. Instead, it calls a hash function on the key. HashMap has its own hash function to calculate the hash code of the key; this function is implemented so that it compensates for poorly implemented hashCode() methods. Below is (a slightly simplified form of) the pre-Java 8 implementation of hash():
/**
 * Retrieves the object's hash code and applies a supplemental hash function to the
 * result, which defends against poor-quality hash functions. This is critical because
 * HashMap uses power-of-two length tables that would otherwise collide for hash codes
 * that do not differ in the lower bits. Note: null keys always map to hash 0, thus index 0.
 */
final int hash(Object k) {
    int h = 0;
    if (useAltHashing) {
        if (k instanceof String) {
            return sun.misc.Hashing.stringHash32((String) k);
        }
        h = hashSeed;
    }
    h ^= k.hashCode();
    // Spread higher bits of the hash code into the lower bits.
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}
After calculating the hash code of the key, it calls indexFor() method by passing the hash code of the key and
length of the table[] array. This method returns the index in the table[] array for that particular key-value pair.
static int indexFor(int h, int length) {
    // length is always a power of two, so this is equivalent to h % length.
    return h & (length - 1);
}
Note: To have a high-performance HashMap, we need good implementations of hashCode() and equals() on the key class, along with the hash function.
hashCode():
The hash function is a function that maps a key to an index in the hash table. It obtains an index from a key and
uses that index to retrieve the value for a key.
A hash function first converts a search key (object) to an integer value (known as hash code) and then
compresses the hash code into an index to the hash table.
The Object class (the root class) of Java provides a hashCode() method that other classes can override. The hashCode() method is used to retrieve the hash code of an object; the default implementation typically derives it from the object's internal address (memory reference).
int h = key.hashCode();
The value obtained from the hashCode() method is used to compute the bucket index number, which determines where the entry (element) is placed inside the map. If the key is null, the hash value used is 0.
equals():
The equals() method is a method of the Object class that is used to check the equality of two objects. HashMap uses the equals() method to compare keys and check whether they are equal or not.
The equals() method of the Object class can be overridden. If we override the equals() method, it is mandatory
to override the hashCode() method.
The put() method of HashMap is used to store the key-value pairs. The syntax of the put() method to add
key/value pair is as follows:
hashmap.put(key, value);
Let’s take an example where we will insert three (Key, Value) pairs into a HashMap (named hmap here):
HashMap<String, Integer> hmap = new HashMap<>();
hmap.put("John", 20);
hmap.put("Harry", 5);
hmap.put("Deep", 10);
Let’s understand at which index the key-value pairs will be stored into HashMap.
When we call the put() method to add a key-value pair to hashmap, HashMap calculates a hash value or hash
code of key by calling its hashCode() method. HashMap uses that code to calculate the bucket index in which
key/value pair will be placed.
The formula for calculating the index of the bucket (where n is the size of the bucket array) is:
index = hash & (n - 1)
Suppose the hash code value for “John” is 2657860. Then the index value for “John” is:
index = 2657860 & (16 - 1) = 4
The value 4 is the computed index value where the key and value will be stored in the HashMap.
Note: Since HashMap allows only one null Key, the hash value returned by the hashCode(key) method will be 0
because the hashcode for null is always 0. The 0th bucket location will be used to place key/value pair.
When the hash of a new key produces the same bucket index as a key that already exists in the hash table, HashMap uses the same bucket, which already contains nodes in the form of a linked list.
A new node is created at the end of the linked list and is connected to the existing node object through the linked list; hence both keys are stored at the same index value.
When a new value object is inserted with an existing Key, HashMap replaces the old value with the current value
related to the Key. To do it, HashMap uses the equals() method.
This method checks whether both Keys are equal or not. If Keys are the same, this method returns true and the
value of that node is replaced with the current value.
The get() method in HashMap is used to retrieve the value by its key. If we don’t know the Key, it will not fetch
the value. The syntax for calling get() method is as follows:
value = hashmap.get(key);
When the get(K Key) method takes a Key, it calculates the index of the bucket using the method mentioned
above. Then that bucket’s List is searched for the given key using the equals() method and the final result is
returned.
HashMap stores a key-value pair in constant time, which is O(1) for insertion and retrieval. But in the worst case it can be O(n), when all keys hash to the same bucket and are inserted into the same linked list.
The traversal cost of n such nodes is O(n), but after the changes made in Java 8 (a bucket's linked list is converted into a balanced tree once it grows beyond a threshold), the worst case is reduced to O(log n).
Concept of Rehashing:
Rehashing is a process performed automatically by HashMap when the number of keys in the map reaches the threshold value. The threshold value is calculated as threshold = capacity * load factor (0.75 by default).
In this case, a new, larger bucket array is created and all the existing contents are copied over to it.
For example:
When the 13th key-value pair is inserted into the HashMap, HashMap grows its bucket array size to 16*2 = 32.
Next time when 25th key-value pair is inserted into HashMap, HashMap grows its bucket array size to 32*2 = 64
and so on.
1. The data structure used to store entry objects is an array named table of type Entry.
2. A particular index location in the array is referred to as a bucket, because it can hold the first element of a linked list of entry objects.
3. The key object's hashCode() is required to calculate the index location of the Entry object.
4. The value object's hashCode() and equals() methods are not used in HashMap's get() and put() methods.
5. The hash code for null keys is always zero, and such an entry object is always stored at index zero of Entry[].
6. TreeMap Class:
A Red-Black tree based NavigableMap implementation. The map is sorted according to the natural
ordering of its keys, or by a Comparator provided at map creation time, depending on which constructor is used.
This implementation provides guaranteed log(n) time cost for the containsKey, get, put, and remove
operations. Algorithms are adaptations of those in Cormen, Leiserson, and Rivest’s Introduction to Algorithms.
Note that the ordering maintained by a TreeMap, like any sorted map, and whether or not an explicit
comparator is provided, must be consistent with equals if this sorted map is to correctly implement the Map
interface. (See Comparable or Comparator for a precise definition of consistent with equals.) This is so because
the Map interface is defined in terms of the equals operation, but a sorted map performs all key comparisons
using its compareTo (or compare) method, so two keys that are deemed equal by this method are, from the
standpoint of the sorted map, equal. The behavior of a sorted map is well-defined even if its ordering is
inconsistent with equals; it just fails to obey the general contract of the Map interface.
7. PriorityQueue Class:
A Queue processes its elements in FIFO order, but sometimes we want elements to be processed based on their priority. We can use PriorityQueue in this case; a Comparator implementation can be supplied when instantiating the PriorityQueue, otherwise the elements' natural ordering is used. PriorityQueue doesn't allow null values and it is unbounded. A small sketch follows.
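A small PriorityQueue sketch; natural ordering is used here, and the values are illustrative:
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> pq = new PriorityQueue<>();   // natural (ascending) ordering
        pq.offer(30);
        pq.offer(10);
        pq.offer(20);

        // poll() always removes the head, i.e. the smallest element here: 10, 20, 30
        while (!pq.isEmpty()) {
            System.out.println(pq.poll());
        }
    }
}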
Fail-Fast Iterators:
1. Definition: Fail-fast iterators immediately throw a ConcurrentModificationException if they detect that the underlying collection has been structurally modified after the iterator was created. Structural modifications include adding or removing elements; merely updating the value of an existing element is generally not a structural modification.
2. How It Works:
• Fail-fast iterators use an internal counter (often called modCount) to track the number of
structural modifications made to the collection.
• On each call to the next() method, the iterator checks if modCount has changed. If it has, the
iterator throws a ConcurrentModificationException.
3. Examples: Common examples of fail-fast iterators include those found in ArrayList, HashMap, and other
standard collection classes in Java. Here’s an example:
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastExample {
    public static void main(String[] args) {
        Map<String, String> cityCode = new HashMap<>();
        cityCode.put("Delhi", "India");
        cityCode.put("Moscow", "Russia");

        Iterator<String> iterator = cityCode.keySet().iterator();
        while (iterator.hasNext()) {
            System.out.println(cityCode.get(iterator.next()));
            // Structurally modifying the map here, e.g. cityCode.put("New York", "USA"),
            // would make the next iterator call throw ConcurrentModificationException.
        }
    }
}
Fail-Safe Iterators:
1. Definition: Fail-safe iterators, on the other hand, do not throw exceptions if the underlying collection is
modified during iteration. Instead, they operate on a copy (or clone) of the collection, allowing for safe
iteration even if the original collection is modified.
2. How It Works:
• Fail-safe iterators create a snapshot of the collection at the time the iterator is created. Any
modifications to the original collection do not affect the snapshot.
• As a result, the iterator can continue to operate without throwing exceptions, even if the original
collection is altered.
Behavior on Modification:
• Fail-Fast: Throws a ConcurrentModificationException as soon as a structural modification is detected.
• Fail-Safe: Allows modifications without throwing exceptions, as it iterates over a snapshot of the collection.
Use Cases:
• Fail-Fast: Suitable for scenarios where data integrity is critical, and modifications during iteration should be avoided.
• Fail-Safe: Useful in concurrent environments where you want to allow modifications without interrupting the iteration process (see the sketch after this list).
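A brief fail-safe iteration sketch using CopyOnWriteArrayList, whose iterator works on a snapshot; the element values are illustrative:
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class FailSafeDemo {
    public static void main(String[] args) {
        CopyOnWriteArrayList<String> cities = new CopyOnWriteArrayList<>();
        cities.add("Delhi");
        cities.add("Moscow");

        Iterator<String> it = cities.iterator();   // iterates over a snapshot
        while (it.hasNext()) {
            cities.add("New York");                // no ConcurrentModificationException
            System.out.println(it.next());         // prints only the original two elements
        }
    }
}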
Conclusion
Understanding the difference between fail-fast and fail-safe iterators is crucial for managing concurrent
modifications in Java collections. Fail-fast iterators prioritize immediate feedback on structural changes, while
fail-safe iterators provide flexibility in concurrent scenarios by allowing modifications without interruption. This
knowledge helps developers choose the right collection type based on their specific use case and concurrency
requirements.
Comparable:
Comparable interface is mainly used to sort the arrays (or lists) of custom objects. Lists (and arrays) of
objects that implement Comparable interface can be sorted automatically by Collections.sort (and Arrays.sort).
The Comparable interface is used to define how a class is to be sorted. It is not to be confused with
the Comparator interface, which is implemented in a separate class. The Comparable interface is implemented in
the class to be sorted.
Syntax:
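The method declared by the Comparable<T> interface is shown below, followed by an illustrative (hypothetical) Employee class that sorts by id:
public int compareTo(T o)   // negative, zero, or positive as this object is less than, equal to, or greater than o

class Employee implements Comparable<Employee> {
    int id;

    @Override
    public int compareTo(Employee other) {
        return Integer.compare(this.id, other.id);
    }
}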
This method determines how items are sorted by methods such as Arrays.sort() and Collections.sort().
Differences:
1. Duplicates – The List interface allows duplicate elements; Set does not allow duplicate elements; Map does not allow duplicate keys.
2. Ordering – List maintains insertion order; Set does not maintain any insertion order; Map also does not maintain any insertion order.
3. Null values – List allows any number of null values; Set allows at most one null value; Map allows at most one null key and any number of null values.
4. Implementation classes – List: ArrayList, LinkedList; Set: HashSet, LinkedHashSet, TreeSet; Map: HashMap, Hashtable, TreeMap, ConcurrentHashMap, LinkedHashMap.
5. Index access – List provides a get() method to get the element at a specified index; Set does not provide index-based access; Map does not provide index-based access either (values are retrieved by key via get(key)).
6. When to use – Use a List if you need to access elements frequently by index; use a Set if you want a collection of unique elements; use a Map if you want to store data in the form of key/value pairs.
7. Traversal – List elements can be traversed using a ListIterator; Set elements can be traversed using an Iterator; a Map is traversed through its keySet(), values(), and entrySet() views.
Comparison of Enumeration, Iterator and ListIterator:
1) Is it a legacy? – Enumeration: Yes; Iterator: No; ListIterator: No.
5) Accessibility – Enumeration: read only; Iterator: read and remove; ListIterator: read/remove/replace/add.
6) Methods – Enumeration: hasMoreElements(), nextElement(); Iterator: hasNext(), next(), remove(); ListIterator: 9 methods.
ConcurrentMap Interface in java:
ConcurrentMap is an interface and a member of the Java Collections Framework, introduced in JDK 1.5. It represents a Map that is capable of handling concurrent access without affecting the consistency of its entries. The ConcurrentMap interface is present in the java.util.concurrent package. It provides some extra methods apart from what it inherits from its super-interface, java.util.Map, and it inherits the nested interface Map.Entry<K, V>.
HashMap operations are not synchronized, while Hashtable provides synchronization. Though Hashtable is thread-safe, it is not very efficient. To solve this issue, the Java Collections Framework introduced ConcurrentMap in Java 1.5.
The Hierarchy of ConcurrentMap
Declaration:
public interface ConcurrentMap<K,V> extends Map<K,V>
Here, K is the type of key Object and V is the type of value Object.
• It extends the Map interface in Java.
• ConcurrentNavigableMap<K,V> is the SubInterface.
• ConcurrentMap is implemented by ConcurrentHashMap, ConcurrentSkipListMap classes.
• ConcurrentMap is known as a synchronized Map.
Implementing Classes
Since it belongs to the java.util.concurrent package, we must import it using
import java.util.concurrent.ConcurrentMap
or
import java.util.concurrent.*
The ConcurrentMap has two implementing classes which are ConcurrentSkipListMap and ConcurrentHashMap.
The ConcurrentSkipListMap is a scalable implementation of the ConcurrentNavigableMap interface which
extends ConcurrentMap interface. The keys in ConcurrentSkipListMap are sorted by natural order or by using a
Comparator at the time of construction of the object. The ConcurrentSkipListMap has the expected time cost
of log(n) for insertion, deletion, and searching operations. It is a thread-safe class, therefore, all basic operations
can be accomplished concurrently.
Syntax:
// ConcurrentMap implementation by ConcurrentHashMap
ConcurrentMap<K, V> numbers = new ConcurrentHashMap<K, V>();
Basic Methods
1. Add Elements
The put() method of ConcurrentSkipListMap is an in-built function in Java which associates the specified value
with the specified key in this map. If the map previously contained a mapping for the key, the old value is
replaced.
2. Remove Elements
The remove() method of ConcurrentSkipListMap is an in-built function in Java which removes the mapping for
the specified key from this map. The method returns null if there is no mapping for that particular key. After this
method is performed the size of the map is reduced.
3. Accessing the Elements
We can access the elements of a ConcurrentSkipListMap using the get() method.
4. Traversing
We can use the Iterator interface to traverse over any structure of the Collection Framework. Since iterators work with one type of data, we use Entry<?, ?> to resolve the two separate types (key and value) into a compatible format. Then, using the next() method, we print the elements of the ConcurrentSkipListMap. A brief example follows.
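A brief sketch of the basic methods described above, using ConcurrentSkipListMap; the keys and values are illustrative:
import java.util.Map;
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListMapDemo {
    public static void main(String[] args) {
        ConcurrentNavigableMap<Integer, String> map = new ConcurrentSkipListMap<>();

        map.put(3, "three");                 // add elements; keys are kept in sorted order
        map.put(1, "one");
        map.put(2, "two");

        map.remove(2);                       // remove a mapping

        System.out.println(map.get(1));      // access an element: prints "one"

        // Traversing the entries in key order.
        for (Map.Entry<Integer, String> entry : map.entrySet()) {
            System.out.println(entry.getKey() + " -> " + entry.getValue());
        }
    }
}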
ConcurrentHashMap in Java:
The ConcurrentHashMap class, introduced in JDK 1.5, belongs to the java.util.concurrent package and implements the ConcurrentMap interface as well as the Serializable interface. ConcurrentHashMap is an enhancement of HashMap: when dealing with threads in our application, a plain HashMap is not a good choice because it is not thread-safe.
ConcurrentHashMap is a thread-safe implementation of the Map interface in Java, which means multiple
threads can access it simultaneously without any synchronization issues. It’s part of the java.util.concurrent
package and was introduced in Java 5 as a scalable alternative to the traditional HashMap class.
One of the key features of the ConcurrentHashMap is that it provides fine-grained locking, meaning that
it locks only the portion of the map being modified, rather than the entire map. This makes it highly scalable and
efficient for concurrent operations. Additionally, the ConcurrentHashMap provides various methods for atomic
operations such as putIfAbsent(), replace(), and remove().
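A minimal sketch of ConcurrentHashMap and its atomic putIfAbsent() method; the class, key, and values are illustrative:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentHashMapDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentMap<String, Integer> scores = new ConcurrentHashMap<>();

        Runnable writer = () -> {
            // putIfAbsent() is atomic: only the first thread to reach it inserts the value.
            Integer previous = scores.putIfAbsent("alice", 1);
            if (previous != null) {
                System.out.println("Key was already present with value " + previous);
            }
        };

        Thread t1 = new Thread(writer);
        Thread t2 = new Thread(writer);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(scores.get("alice"));   // always 1: only one insert succeeds
    }
}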
Constructors of ConcurrentHashMap
• Concurrency-Level: It is the number of threads concurrently updating the map. The implementation
performs internal sizing to try to accommodate this many threads.
• Load-Factor: It’s a threshold, used to control resizing.
• Initial Capacity: The number of elements the map can accommodate initially. If the initial capacity of this map is 10, it means that it can store 10 entries.
1. ConcurrentHashMap(): Creates a new, empty map with a default initial capacity (16), load factor (0.75)
and concurrencyLevel (16).
2. ConcurrentHashMap(int initialCapacity): Creates a new, empty map with the specified initial capacity,
and with default load factor (0.75) and concurrencyLevel (16).
3. ConcurrentHashMap(int initialCapacity, float loadFactor): Creates a new, empty map with the specified
initial capacity and load factor and with the default concurrencyLevel (16).
4. ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel): Creates a new, empty map with the specified initial capacity, load factor and concurrency level.
5. ConcurrentHashMap(Map m): Creates a new map with the same mappings as the given map.
ConcurrentHashMap vs Hashtable
Hashtable
• Hashtable is an implementation of the Map data structure.
• It is a legacy class in which all methods are synchronized on the Hashtable instance using the synchronized keyword.
• It is thread-safe, as its methods are synchronized.
ConcurrentHashMap
• ConcurrentHashMap implements Map data structure and also provide thread safety like Hashtable.
• It works by dividing complete hashtable array into segments or portions and allowing parallel access to
those segments.
• The locking is at a much finer granularity at a hashmap bucket level.
• Use ConcurrentHashMap when you need very high concurrency in your application.
• It is thread-safe without synchronizing the whole map.
• Reads can happen very fast while the write is done with a lock on segment level or bucket level.
• There is no locking at the object level.
• ConcurrentHashMap doesn’t throw a ConcurrentModificationException if one thread tries to modify it while another is iterating over it.
• ConcurrentHashMap does not allow null keys or null values.
Conclusion:
Advantages of ConcurrentHashMap:
1. Thread safety: Multiple threads can read and write concurrently without external synchronization.
2. Scalability: Fine-grained locking means only the portion of the map being modified is locked, giving much better concurrency than Hashtable or a synchronized HashMap.
3. Atomic operations: Methods such as putIfAbsent(), replace(), and remove() allow common check-then-act patterns to be performed atomically.
Disadvantages of ConcurrentHashMap:
1. Higher memory overhead: The fine-grained locking mechanism used by ConcurrentHashMap requires
additional memory overhead compared to other synchronization mechanisms.
2. Complexity: The fine-grained locking mechanism used by ConcurrentHashMap can make the code more
complex, especially for developers who are not familiar with concurrent programming.