Interview
JVM internals
2. Access Modifiers
4. Constructors
5. Java Keywords - static, final, volatile, synchronized, transient, this, super, finally, finalize
8. method hiding
9. Abstraction
10. Encapsulation
17. Collections.sort()
19. Synchronization
21. Clone()
40. Semaphore
41. CountDownLatch
42. ForkJoin
43. CyclicBarrier
45. threadLocal
46. volatile
(i) Creational Design Patterns :- Singleton, Factory Method, Abstract Factory, Builder, Prototype
50. Group By
53. Maven
54. Gradle
56. kafka
57. Hibernate
--------------------------------------------------Microservice Round------------------------------------------------------------
REST API, REST principles, REST best practices, REST API versioning, implementation of Spring
Boot, OpenAPI consumers like Swagger, standard Netflix microservices suite (Zuul, Hystrix,
Ribbon), security, mocking, transactions, design patterns of microservices, saga pattern (important), caching in
microservices, ELK, CQRS, bounded context, service decomposition, cloud setup and
deployment like GC, API security (client credentials, JWT token), API gateway policy implementation,
security, service mesh, Istio, Zuul, multithreading
1. principles of REST
5. Swagger
6. Zuul
7. Ribbon
8. Hystrix
9. DP of Microservices
------------------------------------------------------------------------------------------------------------------------------
1. Serialization
To serialize an object means to convert its state to a byte stream so that the byte stream can be
reverted back into a copy of the object. A Java object is serializable if its class or any of its
superclasses implements java.io.Serializable interface. Deserialization is the process of
converting the serialized form of an object back into a copy of the object.
An object can be represented as a sequence of bytes that includes the object's data as well as
information about the object's type and the types of data stored in the object.
The entire process is JVM independent, meaning an object can be serialized on one platform and
deserialized on an entirely different platform.
The ObjectOutputStream class contains many write methods for writing various data types, but
one method in particular stands out: writeObject(Object obj). This method serializes an object and
sends it to the output stream. Similarly, the ObjectInputStream class contains the readObject()
method for deserializing an object.
Serializable is a marker interface (it has no data members or methods). It is used to "mark" Java
classes so that objects of those classes gain a certain capability. Other examples of marker
interfaces are Cloneable and Remote.
Points to remember
1. If a parent class implements the Serializable interface then the child class doesn't need to
implement it, but the reverse is not true.
2. Static and transient data members are not saved by the serialization process. So, if
you don't want to save the value of a non-static data member, make it transient.
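The points above can be sketched in a short round-trip example. The User class here is hypothetical, used only to show that a transient field is skipped while a normal field survives serialization:

```java
import java.io.*;

// Hypothetical User class used only for illustration.
class User implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;                // saved by serialization
    transient String password;  // transient: not saved
    User(String name, String password) { this.name = name; this.password = password; }
}

class SerializationDemo {
    // Serialize to a byte stream, then deserialize back into a copy.
    static User roundTrip(User in) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(in);                       // serialization
            }
            try (ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()))) {
                return (User) ois.readObject();            // deserialization
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

After the round trip, name keeps its value while the transient password comes back as null.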
2. clone
Object cloning refers to creating an exact copy of an object. It creates a new instance of the
current object's class and initializes all its fields with exactly the contents of the corresponding fields
of this object.
Every class that overrides clone() should call super.clone() to obtain the cloned object
reference.
The class whose object we want to clone must also implement the java.lang.Cloneable
interface; otherwise CloneNotSupportedException is thrown when the clone method is called on that
class's object.
Syntax: protected Object clone() throws CloneNotSupportedException
Shallow copy is a method of copying an object and is the default in cloning. In this method
the fields of an old object X are copied to the new object Y. When copying an object-type field, only the
reference is copied, i.e. object Y will point to the same location as X. If the field
is a primitive type, its value is copied.
Therefore, any change made through a referenced object in X or Y will be reflected in the other
object.
If we want to create a deep copy of object X and place it in a new object Y, then new copies of any
referenced objects' fields are created and these references are placed in object Y. This means any
changes made in referenced object fields in X or Y will be reflected only in that object and
not in the other.
A deep copy copies all fields, and makes copies of dynamically allocated memory pointed to by
the fields. A deep copy occurs when an object is copied along with the objects to which it refers.
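The shallow/deep distinction can be sketched with two hypothetical classes. In the shallow copy, Person's Address reference is shared; the deep copy clones the Address too:

```java
// Hypothetical Address/Person classes contrasting shallow and deep copy.
class Address implements Cloneable {
    String city;
    Address(String city) { this.city = city; }
    @Override public Address clone() {
        try { return (Address) super.clone(); }
        catch (CloneNotSupportedException e) { throw new AssertionError(e); }
    }
}

class Person implements Cloneable {
    String name;     // value copied either way
    Address address; // reference field: shared by a shallow copy

    Person(String name, Address address) { this.name = name; this.address = address; }

    // Shallow copy: super.clone() copies field values, so address is shared.
    @Override public Person clone() {
        try { return (Person) super.clone(); }
        catch (CloneNotSupportedException e) { throw new AssertionError(e); }
    }

    // Deep copy: also clone the referenced Address.
    public Person deepClone() {
        Person p = this.clone();
        p.address = this.address.clone();
        return p;
    }
}
```

Mutating the shallow copy's address is visible through the original, while mutating the deep copy's address is not.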
Java Enums :-
An enum in Java is a data type which contains a fixed set of constants. Java enum
constants are implicitly static and final.
An enum may implement many interfaces but cannot extend any class, because it internally extends
the Enum class.
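A minimal sketch of both points, using an illustrative Status enum that implements an interface and carries per-constant state:

```java
// Illustrative enum: a fixed set of constants that implements an interface.
interface HasCode { int code(); }

enum Status implements HasCode {
    ACTIVE(1), INACTIVE(0);   // the constants are implicitly static and final

    private final int code;
    Status(int code) { this.code = code; }
    @Override public int code() { return code; }
}
```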
SOLID Principles :-
SOLID design principles encourage us to create more maintainable, understandable, and flexible
software. Consequently, as our applications grow in size, we can reduce their complexity and
save ourselves a lot of headaches further down the road!
Single Responsibility
Open/Closed
Liskov Substitution
Interface Segregation
Dependency Inversion
How does the Single Responsibility Principle help us build better software? Let's see a few of its benefits:
Testing – A class with one responsibility will have far fewer test cases
Lower coupling – Less functionality in a single class will have fewer dependencies
Organization – Smaller, well-organized classes are easier to search than monolithic ones
Liskov Substitution :- Derived types/class must be completely substitutable for their base types
ArrayList :-
Note that until Java 6 the new capacity calculation used to be like this -
int newCapacity = (oldCapacity * 3) / 2 + 1;
This was changed in Java 7 to use the right shift operator. With the right shift operator it still grows by
50% of the old capacity.
int newCapacity = oldCapacity + (oldCapacity >> 1); // 10 + (10 >> 1) => 10 + 5 => 15
When elements are removed from an ArrayList in Java using either remove(int i) (i.e. by index)
or remove(Object o), the gap created by the removal of an element has to be filled in the underlying
array. That is done by shifting any subsequent elements to the left (subtracting one from their
indices). The System.arraycopy method is used for that.
Here index + 1 is the source position and index is the destination position. Since the element at
position index is removed, the elements starting from index + 1 are copied to the destination starting
from index.
Points to note
1. Internally the ArrayList class uses an array of Object to store its elements.
2. When initializing an ArrayList you can provide an initial capacity; the array will then be of the
size provided as initial capacity.
3. If no initial capacity is specified, the default capacity of 10 is used to create the array.
4. When an element is added to an ArrayList it first verifies whether it can accommodate the new
element or it needs to grow; in case capacity has to be increased, the new capacity is
calculated as 50% more than the old capacity and the array is grown by that much.
5. When elements are removed from an ArrayList, the space created by the removal has to be
filled in the underlying array. That is done by shifting any subsequent elements to the left.
The size, isEmpty, get, set, iterator, and listIterator operations run in constant time O(1).
The add operation runs in amortized constant time, that is, adding n elements requires O(n) time.
All of the other operations run in linear time.
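The growth formula and the remove-shift described above can be sketched as standalone helpers (names are illustrative, not the actual ArrayList internals):

```java
class ArrayListInternals {
    // Java 7+ growth: the new capacity is 50% more than the old one.
    static int newCapacity(int oldCapacity) {
        return oldCapacity + (oldCapacity >> 1);
    }

    // Sketch of how remove(index) fills the gap with System.arraycopy.
    static Object[] removeAt(Object[] elementData, int size, int index) {
        int numMoved = size - index - 1;
        if (numMoved > 0) {
            // source position index+1, destination position index
            System.arraycopy(elementData, index + 1, elementData, index, numMoved);
        }
        elementData[size - 1] = null; // clear the dropped slot for GC
        return elementData;
    }
}
```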
HashMap :-
As we know now that in case of hash collision entry objects are stored as a node in a linked-list
and equals() method is used to compare keys. That comparison to find the correct key with in a
linked-list is a linear operation so in a worst case scenario the complexity becomes O(n).
To address this issue, Java 8 hash elements use balanced trees instead of linked lists after a
certain threshold is reached. Which means HashMap starts with storing Entry objects in linked
list but after the number of items in a hash becomes larger than a certain threshold, the hash will
change from using a linked list to a balanced tree, which will improve the worst case
performance from O(n) to O(log n).
Important Points
1. Time complexity is almost constant for the put and get methods, as long as rehashing does not occur.
2. In case of collision, i.e. when the indexes of two or more nodes are the same, the nodes are joined
in a linked list: the second node is referenced by the first, the third by the second, and so on.
3. If the given key already exists in the HashMap, the value is replaced with the new value.
4. When getting an object by its key, the linked list is traversed until the key matches or the next
field is null.
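Point 3 above can be shown in a few lines: putting an existing key replaces its value rather than adding a second entry.

```java
import java.util.HashMap;
import java.util.Map;

class HashMapDemo {
    // Putting an existing key replaces the value instead of adding an entry.
    static Map<String, Integer> demo() {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);
        m.put("a", 2);   // replaces 1
        m.put("b", 3);
        return m;
    }
}
```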
ELK :-
"ELK" is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana.
Elasticsearch is a search and analytics engine. Logstash is a server side data processing pipeline
that ingests data from multiple sources simultaneously, transforms it, and then sends it to a
"stash" like Elasticsearch. Kibana lets users visualize data with charts and graphs in
Elasticsearch.
Often referred to simply as Elasticsearch, the ELK stack gives us the ability to aggregate logs from
all our systems and applications, analyze these logs, and create visualizations for application
and infrastructure monitoring, faster troubleshooting, security analytics, and more.
ELK provides centralized logging that can be useful when attempting to identify problems with
servers or applications. It allows you to search all your logs in a single place. It also helps to
find issues that span multiple servers by correlating their logs during a specific time frame.
E = Elasticsearch
We can send data in the form of JSON documents to Elasticsearch using the API or ingestion
tools such as Logstash. Elasticsearch automatically stores the original document and adds a
searchable reference to the document in the cluster's index. We can then search and retrieve the
document using the Elasticsearch API. We can also use Kibana, an open-source visualization
tool, with Elasticsearch to visualize our data and build interactive dashboards.
L = Logstash
Logstash offers pre-built filters, so it can readily parse and transform common data formats
without custom transformation pipelines.
K = Kibana
Kibana is an open-source data visualization and exploration tool used for log and time-series
analytics, application monitoring, and operational intelligence use cases. It offers powerful and
easy-to-use features such as histograms, line graphs, pie charts, heat maps, and built-in
geospatial support. Also, it provides tight integration with Elasticsearch, a popular analytics and
search engine, which makes Kibana the default choice for visualizing data stored in
Elasticsearch.
Logstash: collects logs and event data. It even parses and transforms data.
Elasticsearch: the transformed data from Logstash is stored, indexed, and made searchable.
However, one more component is needed for data collection, called Beats. This led Elastic to
rename ELK as the Elastic Stack.
log -> Data Collection(beats) -> Data Processing(Logstash) -> storage(elasticsearch) ->
visualize(kibana)
Full-Text Search
What is a Cluster? :-
A cluster is a collection of nodes which together holds data and provides joined indexing and
search capabilities.
These commands accept a query string parameter. This helps to see all the info and headers
they provide; the /_cat command lists all the available commands:
cat health, cat indices, cat master, cat pending tasks, cat plugins, cat recovery,
cat repositories, cat snapshots, cat templates.
What is Docker?
Answer
Docker is a containerization platform which packages your application and all its dependencies
together in the form of containers so as to ensure that your application works seamlessly in any
environment be it development or test or production.
Docker containers wrap a piece of software in a complete filesystem that contains everything
needed to run: code, runtime, system tools, system libraries, and anything else that can be installed
on a server.
This guarantees that the software will always run the same, regardless of its environment.
factory pattern :-
The factory pattern is used to create instances/objects of different classes of the same type.
Factory pattern introduces loose coupling between classes which is the most important principle
one should consider and apply while designing the application architecture. Loose coupling can
be introduced in application architecture by programming against abstract entities rather than
concrete implementations. This not only makes our architecture more flexible but also less
fragile.
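A minimal sketch of the idea, with illustrative Shape classes: callers program against the Shape abstraction, and only the factory knows the concrete implementations.

```java
// Illustrative Shape hierarchy: callers depend only on the abstraction.
interface Shape { double area(); }

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class ShapeFactory {
    // The factory decides which concrete class to instantiate.
    static Shape create(String type, double size) {
        switch (type) {
            case "circle": return new Circle(size);
            case "square": return new Square(size);
            default: throw new IllegalArgumentException("unknown type: " + type);
        }
    }
}
```

Swapping a concrete implementation now only touches the factory, not its callers, which is the loose coupling the text describes.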
Builder pattern :-
Builder pattern aims to “Separate the construction of a complex object from its representation so
that the same construction process can create different representations.”
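A common way to sketch this is a fluent builder for an immutable object; the Pizza class here is a hypothetical example, not from the source:

```java
// Hypothetical immutable Pizza assembled step by step by its Builder.
class Pizza {
    private final String size;
    private final boolean cheese;
    private final boolean pepperoni;

    private Pizza(Builder b) {
        this.size = b.size; this.cheese = b.cheese; this.pepperoni = b.pepperoni;
    }

    String describe() {
        return size + (cheese ? "+cheese" : "") + (pepperoni ? "+pepperoni" : "");
    }

    static class Builder {
        private String size = "medium";
        private boolean cheese;
        private boolean pepperoni;
        Builder size(String s) { this.size = s; return this; }
        Builder cheese() { this.cheese = true; return this; }
        Builder pepperoni() { this.pepperoni = true; return this; }
        // The same construction process yields different representations.
        Pizza build() { return new Pizza(this); }
    }
}
```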
Pessimistic vs Optimistic Locking :-
Pessimistic locking is much like saying "please close the door first; otherwise some other crook
will come in and rearrange your stuff".
Though the above approach is safe and does work, it puts a significant performance penalty on your
application. The reason is simple: waiting threads cannot do anything until they get a chance to
perform the guarded operation.
There is another approach which is more efficient and optimistic in nature. In this approach, you
proceed with an update, being hopeful that you can complete it without interference.
The optimistic approach is like the old saying, "It is easier to obtain forgiveness than permission",
where "easier" here means "more efficient".
Executor Framework :-
The framework mainly separates task creation and execution. Task creation is mainly boilerplate
code and is easily replaceable.
With an executor, we create tasks which implement either the Runnable or Callable interface
and send them to the executor.
The executor is responsible for executing the tasks, running them with the necessary threads from
its pool.
Another important advantage of the Executor framework is the Callable interface. It's similar to
the Runnable interface with two benefits:
Its call() method returns a result after the thread execution is complete.
When we send a Callable object to an executor, we get a Future object reference. We can use
this object to query the status of the thread and the result of the Callable.
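The Callable/Future flow described above, in a minimal sketch:

```java
import java.util.concurrent.*;

class ExecutorDemo {
    // Submit a Callable; the returned Future holds the result of call().
    static int compute() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Callable<Integer> task = () -> 21 * 2;   // returns a result
            Future<Integer> future = pool.submit(task);
            return future.get();                      // blocks until call() completes
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```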
CountDownLatch :-
CountDownLatch was introduced with JDK 1.5. This class enables a Java thread to wait until
another set of threads completes their tasks, e.g. the application's main thread wants to wait until
the service threads responsible for starting framework services have finished starting all
services.
CountDownLatch works by having a counter initialized with the number of threads, which is
decremented each time a thread completes its execution. When the count reaches zero, it means
all threads have completed their execution, and the thread waiting on the latch resumes execution.
This count is essentially the number of threads for which the latch should wait. The value can be set
only once, and CountDownLatch provides no other mechanism to reset this count.
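The main-thread-waits-for-services scenario above can be sketched as:

```java
import java.util.concurrent.CountDownLatch;

class LatchDemo {
    // The calling thread waits until n worker threads have counted down.
    static long startServices(int n) {
        CountDownLatch latch = new CountDownLatch(n);
        for (int i = 0; i < n; i++) {
            new Thread(() -> {
                // ... start one framework service ...
                latch.countDown();   // this service is done
            }).start();
        }
        try {
            latch.await();           // resumes only when the count reaches zero
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return latch.getCount();     // 0 once all services have counted down
    }
}
```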
ForkJoin :-
Basically, Fork-Join breaks the task at hand into mini-tasks until a mini-task is simple
enough that it can be solved without further breakup. It's like a divide-and-conquer algorithm.
One important concept in this framework is that ideally no worker thread is idle: it
implements a work-stealing algorithm in which idle workers steal work from workers that
are busy.
The core classes supporting the Fork-Join mechanism are ForkJoinPool and ForkJoinTask.
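A classic divide-and-conquer sketch using RecursiveTask (the range sum and the 1000-element threshold are illustrative choices):

```java
import java.util.concurrent.RecursiveTask;

// Divide-and-conquer sum: split until the range is small enough to solve directly.
class SumTask extends RecursiveTask<Long> {
    private final long from, to;
    SumTask(long from, long to) { this.from = from; this.to = to; }

    @Override protected Long compute() {
        if (to - from <= 1000) {                  // small enough: solve directly
            long sum = 0;
            for (long i = from; i <= to; i++) sum += i;
            return sum;
        }
        long mid = (from + to) / 2;               // otherwise fork into mini-tasks
        SumTask left = new SumTask(from, mid);
        SumTask right = new SumTask(mid + 1, to);
        left.fork();                              // run the left half asynchronously
        return right.compute() + left.join();     // compute right, then combine
    }
}
```

Invoked via a ForkJoinPool, the pool's worker threads steal forked subtasks from each other so none sit idle.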
CyclicBarrier :-
CyclicBarrier is used when multiple threads carry out different sub-tasks and the output of these
sub-tasks needs to be combined to form the final output. After completing its execution, each thread
calls the await() method and waits for the other threads to reach the barrier. Once all the threads have
reached it, the barrier gives way for the threads to proceed.
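A sketch of that flow: each worker awaits at the barrier, and a barrier action (here an illustrative "merged" append) combines the results once everyone has arrived.

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

class BarrierDemo {
    // Each worker finishes its sub-task then waits; the barrier action runs
    // once all parties have arrived, after which all threads proceed.
    static String runPhase(int parties) {
        StringBuilder combined = new StringBuilder();
        CyclicBarrier barrier = new CyclicBarrier(parties, () -> combined.append("merged"));
        Thread[] workers = new Thread[parties];
        for (int i = 0; i < parties; i++) {
            workers[i] = new Thread(() -> {
                try {
                    // ... compute this thread's sub-task result ...
                    barrier.await();   // wait for the other threads to arrive
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return combined.toString();
    }
}
```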
CyclicBarrier vs CountDownLatch:
- Basic: CyclicBarrier is a synchronization aid that allows a set of threads to all wait for each
other to reach a common barrier point. CountDownLatch is a synchronization aid that allows one or
more threads to wait until a set of operations being performed in other threads completes.
- Runnable: CyclicBarrier's constructor can accept a Runnable barrier action; CountDownLatch's cannot.
- Thread / Task: with CyclicBarrier, threads wait for other threads; with CountDownLatch, threads
wait for the completion of tasks counted by countDown().
- Advanceable: CountDownLatch is not advanceable (the count cannot be reset); CyclicBarrier is
advanceable (it can be reused once the barrier is tripped).
- Exception: if one thread is interrupted while waiting at a CyclicBarrier, all other waiting
threads throw BrokenBarrierException.
Volatile :-
The volatile keyword is used when a variable's value is modified by different threads. A volatile
variable is not cached; it is always read from main memory. The volatile keyword cannot be used
with classes or methods, only with variables. It guarantees visibility: writes to a volatile
variable are always visible to other threads, which reduces the risk of memory consistency errors.
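The classic visibility example is a stop flag: without volatile the worker could cache the flag and spin forever; with volatile, the write is seen and the worker stops. The sleep/join timings below are arbitrary illustration values.

```java
class VolatileDemo {
    // volatile forces every read of 'running' to see the latest write.
    private static volatile boolean running = true;

    static boolean stopsCleanly() {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy-wait until another thread clears the flag
            }
        });
        worker.start();
        try {
            Thread.sleep(50);
            running = false;     // visible to the worker thread
            worker.join(2000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !worker.isAlive();
    }
}
```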
ThreadLocal :-
The Java ThreadLocal class enables you to create variables that can only be read and written by
the same thread. Thus, even if two threads are executing the same code, and the code has a
reference to the same ThreadLocal variable, the two threads cannot see each other's
ThreadLocal variables. Thus, the Java ThreadLocal class provides a simple way to make code
thread safe that would not otherwise be so.
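The per-thread isolation described above can be shown directly: a value set on one thread is invisible to another thread reading the same ThreadLocal (the variable name and values are illustrative).

```java
class ThreadLocalDemo {
    // Each thread gets its own independently initialized copy of the variable.
    private static final ThreadLocal<String> USER =
            ThreadLocal.withInitial(() -> "unset");

    static String[] demo() {
        USER.set("main-thread-value");
        String[] seen = new String[2];
        Thread other = new Thread(() -> seen[1] = USER.get());
        other.start();
        try { other.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        seen[0] = USER.get();
        return seen;  // the other thread never sees the main thread's write
    }
}
```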
Docker File :-
FROM :- It tells Docker which base image you want to base your image on.
Add keyword :-
The ADD command is used to copy files/directories into a Docker image. It copies files from local
storage to a destination in the Docker image.
EXPOSE :- The EXPOSE instruction informs Docker that the container listens on the specified
network ports at runtime.
ENTRYPOINT :- ENTRYPOINT is the other instruction used to configure how the container will run;
you specify a command and its parameters.
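Putting the instructions above together, a minimal Dockerfile sketch for a Java application (image name, jar path, and port are all illustrative assumptions):

```dockerfile
# Hypothetical Dockerfile for a Spring Boot jar; all names are illustrative.
# FROM: the base image to build on.
FROM openjdk:17-jdk-slim
# ADD: copy the built jar from local storage into the image.
ADD target/app.jar /opt/app/app.jar
# EXPOSE: the container listens on port 8080 at runtime.
EXPOSE 8080
# ENTRYPOINT: the command (and parameters) the container runs.
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```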
SAGA :-
A saga is a sequence of local transactions. Each local transaction updates the database and
publishes a message or event to trigger the next local transaction in the saga. If a local
transaction fails because it violates a business rule then the saga executes a series of
compensating transactions that undo the changes that were made by the preceding local
transactions.
Choreography - each local transaction publishes domain events that trigger local transactions in
other services
Orchestration - an orchestrator (object) tells the participants what local transactions to execute