Interview

1.

JVM internals

2. Access Modifiers

3. Generics and Enums

4. Constructors

5. Java Keywords - static, final, volatile, synchronized, transient, this, super; final vs finally vs finalize

6. Abstract class and interface

7. method overloading and overriding (Polymorphism)

8. method hiding

9. Abstraction

10. Encapsulation

11. Association, Aggregation, and Composition

12. Composition and Inheritance

13. Coupling and Cohesion

14. Strings (string pool)

15. String, StringBuilder, StringBuffer.

16. static and non-static variables

17. Collections.sort()

18. comparator and comparable

19. Synchronization

20. Serialization and deserialization

21. Clone()

22. Java Collections Framework

23. Java basics like equals() and hashCode()

24. How ArrayList works internally in Java

25. How LinkedList works internally in Java

26. How HashMap works internally in Java

27. custom object as a key in Collection classes like Map or Set


28. Difference between synchronized HashMap and ConcurrentHashMap

29. ConcurrentModificationException

30. fail-fast and fail-safe iterators

31. SOLID Principles

32. Exception handling (checked and unchecked exceptions) and the rules of exception handling in method overriding

33. Deep Copy vs Shallow Copy

34. Immutability; how to make a custom object immutable?

35. Garbage Collection

36. Multithreading, concurrency, and thread basics

37. Optimistic and Pessimistic Locking

38. Lock interface, ReentrantLock

39. Executor framework

40. Semaphore

41. CountDownLatch

42. ForkJoin

43. CyclicBarrier

44. difference between CyclicBarrier and CountDownLatch

45. threadLocal

46. volatile

47. Gang of Four Design patterns

(i) Creational Design Patterns :- Singleton, Factory Method, Abstract Factory, Builder, Prototype

(ii) Structural Design Patterns :- Proxy, Facade

(iii) Behavior Design Patterns :- Strategy, Chain of Responsibility

48. JDBC (Java Database Connectivity)

49. joins in sql

50. Group By

51. ELK (Elasticsearch, Logstash, and Kibana)


52. JUnit

53. Maven

54. Gradle

55. Service Mesh

56. Kafka

57. Hibernate

--------------------------------------------------Microservice Round------------------------------------------------------------

REST APIs, REST principles, REST best practices, REST API versioning, implementation with Spring Boot, OpenAPI consumers like Swagger, the standard Netflix microservices suite (Zuul, Hystrix, Ribbon), security, mocking, transactions, design patterns of microservices, the saga pattern (important), caching in microservices, ELK, CQRS, bounded context, service decomposition, cloud setup and deployment (e.g. GC), API security (client credentials, JWT tokens), API gateway policy implementation and security, service mesh, Istio, Zuul, multithreading

1. principles of REST

2. Rest best practices

3. Implementation of Spring Boot

4. Rest Api versioning

5. Swagger

6. Zuul

7. Ribbon

8. Hystrix

9. DP of Microservices

------------------------------------------------------------------------------------------------------------------------------

1. Serialization

To serialize an object means to convert its state to a byte stream so that the byte stream can be
reverted back into a copy of the object. A Java object is serializable if its class or any of its
superclasses implements java.io.Serializable interface. Deserialization is the process of
converting the serialized form of an object back into a copy of the object.

An object can be represented as a sequence of bytes that includes the object's data as well as information about the object's type and the types of data stored in the object.

The entire process is JVM independent, meaning an object can be serialized on one platform and deserialized on an entirely different platform.

The ObjectOutputStream class contains many write methods for writing various data types, but
one method in particular stands out −

public final void writeObject(Object x) throws IOException

The above method serializes an Object and sends it to the output stream. Similarly, the
ObjectInputStream class contains the following method for deserializing an object −

public final Object readObject() throws IOException, ClassNotFoundException

Serializable is a marker interface (it has no data members or methods). It is used to "mark" Java classes so that objects of those classes gain a certain capability. Other examples of marker interfaces are Cloneable and Remote.

Points to remember

1. If a parent class implements the Serializable interface then the child class doesn't need to implement it, but the reverse is not true.

2. Only non-static data members are saved by the serialization process.

3. Static and transient data members are not saved by the serialization process. So, if you don't want to save the value of a non-static data member, make it transient.

4. The constructor of an object is never called when the object is deserialized.

5. Associated objects must also implement the Serializable interface.
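The points above can be sketched in a short example; the User class and its fields are illustrative, not from the original notes. Note that the transient field is lost and the constructor never runs during deserialization:

```java
import java.io.*;

// Minimal sketch: a serializable class with a transient field (names are illustrative).
class User implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;
    transient String password; // not saved: comes back as null after deserialization

    User(String name, String password) {
        this.name = name;
        this.password = password;
    }
}

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        User u = new User("alice", "secret");

        // Serialize to a byte array instead of a file to keep the demo self-contained.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(u);
        }

        // Deserialize: the constructor is NOT called; the transient field is null.
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            User copy = (User) ois.readObject();
            System.out.println(copy.name + " / " + copy.password); // alice / null
        }
    }
}
```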

2. clone

Object cloning refers to creating an exact copy of an object. It creates a new instance of the class of the current object and initializes all its fields with exactly the contents of the corresponding fields of this object.
Every class that implements clone() should call super.clone() to obtain the cloned object reference.

The class whose object we want to clone must also implement the java.lang.Cloneable interface; otherwise clone() throws CloneNotSupportedException when called on an object of that class.

Syntax:

protected Object clone() throws CloneNotSupportedException

Deep Copy vs Shallow Copy :-

Shallow copy is the method of copying an object that is followed by default in cloning. In this method the fields of the old object X are copied to the new object Y. When copying an object-type field, only the reference is copied, i.e. the field in Y points to the same location as the field in X. If the field is a primitive type, its value is copied.

Therefore, any change made through a referenced object in X or Y is reflected in the other object.

Usage of clone() method – Deep Copy :-

If we want to create a deep copy of object X and place it in a new object Y, then new copies of any referenced objects' fields are created and these references are placed in object Y. This means any change made to referenced object fields in X or Y is reflected only in that object and not in the other.

A deep copy copies all fields, and makes copies of dynamically allocated memory pointed to by
the fields. A deep copy occurs when an object is copied along with the objects to which it refers.

Java Enums :-

The Enum in Java is a data type which contains a fixed set of constants. Java enum constants are implicitly static and final.

Points to remember for Java Enum

Enum improves type safety

Enum can be easily used in switch

Enum can be traversed


Enum can have fields, constructors and methods

Enum may implement many interfaces but cannot extend any class, because it internally already extends java.lang.Enum
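The points above in one small sketch (the Planet enum and its values are illustrative): an enum with a field, a constructor, a method, traversal via values(), and use in a switch.

```java
// Sketch: an enum with a field, constructor, method, and use in switch.
enum Planet {
    MERCURY(3.30e23), EARTH(5.97e24); // fixed set of constants

    private final double mass; // enums can have fields

    Planet(double mass) { this.mass = mass; } // enum constructors are implicitly private

    double mass() { return mass; } // ...and methods
}

public class EnumDemo {
    public static void main(String[] args) {
        // Enums can be traversed...
        for (Planet p : Planet.values()) {
            System.out.println(p + " " + p.mass());
        }
        // ...and used directly in switch (type-safe: only Planet constants compile)
        Planet p = Planet.EARTH;
        switch (p) {
            case EARTH:
                System.out.println("home");
                break;
            default:
                System.out.println("away");
        }
    }
}
```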

SOLID Principles :-

SOLID design principles encourage us to create more maintainable, understandable, and flexible
software. Consequently, as our applications grow in size, we can reduce their complexity and
save ourselves a lot of headaches further down the road!

The following 5 concepts make up our SOLID principles:

Single Responsibility

Open/Closed

Liskov Substitution

Interface Segregation

Dependency Inversion

Single Responsibility :- a class should only have one responsibility.

How does this principle help us to build better software? Let's see a few of its benefits:

Testing – A class with one responsibility will have far fewer test cases

Lower coupling – Less functionality in a single class will have fewer dependencies

Organization – Smaller, well-organized classes are easier to search than monolithic ones

Open for Extension, Closed for Modification :- extend behavior without modifying existing code

Liskov Substitution :- derived types/classes must be completely substitutable for their base types

Interface Segregation :- separate interfaces for individual pieces of functionality

Dependency Inversion :- depend on abstractions, not on concretions
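Dependency Inversion can be sketched in a few lines; the MessageSender/NotificationService names are illustrative. The service depends only on an abstraction, so implementations can be swapped without touching it:

```java
// Sketch of Dependency Inversion: the service depends on an abstraction,
// not a concrete implementation (all names are illustrative).
interface MessageSender {
    String send(String msg);
}

class EmailSender implements MessageSender {
    public String send(String msg) { return "email: " + msg; }
}

class SmsSender implements MessageSender {
    public String send(String msg) { return "sms: " + msg; }
}

class NotificationService {
    private final MessageSender sender; // depends on the abstraction only

    NotificationService(MessageSender sender) { this.sender = sender; }

    String notifyUser(String msg) { return sender.send(msg); }
}

public class SolidDemo {
    public static void main(String[] args) {
        // Swapping implementations requires no change to NotificationService.
        System.out.println(new NotificationService(new EmailSender()).notifyUser("hello"));
        System.out.println(new NotificationService(new SmsSender()).notifyUser("hello"));
    }
}
```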

ArrayList :-

ArrayList is a resizable-array implementation of the List interface, i.e. an ArrayList grows dynamically as elements are added to it.

Note that till Java 6 the new capacity calculation used to be like this -

int newCapacity = (oldCapacity * 3)/2 + 1; 10 => 16;

This was changed in Java 7 to use the right-shift operator; with the right-shift operator the list also grows by 50% of the old capacity.

int newCapacity = oldCapacity + (oldCapacity >> 1); 10+(10>>1) => 10+5 => 15

When elements are removed from an ArrayList in Java using either remove(int i) (i.e. by index) or remove(Object o), the gap created by the removal has to be closed in the underlying array. That is done by shifting any subsequent elements to the left (subtracting one from their indices), using the System.arraycopy method.

System.arraycopy(elementData, index+1, elementData, index, numMoved);

Here index+1 is the source position and index is the destination position. Since element at the
position index is removed so elements starting from index+1 are copied to destination starting
from index.

Points to note

1. ArrayList in Java is a Resizable-array implementation of the List interface.

2. Internally ArrayList class uses an array of Object class to store its elements.

3. When initializing an ArrayList you can provide initial capacity then the array would be of the
size provided as initial capacity.

4. If initial capacity is not specified then default capacity is used to create an array. Default
capacity is 10.

5. When an element is added to an ArrayList it first verifies whether it can accommodate the new
element or it needs to grow, in case capacity has to be increased then the new capacity is
calculated which is 50% more than the old capacity and the array is increased by that much
capacity.

6. When elements are removed from an ArrayList space created by the removal of an element
has to be filled in the underlying array. That is done by Shifting any subsequent elements to the
left.

The add operation runs in amortized constant time, that is, adding n elements requires O(n) time. The size, isEmpty, get, set, iterator, and listIterator operations run in constant time O(1). All of the other operations run in linear time.
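The Java 7+ growth formula quoted above can be checked directly (this is a standalone reimplementation of the expression, not ArrayList's own private code):

```java
// Sketch of the Java 7+ ArrayList capacity calculation quoted above.
public class GrowthDemo {
    static int newCapacity(int oldCapacity) {
        return oldCapacity + (oldCapacity >> 1); // right shift by 1 = divide by 2, i.e. grow ~50%
    }

    public static void main(String[] args) {
        System.out.println(newCapacity(10)); // 15 (default capacity 10 grows to 15)
        System.out.println(newCapacity(15)); // 22
    }
}
```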

HashMap :-

HashMap Changes in Java 8

As we know now that in case of hash collision entry objects are stored as a node in a linked-list
and equals() method is used to compare keys. That comparison to find the correct key with in a
linked-list is a linear operation so in a worst case scenario the complexity becomes O(n).

To address this issue, Java 8 hash elements use balanced trees instead of linked lists after a
certain threshold is reached. Which means HashMap starts with storing Entry objects in linked
list but after the number of items in a hash becomes larger than a certain threshold, the hash will
change from using a linked list to a balanced tree, which will improve the worst case
performance from O(n) to O(log n).

Important Points

1. Time complexity is almost constant for the put and get methods until rehashing occurs.

2. In case of collision, i.e. when the index of two or more nodes is the same, the nodes are joined in a linked list: the second node is referenced by the first, the third by the second, and so on.

3. If the given key already exists in the HashMap, its value is replaced with the new value.

4. The hash code of the null key is 0.

5. When getting an object by its key, the linked list is traversed until the key matches or the next field is null.
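Topic 27 above (a custom object as a Map/Set key) follows directly from this: the key class must override both equals() and hashCode() consistently, or lookups with an equal-but-distinct key instance will miss. A minimal sketch with an illustrative Point class:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Sketch: a custom key class must override BOTH equals() and hashCode();
// the Point class is illustrative.
final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    // Equal objects must produce equal hash codes, or get() looks in the wrong bucket.
    @Override public int hashCode() { return Objects.hash(x, y); }
}

public class MapKeyDemo {
    public static void main(String[] args) {
        Map<Point, String> map = new HashMap<>();
        map.put(new Point(1, 2), "marker");
        // Works because equals()/hashCode() are consistent, even though this
        // is a different Point instance than the one used in put():
        System.out.println(map.get(new Point(1, 2))); // marker
    }
}
```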

ELK :-

"ELK" is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana.
Elasticsearch is a search and analytics engine. Logstash is a server side data processing pipeline
that ingests data from multiple sources simultaneously, transforms it, and then sends it to a
"stash" like Elasticsearch. Kibana lets users visualize data with charts and graphs in
Elasticsearch.

Often referred to simply as Elasticsearch, the ELK stack gives us the ability to aggregate logs from all our systems and applications, analyze these logs, and create visualizations for application and infrastructure monitoring, faster troubleshooting, security analytics, and more.

ELK provides centralized logging that can be useful when attempting to identify problems with servers or applications. It allows you to search all your logs in a single place. It also helps find issues that span multiple servers by correlating their logs during a specific time frame.

E = Elasticsearch

Elasticsearch is an open-source, RESTful, distributed search and analytics engine built on Apache Lucene. Support for various languages, high performance, and schema-free JSON documents makes Elasticsearch an ideal choice for log analytics, and it is commonly used for log analytics, full-text search, security intelligence, business analytics, and operational intelligence use cases.

How does Elasticsearch work? :-

We can send data in the form of JSON documents to Elasticsearch using the API or ingestion tools such as Logstash. Elasticsearch automatically stores the original document and adds a searchable reference to the document in the cluster's index. We can then search and retrieve the document using the Elasticsearch API. We can also use Kibana, an open-source visualization tool, with Elasticsearch to visualize our data and build interactive dashboards.

L = Logstash

Logstash is a lightweight, open-source, server-side data processing pipeline that allows us to collect data from a variety of sources, transform it, and send it to our desired destination. It is most often used as a data pipeline for Elasticsearch, an open-source analytics and search engine. Because of its tight integration with Elasticsearch, powerful log processing capabilities, and over 200 pre-built open-source plugins that can help you easily index your data, Logstash is a popular choice for loading data into Elasticsearch.

Logstash benefits

1. EASILY LOAD UNSTRUCTURED DATA

2. PRE-BUILT FILTERS

3. FLEXIBLE PLUGIN ARCHITECTURE

K = Kibana

Kibana is an open-source data visualization and exploration tool used for log and time-series
analytics, application monitoring, and operational intelligence use cases. It offers powerful and
easy-to-use features such as histograms, line graphs, pie charts, heat maps, and built-in
geospatial support. Also, it provides tight integration with Elasticsearch, a popular analytics and
search engine, which makes Kibana the default choice for visualizing data stored in
Elasticsearch.

ELK Stack Architecture

Here is the simple architecture of ELK stack

log -> Data Processing(Logstash) -> storage(elasticsearch) -> visualize(kibana)

Logs: Server logs that need to be analyzed are identified

Logstash: Collects logs and event data; it also parses and transforms the data

Elasticsearch: The transformed data from Logstash is stored, searched, and indexed

Kibana: Kibana uses the Elasticsearch DB to explore, visualize, and share

However, one more component is needed for data collection, called Beats. This led Elastic to rename ELK as the Elastic Stack.

log -> Data Collection(beats) -> Data Processing(Logstash) -> storage(elasticsearch) ->
visualize(kibana)

What are the important features of Elasticsearch? :-


Here are important features of Elasticsearch:

An open-source search server written using Java.

Used to index any kind of heterogeneous data

Has REST API web-interface with JSON output

Full-Text Search

Near Real-Time (NRT) search

Sharded, replicated searchable, JSON document store.

Schema-free, REST & JSON based distributed document store

Multi-language & Geolocation support

What is a Cluster? :-

A cluster is a collection of nodes which together hold data and provide joined indexing and search capabilities.

A node is an Elasticsearch instance. It is created when an Elasticsearch instance starts.

How can you delete an index in Elasticsearch?

DELETE /index_name

What is the cat API in Elasticsearch?

The cat commands accept a query string parameter and return information in a compact, human-readable form, with headers describing the info they provide. The bare /_cat command lists all the available cat commands.

What are the various commands available in Elasticsearch cat API?

Commands used with the cat API include:

cat aliases, cat allocation, cat count, cat fielddata, cat health, cat indices, cat master, cat pending tasks, cat plugins, cat recovery, cat repositories, cat snapshots, cat templates.

What is Docker?

Answer

Docker is a containerization platform that packages your application and all its dependencies together in the form of containers, ensuring that your application works seamlessly in any environment, be it development, test, or production.

Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries, etc.; anything that can be installed on a server.

This guarantees that the software will always run the same, regardless of its environment.

factory pattern :-

The factory pattern is used to create instances/objects of different classes of the same type.

Factory pattern introduces loose coupling between classes which is the most important principle
one should consider and apply while designing the application architecture. Loose coupling can
be introduced in application architecture by programming against abstract entities rather than
concrete implementations. This not only makes our architecture more flexible but also less
fragile.
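A minimal factory sketch of the loose coupling described above (the Shape hierarchy and factory names are illustrative): the caller programs against the abstraction and never names a concrete class.

```java
// Minimal factory sketch (all names are illustrative).
interface Shape { double area(); }

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class ShapeFactory {
    // All construction of concrete classes is concentrated here.
    static Shape create(String type, double size) {
        switch (type) {
            case "circle": return new Circle(size);
            case "square": return new Square(size);
            default: throw new IllegalArgumentException("unknown type: " + type);
        }
    }
}

public class FactoryDemo {
    public static void main(String[] args) {
        // Loose coupling: no `new Square(...)` appears in calling code.
        Shape s = ShapeFactory.create("square", 3);
        System.out.println(s.area()); // 9.0
    }
}
```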

Builder pattern :-

Builder pattern aims to “Separate the construction of a complex object from its representation so
that the same construction process can create different representations.”
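A compact builder sketch of that idea (HttpRequest and its fields are illustrative): construction steps are accumulated fluently in the builder, and the same process can produce differently configured immutable objects.

```java
// Builder sketch: construction is separated from representation (names illustrative).
class HttpRequest {
    private final String url;
    private final String method;
    private final int timeoutMs;

    private HttpRequest(Builder b) { // only the Builder can construct it
        this.url = b.url;
        this.method = b.method;
        this.timeoutMs = b.timeoutMs;
    }

    @Override public String toString() {
        return method + " " + url + " (timeout=" + timeoutMs + "ms)";
    }

    static class Builder {
        private final String url;      // required field
        private String method = "GET"; // defaults for optional fields
        private int timeoutMs = 1000;

        Builder(String url) { this.url = url; }
        Builder method(String m) { this.method = m; return this; }
        Builder timeoutMs(int t) { this.timeoutMs = t; return this; }
        HttpRequest build() { return new HttpRequest(this); }
    }
}

public class BuilderDemo {
    public static void main(String[] args) {
        HttpRequest req = new HttpRequest.Builder("http://example.com")
                .method("POST")
                .timeoutMs(500)
                .build();
        System.out.println(req); // POST http://example.com (timeout=500ms)
    }
}
```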

Optimistic and Pessimistic Locking

Traditional locking mechanisms, e.g. using the synchronized keyword in Java, are said to be a pessimistic technique of locking or multi-threading. You first guarantee that no other thread will interfere with a certain operation (i.e. you lock the object), and only then allow access to the instance/method.

It’s much like saying “please close the door first; otherwise some other crook will come in and
rearrange your stuff”.
Though the above approach is safe and does work, it puts a significant performance penalty on your application, for the simple reason that waiting threads cannot do anything until they get their chance to perform the guarded operation.

There is another approach which is more efficient, and it is optimistic in nature. In this approach, you proceed with an update, being hopeful that you can complete it without interference.

The optimistic approach is like the old saying, "It is easier to obtain forgiveness than permission", where "easier" here means "more efficient".
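One common way to realize the optimistic approach in Java is compare-and-set on an atomic variable (this is a sketch of the general idea, not the only implementation): read without locking, attempt the update, and retry only if another thread interfered.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of optimistic locking via compare-and-set: no lock is ever held.
public class OptimisticDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);

        Runnable increment = () -> {
            for (int i = 0; i < 1000; i++) {
                int current;
                do {
                    current = counter.get();  // read without locking (optimistic)
                } while (!counter.compareAndSet(current, current + 1)); // retry on conflict
            }
        };

        Thread t1 = new Thread(increment), t2 = new Thread(increment);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get()); // 2000 — correct despite no synchronized block
    }
}
```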

Benefits of the Executor framework :-

The framework mainly separates task creation and execution. Task creation is mostly boilerplate code and is easily replaceable.

With an executor, we have to create tasks which implement either Runnable or Callable interface
and send them to the executor.

The executor internally maintains a (configurable) thread pool to improve application performance by avoiding the continuous spawning of threads.

Executor is responsible for executing the tasks, running them with the necessary threads from
the pool.

Callable and Future :-

Another important advantage of the Executor framework is the Callable interface. It is similar to the Runnable interface, with two benefits:

Its call() method returns a result after the thread execution is complete.

When we send a Callable object to an executor, we get a Future object’s reference. We can use
this object to query the status of thread and the result of the Callable object.
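The flow above in a minimal sketch: a Callable is submitted to a pool, and the returned Future is queried for the result.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: submit a Callable to an executor and read the result from a Future.
public class ExecutorDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2); // reusable thread pool

        Callable<Integer> task = () -> 21 * 2; // call() returns a result

        Future<Integer> future = pool.submit(task);
        System.out.println(future.get()); // 42 — get() blocks until the task completes

        pool.shutdown(); // no new tasks accepted; pool threads exit when done
    }
}
```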

CountDownLatch :-

CountDownLatch was introduced in JDK 1.5. This class enables a Java thread to wait until another set of threads completes its tasks, e.g. the application's main thread may want to wait until the service threads responsible for starting framework services have started all of them.
CountDownLatch works by having a counter initialized with the number of threads, which is decremented each time a thread completes its execution. When the count reaches zero, it means all threads have completed their execution, and the thread waiting on the latch resumes execution.

public CountDownLatch(int count) {...}

This count is essentially the number of threads, for which latch should wait. This value can be set
only once, and CountDownLatch provides no other mechanism to reset this count.
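The framework-services example above, sketched with three illustrative "service" threads: each calls countDown() when started, and the main thread blocks on await() until the count hits zero.

```java
import java.util.concurrent.CountDownLatch;

// Sketch: main thread waits until three "service" threads count down.
public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int services = 3;
        CountDownLatch latch = new CountDownLatch(services);

        for (int i = 0; i < services; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("service " + id + " started");
                latch.countDown(); // decrement the shared count
            }).start();
        }

        latch.await(); // blocks until the count reaches zero
        System.out.println("all services up");
    }
}
```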

ForkJoin :-

Basically, Fork-Join breaks the task at hand into mini-tasks until a mini-task is simple enough to be solved without further breakup. It is like a divide-and-conquer algorithm. One important concept in this framework is that ideally no worker thread is idle: the workers implement a work-stealing algorithm in which idle workers steal work from those workers who are busy.

The core classes supporting the Fork-Join mechanism are ForkJoinPool and ForkJoinTask.
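The divide-and-conquer idea above, sketched as a RecursiveTask that sums an array (the threshold and names are illustrative): the task splits itself until the slice is small enough, then the results are joined.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sketch: divide-and-conquer sum of an array with fork/join.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 4; // "simple enough" cutoff (illustrative)
    private final long[] nums;
    private final int lo, hi;

    SumTask(long[] nums, int lo, int hi) { this.nums = nums; this.lo = lo; this.hi = hi; }

    @Override protected Long compute() {
        if (hi - lo <= THRESHOLD) {           // small enough: solve directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += nums[i];
            return sum;
        }
        int mid = (lo + hi) / 2;              // otherwise: break into mini-tasks
        SumTask left = new SumTask(nums, lo, mid);
        SumTask right = new SumTask(nums, mid, hi);
        left.fork();                          // run the left half asynchronously
        return right.compute() + left.join(); // compute right here, then combine
    }
}

public class ForkJoinDemo {
    public static void main(String[] args) {
        long[] nums = new long[100];
        for (int i = 0; i < nums.length; i++) nums[i] = i + 1;
        long sum = new ForkJoinPool().invoke(new SumTask(nums, 0, nums.length));
        System.out.println(sum); // 5050
    }
}
```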

CyclicBarrier :-

CyclicBarrier is used when multiple threads carry out different sub-tasks and the outputs of these sub-tasks need to be combined to form the final output. After completing its execution, each thread calls the await() method and waits for the other threads to reach the barrier. Once all the threads have reached it, the barrier gives way and the threads proceed.
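A sketch of that flow with three illustrative worker threads: the barrier's optional Runnable action runs once all parties have called await(), which is where the sub-results would be combined.

```java
import java.util.concurrent.CyclicBarrier;

// Sketch: a barrier action runs once all workers have called await().
public class BarrierDemo {
    public static void main(String[] args) {
        int parties = 3;
        CyclicBarrier barrier = new CyclicBarrier(parties,
                () -> System.out.println("all sub-tasks done, combining output"));

        for (int i = 0; i < parties; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("worker " + id + " finished its sub-task");
                try {
                    barrier.await(); // wait for the other workers to reach the barrier
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }).start();
        }
    }
}
```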

Difference between CyclicBarrier and CountDownLatch

1. Basic: CyclicBarrier is a synchronization aid that allows a set of threads to all wait for each other to reach a common barrier point. CountDownLatch is a synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes.

2. Runnable: CyclicBarrier has a constructor where a Runnable barrier action can be provided; CountDownLatch has no such constructor.

3. Thread/Task: CyclicBarrier maintains a count of threads; CountDownLatch maintains a count of tasks.

4. Advanceable: CyclicBarrier is not advanceable (it is cyclic and can be reused); CountDownLatch is advanceable (its count only moves toward zero and cannot be reset).

5. Exception: In CyclicBarrier, if one thread is interrupted while waiting then all other waiting threads throw BrokenBarrierException. In CountDownLatch, only the current thread throws InterruptedException; other threads are not affected.

Volatile :-

The volatile keyword is used when a variable's value is modified by different threads. A volatile variable is not cached; it is always read from main memory. The volatile keyword cannot be used with classes or methods; it is used only with variables. It guarantees visibility and reduces the risk of memory consistency errors: writes to a volatile variable are always visible to other threads.

If you declare a variable as volatile, individual reads and writes of it are atomic (though compound operations such as count++ are not).
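The classic visibility sketch: a volatile flag shared between a writer and a spinning reader. Without volatile, the worker could cache the flag and spin forever; with it, the write is guaranteed to become visible.

```java
// Sketch: a volatile flag guarantees the worker sees the main thread's write.
public class VolatileDemo {
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // spin until another thread clears the flag
            }
            System.out.println("worker observed running=false, exiting");
        });
        worker.start();

        Thread.sleep(100);
        running = false; // volatile write: immediately visible to the worker
        worker.join();   // returns because the worker's loop terminates
    }
}
```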


ThreadLocal :-

The Java ThreadLocal class enables you to create variables that can only be read and written by
the same thread. Thus, even if two threads are executing the same code, and the code has a
reference to the same ThreadLocal variable, the two threads cannot see each other's
ThreadLocal variables. Thus, the Java ThreadLocal class provides a simple way to make code
thread safe that would not otherwise be so.
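A small sketch of that isolation with two illustrative threads: both use the same ThreadLocal reference, but each thread reads only the value it wrote itself.

```java
// Sketch: threads share one ThreadLocal reference but each sees its own value.
public class ThreadLocalDemo {
    private static final ThreadLocal<String> user =
            ThreadLocal.withInitial(() -> "nobody"); // per-thread initial value

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            user.set(Thread.currentThread().getName()); // write is thread-confined
            System.out.println(user.get());             // reads this thread's own copy
        };

        Thread a = new Thread(task, "alice");
        Thread b = new Thread(task, "bob");
        a.start(); b.start();
        a.join(); b.join();

        System.out.println(user.get()); // nobody — main thread's copy is untouched
    }
}
```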

Docker File :-

FROM :- Tells Docker which base image you want to base your image on.

ADD :-

The ADD command is used to copy files/directories into a Docker image: it copies files from local storage to a destination in the Docker image.

EXPOSE :- The EXPOSE instruction informs Docker that the container listens on the specified
network ports at runtime.

ENTRYPOINT :- ENTRYPOINT is the other instruction used to configure how the container will run; you specify a command and its parameters.
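The four instructions above combined in a minimal sketch of a Dockerfile; the base image and jar path are illustrative assumptions, not from the original notes.

```dockerfile
# Minimal Dockerfile sketch (image and jar names are illustrative).
FROM openjdk:17-slim                       # base image
ADD target/app.jar /opt/app/app.jar        # copy the build artifact into the image
EXPOSE 8080                                # document the port the container listens on
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]  # command to run the container
```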

SAGA :-

A saga is a sequence of local transactions. Each local transaction updates the database and
publishes a message or event to trigger the next local transaction in the saga. If a local
transaction fails because it violates a business rule then the saga executes a series of
compensating transactions that undo the changes that were made by the preceding local
transactions.

There are two ways of coordinating sagas:

Choreography - each local transaction publishes domain events that trigger local transactions in
other services

Orchestration - an orchestrator (object) tells the participants what local transactions to execute
