KB Java Microservices Int

Microservices interview questions

Uploaded by Bhargav

Service Discovery : Eureka (used when one microservice calls another via Eureka; each microservice is a Eureka client and there is a single Eureka server)
@EnableEurekaClient
@EnableEurekaServer
Circuit Breaker Pattern : Hystrix (when the called microservice is down, the fallback method named in fallbackMethod, here "getFallbackMethod", is executed)
@EnableCircuitBreaker
@HystrixCommand(fallbackMethod = "getFallbackMethod")
@HystrixProperty

Tunable Hystrix properties: invocation/request threshold (e.g. 5), timeout (e.g. 30 sec), sleep window before retrying (e.g. 5 min)

@Value for injecting a value from application.properties

@ConfigurationProperties to bind a group of properties that share a prefix

Spring Boot Actuator : exposes management endpoints when the corresponding properties are enabled in application.properties

Spring profiles : the default profile is used unless another is activated; a specific profile is picked up from "application-<profile>.<extension>"

spring.profiles.active: <profile>

@Profile("dev")

@Profile("production")

How do you externalize your config?

@EnableConfigServer ---> Spring Cloud Config server

@RefreshScope --> refreshes beans with the latest Spring Cloud Config values from the Git repo

Reactive Programming or Async Programming


Eureka - Service discovery and Load balancer
AMQP VS MQTT
Microservices
JWT(Json Web Token)
How does HashMap work internally?
Data collections
Linked list
Load Balancer
What is NodePort
Lifecycle of Docker Container : Created, Started, Paused, Exited, Dead
Lifecycle of Thread : New, Runnable (Start), Running (run), Waiting (sleep, wait),
Dead (End of execution)
What is Kubernetes and Docker and how they relate?

Kubernetes :
1. High availability / no downtime
2. Scalability / high performance
3. Disaster recovery : backup and restore

Worker Node : runs the Docker containers

Each worker node runs a kubelet, the agent that starts and maintains the pods on that node
1 pod : one or many containers
Usually 1 pod per application
A pod is an abstraction layer over the container
Service : manages the individual pods; whenever a pod dies and is recreated, the Service still reaches it (it acts as a service registry)
1. It provides a permanent IP address
2. It acts as a load balancer
3. Whenever a pod dies and is recreated, the permanent IP address stays the same
Master Node : entry point to the Kubernetes cluster, accessed via kubectl

NodePort :

A NodePort is an open port on every node of your cluster. Kubernetes transparently routes incoming traffic on the NodePort to your service, even if your application is running on a different node.
An Ingress (with its own configuration) can additionally route and load-balance external traffic.

What is Hibernate and JPA?


Hibernate methods? save (returns the generated id), persist, update, merge, saveOrUpdate, flush, delete
Hibernate caching? 1. First-level (Session), 2. Second-level (global, per SessionFactory)
Functional interface?
Functional programming?
Reactive programming?
Method reference in Java 8? ClassName::methodName
Java 8 features?
Executor Services?
Difference between JAR, WAR and EAR
What is dynamic programming?
What is containerization and why is it required?
Fibonacci program?
what is pod, service?

Service Capability Exposure Function (SCEF)

Services Capability Server / Application Server (SCS/AS)

Dynamic Programming :

It is typically used to optimize recursive algorithms, as they tend to scale exponentially. The main idea is to break down complex problems (with many recursive calls) into smaller subproblems and then save their results in memory so that we don't have to recalculate them each time we use them.
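The document later asks for a "Fibonacci program"; a minimal memoized-Fibonacci sketch of the idea above (class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Memoized Fibonacci: each subproblem fib(n) is computed once and cached,
// turning the exponential naive recursion into O(n).
public class FibMemo {
    private static final Map<Integer, Long> cache = new HashMap<>();

    public static long fib(int n) {
        if (n <= 1) return n;
        Long cached = cache.get(n);
        if (cached != null) return cached;   // reuse the saved subproblem
        long value = fib(n - 1) + fib(n - 2);
        cache.put(n, value);
        return value;
    }
}
```

Without the cache, fib(50) would take billions of recursive calls; with it, each of the 50 subproblems is solved once.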

SOLID principles
ThreadLocal
When to use static and when to use an immutable class
what is ?
What is polymorphism?

Polymorphism in Java has two types: compile-time polymorphism (static binding) and runtime polymorphism (dynamic binding). Method overloading is an example of static polymorphism, while method overriding is an example of dynamic polymorphism.
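A minimal sketch of both kinds (Shape and Circle are hypothetical names):

```java
// Compile-time polymorphism: area() is overloaded, resolved statically
// by the argument list. Runtime polymorphism: name() is overridden,
// resolved dynamically by the object's actual type.
class Shape {
    double area(double side) { return side * side; }    // overload 1
    double area(double w, double h) { return w * h; }   // overload 2
    String name() { return "shape"; }
}

class Circle extends Shape {
    @Override
    String name() { return "circle"; }                  // override
}
```

With `Shape s = new Circle();`, calling `s.name()` returns "circle" because the call is dispatched at runtime to the subclass method.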
What is shallow copy and deep copy in the clone method?
Shallow copy : copies primitive fields by value but copies only the references for object fields, so if a referenced object in the original changes, the copy sees the change too.
Deep copy : creates new objects (e.g. with the new operator) for all fields, so the copy never changes when the original does.
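A sketch of the difference using clone(); the Scores class and its field are hypothetical:

```java
import java.util.Arrays;

// Shallow vs deep copy: the shallow clone shares the int[] with the
// original, while the deep copy gets its own array.
class Scores implements Cloneable {
    int[] values;

    Scores(int[] values) { this.values = values; }

    Scores shallowCopy() {
        try {
            return (Scores) super.clone();   // fields copied, array shared
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);
        }
    }

    Scores deepCopy() {
        // new array: changes to the original no longer leak into the copy
        return new Scores(Arrays.copyOf(values, values.length));
    }
}
```

Mutating `original.values[0]` after copying changes the shallow copy's view but not the deep copy's.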

Try-with-resources --> eligible objects: anything implementing AutoCloseable (including Closeable)

Builder pattern : used to create complex objects

Create a static nested Builder class, add a method per property that returns the builder, then finish with new Builder()...build().
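A minimal Builder sketch; the Employee class and its properties are illustrative (the document mentions an Employee with id/name/salary elsewhere):

```java
// Builder pattern: a static nested Builder collects properties, each
// setter returns the builder for chaining, and build() produces the
// immutable object.
public class Employee {
    private final int id;
    private final String name;

    private Employee(Builder b) { this.id = b.id; this.name = b.name; }

    public int getId() { return id; }
    public String getName() { return name; }

    public static class Builder {
        private int id;
        private String name;

        public Builder id(int id) { this.id = id; return this; }
        public Builder name(String name) { this.name = name; return this; }
        public Employee build() { return new Employee(this); }
    }
}
```

Usage: `new Employee.Builder().id(1).name("Bhargav").build()`.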

Difference between Callable and Runnable

Callable returns a result (and can throw a checked exception); Runnable returns nothing.

call() and run() methods
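A small sketch of the difference using an ExecutorService (class and method names are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Runnable.run() returns nothing, while Callable.call() returns a value
// (and may declare checked exceptions).
public class CallableVsRunnable {
    public static int demo() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Runnable task = () -> { /* side effects only, no result */ };
            pool.submit(task);
            Callable<Integer> calc = () -> 21 * 2;   // produces a result
            Future<Integer> future = pool.submit(calc);
            return future.get();                      // retrieve the value
        } catch (Exception e) {
            throw new AssertionError(e);
        } finally {
            pool.shutdown();
        }
    }
}
```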

Fail fast and fail safe :

Fail-safe iterators will not throw any exception even if the collection is modified while iterating over it, whereas fail-fast iterators throw an exception (ConcurrentModificationException) if the collection is modified while iterating over it.

Fail fast : ArrayList, HashMap

Fail safe : CopyOnWriteArrayList, ConcurrentHashMap
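A sketch demonstrating the difference (the failsFast helper is hypothetical):

```java
import java.util.List;
import java.util.ConcurrentModificationException;

// Modifies the list while iterating over it. A fail-fast list (ArrayList)
// throws ConcurrentModificationException; a fail-safe list
// (CopyOnWriteArrayList) iterates over a snapshot and finishes normally.
public class IteratorDemo {
    public static boolean failsFast(List<Integer> list) {
        try {
            for (Integer i : list) {
                list.add(i + 10);   // structural modification mid-iteration
            }
            return false;           // iteration completed: fail-safe
        } catch (ConcurrentModificationException e) {
            return true;            // iterator detected the modification
        }
    }
}
```

For a CopyOnWriteArrayList starting as [1, 2, 3], the loop walks the snapshot of three elements, so the list ends with six elements and no exception.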

Predicate interface? : Predicate<T> is a generic functional interface that represents a single-argument function returning a boolean value (true or false)

BiFunction interface? :

@FunctionalInterface
public interface BiFunction<T, U, R> {

    R apply(T t, U u);
}
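A minimal usage sketch of both interfaces (the field names are illustrative):

```java
import java.util.function.BiFunction;
import java.util.function.Predicate;

// Predicate<T>: one argument in, boolean out.
// BiFunction<T, U, R>: two arguments in, one result out.
public class FunctionalDemo {
    static final Predicate<Integer> isEven = n -> n % 2 == 0;
    static final BiFunction<Integer, Integer, Integer> add = (a, b) -> a + b;
}
```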

SOLID principles:

1. Single Responsibility Principle : each class should be responsible for a single part or functionality of the system.

2. Open-Closed Principle : software components should be open for extension, but closed for modification.

Car --> Maruti, Maruti extends Car

3. Liskov Substitution Principle : objects of a superclass should be replaceable with objects of its subclasses without breaking the system.

Car --> Maruti, Maruti extends Car

4. Interface Segregation Principle : no client should be forced to depend on methods that it does not use.

Example :

public interface Vehicle {
    public void drive();
    public void stop();
    public void refuel();
    public void openDoors();
}

public class Bike implements Vehicle {

    // Meaningful for a bike
    public void drive() {...}
    public void stop() {...}
    public void refuel() {...}

    // Forced on Bike even though a bike has no doors --
    // exactly the dependency ISP warns against
    public void openDoors() {...}
}

5. Dependency Inversion Principle : high-level modules should not depend on low-level modules; both should depend on abstractions.

ThreadLocal :
It holds a separate value per thread: each thread can set and get its own copy, and the value of one thread cannot be read or modified by another thread. Two threads never see each other's values.
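A minimal sketch showing that a second thread cannot see the value set by the first (class and method names are illustrative):

```java
// Each thread sees only its own copy of a ThreadLocal value.
public class ThreadLocalDemo {
    private static final ThreadLocal<String> CONTEXT =
            ThreadLocal.withInitial(() -> "unset");

    public static String valueSeenByOtherThread() {
        CONTEXT.set("main-value");   // visible only to the current thread
        final String[] seen = new String[1];
        Thread t = new Thread(() -> seen[0] = CONTEXT.get());
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            throw new AssertionError(e);
        }
        return seen[0];   // still the initial value, not "main-value"
    }
}
```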

What is ConfigMap in kubernetes?

What is spring security?

What is Prometheus and Grafana?

Design Pattern :

1. Singleton
2. Micro Service
3. Circuit Breaker
4. Service Discovery
5. Iterator Pattern
6. Dependency Injection Pattern
7. Factory Design Pattern
8. Observer Design Pattern
9. MVC Pattern
10. Builder Pattern
What is bean?

https://app.coderpad.io/PRPC4TWA

Map<id, Employee>

Employee
id, name, salary

docker run -p 8088:8088 kbellad/catalogmanagement

kubectl apply -f catalog.yaml

https://www.youtube.com/watch?v=SzbeDqBSRkc&t=540s

ADEP : Advanced Data Enablement Platform

Queues, Stack, Tree, LinkedList

Searching algorithm, Sorting algorithm

Garbage collection, Parallel GC, G1 GC

Difference between Spring annotations -> @Service, @Component, @Configuration, @EnableAutoConfiguration, etc.

Difference between HashMap and LinkedHashMap

How does Spring Boot load?

Lazy load and Eager Load? - Proxy Pattern

Factorial of number program?

Binary search program?


--------------------------------------------Searching Algorithm----------------------------------

Searching Algorithm :

1. Linear Search : check elements one by one until the last element

The time complexity of the above algorithm is O(n).

2. Binary Search : search based on left and right indices; check whether the element lies in the first half or the second half.

The time complexity of Binary Search can be written as

T(n) = T(n/2) + c, which solves to O(log n)
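The document later asks for a "Binary search program"; an iterative sketch of the algorithm just described (class name is illustrative):

```java
// Iterative binary search over a sorted array; returns the index of the
// target, or -1 if absent. O(log n) time, O(1) space.
public class BinarySearch {
    public static int search(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids int overflow
            if (a[mid] == target) return mid;
            if (a[mid] < target) lo = mid + 1;   // target in second half
            else hi = mid - 1;                   // target in first half
        }
        return -1;
    }
}
```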

3. Jump Search : The basic idea is to check fewer elements (than linear search) by
jumping ahead by fixed steps or skipping some elements in place of searching all
elements.

Let’s consider the following array: (0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144,
233, 377, 610). Length of the array is 16. Jump search will find the value of 55
with the following steps assuming that the block size to be jumped is 4.
STEP 1: Jump from index 0 to index 4;
STEP 2: Jump from index 4 to index 8;
STEP 3: Jump from index 8 to index 12;
STEP 4: Since the element at index 12 is greater than 55 we will jump back a step
to come to index 8.
STEP 5: Perform linear search from index 8 to get the element 55.

Time Complexity : O(√n)
Auxiliary Space : O(1)
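The steps above can be sketched as follows, assuming a sorted input array (class name is illustrative):

```java
// Jump search on a sorted array: jump ahead in blocks of size sqrt(n),
// then linear-search inside the block that could contain the target.
public class JumpSearch {
    public static int search(int[] a, int target) {
        int n = a.length;
        if (n == 0) return -1;
        int step = (int) Math.floor(Math.sqrt(n));
        int prev = 0;
        // Jump forward until the current block's last element >= target
        while (a[Math.min(step, n) - 1] < target) {
            prev = step;
            step += (int) Math.floor(Math.sqrt(n));
            if (prev >= n) return -1;   // ran past the end of the array
        }
        // Linear search within the identified block
        for (int i = prev; i < Math.min(step, n); i++) {
            if (a[i] == target) return i;
        }
        return -1;
    }
}
```

On the Fibonacci array from the worked example, searching for 55 jumps 0 -> 4 -> 8, stops at the block ending at index 11, and finds 55 at index 10 by linear scan.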

Linear Search finds the element in O(n) time, Jump Search takes O(√n) time and Binary Search takes O(log n) time.
4. Interpolation Search :

The Interpolation Search is an improvement over Binary Search for instances, where
the values in a sorted array are uniformly distributed. Binary Search always goes
to the middle element to check. On the other hand, interpolation search may go to
different locations according to the value of the key being searched. For example,
if the value of the key is closer to the last element, interpolation search is
likely to start search toward the end side.

// The idea of the formula is to return a higher value of pos
// when the element to be searched is closer to arr[hi], and a
// smaller value when it is closer to arr[lo]
pos = lo + [ (x - arr[lo]) * (hi - lo) / (arr[hi] - arr[lo]) ]

arr[] ==> Array where elements need to be searched
x ==> Element to be searched
lo ==> Starting index in arr[]
hi ==> Ending index in arr[]
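The probe formula above can be sketched as follows; the long intermediate to avoid overflow and the equal-endpoints guard are added safeguards, not part of the original formula:

```java
// Interpolation search: the probe position is estimated from the key's
// value, assuming a sorted, roughly uniformly distributed array.
public class InterpolationSearch {
    public static int search(int[] a, int x) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi && x >= a[lo] && x <= a[hi]) {
            if (a[hi] == a[lo]) {               // avoid division by zero
                return a[lo] == x ? lo : -1;
            }
            // pos = lo + (x - arr[lo]) * (hi - lo) / (arr[hi] - arr[lo])
            int pos = lo + (int) ((long) (x - a[lo]) * (hi - lo)
                                  / (a[hi] - a[lo]));
            if (a[pos] == x) return pos;
            if (a[pos] < x) lo = pos + 1;
            else hi = pos - 1;
        }
        return -1;
    }
}
```

For a uniformly spaced array like {10, 20, ..., 100}, the very first probe for 70 lands directly on index 6.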

5. Exponential Search : The name of this searching algorithm may be misleading as it works in O(log n) time. The name comes from the way it searches for an element.

We have discussed linear search and binary search for this problem. Exponential search involves two steps:

1. Find the range where the element is present.
2. Do a binary search in the range found above.

Time Complexity : O(log n)
Auxiliary Space : the recursive Binary Search above requires O(log n) space; with iterative Binary Search, only O(1) space is needed.
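A sketch of the two steps (doubling the bound, then binary-searching the found range); class name is illustrative:

```java
// Exponential search: double the bound until a[bound] >= target, then
// binary-search within [bound/2, min(bound, n-1)]. O(log n) time.
public class ExponentialSearch {
    public static int search(int[] a, int target) {
        int n = a.length;
        if (n == 0) return -1;
        if (a[0] == target) return 0;
        // Step 1: find the range containing the target
        int bound = 1;
        while (bound < n && a[bound] < target) bound *= 2;
        // Step 2: iterative binary search inside that range
        int lo = bound / 2, hi = Math.min(bound, n - 1);
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] == target) return mid;
            if (a[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }
}
```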

------------------------------------------------Sorting Algorithm-------------------------------

1. Selection Sort
The selection sort algorithm sorts an array by repeatedly finding the minimum
element (considering ascending order) from unsorted part and putting it at the
beginning. The algorithm maintains two subarrays in a given array.

1) The subarray which is already sorted.
2) The remaining subarray which is unsorted.

In every iteration of selection sort, the minimum element (considering ascending order) from the unsorted subarray is picked and moved to the sorted subarray.

arr[] = 64 25 12 22 11

// Find the minimum element in arr[0...4] and place it at the beginning
11 25 12 22 64

// Find the minimum element in arr[1...4] and place it at the beginning of arr[1...4]
11 12 25 22 64

// Find the minimum element in arr[2...4] and place it at the beginning of arr[2...4]
11 12 22 25 64

// Find the minimum element in arr[3...4] and place it at the beginning of arr[3...4]
11 12 22 25 64
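The procedure above can be sketched as (class name is illustrative):

```java
// Selection sort: repeatedly select the minimum of the unsorted suffix
// and swap it to the front. O(n^2) comparisons, O(1) extra space.
public class SelectionSort {
    public static void sort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[min]) min = j;   // index of smallest remaining
            }
            int tmp = a[min]; a[min] = a[i]; a[i] = tmp;   // swap into place
        }
    }
}
```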

2. Bubble Sort

Bubble Sort is the simplest sorting algorithm; it works by repeatedly swapping adjacent elements that are in the wrong order.

Example:
First Pass:
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), Here, algorithm compares the first two elements,
and swaps since 5 > 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements are already in order (8 >
5), algorithm does not swap them.

Second Pass:
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Now, the array is already sorted, but our algorithm does not know if it is
completed. The algorithm needs one whole pass without any swap to know it is
sorted.

Third Pass:
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
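A sketch including the early-exit pass the text describes, where one full pass without a swap means the array is sorted (class name is illustrative):

```java
// Bubble sort: repeatedly swap adjacent out-of-order elements. A pass
// with no swaps means the array is sorted, so we stop early.
public class BubbleSort {
    public static void sort(int[] a) {
        boolean swapped = true;
        for (int pass = 0; pass < a.length - 1 && swapped; pass++) {
            swapped = false;
            for (int j = 0; j < a.length - 1 - pass; j++) {
                if (a[j] > a[j + 1]) {
                    int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
                    swapped = true;
                }
            }
        }
    }
}
```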

3. Insertion Sort :

Insertion sort is a simple sorting algorithm that works similarly to the way you sort playing cards in your hands. The array is virtually split into a sorted and an unsorted part. Values from the unsorted part are picked and placed at the correct position in the sorted part.
Algorithm
To sort an array of size n in ascending order:
1: Iterate from arr[1] to arr[n] over the array.
2: Compare the current element (key) to its predecessor.
3: If the key element is smaller than its predecessor, compare it to the elements
before. Move the greater elements one position up to make space for the swapped
element.
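The three steps above as a sketch (class name is illustrative):

```java
// Insertion sort: grow a sorted prefix; shift greater elements one
// position up, then insert the key at the gap.
public class InsertionSort {
    public static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];          // current element to place
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];     // shift greater element up
                j--;
            }
            a[j + 1] = key;          // insert into the sorted prefix
        }
    }
}
```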

4. Merge Sort

Like QuickSort, Merge Sort is a Divide and Conquer algorithm. It divides the input
array into two halves, calls itself for the two halves, and then merges the two
sorted halves. The merge() function is used for merging two halves. The merge(arr,
l, m, r) is a key process that assumes that arr[l..m] and arr[m+1..r] are sorted
and merges the two sorted sub-arrays into one.
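A sketch of the divide-and-conquer structure with a merge(arr, l, m, r) helper as described (class name is illustrative; this version uses a temporary array per merge):

```java
// Merge sort: recursively sort the two halves, then merge(a, l, m, r)
// combines the sorted halves a[l..m] and a[m+1..r] into one.
public class MergeSort {
    public static void sort(int[] a) { sort(a, 0, a.length - 1); }

    private static void sort(int[] a, int l, int r) {
        if (l >= r) return;
        int m = l + (r - l) / 2;
        sort(a, l, m);        // sort left half
        sort(a, m + 1, r);    // sort right half
        merge(a, l, m, r);    // merge the two sorted halves
    }

    private static void merge(int[] a, int l, int m, int r) {
        int[] merged = new int[r - l + 1];
        int i = l, j = m + 1, k = 0;
        while (i <= m && j <= r) merged[k++] = a[i] <= a[j] ? a[i++] : a[j++];
        while (i <= m) merged[k++] = a[i++];   // drain left remainder
        while (j <= r) merged[k++] = a[j++];   // drain right remainder
        System.arraycopy(merged, 0, a, l, merged.length);
    }
}
```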
