MICROSERVICE Questions and Answers
UNIT-I
1. With a suitable diagram, explain the microservice design model.
Ans: A microservice design model consists of five parts: Service, Solution, Process and
Tools, Organization, and Culture.
UNIT 2
2.Explain API design for microservices.
Ans: Microservices become more effective and more valuable when their code
and their components are able to communicate with each other.
• Good API design is important in a microservices architecture because all data
exchange between services happens through it, either by message passing or by
API calls.
• There are two ways to design an API for microservices, as follows:
Message-Oriented:
Internal communication between services is often handled by taking a
message-oriented approach.
Example: Netflix uses its own message formats to communicate internally over the TCP/IP
protocol, and JSON over HTTP for external clients such as mobile phones and browsers.
TCP/IP is a connection-oriented protocol in which the two endpoints communicate using an
IP address and a port number, and messages can be passed in both directions at the same time.
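A minimal sketch of the external, JSON-over-HTTP style described above; the OrderPlaced
message type, the endpoint URL and the OrderPublisher class are hypothetical names
introduced here only for illustration.
Example (hypothetical sketch). OrderPublisher.cs
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Hypothetical message type exchanged between two services.
public record OrderPlaced(string OrderId, string CustomerId, decimal Total);

public class OrderPublisher
{
    private readonly HttpClient _http = new HttpClient();

    // Serializes the message to JSON and sends it over HTTP to a consuming service.
    public async Task PublishAsync(OrderPlaced message)
    {
        var response = await _http.PostAsJsonAsync(
            "https://shipping.example.com/api/events/order-placed", message);
        response.EnsureSuccessStatusCode();
    }
}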
Hypermedia-Driven:
Hypermedia API design is based on the way HTML works in a web browser: HTTP messages
are transmitted over the internet and contain both data and actions encoded in HTML format.
Hypermedia provides links to navigate a workflow and templated inputs to request
information, in the same way that we use links to navigate the web and forms to provide
input.
A hypermedia API message therefore contains the data and the actions (links) that a client
service or application needs in order to work with it dynamically.
Example: When we run an application such as Google Chrome on a laptop, the operating
system assigns it a process ID, and each network connection it opens is identified by an IP
address and a port number.
HTTP messages are sent to an IP address and a port number (typically 80 or 443), and each
message contains data and actions, with the data encoded in an HTML (hypermedia) format.
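A minimal sketch of what a hypermedia-style resource might look like when modelled in
code: the payload carries both data and the links (actions) a client can follow next. The Link
and ShipmentResource types and the routes are hypothetical.
Example (hypothetical sketch). ShipmentResource.cs
using System.Collections.Generic;

// A link the client can follow; "Rel" names the action, "Href" is where to send it.
public record Link(string Rel, string Href, string Method);

// The resource carries its data together with the actions available on it.
public record ShipmentResource(
    string ShipmentId,
    string Status,
    IReadOnlyList<Link> Links);

public static class ShipmentResources
{
    public static ShipmentResource InTransit(string id) => new(
        id,
        "in-transit",
        new[]
        {
            new Link("self",   $"/shipments/{id}",        "GET"),
            new Link("track",  $"/shipments/{id}/events", "GET"),
            new Link("cancel", $"/shipments/{id}/cancel", "POST")
        });
}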
Here are some key considerations for managing data in a microservices architecture:
Data Ownership: In a microservices architecture, each service owns its own data and is
responsible for ensuring data consistency and integrity. This means that each service needs to be
able to access and manipulate its own data without interfering with other services' data.
Data Consistency: Because data is often distributed across multiple microservices, it's
important to ensure that data remains consistent across the system. This can be achieved
through the use of distributed transactions, event-driven architectures, or other approaches to
ensure that data updates are propagated correctly across the system (a minimal sketch follows
after this answer).
Data Integration: Despite the fact that each service manages its own data, there may still be a
need to integrate data across services in order to provide a complete picture of the system. This
can be achieved through the use of APIs, data pipelines, or other integration mechanisms.
Data Storage: Each microservice typically has its own data storage requirements, which may
include relational databases, NoSQL databases, or other storage solutions. The choice of data
storage depends on the specific needs of the service, as well as the scalability and performance
requirements of the system.
Overall, data is a critical component of microservices architecture, and it's important to design
data management strategies that enable the system to scale, remain resilient, and deliver the
required performance and functionality.
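The data-consistency point above can be made concrete with a minimal sketch of event-driven
propagation: after a service commits a change to the data it owns, it publishes a domain event
so other services can update their own copies. The IEventBus abstraction, the
CustomerAddressChanged event and the CustomerService class are hypothetical names used
only for illustration.
Example (hypothetical sketch). CustomerService.cs
using System.Threading.Tasks;

// Event describing a change to data owned by the customer service.
public record CustomerAddressChanged(string CustomerId, string NewAddress);

// Abstraction over whatever message broker the system uses.
public interface IEventBus
{
    Task PublishAsync<T>(T @event);
}

public class CustomerService
{
    private readonly IEventBus _bus;
    public CustomerService(IEventBus bus) => _bus = bus;

    public async Task ChangeAddressAsync(string customerId, string newAddress)
    {
        // 1. Update the data this service owns (persistence omitted from the sketch).
        // 2. Publish an event so other services can react and keep their data consistent.
        await _bus.PublishAsync(new CustomerAddressChanged(customerId, newAddress));
    }
}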
4. What do you mean by independent deployability?
Ans: Independent deployability is one of the primary principles of the microservices
architectural style.
• Independent deployability means each microservice should be deployable completely
independently of the others.
• It is the idea that we can make a change to a microservice and deploy it into
production without having to change or redeploy any other service.
• Scaling hardware resources on premises is costly to manage and monitor: the
organization has to buy, configure, maintain and monitor the resources itself. Moving
to the cloud shifts that burden, so there is no need to pay separately for monitoring,
maintaining and scaling the resources.
• Shipping Inc. is confident that its security team will readily allow deployment of safe
microservices to both private and public clouds.
• Further benefits of independent deployment in microservices are lower operational
cost and greater flexibility.
• For Customer Management and Shipping Management, there would be two different
teams deploying their microservices separately.
• Customer Management is used to create, edit, enable and disable customer details and
to provide a view of a representation of a customer.
• Shipment Management is responsible for the entire lifecycle of a package, from
pick-up at the origin to drop-off at the destination.
UNIT-III
1. What is ASP.net core? List components of ASP.net core.
Ans:ASP.NET Core is the open-source version of ASP.NET that runs on MacOS, Linux,
Windows and Docker.It is a collection of small, modular components that can be plugged
into the application to build web applications and Microservices. Within ASP.NET there are
APIs for routing, JSON serialization and MVC controllers. ASP.NET Core is available on
GitHub. The cross-platform web server used in ASP.NET Core is Kestrel. It is included by
default in ASP.NET Core project templates.
Here are some of the key components of ASP.NET Core
1.Middleware: Middleware components are used to process HTTP requests and responses.
ASP.NET Core provides a set of built-in middleware components, such as authentication
middleware, routing middleware, and error handling middleware, as well as the ability to create
custom middleware.
2.MVC Framework: The Model-View-Controller (MVC) framework is a pattern for designing web
applications, and it's supported by ASP.NET Core. The MVC framework provides a way to
separate the application logic into three distinct components: the model, which represents the
data and business logic; the view, which displays the data to the user; and the controller, which
handles user input and updates the model and view accordingly.
3.Razor Pages: Razor Pages is a new feature in ASP.NET Core that provides a simpler alternative
to the MVC framework. Razor Pages allow developers to create web pages with a minimal
amount of ceremony, and they're well-suited to building small to medium-sized web
applications.
4.Web APIs: ASP.NET Core provides support for building Web APIs that can be used to expose
functionality to other applications. Web APIs can be created using the MVC framework or using a
lightweight framework called "ASP.NET Core Web API".
5.SignalR: SignalR is a real-time messaging framework that enables server-side code to push
content to connected clients in real-time. It can be used to build features such as chat
applications, real-time notifications, and live updates.
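As an illustration of several of these components working together, here is a minimal sketch
of an ASP.NET Core Web API using the minimal hosting model, with one piece of inline
custom middleware and one routed endpoint; the header value, route and response shape are
illustrative assumptions, not taken from the text above.
Example (hypothetical sketch). Program.cs
// Requires the ASP.NET Core Web SDK (.NET 6 or later minimal hosting model).
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Inline custom middleware: stamp a response header, then pass the request on.
app.Use(async (context, next) =>
{
    context.Response.Headers["X-Service"] = "catalog";
    await next();
});

// A routed Web API endpoint returning JSON.
app.MapGet("/api/products/{id}", (int id) =>
    Results.Ok(new { Id = id, Name = "Sample product" }));

app.Run();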
Docker Terminology:
1. Images – Images are the blueprints of the web application, which form the basis of
containers.
2. Containers – Containers are created from Docker images and run the actual
application.
3. Docker Daemon – The Docker Daemon is the background service running on the host
that manages building, running and distributing Docker containers.
4. Docker Client - Docker Client is the command line tool that allows the user to interact
with the daemon.
5. Docker Hub - Docker Hub is a registry of Docker images. It is a directory of all available
Docker images. Anyone can host their own Docker registries and use them for pulling
images.
5. Explain the steps to build a console application using ASP.NET Core?
Ans:
Step 1: .NET Core is made up of two components – the runtime and the SDK. The runtime is
used to run a .NET Core application, while the SDK is used to create and build applications.
Step 2: Open the command prompt, go to the desired folder, and type dotnet
new console -o myFirstConsoleApp.
Step 3: The folder myFirstConsoleApp consists of two files: the project file, which
defaults to the .csproj extension (in our case myFirstConsoleApp.csproj), and
Program.cs.
Step 4: The Program.cs file contains the method Main, which is the entry point of a
.NET Core application. All .NET Core applications are basically designed as
console applications. The Program.cs file generated by the template contains a simple
"Hello World" program.
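For reference, this is the kind of Program.cs the older console templates generate (newer
SDKs may emit top-level statements instead, so the exact contents can differ):
Example. Program.cs (as generated by dotnet new console, older templates)
using System;

namespace myFirstConsoleApp
{
    class Program
    {
        // Entry point of the .NET Core console application.
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}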
2. Add a wercker.yml file to your application's codebase – Fig. 5.18 depicts the code
of the wercker.yml file.
3. Choose how to package and where to deploy successful builds – there is a
script within the application (Fig. 5.19).
This will execute the Wercker build exactly as it executes in the cloud, all
within the confines of a container image. A number of messages will be seen
from the Wercker pipeline.
Here are some of the key ways that Team Services can support microservices architecture:
Source Control: Team Services provides a powerful source control system that can be used
to manage the code for each microservice. This allows multiple developers to work on the
same codebase, while also providing version control and change tracking capabilities.
Build and Release Management: Team Services includes a build and release management
system that can be used to automate the process of building, testing, and deploying
microservices. This can help to ensure that each microservice is built and deployed
consistently across environments, and can also help to reduce the risk of errors or downtime
during deployment.
Testing: Team Services includes a range of testing tools, including unit testing, integration
testing, and load testing, that can be used to validate the functionality and performance of
individual microservices as well as the system as a whole.
Overall, Team Services can provide a comprehensive set of tools and services that can help
to streamline the development, testing, and deployment of microservices, while also
supporting the coordination of multiple microservices within a larger system.
UNIT 4
1. Write a short note on how RabbitMQ stores messages in queues.
Answer: RabbitMQ is a message broker that allows applications to communicate with each
other using messages. Messages are sent from a producer to a consumer through a queue. A
queue is a temporary storage location where messages are stored until they are consumed by a
consumer.
When a producer sends a message to RabbitMQ, the message is first routed to an exchange. The
exchange then routes the message to one or more queues based on the routing rules defined by
the producer. Once the message is routed to a queue, it is stored in the queue until a consumer
retrieves it.
RabbitMQ stores messages in queues using a mechanism called message buffering. Message
buffering is the process of temporarily storing messages in memory or on disk until they can be
processed by a consumer. RabbitMQ uses message buffering to ensure that messages are not
lost if a consumer is not available to process them.When a message is stored in a queue,
RabbitMQ assigns a delivery tag to the message. The delivery tag is a unique identifier that is
used to acknowledge the delivery of the message by the consumer. Once a message is delivered
to a consumer and the delivery is acknowledged, RabbitMQ removes the message from the
queue.Queues in RabbitMQ can be configured to have different properties such as the maximum
number of messages they can hold, the maximum size of each message, and the maximum
amount of time a message can be stored in the queue. These properties help ensure that the
queue does not become overloaded with messages and that messages are processed in a timely
manner.
In summary, RabbitMQ stores messages in queues using message buffering to ensure that
messages are not lost and are processed in a timely manner. Queues in RabbitMQ can be
configured with various properties to control their behavior and prevent overloading.
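A minimal sketch of this flow using the RabbitMQ.Client library for .NET; the queue name
"orders", the host name and the message text are assumptions made for illustration.
Example (hypothetical sketch). RabbitMqDemo.cs
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class RabbitMqDemo
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        // Declare a durable queue; messages are buffered here until consumed.
        channel.QueueDeclare(queue: "orders", durable: true, exclusive: false,
                             autoDelete: false, arguments: null);

        // Producer: publish via the default exchange, routed by queue name.
        var body = Encoding.UTF8.GetBytes("order-123 placed");
        channel.BasicPublish(exchange: "", routingKey: "orders",
                             basicProperties: null, body: body);

        // Consumer: receive the message and acknowledge it using its delivery tag,
        // at which point RabbitMQ removes it from the queue.
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (_, ea) =>
        {
            Console.WriteLine(Encoding.UTF8.GetString(ea.Body.ToArray()));
            channel.BasicAck(ea.DeliveryTag, multiple: false);
        };
        channel.BasicConsume(queue: "orders", autoAck: false, consumer: consumer);

        Console.ReadLine(); // keep the process alive while messages arrive
    }
}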
Install PostgreSQL: Begin by installing PostgreSQL on the server or container where you plan to
host your database. You can download the appropriate installer or use a package manager based
on your operating system.
Create the Database: Once PostgreSQL is installed, connect to the database server and create a
new database using the createdb command or a graphical tool like pgAdmin. For example, you
can run createdb mydatabase to create a database named "mydatabase".
Define Database Schema: Design your database schema by creating tables, defining relationships,
and setting up indexes or constraints. This step involves modeling your data based on the
requirements of your microservices.
Create Database Users and Roles: Determine the access privileges for different microservices and
create dedicated database users and roles accordingly. Grant the necessary permissions to each
user to ensure proper data segregation.
Secure the Database: PostgreSQL offers various security features to protect your data. Configure
appropriate authentication methods, such as password-based authentication or certificate-based
authentication, to ensure secure access to the database. Additionally, consider setting up a firewall
to restrict access to the database server.
Expose the Database Endpoint: Determine how your microservices will connect to the
PostgreSQL database. Typically, you'll expose the database endpoint as an environment variable or
configuration parameter that can be read by your microservices during startup. This allows each
microservice to connect to the database using the appropriate credentials.
Handle Database Migrations: As your microservices evolve, you may need to modify your
database schema. Use database migration tools like Flyway or Liquibase to manage schema
changes while ensuring backward compatibility. These tools enable you to version your database
schema and apply migrations as needed.
Implement Data Access Layer: In each microservice, implement the data access layer using a
PostgreSQL client library or an ORM (Object-Relational Mapping) framework. Use the appropriate
database connection configuration, including the database endpoint and credentials, to establish a
connection to the PostgreSQL database.
Test and Monitor: Thoroughly test your microservices to ensure they can read and write data to
the PostgreSQL database correctly. Implement proper logging and monitoring mechanisms to
track database-related issues and performance metrics.
By following these steps, you can configure PostgreSQL as a data service in your microservices
architecture, allowing your services to interact with the database efficiently and securely.
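A minimal sketch of the data-access step using the Npgsql client library; the
environment-variable name, connection string, table and column names are assumptions made
for illustration.
Example (hypothetical sketch). CustomerRepositoryDemo.cs
using System;
using System.Threading.Tasks;
using Npgsql;

class CustomerRepositoryDemo
{
    static async Task Main()
    {
        // Read the database endpoint and credentials from configuration/environment.
        var connString = Environment.GetEnvironmentVariable("CUSTOMERS_DB")
            ?? "Host=localhost;Username=customers_svc;Password=secret;Database=mydatabase";

        await using var conn = new NpgsqlConnection(connString);
        await conn.OpenAsync();

        // Query data owned by this microservice.
        await using var cmd = new NpgsqlCommand("SELECT id, name FROM customers", conn);
        await using var reader = await cmd.ExecuteReaderAsync();
        while (await reader.ReadAsync())
            Console.WriteLine($"{reader.GetInt32(0)}: {reader.GetString(1)}");
    }
}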
5. Comment on the statement "One service can consume other services."
Ans.In a microservices architecture, it is common for one service to consume or interact with
other services. This is often done to leverage the functionalities or data provided by other
services in a decoupled and modular manner. Here are a few points to consider regarding
services consuming other services:
Loose Coupling: Services consuming other services should follow the principle of loose coupling.
Each service should have well-defined interfaces and communicate through APIs or contracts,
allowing them to evolve independently. This enables services to be developed, tested, and
deployed independently, making the system more flexible and maintainable.
Service Discovery: For a service to consume another service, it needs to be aware of the
endpoints or locations of the service it wants to consume. Service discovery mechanisms, such as
service registries or service meshes, can help in dynamically discovering and locating other
services in the system. This allows services to find and connect to the required services without
hardcoding their addresses.
API Gateway: An API gateway can act as a centralized entry point for all client requests and
handle interactions with various services. It can provide a unified interface, handle
authentication and authorization, and route requests to the appropriate services. The API
gateway abstracts the underlying service structure and provides a simplified interface to clients,
allowing them to consume multiple services seamlessly.
Data Sharing: Services can consume other services to access or manipulate shared data. For
example, a user management service may consume an authentication service to validate user
credentials. By consuming the authentication service, the user management service can ensure
that only authenticated users can access or modify user-related data (see the sketch at the end
of this answer).
Asynchronous Communication: Services can consume other services through asynchronous
communication mechanisms such as message queues or event streams. This allows for
decoupled and scalable communication between services. For instance, a service can publish an
event to a message broker, and other services can consume those events and take appropriate
actions based on them.
Service Versioning: When consuming other services, it is crucial to consider versioning. Services
evolve over time, and changes in service interfaces or data formats can potentially impact
consuming services. By maintaining versioning strategies, such as API versioning or backward
compatibility, you can ensure smooth communication between services even as they evolve.
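A minimal sketch of the user-management example above (see the Data Sharing point), in
which one service consumes another over HTTP and the downstream address comes from
configuration rather than being hardcoded; the service classes, route and response type are
hypothetical.
Example (hypothetical sketch). UserManagementService.cs
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Hypothetical response shape returned by the authentication service.
public record TokenValidation(bool IsValid, string UserId);

public class UserManagementService
{
    private readonly HttpClient _authService;

    // The base address would normally come from configuration or service discovery.
    public UserManagementService(string authServiceBaseAddress)
    {
        _authService = new HttpClient { BaseAddress = new Uri(authServiceBaseAddress) };
    }

    // Consume the authentication service before touching user-related data.
    public async Task<bool> IsAuthenticatedAsync(string token)
    {
        var result = await _authService.GetFromJsonAsync<TokenValidation>(
            $"/api/tokens/{token}/validate");
        return result is { IsValid: true };
    }
}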
Event sourcing is a pattern in which every change to an object's state is recorded as an event
in an append-only event log. It has several benefits:
Auditability: Because the event log contains a complete history of all changes, it is possible
to audit the system and determine exactly what has happened and when.
Scalability: Because the event log is append-only, it can be implemented using a high-
performance, log-based storage system such as Apache Kafka, or streamed through a message
broker such as RabbitMQ.
Reversibility: Because the event log contains a complete history of changes, it is possible to
roll back to a previous state of the object by simply replaying the events up to that point.
Consistency: Because the event log represents the complete history of the object, it is
possible to implement complex business logic that takes into account the entire history of
the object.
Event sourcing is often used in combination with CQRS (Command Query Responsibility
Segregation), which is a pattern that separates the read and write models of an application.
In a CQRS system, write operations are handled by the event sourcing component, while
read operations are handled by a separate component that reads from a denormalized view
of the event log.
Event sourcing is particularly well-suited for systems that require high availability, scalability,
and auditability, such as financial systems, e-commerce systems, and IoT systems. However,
implementing event sourcing can be complex and may require a significant investment in
terms of time and resources.
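A minimal, self-contained sketch of the idea (not tied to any particular event store or to
CQRS): state is derived by replaying an append-only event log, so auditing and rollback come
from the log itself. The account and event types are hypothetical.
Example (hypothetical sketch). Account.cs
using System.Collections.Generic;
using System.Linq;

public abstract record AccountEvent;
public record Deposited(decimal Amount) : AccountEvent;
public record Withdrawn(decimal Amount) : AccountEvent;

public class Account
{
    private readonly List<AccountEvent> _log = new();
    public decimal Balance { get; private set; }

    public void Deposit(decimal amount) => Apply(new Deposited(amount));
    public void Withdraw(decimal amount) => Apply(new Withdrawn(amount));

    private void Apply(AccountEvent e)
    {
        _log.Add(e);              // append to the event log (auditability)
        Balance = Replay(_log);   // current state is derived from the log
    }

    // Replaying events up to any point yields the state at that point (reversibility).
    public static decimal Replay(IEnumerable<AccountEvent> events) =>
        events.Aggregate(0m, (balance, e) => e switch
        {
            Deposited d => balance + d.Amount,
            Withdrawn w => balance - w.Amount,
            _ => balance
        });
}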
DNS can also be used to map URLs to IP addresses. URLs (Uniform Resource
Locators) are used to identify resources on the internet, such as web pages, images, or
files. A URL consists of several parts, including the protocol (such as HTTP or
HTTPS), the domain name, and the path to the resource.
When a user enters a URL in their web browser, the browser first extracts the domain
name from the URL. The browser then sends a request to a DNS resolver to find the
IP address associated with the domain name. Once the IP address is obtained, the
browser establishes a connection to the web server at that IP address and sends a
request for the resource identified by the URL.
In summary, DNS is used to map domain names to IP addresses, which allows web
browsers to connect to web servers and retrieve resources identified by URLs.
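A minimal sketch of the same lookup in code: extract the host from a URL and resolve it to IP
addresses via DNS (the URL shown is just an example).
Example (hypothetical sketch). DnsLookupDemo.cs
using System;
using System.Net;
using System.Threading.Tasks;

class DnsLookupDemo
{
    static async Task Main()
    {
        var uri = new Uri("https://www.example.com/index.html");
        // Extract the domain name from the URL, then resolve it via DNS.
        IPAddress[] addresses = await Dns.GetHostAddressesAsync(uri.Host);
        foreach (var address in addresses)
            Console.WriteLine($"{uri.Host} -> {address}");
    }
}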
UNIT 5
1. How to Secure a Service with Client credentials and Bearer Tokens?
Ans: Securing a Service with Client Credentials
The client credentials pattern is perhaps the simplest way to secure a service. First, you
communicate with the service only over SSL, and second, the code consuming the service is
responsible for transmitting credentials. These credentials are usually just a username and
password or, more appropriately for scenarios that don't involve human interaction, a client
key and a client secret. Whenever you look at a public API hosted in the cloud that requires
you to supply a client key and secret, you are looking at an implementation of the client
credentials pattern.
It is also fairly common to see the client key and secret transmitted as custom HTTP headers
that start with the X- prefix; e.g., X-MyApp-ClientSecret and X-MyApp-ClientKey.
The most worrying scenario is this: what happens if a client secret and key are compromised
and the consumer accesses confidential data without triggering any behavioural alerts that
might get them banned? What we need is something that combines the simplicity of portable
credentials that don't require communication with a third party for validation with some of
the more sensible security features of OpenID Connect, such as validation of issuers,
validation of audience (target), expiring tokens, and more.
Securing a Service with Bearer Tokens
Through our exploration of OpenID Connect we have already seen that the ability to transmit
portable, independently verifiable tokens is the key technology underpinning all of its
authentication flows. Bearer tokens, specifically those adhering to the JSON Web Token (JWT)
specification, can also be used independently of OIDC to secure services without involving
any browser redirects or the implicit assumption of human consumers.
The OIDC middleware we used earlier in the chapter builds on top of the JWT middleware
that we get from the Microsoft.AspNetCore.Authentication.JwtBearer NuGet package. To use
this middleware to secure our service, we can first create an empty service using any of the
previous examples in this book as reference material or scaffolding. Next, we add a reference
to the JWT bearer authentication NuGet package.
Example. Startup.cs
// signingKey is a SecurityKey created earlier in Startup (not shown here).
app.UseJwtBearerAuthentication(new JwtBearerOptions
{
    AutomaticAuthenticate = true,
    AutomaticChallenge = true,
    TokenValidationParameters = new TokenValidationParameters
    {
        // Verify that the token was signed with the expected key.
        ValidateIssuerSigningKey = true,
        IssuerSigningKey = signingKey,
        // Issuer and audience validation are switched off here; the expected
        // values are still listed so they can be enabled later.
        ValidateIssuer = false,
        ValidIssuer = "https://fake.issuer.com",
        ValidateAudience = false,
        ValidAudience = "https://sampleservice.example.com",
        // Reject expired tokens.
        ValidateLifetime = true,
    }
});
We can control all of the different things that are validated when accepting bearer tokens,
including the issuer's signing key, the issuer, the audience, and the token lifetime. Validating
something like the token lifetime usually also requires us to set up options such as allowing
some leeway to accommodate clock skew between the token issuer and the secured service.
In the preceding code, validation is turned off for the issuer and the audience, but these are
both fairly simple string comparison checks. When we do validate them, the issuer and the
audience must exactly match the issuer and audience contained in the token.
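Once the middleware is configured, individual endpoints can require a valid bearer token; a
minimal sketch follows (the controller name, route and response are hypothetical).
Example (hypothetical sketch). ShipmentsController.cs
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class ShipmentsController : Controller
{
    // Requests without a valid bearer token receive a 401 challenge
    // from the JWT middleware configured in Startup.cs.
    [Authorize]
    [HttpGet("{id}")]
    public IActionResult Get(string id) =>
        Ok(new { Id = id, Status = "in-transit" });
}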
2. Every developer who works on ASP.NET web applications should be familiar with forms
authentication. This is the style of authentication where a web application shows a custom
form to users asking for their credentials. The credentials are submitted directly to the
application and verified by the application. When a user successfully logs in, a cookie marks
the session as authenticated for some period of time.
Running our application on a PaaS is not in itself good or bad for cookie authentication, but
it does place extra load on our application. The main drawback of forms authentication is
that it requires our application to maintain and check the credentials itself; that is, we have to
deal with safely encrypting and storing private information.
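For comparison, a minimal sketch of cookie-based (forms-style) authentication configured in
ASP.NET Core; the login path and cookie lifetime shown are illustrative choices, not values
from the text above.
Example (hypothetical sketch). Startup.cs
using System;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
            .AddCookie(options =>
            {
                options.LoginPath = "/Account/Login";              // the custom login form
                options.ExpireTimeSpan = TimeSpan.FromMinutes(30); // how long the cookie
                                                                   // marks the user as
                                                                   // authenticated
            });
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseAuthentication();   // check the authentication cookie on every request
    }
}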
Event-driven architecture is a pattern where services communicate with each other through
a messaging system using events. Events are discrete pieces of information that represent
something that has happened in the system, such as a new user registration or an order
being placed. Services can subscribe to events of interest and respond accordingly.
Stream processing is a pattern that allows for real-time processing of large volumes of data.
Data is processed in small chunks or streams rather than in batches. This allows for low-
latency processing of data as it is received.
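A minimal sketch of stream processing in this style: items are handled one at a time as they
arrive rather than being accumulated into batches; the Reading type, the source and the alert
threshold are hypothetical.
Example (hypothetical sketch). StreamProcessor.cs
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public record Reading(string SensorId, double Value, DateTimeOffset At);

public static class StreamProcessor
{
    public static async Task ProcessAsync(IAsyncEnumerable<Reading> readings)
    {
        // Each reading is processed as soon as it arrives (low latency, no batching).
        await foreach (var reading in readings)
        {
            if (reading.Value > 100)
                Console.WriteLine(
                    $"Alert: {reading.SensorId} exceeded threshold at {reading.At}");
        }
    }
}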
To ensure real-time performance, it is important to design services that are lightweight and
can be easily scaled horizontally to handle increased load. Additionally, services should be
designed to handle failures gracefully, as the failure of one service can have a cascading
effect on the rest of the system.