
MICROSERVICE Questions and Answer


MICROSERVICE

UNIT-I
1. With a suitable diagram, explain the microservice design model.
ANS: A microservice design model consists of five parts: Service, Solution, Process and Tools, Organization, and Culture.

2.2.1 Service: It is essential to implement well-designed microservices and APIs in a microservice system. The services form the atomic building blocks from which the entire microservice system is built. The complex behaviour of the set of components in a microservice system can only be understood if one gets the design, scope, and granularity of the services right.
2.2.2 Solution: A solution architecture represents a macro view of the solution and is distinct from individual service design. In individual service design, decisions are bounded by the need to produce a single output: the service itself. In solution architecture design, decisions are bounded by the need to coordinate all the inputs and outputs of multiple services. The designer can induce more desirable system behaviour from this macro-level view of the system.
2.2.3 Process & Tools: The behaviour of the system also depends on the processes and tools used to build it. These relate to software development, code deployment, maintenance, and product management in the microservice system. It is important to choose the right processes and tools to produce good microservice system behaviour.
2.2.4 Organization: Often how we work is a product of with whom we work and how we communicate. Organizational design includes the structure, direction of authority, granularity, and composition of teams from the microservice system perspective. Organizational design is context-sensitive: if you try to model a 500+ employee enterprise after a 10-person start-up (or vice versa), you may end up in a terrible situation. It is important for the microservice system designer to understand the implications of changing these organizational properties. Microservice system designers should also know that good organizational design leads to good service design.
2.2.5 Culture: Culture can be defined as a set of values, beliefs, or ideals shared by all of the workers within an organization. The culture of your organization shapes all of the atomic decisions that people within the system will make, which makes it a powerful tool in the system design endeavour. Culture is a context-sensitive feature of your system, and an organization's culture is difficult to measure.
2. Write a short note on standardization and coordination.
ANS: In organizations, most people work within constraints, introduced because the wrong type of system behaviour, or particularly bad behaviour, could cause the organization to fail. The system designer decides on some behaviour or expectation of the system; this behaviour or expectation may be applied universally to the actors within the system. Controls act as system influencers that increase the likelihood of the expected results. The system designer should develop the right standards, apply those standards, and measure the results of the changes. This helps in mastering the system design and making the system work as expected. However, control of the system comes at a cost: standardization and adaptability are enemies of each other. If too many parts of the system are standardized, you risk creating something that is costly and difficult to change.
Standardizing process
System behaviour can be made more predictable by standardizing the way people work and the tools they use. For example, component deployment time may be reduced by standardizing a deployment process; this can improve the overall changeability of the system, as the cost of a new deployment decreases. Standardizing the way people work in the organization also implies standardization of the work produced, the kind of people hired, and the culture of the organization. The Agile methodology is an example of process standardization: Agile introduces changes in small increments and allows the organization to handle change more easily.
Standardizing outputs
A team is a group of people who take a set of inputs and transform them into one or more outputs. Setting a universal standard for what that output should look like is known as output standardization; an output that does not meet the standard is considered a failure. A microservice team takes a set of requirements and turns them into a microservice, so in a microservice system the service is the output. The API is the face of that output: it provides access to the features and data, and the consumer has no visibility into the implementation behind the API. Output standardization in the microservices context therefore means developing standards for the APIs that expose the services, in an effort to improve usability, changeability, and the overall experience.
Standardizing people
The types of people who work within the organization can also be standardized. For example, a minimum skill requirement can be introduced for people who want to work on a microservice team. Standardizing skills or talent is an effective way of introducing more autonomy into the microservice system: people implementing services should have the skills that let them make better decisions and create the expected system behaviour.
Standardization trade-offs
Standardizing helps exert influence over the system, but an organization should not expect to choose just one standard in isolation. Different modes of standardization are not mutually exclusive, so introducing them can create unintended consequences in other parts of the system. For example, suppose the APIs that all microservices expose are standardized because the organization wants to reduce the cost of the connecting elements in the solution architecture. A set of rules for the types of APIs developers are allowed to build must then be defined, and a review process established to police this standardization. Similarly, some organizations standardize the way developers document the interfaces they create; Swagger is a popular example of an interface description language.
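
As a sketch of what standardizing API documentation can look like in ASP.NET Core, the Swashbuckle.AspNetCore package (an assumption; the text only names Swagger) generates a Swagger/OpenAPI description for a service (exact option types vary across package versions):

// Startup.cs (sketch, assuming the Swashbuckle.AspNetCore NuGet package).
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.OpenApi.Models;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
        // Register the Swagger generator so the service exposes a
        // machine-readable description of its API.
        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new OpenApiInfo { Title = "Sample Service", Version = "v1" });
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseSwagger(); // serves /swagger/v1/swagger.json
        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}

A review process can then validate each team's generated swagger.json against the organization's API standards.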
5. Write a short note on "Goals for the microservices way".
ANS: While making decisions, it is a good idea to have a set of high-level goals that guide what to do and how to go about doing it. The ultimate goal in building applications in the microservices way is finding the right harmony of speed and safety at scale. This goal allows you to build a system that hits the right notes for your organization when given enough time, iterations, and persistence. It might take a very long time to find that perfect harmony of speed and safety at scale if you are starting from scratch; however, we have access to proven methods for boosting both speed and safety, so there is no need to reinvent established software development practices. Instead, you can experiment with the parameters of those practices.
Reduce cost:
The ability to reduce the cost of designing, implementing, and deploying services
allows you more flexibility when deciding whether to create a service at all. Reducing
costs can increase your agility because it makes it more likely that you will
experiment with new ideas.
Increase release speed:
A common goal is increasing the speed of the "from design to deploy" cycle, or, put another way, shortening the time between idea and deployment. Increased speed can also lower the risk of attempting new product ideas, or even things as simple as new, more efficient data-handling routines. In the deployment process, speed can be increased by automating important elements of the deployment cycle, speeding up the whole process of getting services into production.
Improve resilience
The system should not crash when errors occur. A resilient system is created by focusing on the overall system rather than on a single component or solution. DevOps improves resilience by automating testing: tests are run constantly against checked-in code by making testing part of the build process, which increases the chances of finding errors that would otherwise occur at runtime.
Enable visibility
Enabling runtime visibility means improving the ability of stakeholders to see and understand what is going on in the system. A good set of tools can enable visibility during the coding process, giving reports on the coding backlog, how many builds were created, the number of bugs in the system versus bugs resolved, and so on. We also need visibility into the runtime system, through monitoring and logging of operation-level metrics. Some monitoring tools can even take action when things go wrong.
Trade-offs
Trade-offs also need to be considered. Reducing cost may affect runtime resilience, and speeding up deployment might mean you lose track of what services are running in production, reducing visibility into the larger service network. It is important to balance the various goals and find the right mix for the organization.
7. Enlist and explain the operating principles that can act as starter material for a company.
ANS: It is important to have a set of principles along with a set of goals. Principles offer more concrete guidance on how to act in order to achieve those goals. Principles do not set out required elements; instead, they describe how to act in identifiable situations.
1. Netflix (Netflix's cloud architecture and operating principles):
Antifragility: Internal systems are strengthened to withstand unexpected problems. Netflix promotes this with its "Simian Army" set of tools, which "enforce architectural principles, induce various kinds of failures, and test our ability to survive them". Software has bugs, operators make mistakes, and hardware fails, so developers are incentivized to learn to build robust systems by creating failures in production under controlled conditions.
Immutability: The principle of immutability asserts that autoscaled groups of service instances are stateless and identical, which enables Netflix's system to "scale horizontally". Each released component is immutable: a new version of the service is introduced alongside the old version, on new instances, and then traffic is redirected from old to new.
Separation of Concerns: Each team owns a group of services. They own building, operating,
and evolving those services, and present a stable agreed interface and service level
agreement to the consumers of those services.
2. Unix
1. Create a program that performs one task well. To do a new task, build a new program rather than complicating an old program with new features.
2. Expect the output of every program to become the input of another program. Do not add extraneous information to the output, try to avoid columnar or binary input formats, and keep user interaction for input to a minimum.
3. Design and build software, even operating systems, to be tried early, ideally within weeks. Do not hesitate to throw away the clumsy parts and rebuild them.
4. Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools, and expect to throw some of them out after you have finished using them.
Suggested principles:
 Build afresh: Create a collection of powerful tools that are predictable and consistent over a long period of time; it may be better to build a new microservice than to complicate an old one.
 Expect output to become input: The output of one program should be usable as the input of another program.
 Do not insist on interactive input: Do not engage a human in every step. Scripts should handle both input and output on their own; reducing the need for human interaction increases the likelihood that the component can be used in unexpected ways.
 Try early: Microservice components should be "tried early"; this fits well with the notion of continuous delivery and the desire to have speed as a goal for your implementations.
 Toolmaking: The "right tool" should be used for building a solution, and one should be able to create tools to reach a goal. Tools are a means, not an end.
8. What are shared and local capabilities of the platform?
Ans: Capabilities that are selected and maintained at the team or group level are called local capabilities. Local capabilities reduce the number of blocking factors a team encounters while working to accomplish its goals. Teams should be allowed to make their own determination of which developer tools, frameworks, support libraries, configuration utilities, etc. are best for their assigned jobs. Sometimes these tools are created in-house; sometimes they are open-source community projects.
1. General tooling: A powerful local capability is the automation of the process of rolling out, monitoring, and managing VMs and deployment packages. Jenkins is a popular open-source tool; Netflix created Asgard and Aminator for this.
2. Runtime configuration: Many organizations using microservices have found a pattern of rolling out new features in a series of controlled stages. This allows teams to assess a new release's impact on the rest of the system. Twitter's Decider configuration tool is an example of this capability.
3. Service discovery: Service discovery tools make it possible to build and release services that, upon install, register themselves with a central source. Other services can then discover the exact address/location of each service at runtime, so teams can change the location of their own service deployments without fear of breaking some other team's existing running code. Apache Zookeeper, CoreOS's etcd, and HashiCorp's Consul are popular service discovery tools.
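As a sketch of such self-registration, a service might register with HashiCorp Consul at startup through the Consul agent's HTTP API (the service name, address, and port below are illustrative; a local agent on the default port 8500 is assumed):

// Sketch: registering a service instance with a local Consul agent.
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class ConsulRegistration
{
    public static async Task RegisterAsync()
    {
        // Illustrative registration document for this service instance.
        var registration = "{ \"ID\": \"orders-1\", \"Name\": \"orders\"," +
                           " \"Address\": \"10.0.0.5\", \"Port\": 5000 }";

        using (var http = new HttpClient())
        {
            var content = new StringContent(registration, Encoding.UTF8, "application/json");
            // Consul's agent endpoint for service registration.
            await http.PutAsync("http://localhost:8500/v1/agent/service/register", content);
        }
    }
}

Other services can then resolve "orders" through Consul's API or DNS interface instead of hardcoding an address.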
4. Request routing: The actual process of handling requests begins once you have machines and deployments up and running and services discovering one another. Request-routing technology is used by systems to convert external calls into internal code execution. Netty and Twitter's Finagle are popular open-source examples.
5. System observability: A challenge in distributed environments is getting a view of the running instances: seeing their failure/success rates, spotting bottlenecks in the system, etc. Twitter's Zipkin and Netflix's Hystrix are examples of tools for this task.

UNIT 2
2. Explain API design for microservices.
Ans: Microservices become more effective and more valuable when their code and their components are able to communicate with each other.
• Good API design is important in a microservices architecture because all data exchange between services happens either through message passing or through API calls.
• There are two common ways to create an API for microservices, as follows:
Message-Oriented:
 Much of the time, this need is addressed by taking a message-oriented approach.
 Example: Netflix uses message formats to communicate internally over the TCP/IP protocol, and for external customers it uses JSON over HTTP to mobile phones and browsers.
 TCP/IP is a connection-oriented protocol in which endpoints communicate with each other using an IP address and a port number; message passing can happen in both directions at the same time.
Hypermedia-Driven:
 Hypermedia API design is based on the way HTML works in a web browser: HTTP messages are transmitted over the internet, containing data and actions encoded in HTML format.
 Hypermedia provides links to navigate a workflow and templated input to request information, in the same way that we use links to navigate the web and forms to provide input.
 A hypermedia API message contains data and actions, which provide the necessary elements for it to work dynamically with client services and applications.
 Example: when we run an application such as Google Chrome on a laptop, the operating system queues its work (first in, first out) and allocates the process an ID and a port number.
 HTTP messages are sent to an IP address and a port number (e.g., 80 for HTTP or 443 for HTTPS), and each message contains the data and actions, with data in HTML format.
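
As an illustration, a hypermedia-style message might look like the following JSON (a sketch in the HAL convention, which the text does not mandate), where the _links section carries the actions a client can take next:

{
  "orderId": "1234",
  "status": "processing",
  "_links": {
    "self":   { "href": "/orders/1234" },
    "cancel": { "href": "/orders/1234/cancel" },
    "items":  { "href": "/orders/1234/items" }
  }
}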

3. Discuss the relation between data and microservices.


Ans: Data is a critical aspect of microservices architecture, as each microservice typically has its
own data storage and management requirements. In a microservices architecture, data is often
distributed across multiple services, and each service is responsible for managing its own data,
which can include storing, retrieving, and updating data.

Here are some key considerations for managing data in a microservices architecture:

Data Ownership: In a microservices architecture, each service owns its own data and is
responsible for ensuring data consistency and integrity. This means that each service needs to be
able to access and manipulate its own data without interfering with other services' data.
Data Consistency: Because data is often distributed across multiple microservices, it's
important to ensure that data remains consistent across the system. This can be achieved
through the use of distributed transactions, event-driven architectures, or other approaches to
ensure that data updates are propagated correctly across the system.
Data Integration: Despite the fact that each service manages its own data, there may still be a
need to integrate data across services in order to provide a complete picture of the system. This
can be achieved through the use of APIs, data pipelines, or other integration mechanisms.
Data Storage: Each microservice typically has its own data storage requirements, which may
include relational databases, NoSQL databases, or other storage solutions. The choice of data
storage depends on the specific needs of the service, as well as the scalability and performance
requirements of the system.

Overall, data is a critical component of microservices architecture, and it's important to design
data management strategies that enable the system to scale, remain resilient, and deliver the
required performance and functionality.
4. What do you mean by independent deployability?
Ans: Independent deployability is one of the primary principles of the microservices architectural style.
• Independent deployability means each microservice should be deployable completely independently of the others.
• It is the idea that we can make a change to a microservice and deploy it into production without having to change or deploy any other service.
• Scaling hardware resources on premises is very costly to manage and monitor: we would need to buy the resources and then monitor, maintain, and configure them ourselves. Deploying independently to the cloud avoids paying for the monitoring, maintenance, and scaling of those resources.
• Shipping Inc. is confident that its security team will readily allow deployment of safe microservices to a private or public cloud.
• Another benefit of independent deployment in microservices is operational cost and flexibility.
• For Customer Management and Shipping Management, there would be two different teams deploying the separate microservices.
• Customer Management creates, edits, enables, and disables customer details and provides a view of a representation of a customer.
• Shipment Management is responsible for the entire lifecycle of a package, from pick-up at the origin to drop-off at the destination.

5. List and explain the role of service discovery?

Ans: • If you are using Docker containers for packaging and deploying your microservices, then a simple configuration can serve multiple microservices.
• Service discovery enables services to discover and communicate with each other, which is useful in local development and fast prototyping.
• You can deploy three or more Docker hosts or machines, with a number of containers on each of them.
• Each container instance has a different shape, size, and color, as described in the accompanying picture.
• Many of the services may be hosted on the same machine, so the host can be identified by just its IP address.
• If we instead allocated an IP address per microservice, then assigning and keeping track of those addresses would become very difficult to manage and to understand; this is the problem service discovery solves.

6. What is the need for an API gateway?

Ans: An API gateway provides security, transformation, orchestration, and routing to secure the services and the product.

• Security: We develop the microservices architecture in a way that allows a high degree of freedom, with lots of other services forming the moving parts of a single application.
 As microservices mature into more complex community and enterprise applications, and as more microservices are deployed, more dangerous security exposure appears in the application.
 To secure our microservices application, the API endpoints provided by the various microservices are secured using a capable API gateway.
 The API gateway provides a unique entry point for external customers, independent of the number of services, so that external customers are able to interact with our microservices application.
• Transformation and Orchestration:
 Each microservice has a single capability, and we develop the application in such a way that each service is small, reliable, portable, and secure.
 Unix follows the philosophy of programs that do one thing and do it well: it uses the single-capability approach and facilitates orchestration by piping inputs and outputs.
 To make microservices useful, use an orchestration framework, like Unix and Linux piping, over the web API.
 For example, a framework is like a photo frame: if we have a photo and want it to look good, we need a wooden frame to hold it, and the rectangular frame defines the framework for the photo.
• Routing:
 A discovery system is used to find the services that make up the microservices architecture.
 Routing is the process of forwarding packets from one network to another based on the network architecture.
 Consul and SkyDNS (which is built on etcd) provide a DNS-based interface to discovery. DNS converts names to IP addresses and vice versa, and it can also be used for debugging and finding bugs.
 DNS queries look up local zone domain-to-IP mappings; microservice domains map to an IP address and port combination.
7. How many bug fixes/features should be included in a single release?
Ans:
• When the microservices are ready for production, but before releasing them to the platform, we first need to identify bugs: release alpha and beta code so that users and programmers can check whether there are any bugs.
• An alpha release is provided at the user-level interface, so that users can interact with the service and find bugs in the microservice.
• A beta version is provided for programmers to identify bugs and solve them as soon as possible before the production stage.
• The big reason to limit the number of changes in a microservice release is to reduce uncertainty and make it easier to find bugs in the code.
• If we release a system component to production and multiple bugs and uncertainties are caught, the release is of little use, and the number of interactions between the changes increases.
• For example, if you release a component that contains 5 changes and it causes problems in production, there are 10 possible ways in which these 5 changes could interact pairwise to cause a problem (5 choose 2 = 10 pairs).

UNIT-III
1. What is ASP.NET Core? List the components of ASP.NET Core.
Ans: ASP.NET Core is the open-source version of ASP.NET that runs on macOS, Linux, Windows, and Docker. It is a collection of small, modular components that can be plugged into an application to build web applications and microservices. Within ASP.NET Core there are APIs for routing, JSON serialization, and MVC controllers. ASP.NET Core is available on GitHub. The cross-platform web server used in ASP.NET Core is Kestrel, which is included by default in ASP.NET Core project templates.
Here are some of the key components of ASP.NET Core:

1.Middleware: Middleware components are used to process HTTP requests and responses.
ASP.NET Core provides a set of built-in middleware components, such as authentication
middleware, routing middleware, and error handling middleware, as well as the ability to create
custom middleware.
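
As a sketch of a custom middleware component (names here are illustrative, not from the text), middleware is registered in Startup.Configure and wraps the rest of the request pipeline:

// Sketch: an inline custom middleware registered in Startup.Configure.
using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Runs for every request before the rest of the pipeline.
        app.Use(async (context, next) =>
        {
            context.Response.Headers["X-Request-Id"] = Guid.NewGuid().ToString();
            await next(); // hand off to the next middleware in the chain
        });

        // Terminal middleware: produces the response.
        app.Run(async context =>
        {
            await context.Response.WriteAsync("Hello from the end of the pipeline");
        });
    }
}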

2.MVC Framework: The Model-View-Controller (MVC) framework is a pattern for designing web
applications, and it's supported by ASP.NET Core. The MVC framework provides a way to
separate the application logic into three distinct components: the model, which represents the
data and business logic; the view, which displays the data to the user; and the controller, which
handles user input and updates the model and view accordingly.

3.Razor Pages: Razor Pages is a new feature in ASP.NET Core that provides a simpler alternative
to the MVC framework. Razor Pages allow developers to create web pages with a minimal
amount of ceremony, and they're well-suited to building small to medium-sized web
applications.

4.Web APIs: ASP.NET Core provides support for building Web APIs that can be used to expose
functionality to other applications. Web APIs can be created using the MVC framework or using a
lightweight framework called "ASP.NET Core Web API".

5.Entity Framework Core: Entity Framework Core is an Object-Relational Mapping (ORM) framework that provides a way to map database tables to classes in your application. It supports multiple database providers and can be used to perform CRUD (Create, Read, Update, Delete) operations on your data.

6.SignalR: SignalR is a real-time messaging framework that enables server-side code to push content to connected clients in real time. It can be used to build features such as chat applications, real-time notifications, and live updates.

2. What is Docker? Explain the Docker terminology.

Ans: Docker is a software containerization platform used for building an application and packaging it together with its dependencies into a container. These containers can then be easily shipped to run on other machines. Containerization is considered an evolved version of virtualization: the same task can be achieved using virtual machines, but they are not very efficient. Compared to virtual machines, containers do not have high overhead and hence enable more efficient usage of the underlying system and resources. Virtual machines run applications inside a guest operating system, which runs on virtual hardware powered by the server's host operating system. Containers instead leverage the low-level mechanics of the host operating system, providing most of the isolation of virtual machines at a fraction of the computing power. There are countless platforms and frameworks that either support or integrate tightly with Docker: Docker images can be deployed to AWS (Amazon Web Services), GCP (Google Cloud Platform), Azure, virtual machines, and combinations of those running orchestration platforms like Kubernetes, Docker Swarm, CoreOS Fleet, Mesosphere Marathon, Cloud Foundry, etc. The main advantage of Docker is that it works in all of the above environments without changing the container format.

Docker Terminology:
1. Images – Images are the blueprints of the application and form the basis of containers.
2. Containers – Containers are created from Docker images and run the actual application.
3. Docker Daemon – The Docker daemon is the background service running on the host that manages building, running, and distributing Docker containers.
4. Docker Client – The Docker client is the command-line tool that allows the user to interact with the daemon.
5. Docker Hub – Docker Hub is a registry of Docker images: a directory of all available Docker images. Anyone can also host their own Docker registry and use it for pulling images.
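
To make the terminology concrete, here is a minimal Dockerfile sketch for an ASP.NET Core microservice image (the base image tag, paths, and service name are illustrative assumptions, not from the text):

# Dockerfile (sketch): builds an image for an ASP.NET Core microservice.
# Base image containing the .NET runtime (tag is an assumption).
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
# Copy the published build output into the image.
COPY ./publish .
# Port the service listens on inside the container.
EXPOSE 80
# "MyService.dll" is a placeholder for the compiled service assembly.
ENTRYPOINT ["dotnet", "MyService.dll"]

Building the image and running a container from it uses the Docker client, which talks to the daemon: docker build -t myservice . followed by docker run -d -p 8080:80 myservice.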
5. Explain the steps to build a console application using ASP.NET Core.
Ans:
Step 1: .NET Core is made up of two components: the runtime and the SDK. The runtime is used to run a .NET Core application, and the SDK is used to create one.
Step 2: Open the command prompt, go to the desired folder, and type: dotnet new console -o myFirstConsoleApp

Step 3: The folder myFirstConsoleApp consists of two files: the project file, which defaults to .csproj (in our case myFirstConsoleApp.csproj), and Program.cs.

Step 4: The Program.cs file contains the method Main, which is the entry point of a .NET Core application; all .NET Core applications are basically designed as console applications.

Step 5: Change the directory to the newly created project folder and run the dotnet run command to view the output
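
The template-generated Program.cs is minimal. Depending on the SDK version, it looks roughly like the following (older templates generate an explicit Main method as shown; newer SDKs generate top-level statements instead):

// Program.cs as generated by "dotnet new console" (older-style template).
using System;

namespace myFirstConsoleApp
{
    class Program
    {
        // Main is the entry point the runtime calls when you run "dotnet run".
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}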

3. Explain the process of continuous integration using Wercker.

Ans: Wercker is a Docker-based continuous delivery platform used by software developers to build and deploy their applications and microservices. Using Wercker, developers can create Docker containers on their desktop, automate their build and deploy processes, test them on their desktop, and then deploy them to various cloud platforms like Heroku, AWS, and Rackspace. The command-line interface of Wercker is open-sourced and is maintained by the business behind the platform, also called Wercker.
After installing Docker and docker-machine, you have to create a new virtual machine that will run Docker by using the following command: docker-machine create --driver virtualbox dev. Once the VM is created, you have to export some variables to your environment: eval "$(docker-machine env dev)". The environment is now set up, and you can install the wercker CLI using brew.

There are three basic steps for using Wercker:

1. Create an application in Wercker using the website – First, sign up for an account by logging in with your existing GitHub account. Once you've got an account and you're logged in, click the Create link in the top menu. This brings up a wizard that prompts you to choose a GitHub repository as the source for your build; it then asks you to confirm the owner of the application.
2. Add a wercker.yml file to your application's codebase – Fig. 5.18 depicts the code of the wercker.yml file.
3. Choose how to package and where to deploy successful builds – there is a script within the application (Fig. 5.19).

Running the build locally will execute the Wercker build exactly as it executes in the cloud, all within the confines of a container image, and a stream of messages from the Wercker pipeline will be displayed.
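
The figure itself is not reproduced here, but a minimal wercker.yml for a .NET Core service might look roughly like this (the box image and step commands are assumptions):

# wercker.yml (sketch): defines the container image and build pipeline.
box: microsoft/dotnet:latest
build:
  steps:
    - script:
        name: restore packages
        code: dotnet restore
    - script:
        name: build
        code: dotnet build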

8. Write short note on Microservice Ecosystem

Ans: Consider a simple microservice ecosystem where service A depends on B, which in turn depends on C. The hierarchy in this scenario is very clear, and also unrealistic: organizations must never assume that there is going to be a clear dependency chain or hierarchy of microservices.

Instead, organizations must plan for a hierarchy that looks something like the one shown in Fig. 6.13. In that ecosystem, we have a better representation of reality, where no strict hierarchy is followed: any microservice can depend on any other microservice, irrespective of hierarchy.

5. Write short note on Team Services.


Ans: Team Services (also known as Azure DevOps) is a cloud-based service that provides a
set of tools for managing the software development lifecycle, including source control, build
and release management, testing, and project management. In a microservices architecture,
Team Services can be used to support the development and deployment of individual
microservices as well as the coordination of multiple microservices within a larger system.

Here are some of the key ways that Team Services can support microservices architecture:

Source Control: Team Services provides a powerful source control system that can be used
to manage the code for each microservice. This allows multiple developers to work on the
same codebase, while also providing version control and change tracking capabilities.

Build and Release Management: Team Services includes a build and release management
system that can be used to automate the process of building, testing, and deploying
microservices. This can help to ensure that each microservice is built and deployed
consistently across environments, and can also help to reduce the risk of errors or downtime
during deployment.

Testing: Team Services includes a range of testing tools, including unit testing, integration
testing, and load testing, that can be used to validate the functionality and performance of
individual microservices as well as the system as a whole.

Project Management: Team Services provides a range of project management tools, including agile planning, backlog management, and team collaboration features, that can be used to manage the development of individual microservices as well as the coordination of multiple microservices within a larger system.

Overall, Team Services can provide a comprehensive set of tools and services that can help
to streamline the development, testing, and deployment of microservices, while also
supporting the coordination of multiple microservices within a larger system.

UNIT 4
1. Write a short note on how RabbitMQ stores messages in queues.
Answer: RabbitMQ is a message broker that allows applications to communicate with each
other using messages. Messages are sent from a producer to a consumer through a queue. A
queue is a temporary storage location where messages are stored until they are consumed by a
consumer.

When a producer sends a message to RabbitMQ, the message is first routed to an exchange. The
exchange then routes the message to one or more queues based on the routing rules defined by
the producer. Once the message is routed to a queue, it is stored in the queue until a consumer
retrieves it.

RabbitMQ stores messages in queues using a mechanism called message buffering. Message
buffering is the process of temporarily storing messages in memory or on disk until they can be
processed by a consumer. RabbitMQ uses message buffering to ensure that messages are not
lost if a consumer is not available to process them.When a message is stored in a queue,
RabbitMQ assigns a delivery tag to the message. The delivery tag is a unique identifier that is
used to acknowledge the delivery of the message by the consumer. Once a message is delivered
to a consumer and the delivery is acknowledged, RabbitMQ removes the message from the
queue.Queues in RabbitMQ can be configured to have different properties such as the maximum
number of messages they can hold, the maximum size of each message, and the maximum
amount of time a message can be stored in the queue. These properties help ensure that the
queue does not become overloaded with messages and that messages are processed in a timely
manner.

In summary, RabbitMQ stores messages in queues using message buffering to ensure that
messages are not lost and are processed in a timely manner. Queues in RabbitMQ can be
configured with various properties to control their behavior and prevent overloading.
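
As a sketch of this flow in C#, using the RabbitMQ.Client library (6.x-style API; the queue name and message text are illustrative):

// Sketch: publishing to and consuming from a RabbitMQ queue.
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class QueueDemo
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Declare the queue that will buffer messages until consumed.
            channel.QueueDeclare(queue: "orders", durable: false,
                                 exclusive: false, autoDelete: false, arguments: null);

            // Producer: publish through the default exchange, which routes
            // the message to the queue whose name matches the routing key.
            var body = Encoding.UTF8.GetBytes("order #42 placed");
            channel.BasicPublish(exchange: "", routingKey: "orders",
                                 basicProperties: null, body: body);

            // Consumer: receive the message, then acknowledge it by delivery
            // tag so RabbitMQ removes it from the queue.
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                var text = Encoding.UTF8.GetString(ea.Body.ToArray());
                Console.WriteLine("received: " + text);
                channel.BasicAck(ea.DeliveryTag, multiple: false);
            };
            channel.BasicConsume(queue: "orders", autoAck: false, consumer: consumer);

            Console.ReadLine(); // keep the consumer alive until Enter is pressed
        }
    }
}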

4. How to configure Postgres as a data service in our microservice?

Ans: To configure PostgreSQL as a data service in your microservices architecture, you'll need to
follow a few steps:

Install PostgreSQL: Begin by installing PostgreSQL on the server or container where you plan to
host your database. You can download the appropriate installer or use a package manager based
on your operating system.

Create the Database: Once PostgreSQL is installed, connect to the database server and create a
new database using the createdb command or a graphical tool like pgAdmin. For example, you
can run createdb mydatabase to create a database named "mydatabase".

Define Database Schema: Design your database schema by creating tables, defining relationships,
and setting up indexes or constraints. This step involves modeling your data based on the
requirements of your microservices.

Create Database Users and Roles: Determine the access privileges for different microservices and
create dedicated database users and roles accordingly. Grant the necessary permissions to each
user to ensure proper data segregation.

Configure Connection Pooling: In a microservices architecture, it's common to use connection pooling to manage database connections efficiently. You can use libraries like PgBouncer or connection poolers provided by frameworks like Spring Boot or Django to handle connection pooling.

Secure the Database: PostgreSQL offers various security features to protect your data. Configure
appropriate authentication methods, such as password-based authentication or certificate-based
authentication, to ensure secure access to the database. Additionally, consider setting up a firewall
to restrict access to the database server.

Expose the Database Endpoint: Determine how your microservices will connect to the
PostgreSQL database. Typically, you'll expose the database endpoint as an environment variable or
configuration parameter that can be read by your microservices during startup. This allows each
microservice to connect to the database using the appropriate credentials.

Handle Database Migrations: As your microservices evolve, you may need to modify your
database schema. Use database migration tools like Flyway or Liquibase to manage schema
changes while ensuring backward compatibility. These tools enable you to version your database
schema and apply migrations as needed.
Implement Data Access Layer: In each microservice, implement the data access layer using a
PostgreSQL client library or an ORM (Object-Relational Mapping) framework. Use the appropriate
database connection configuration, including the database endpoint and credentials, to establish a
connection to the PostgreSQL database.
Test and Monitor: Thoroughly test your microservices to ensure they can read and write data to
the PostgreSQL database correctly. Implement proper logging and monitoring mechanisms to
track database-related issues and performance metrics.

By following these steps, you can configure PostgreSQL as a data service in your microservices
architecture, allowing your services to interact with the database efficiently and securely.
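
As a sketch of the data access layer step, a C# microservice might connect using the Npgsql client library, reading the database endpoint from an environment variable as described above (the variable name, table, and credentials are illustrative):

// Sketch: querying PostgreSQL from a microservice via Npgsql.
using System;
using Npgsql;

class CustomerStore
{
    static void Main()
    {
        // The endpoint is injected as configuration; the fallback string
        // here is only a local-development example.
        var connString = Environment.GetEnvironmentVariable("CUSTOMERS_DB")
            ?? "Host=localhost;Username=app;Password=secret;Database=mydatabase";

        using (var conn = new NpgsqlConnection(connString))
        {
            conn.Open();
            using (var cmd = new NpgsqlCommand(
                "SELECT name FROM customers WHERE id = @id", conn))
            {
                cmd.Parameters.AddWithValue("id", 42);
                var name = cmd.ExecuteScalar() as string;
                Console.WriteLine(name ?? "(not found)");
            }
        }
    }
}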

5. Comment on: one service can consume other services.
Ans. In a microservices architecture, it is common for one service to consume or interact with
other services. This is often done to leverage the functionalities or data provided by other
services in a decoupled and modular manner. Here are a few points to consider regarding
services consuming other services:

Service Composition: Services can be composed together to form complex workflows or business processes. For example, a service responsible for processing orders may consume a
payment service to handle payment transactions or a notification service to send order status
updates to customers. By consuming these specialized services, the order processing service can
focus on its core responsibilities while delegating related tasks to dedicated services.

Loose Coupling: Services consuming other services should follow the principle of loose coupling.
Each service should have well-defined interfaces and communicate through APIs or contracts,
allowing them to evolve independently. This enables services to be developed, tested, and
deployed independently, making the system more flexible and maintainable.

Service Discovery: For a service to consume another service, it needs to be aware of the
endpoints or locations of the service it wants to consume. Service discovery mechanisms, such as
service registries or service meshes, can help in dynamically discovering and locating other
services in the system. This allows services to find and connect to the required services without
hardcoding their addresses.

API Gateway: An API gateway can act as a centralized entry point for all client requests and
handle interactions with various services. It can provide a unified interface, handle
authentication and authorization, and route requests to the appropriate services. The API
gateway abstracts the underlying service structure and provides a simplified interface to clients,
allowing them to consume multiple services seamlessly.

Data Sharing: Services can consume other services to access or manipulate shared data. For
example, a user management service may consume an authentication service to validate user
credentials. By consuming the authentication service, the user management service can ensure
that only authenticated users can access or modify user-related data.
Asynchronous Communication: Services can consume other services through asynchronous
communication mechanisms such as message queues or event streams. This allows for
decoupled and scalable communication between services. For instance, a service can publish an
event to a message broker, and other services can consume those events and take appropriate
actions based on them.

Service Versioning: When consuming other services, it is crucial to consider versioning. Services
evolve over time, and changes in service interfaces or data formats can potentially impact
consuming services. By maintaining versioning strategies, such as API versioning or backward
compatibility, you can ensure smooth communication between services even as they evolve.

Overall, services consuming other services is a fundamental aspect of a microservices architecture. It enables modular development, promotes code reuse, and allows for scalability
and flexibility in building complex systems. However, it's important to design service interactions
carefully, considering loose coupling, service discovery, and versioning to maintain a resilient and
maintainable architecture.
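
As a minimal sketch of one service consuming another over HTTP in C# (the payment service URL and route are hypothetical; in practice the address would come from service discovery or configuration):

// Sketch: an order service calling a hypothetical payment service.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class PaymentClient
{
    // The base address would normally come from service discovery/config.
    private static readonly HttpClient http = new HttpClient
    {
        BaseAddress = new Uri("http://payments.internal")
    };

    public static async Task<bool> AuthorizeAsync(string orderId)
    {
        // POST to the payment service's API; the route is an assumption.
        var response = await http.PostAsync($"/api/payments/{orderId}/authorize", null);
        return response.IsSuccessStatusCode;
    }
}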

6. Write short note on Event Sourcing.


Ans: Event sourcing is a software design pattern in which the state of an application is
derived by processing a sequence of events that have occurred in the past. Instead of
persisting the current state of an object, event sourcing persists a log of events that have
occurred over time, which can be used to recreate the current state of the object.

In event sourcing, each change to an object is captured as an event and appended to an event log. The event log represents the complete history of the object, and can be used to recreate the current state of the object at any point in time. This approach has several advantages:

Auditability: Because the event log contains a complete history of all changes, it is possible
to audit the system and determine exactly what has happened and when.

Scalability: Because the event log is append-only, it can be implemented using a high-
performance log-based storage system such as Apache Kafka or RabbitMQ.

Reversibility: Because the event log contains a complete history of changes, it is possible to
roll back to a previous state of the object by simply replaying the events up to that point.

Consistency: Because the event log represents the complete history of the object, it is
possible to implement complex business logic that takes into account the entire history of
the object.

Event sourcing is often used in combination with CQRS (Command Query Responsibility
Segregation), which is a pattern that separates the read and write models of an application.
In a CQRS system, write operations are handled by the event sourcing component, while
read operations are handled by a separate component that reads from a denormalized view
of the event log.
Event sourcing is particularly well-suited for systems that require high availability, scalability,
and auditability, such as financial systems, e-commerce systems, and IoT systems. However,
implementing event sourcing can be complex and may require a significant investment in
terms of time and resources.
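
A minimal sketch of the pattern in C# (the domain and event names are illustrative): no current balance is ever stored; state is rebuilt by folding over the event log.

// Sketch: an account balance derived purely by replaying stored events.
using System;
using System.Collections.Generic;
using System.Linq;

record AccountEvent(DateTime At, decimal Amount); // +deposit / -withdrawal

class Account
{
    private readonly List<AccountEvent> log = new List<AccountEvent>();

    // Writes append events; they never mutate a stored "balance" field.
    public void Deposit(decimal amount) => log.Add(new AccountEvent(DateTime.UtcNow, amount));
    public void Withdraw(decimal amount) => log.Add(new AccountEvent(DateTime.UtcNow, -amount));

    // Current state is a fold over the complete history.
    public decimal Balance => log.Sum(e => e.Amount);

    // Replaying only events up to a point in time reconstructs past state.
    public decimal BalanceAt(DateTime at) => log.Where(e => e.At <= at).Sum(e => e.Amount);
}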

7. How DNS service can be used to map URLs.


Ans: DNS (Domain Name System) is a service that is used to map domain names
(such as example.com) to IP addresses (such as 192.0.2.1). When you enter a domain
name in your web browser, the browser sends a request to a DNS resolver to find the
IP address associated with the domain name. The DNS resolver then looks up the IP
address associated with the domain name in a DNS server and returns it to the
browser.

DNS can also be used to map URLs to IP addresses. URLs (Uniform Resource
Locators) are used to identify resources on the internet, such as web pages, images, or
files. A URL consists of several parts, including the protocol (such as HTTP or
HTTPS), the domain name, and the path to the resource.

When a user enters a URL in their web browser, the browser first extracts the domain
name from the URL. The browser then sends a request to a DNS resolver to find the
IP address associated with the domain name. Once the IP address is obtained, the
browser establishes a connection to the web server at that IP address and sends a
request for the resource identified by the URL.

For example, let's say you enter the URL https://www.example.com/index.html in your web browser. The browser first extracts the domain name "www.example.com"
from the URL. It then sends a request to a DNS resolver to find the IP address
associated with the domain name "www.example.com". The DNS resolver looks up
the IP address associated with the domain name in a DNS server and returns it to the
browser. Once the IP address is obtained, the browser establishes a connection to the
web server at that IP address and sends a request for the resource "index.html".

In summary, DNS is used to map domain names to IP addresses, which allows web
browsers to connect to web servers and retrieve resources identified by URLs.
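
This resolution step can also be performed programmatically; a small C# sketch using the standard library's resolver:

// Sketch: resolving a host name to IP addresses, as a browser's resolver does.
using System;
using System.Net;

class DnsDemo
{
    static void Main()
    {
        // Queries the configured DNS servers for the host's address records.
        IPAddress[] addresses = Dns.GetHostAddresses("www.example.com");
        foreach (var addr in addresses)
        {
            Console.WriteLine(addr); // an IP such as 93.184.216.34 (illustrative)
        }
    }
}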

UNIT 5
1. How to secure a service with client credentials and bearer tokens?
Ans: Securing a Service with Client Credentials
The client credentials pattern is perhaps the simplest way to secure a service. First, you communicate with the service only via SSL, and second, the code consuming the service is responsible for transmitting credentials. These credentials are usually just called a username and password, or, more appropriately for scenarios that don't involve human interaction, a client key and a client secret. Whenever you're looking at a public API hosted in the cloud that expects you to supply a client key and secret, you're looking at an implementation of the client credentials pattern.
It is also fairly common to see the client key and secret transmitted as custom HTTP headers that begin with the X- prefix, e.g., X-MyApp-ClientSecret and X-MyApp-ClientKey.
The most frightening scenario is this: what happens if a client secret and key are compromised and the consumer accesses confidential information without triggering any behavioural alerts that might get them banned? What we want is something that combines the simplicity of portable credentials that don't require communication with a third party for validation with some of the more useful security features of OpenID Connect, such as validation of issuers, validation of audience (target), expiring tokens, and more.
Securing a Service with Bearer Tokens
Through our exploration of OpenID Connect, we've already seen that the ability to transmit portable, independently verifiable tokens is the key technology underpinning all of its authentication flows. Bearer tokens, specifically those adhering to the JSON Web Token (JWT) specification, can also be used independently of OIDC to secure services without involving any browser redirects or the implicit assumption of human consumers.
The OIDC middleware we used earlier in the chapter builds on top of the JWT middleware that we get from the Microsoft.AspNetCore.Authentication.JwtBearer NuGet package. To use this middleware to secure our service, we can first create an empty service using any of the previous examples in this book as reference material or scaffolding. Next, we add a reference to the JWT bearer authentication NuGet package.
Example: Startup.cs
app.UseJwtBearerAuthentication(new JwtBearerOptions
{
    AutomaticAuthenticate = true,
    AutomaticChallenge = true,
    TokenValidationParameters = new TokenValidationParameters
    {
        ValidateIssuerSigningKey = true,
        IssuerSigningKey = signingKey,
        ValidateIssuer = false,
        ValidIssuer = "https://fake.issuer.com",
        ValidateAudience = false,
        ValidAudience = "https://sampleservice.example.com",
        ValidateLifetime = true,
    }
});
We can control all of the different kinds of things we validate when accepting bearer tokens, including the issuer's signing key, the issuer, the audience, and the token lifetime. Validating the token lifetime usually also requires us to set options such as allowing some clock skew between the token issuer and the secured service.
In the preceding code, we have validation turned off for issuer and audience, but these are both fairly simple string comparison checks: when we validate them, the issuer and the audience must exactly match the issuer and audience contained in the token.

2. Write short note on:

a) Intranet Applications
b) Cookie and Forms Authentication
Ans: a) Consumers face complexity when using internet applications everywhere. To solve that problem, organizations build these applications to execute on a PaaS on top of accessible cloud infrastructure, reusing some of our old designs.
The most important point is the inability to use Windows authentication. Application developers have been spoiled by a long history of built-in support for securing web applications with Windows credentials: the browser exchanges details of the currently logged-in user with the server, the server knows what to do with that information, and the user is logged in. This makes it very easy and effective to build apps secured against an organization's internal Active Directory.
PaaS is a platform that behaves very differently from traditional physical or virtual machine Windows deployments, whether running in a public cloud or on premises. The operating system that supports our application needs to be considered temporary, and we cannot assume that it will have the ability to join a domain, even though in many cases that operating system is what supports our cloud application.
b) Every developer who builds ASP.NET web applications should be familiar with forms authentication. This is the method of authentication where a web application shows a custom form so that users can enter their credentials. The credentials are submitted directly to the application and verified by the application, and when a user successfully logs in, a cookie marks them as authenticated for some period of time.
Running our application on a PaaS is neither inherently good nor bad for cookie authentication; however, it does produce extra load on the application. The main issue is that forms authentication requires our application to maintain and check the credentials itself, which means we have to deal with safely encrypting and storing private information.

3. What do you mean by Real-Time Application?


Ans: Real-time applications are those that require immediate processing and delivery of
data, often within milliseconds or seconds. In a microservice architecture, real-time
applications can be implemented using a combination of event-driven architecture, stream
processing, and real-time data processing.

Event-driven architecture is a pattern where services communicate with each other through
a messaging system using events. Events are discrete pieces of information that represent
something that has happened in the system, such as a new user registration or an order
being placed. Services can subscribe to events of interest and respond accordingly.

Stream processing is a pattern that allows for real-time processing of large volumes of data.
Data is processed in small chunks or streams rather than in batches. This allows for low-
latency processing of data as it is received.

Real-time data processing is the processing of data as it is generated, without delay or batching. This requires a distributed data processing framework that can handle large volumes of data in real time, such as Apache Flink or Apache Spark.

In a microservice architecture, real-time applications can be implemented by breaking down the application into smaller services that can process events in real-time. For example, a
real-time chat application might have a service responsible for handling user authentication,
another service responsible for processing chat messages, and another service responsible
for managing chat room subscriptions. Each service would communicate with each other
through events, allowing for real-time processing of chat messages and user events.

To ensure real-time performance, it is important to design services that are lightweight and
can be easily scaled horizontally to handle increased load. Additionally, services should be
designed to handle failures gracefully, as the failure of one service can have a cascading
effect on the rest of the system.

In summary, real-time applications in microservice architecture can be implemented using a combination of event-driven architecture, stream processing, and real-time data processing.
By breaking down the application into smaller, lightweight services that communicate
through events, real-time performance can be achieved.

4. Write short note on WebSockets in the cloud.

Ans: When engineers think about real-time applications, one thing that often comes to mind is the use of WebSockets to push data and notifications to a live (real-time) online UI.
While not all of this functionality is supported explicitly through WebSockets, the majority of it was a few years back, and much of it is still supported through either WebSockets or something designed to look like a WebSocket to developers.
10.3.1 The WebSocket Protocol
The WebSocket protocol appeared around 2008 and defines the means by which a persistent, bidirectional socket connection can be made between a browser and a server. This allows data to be sent to a server from a web application running in the browser, and it allows a server to send data down without requiring the application to "poll" (periodically check for updates, typically on a sliding/exponential fall-off scale).
At a low level, the browser requests a connection upgrade from the server. When the handshake completes, the browser and server switch to a separate, binary TCP connection for bidirectional communication. From the specification, and the corresponding Wikipedia page, an HTTP request asking for a connection upgrade looks something like this:
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==
Sec-WebSocket-Protocol: chat, superchat
Sec-WebSocket-Version: 13
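
If the server supports the protocol, it completes the handshake with a 101 response; the reply corresponding to this example request (from the same specification/Wikipedia sample) looks like this:

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: HSmrc0sMlYUkAGmm5OPpG2HaGWk=
Sec-WebSocket-Protocol: chat

The Sec-WebSocket-Accept value is derived by hashing the client's Sec-WebSocket-Key together with a fixed GUID, which proves the server actually understood the WebSocket handshake.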
10.3.2 Deployment Models
What does any of this have to do with the cloud? In a traditional deployment model, you spin up a server (physical or virtual), you install your hosting product (an IIS web server or some J2EE container like WebSphere, for instance), and then you deploy your application. If your application is scalable and works on a farm, you then repeat this process for every server in your farm or cluster.
When a client connects to a page on your site that opens a WebSocket connection, that connection stays open with whichever server was chosen to handle the initial request. Until the client hits refresh or clicks another link, that WebSocket should work just fine, though there are other issues that may come up with proxies and firewalls.
Suppose instead that all of your servers are running on EC2 instances in AWS. When a cloud-based infrastructure is hosting your virtual machines, they are subject to being moved, destroyed, and rebuilt at any moment.

5. How to recognize and fix anti-patterns?

Ans: Every writer needs to walk the fine line between giving realistic examples and giving examples that are small and simple enough to digest in the relatively short vehicle of a single book or chapter.
This is why there are so many "hello world" examples in books: because otherwise you'd have 30 pages of prose and 1,000 pages of code listings. Balance must be struck, and compromises must be made, in order to focus the reader's attention on solving one problem at a time.
Throughout the book, we've made several trade-offs in order to maintain this balance, but I want to come back now and revisit certain ideas and philosophies to help better inform your decision process, since you've had a chance to build, run, and tinker with all of the code samples.
In this more involved scenario, we start with mobile phones submitting the GPS coordinates of team members to the location reporter service. From there, these commands are converted into events with augmented data.
The data then moves through the system, eventually causing notifications of proximity events (team members who move within range of one another) to show up at some consumer-facing interfaces like a web page or mobile phone.
At first glance, this looks fine, and it adequately demonstrated the code we wanted to show. But if we look a little closer, we'll see that the event processor and the reality service are actually sharing a data store. For our example, this was a Redis cache.
One of the rules of microservices often quoted during architecture and design meetings is "never use a database as an integration layer." It is an offshoot of the share-nothing rule. We often talk about this rule, but we rarely spend enough time examining the reasons why it is a rule.
Another refinement is to allow the reality service to maintain its own private data, but to also maintain an external cache. The external cache would conform to a well-known specification that ought to be treated like a public API (e.g., breaking changes have downstream consequences). We might not need this improvement, but it is only one of many ways around the problem of using a data store as an integration layer between services. There is nothing wrong with using a cache to provide a subset of the data to consumers.