01 - Anthos Technology Stack - en
This document discusses Google Anthos, a technology stack for hybrid and multi-cloud environments. It includes products such as GKE, GKE On-Prem, Anthos Config Management, and Istio, which provide a unified control plane and consistency across on-premises and cloud resources. The stack aims to enable modern application development that spans different environments.
So let's talk about the Anthos technology stack and what it actually provides for us. Anthos is basically a modern application management platform. It consists of all of the products that you see now on the board, and possibly there will be more products, and some may be renamed at some point. But more likely than not you have Kubernetes Engine, we have GKE On-Prem, we have Anthos Config Management, which is a really interesting product that makes sure that your configuration across different clusters stays in sync, and we have Istio, Migrate for Anthos, and the Marketplace. All of these together are a technology stack. The way I like to think about it, Anthos is not really a product; you can't just go and "do Anthos." Anthos is more of a technology stack that you adopt and pick and choose from to create that hybrid connection, and maybe hybrid and multi-cloud later on in the year. So it is basically something that provides you with that consistency. Fun fact: Anthos means "flower" in Greek. The reason they chose that name is because flowers grow on premises, but they need rain from the cloud to flourish.

Anthos benefits: we have a technology stack that runs in your data centers next to your enterprise workloads, and it basically allows us to take the cloud and reach out into your on-premises environment, work there a little bit while you modernize the rest of your workloads. So you can have consistency across the different environments, you can keep your workloads in their own box wherever they run, you can modernize them little by little inside your on-premises environment, have that bridge into your on-premises environment, and get the best of both worlds. There is a single application model that empowers developers and operators to adopt modern technologies. I think part of the challenge that a lot of companies see is the cultural change that needs to happen when you want to adopt modern technologies. Having that technology on-premises, learning it once, and having it apply to both your on-premises environment and the cloud makes a huge difference, because you cut training time, turnaround, and all those kinds of things. There is also a central control plane that allows you to see all of these environments and manage them in one place, which is another important aspect. So that is a very high-level view of Anthos.

On the compute layer, you have container orchestration, and hopefully you will see that there is a lot of consistency across the different environments and they stay in sync. On top of that, we have the containers, and then we have the service mesh. The service mesh basically allows me to create the connection between the different services that are running in different environments, so they can collaborate, they can discover each other, and they can speak nicely to each other. On top of that, we have the configuration management, which makes sure that all of our configuration is enforced and in sync across the different environments. So we have this seamless experience across your on-premises environment and the cloud. Lastly, you can see here this cute little enterprise workload that could be a mainframe, or anything that you want to keep private on-premises and that has nothing to do with Anthos in general; the thing that does not get water from the sky in your on-premises environment. That's generally the idea behind it.
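To make that consistency point concrete, here is a minimal sketch of the kind of declarative workload definition that could be applied unchanged to a GKE cluster in the cloud or to a GKE On-Prem cluster. The application name, namespace, and image are hypothetical and only illustrate the idea of one application model across environments.

```yaml
# Hypothetical workload definition; the same manifest can be applied to a
# GKE cluster in the cloud or to a GKE On-Prem cluster, which is what
# gives you the consistent application model described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-api        # hypothetical application name
  namespace: shop            # hypothetical namespace
spec:
  replicas: 3
  selector:
    matchLabels:
      app: inventory-api
  template:
    metadata:
      labels:
        app: inventory-api
    spec:
      containers:
      - name: inventory-api
        image: gcr.io/example-project/inventory-api:1.0.0   # hypothetical image
        ports:
        - containerPort: 8080
```

Applying this with kubectl apply -f against either cluster declares the same desired state, so operators and developers only have to learn one model.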
Diving a bit deeper into the nitty-gritty here, we have your on-premises data center and, on the left, Google Cloud Platform. So we first talked about GKE On-Prem and Google Kubernetes Engine. On top of that, you can also use GCP Marketplace, which allows you to install applications from a centralized place into your on-premises environment or into your cloud. You might want to utilize Cloud Interconnect or some other kind of connection directly into Google, so that the connection between your on-premises environment and the cloud is secure and private and you don't have to egress to the Internet. But that is optional, and we will talk about how you can create that connection between GKE On-Prem and GKE in the cloud later on. The interesting bit here is that you have the GKE dashboard, which is a centralized place where you can manage and control both of these products and see them in a consistent way.

On top of that sits Istio; we have Istio open source and we have Istio on GKE, and that creates the service mesh: the connection between the services across the different environments. The environment itself, the cluster itself, works by itself. If you remember, a cluster is almost a sovereign island. But we want the services, the business logic, to communicate with each other on top of these clusters, and therefore we need a way to use that service mesh to our advantage to create that connection. That is what Istio provides us with (there is a small illustrative sketch of this at the end of this section).

Lastly, you can see here that we have a Git repository somewhere. Maybe it's on-premises, maybe it's in the cloud, and that is the single source of truth for all of your configuration. So if you have any configuration that you would like to enforce consistently across all of your clusters, on-premises and in the cloud, that repository is the source of truth, and it propagates into your clusters and makes sure that this is so. If you remember, in Kubernetes we have the idea of a declarative syntax: I want the world to look like this. That is exactly how it works with configuration management. So you say, "I want all of my SRE team to have a role binding on this namespace," and it propagates to all of your clusters. It also makes sure that no one messes with that configuration: if somebody deletes that policy by mistake, it re-applies it declaratively, as you would expect from a Kubernetes standpoint.
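As a rough sketch of that role-binding example, here is the kind of manifest that could live in the Config Management Git repository and be synced to every cluster. The binding name, namespace, and group are hypothetical; only the built-in "edit" ClusterRole is a standard Kubernetes object.

```yaml
# Hypothetical policy stored in the config repo and synced to all clusters.
# If someone deletes this RoleBinding on one cluster, the sync agent
# re-applies it so the declared state is restored.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sre-team-edit          # hypothetical binding name
  namespace: payments          # hypothetical namespace
subjects:
- kind: Group
  name: sre-team@example.com   # hypothetical SRE group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                   # built-in Kubernetes "edit" role
  apiGroup: rbac.authorization.k8s.io
```

The repository holds manifests like this as the declared state, and the agent running in each cluster keeps the live state matching it, which is the "no one messes with that configuration" behavior described above.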
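Going back to the service-mesh layer for a moment, here is a hedged sketch of one way a service running on-premises could be made addressable to workloads in the mesh by a stable hostname, using Istio's ServiceEntry resource. The hostname, port, and address are hypothetical, and this is only an illustration of the general mechanism, not necessarily how Anthos wires up multi-cluster discovery.

```yaml
# Hypothetical ServiceEntry: adds an on-premises service to the mesh's
# service registry so cloud-side workloads can call it by name,
# e.g. over a private Interconnect link. Illustrative only.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: onprem-inventory
spec:
  hosts:
  - inventory.onprem.example.com   # hypothetical internal hostname
  location: MESH_INTERNAL
  ports:
  - number: 8080
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.20.0.15            # hypothetical on-premises IP reachable privately
```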