Transcript Oci

- Oracle Cloud Infrastructure operates 16 regions globally across the Americas, Europe, Asia, and government regions, with plans to expand to 36 regions over the next year.
- Each region contains isolated availability domains which provide redundancy; traditional regions contain three availability domains, and new regions sometimes contain one.
- Oracle's network architecture uses custom silicon for "off-box network virtualization," allowing bare metal instances and other services with near-zero performance overhead.


COURSE:

Oracle Cloud Infrastructure Architect Associate Workshop


9h 39m

GETTING STARTED WITH OCI


Hello, everyone. My name is Rohit Rahi. And I'm part of the
Oracle Cloud Infrastructure team. In this video, we are going
to look into Oracle Cloud Infrastructure at a very high level.

So first, let's look at the Oracle Cloud Infrastructure global footprint. Currently, Oracle operates 16 regions globally. This includes 11 commercial regions and five government regions.

You can see the regions listed here. In the Americas, we have four regions. In Europe, we have three regions. In Asia, we have four regions. And then for government, we have two US government regions and three US DoD regions. Over the next 13 months or so, we are planning to open 20 new regions, which includes 17 commercial regions and three US government regions.

There are three main reasons why we are doing this. First, we really want to give our customers a truly geo-distributed footprint so they can run their applications closest to their users. The second is meeting regional compliance needs. And the third is giving customers an option for in-country disaster recovery solutions. So as planned, 11 of the countries or jurisdictions served by local cloud regions will have two or more regions to facilitate these in-country or in-jurisdiction disaster recovery capabilities.

So as I was saying, if you see right here on the screen, Japan today has one region, but in the next 13 months or so it will have a second region to provide this in-country DR capability. The same thing holds for India, for Brazil, and for many other countries throughout the world.

So as we were explaining on the previous slide, by the end of next year, calendar year 2020, we will end up with 36 Oracle regions. And these regions are listed here along with some of the government regions. And one thing I did talk about is we also have an interconnect with Azure. So today we have an interconnect with Azure in the US, in the Ashburn region, and in the London region. Over the next 13 months or so, we are going to expand that to multiple places in the US and also in Asia and Europe, just to give customers an extra option to connect to Azure regions if they're running applications across Azure and Oracle Cloud Infrastructure in a truly multi-cloud fashion.

So let's look at the core concepts of a region. A region is comprised of isolated, completely independent data centers called availability domains. And as you can see here, each traditional Oracle Cloud Infrastructure region is comprised of three different availability domains.

Now, within an availability domain, we group hardware and infrastructure together into this construct called a fault domain. A fault domain is a failure isolation boundary within an availability domain. Each availability domain, as you can see here, has at least three fault domains.

This number is well-suited for hosting quorum-based replicated storage systems and consensus algorithms, which are some of the basic primitives of fault-tolerant systems. For example, if your application system uses groups of three nodes, then place each node in a separate fault domain. If your system uses larger groups, then distribute the nodes from each group as evenly as possible across all the fault domains in an availability domain.
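The even-distribution guidance above can be sketched as a small helper. The fault-domain names follow OCI's FAULT-DOMAIN-n naming convention, but the function itself is just an illustration, not part of any SDK.

```python
def assign_fault_domains(nodes, fault_domains):
    """Round-robin nodes across fault domains so the per-domain counts
    differ by at most one -- the 'as evenly as possible' rule above."""
    placement = {fd: [] for fd in fault_domains}
    for i, node in enumerate(nodes):
        fd = fault_domains[i % len(fault_domains)]
        placement[fd].append(node)
    return placement

fds = ["FAULT-DOMAIN-1", "FAULT-DOMAIN-2", "FAULT-DOMAIN-3"]
# Seven nodes over three fault domains gives counts of 3, 2, and 2.
placement = assign_fault_domains([f"node{i}" for i in range(7)], fds)
```

A three-node quorum system would get exactly one node per fault domain, which is the placement the lecture recommends.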

Now, I said traditional Oracle Cloud Infrastructure regions, because the regions we have opened until now, as you can see here, have always comprised three availability domains. Going forward, we have chosen to launch regions in new geographies with one AD. Why are we doing this? To increase our global reach quickly. So for any region with one AD, like these regions listed here, a second AD or region in the same country or geopolitical area will be made available within a year to enable further options for disaster recovery and data residency.

Now let's look inside an AD at the high-scale, high-performance network. So as you can see here, we have a physical network, like any other cloud provider. It's a non-oversubscribed network, so we don't run into things like noisy neighbor problems, and it operates at a very high scale. You can see some numbers here. And we have a predictable, low-latency, high-speed interconnect between hosts, and between availability domains if a region has more than one.

Now, we have made some drastic changes to how virtual networking is done over this physical network with Oracle Cloud Infrastructure. We call this capability off-box network virtualization. As the name implies, we put all the virtualization out into the network using custom silicon cards.

This includes all the storage and network I/O virtualization, and it gives us nearly zero performance overhead. More generally, it enables the next layer up: we can take any physical form factor and plug it into our virtual network.

So as you can see here, this is the basis that lets us take bare metal instances, and engineered systems like Exadata, and plug them into our environment without making any changes. If this were not the case, we would need to slap a hypervisor on Exadata to make it work. We don't have to do that because of this capability called off-box network virtualization. It is a massive enabler for us to deliver these classes of services and meet our goals around performance and security.

So until now, we were talking about the global footprint, our physical network, and our virtual network. The thing which really makes the cloud shine is the infrastructure services which run on top of this global infrastructure. So as you can see here, we have a very broad and deep platform: identity; different classes and capabilities of networking; different classes and form factors for compute, whether it's bare metal, virtual machines, dedicated hosts, or Kubernetes; various classes of storage, such as local storage, block storage, file storage, object storage, and archive storage; of course, various flavors of databases, such as bare metal, virtual machines, Exadata, and autonomous databases; serverless offerings; analytics offerings; a bunch of next-layer services; a bunch of security services; and so on, and so forth.

The whole idea is, every cloud is becoming a platform. And you need these different services, a very broad set of services. And you also need a lot of functionality within each of these services.
So this slide just talks about each of these various services at a very high level. Over the next few lectures and modules, we will be diving deep into each of these services. And you can see that each of them has pretty detailed, rich functionality.

And if you go to this URL, you can see that we have something like 50-plus services today in Oracle Cloud Infrastructure. And over the next year or so, we have a very aggressive roadmap, and we are releasing features and services at a very fast velocity.

So what is our differentiation? Because this comes up all the time when we talk about Oracle Cloud Infrastructure. So it's always good to think about differentiation on two dimensions, on the technical side and on the business side.

So on the technical side, we already talked about performance. Enterprises require the scale and the performance, and we believe that Oracle Cloud Infrastructure is truly differentiated here. We talked about how we use the custom silicon cards to give you zero overhead through this off-box network virtualization.

We were the first cloud to launch bare metal instances. And our compute service actually runs on top of this, so it gets all the benefits of the bare metal offering, with the best performance possible, et cetera. We use local NVMe storage with that to give you millions of IOPS. And again, we'll talk about this in more detail later.

Everything storage-wise on Oracle Cloud Infrastructure is SSD-based, and we use newer protocols like NVMe to give you really fast performance. There is no network, CPU, or memory oversubscription anywhere. So this, again, ties very well into our performance story.
Oracle Cloud Infrastructure is also battle tested. Some of our internal assets and offerings, particularly on the SaaS side, like NetSuite, are running on Oracle Cloud Infrastructure. They are running at a massive scale.

And so what you are using as customers has really been battle tested, whether it's operational activities, scale, resilience, reliability, or security. We have battle tested these ourselves by running our own apps on top of Oracle Cloud Infrastructure. When it comes to database options, whether it's bare metal, virtual machines, Exadata, or RAC, none of these exist anywhere else outside Oracle Cloud Infrastructure. So if you are an Oracle customer using some of these offerings today on premises, you have to look nowhere else, because you could run the same offerings and get better price performance running on Oracle Cloud Infrastructure.

And again, the last point here: we are truly an enterprise cloud, because we are supporting all these enterprise apps which no other cloud supports today. On the business side, we have very aggressive and predictable pricing. It's very simple pricing.

Globally, we have the same pricing everywhere. The pricing model is easier to understand. And finally, it's cheaper than some of the cloud providers out there. We have SLAs on performance, management, and availability.

Most of the other cloud providers give you SLAs only on availability. We have three dimensions because, again, we believe customers need those different dimensions as well. We have some licensing innovations, like bring your own license-- if you have licenses on prem, you can just use them in the cloud-- and things like universal cloud credits.

And finally-- this is not a small one-- you get support through one org. The reality is that most enterprises are going to run in a hybrid environment. So if you're running something on premises and something in the cloud, you have one support model whether it runs in the cloud or on prem, and you get support through one channel, one mechanism.

So that's all for a quick introduction to Oracle Cloud Infrastructure. Here are some links on the always free tier, some of the trainings, labs, and YouTube locations where you can watch some of these videos. If you have some time, please do join me in the next lecture, where we talk about Oracle Cloud Infrastructure identity and access management. Thank you.

IDENTITY AND ACCESS MANAGEMENT


1. IAM
Hi, everyone. Welcome to this lecture on OCI Identity and
Access Management. My name is Rohit Rahi, and I'm part of
the Oracle Cloud Infrastructure Team.

So first, let's look at what the Identity and Access Management service is. The Identity and Access Management service, or the IAM service, enables you to control what type of access a group of users has, and to which specific resources-- so what type of access, for which group of users, and to which specific resources. Now let's look into some of the terminology. A resource is a cloud object that you create and use in Oracle Cloud Infrastructure. So compute instances, block storage volumes, and virtual cloud networks are each represented as resources, as cloud objects.

Each OCI resource has a unique, Oracle-assigned identifier called an Oracle Cloud ID, sometimes called an OCID. Now, the service uses traditional identity concepts, such as Principals, Users, Groups, Authentication, and Authorization, and there is a new capability called Compartments. We'll look into each of these in greater detail.
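As a hedged aside, OCIDs follow a documented dot-separated structure, roughly `ocid1.<resource-type>.<realm>.[region].<unique-id>`, where the region part is empty for region-independent resources such as users and compartments. This little parser is purely illustrative (the sample OCIDs are made up), and the real format reserves a currently-empty "future use" slot, which this sketch ignores.

```python
def parse_ocid(ocid):
    """Split an OCID of the form ocid1.<type>.<realm>.[region].<unique-id>
    into its named parts. Raises ValueError for anything unrecognized."""
    parts = ocid.split(".")
    if len(parts) != 5 or parts[0] != "ocid1":
        raise ValueError("not a recognized OCID")
    return {
        "version": parts[0],
        "resource_type": parts[1],
        "realm": parts[2],
        "region": parts[3] or None,   # empty for region-independent resources
        "unique_id": parts[4],
    }

info = parse_ocid("ocid1.instance.oc1.phx.exampleuniqueid")
# A compartment OCID has an empty region field, e.g.
# "ocid1.compartment.oc1..exampleuniqueid".
```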

So this graphic here, the visual here, tries to show the main components of the service. As we talked about, for the Identity and Access Management service, the main things to keep in mind are Principals. You can think of Principals as groups of users, or instances-- we'll talk about why instances are here-- which access a set of resources.

And to reach these resources, the principals need a specific kind of permission. Basically, you can think about it as: what are the permissions requested by the principals? We represent them in OCI using this construct called policies. And the policies work on this new construct called compartments.

So you can think about it this way-- resources have a logical place where they live, which is the compartment; the policies act on the compartment; and you basically attach the policies to the groups or instances so that these users can access these resources.

So this graphic here tries to show the various components of the Identity and Access Management service in a very visual manner. And don't worry if you don't understand all the details. We'll look into them in the subsequent slides and demos.

So let's look into each of these in a little bit more detail. A Principal is an IAM entity that is allowed to interact with OCI resources. And like we said, everything you do in OCI is a resource. So whether it's compute instances, block volumes, or virtual cloud networks, each of them is represented as a resource.

Now, there are two kinds of Principals. One is IAM users, which are your users who access the cloud environment, and the others are instances, which we call Instance Principals to distinguish them from just normal compute instances or database instances.

Now, users are persistent identities which you set up through this service to represent individual people. Or they can be applications as well. When you sign up for an OCI account, the first IAM user is the default administrator. And the default administrator sets up other IAM users and also groups. It seems very logical.

Now, there is this security principle of least privilege enforced for users. What does that mean? It means two things. Number one, users have no permissions until they are placed in a group. It can be one group, or the same user can appear in multiple groups. That's number one.

Number two, the group has at least one policy with permission to the tenancy-- which means the whole account-- or to a specific compartment, which is a sub-portion, a section, of your tenancy. And we'll look into what, exactly, compartments are. But those two conditions have to be valid. Otherwise, users by themselves cannot do anything within Oracle Cloud Infrastructure.

Now, what is a group? As it seems very logical, a group is a collection of users who all need the same type of access to a particular set of resources. So you can create any kind of group. You can create a group for database admins, storage admins, or virtual cloud network admins, or you can even create groups tied to your tenancy, or compartments, or regions. Again, you have complete flexibility.

But "group" basically means a collection of users who need the same type of access to a particular set of resources. That's the guideline. The same user, like I said, can be a member of multiple groups.

Now, there is a special kind of Principal which is called an Instance Principal. And we'll discuss this more in the level-200 video and modules, but just to give you an idea, Instance Principals basically let instances, and the applications which are running on those instances, make API calls against other OCI services.

So for example, on compute, you have an application running, and it needs to go and access the storage layer, the Object Storage service. An Instance Principal lets you make those API calls without the need to configure user credentials or keep a configuration file on the instance, because the storage service needs to authenticate the application, right?

So you make the instance a principal, and so it can make API calls without really needing user credentials or configuration files. Otherwise, you run into issues like rotating your credentials, and it's not a very secure mechanism. And again, we'll talk a little bit more about Instance Principals in our Identity and Access Management level-200 modules.

So let's look at the different authentication mechanisms which are provided by OCI. The first one is very simple, something you are all familiar with from using various web services: you authenticate a Principal by providing a user name and password.

So you use the password to sign in using, let's say, a web console. You get a one-time password when you set up your account, and at your first log-in, you are prompted to reset your password. It seems very logical, very familiar from how you use some of the web properties.

The second mechanism of authentication is something called an API Signing Key. The use case for this is when you are using the OCI API, the Oracle Cloud Infrastructure API, in conjunction with the SDK or the CLI. So when you're running some commands through the command line interface, you are basically using the API signing keys to authenticate who you are and from where you are running the CLI.

And as you can see here, the key is an RSA key pair in the PEM format, and you have some restrictions on the length, et cetera. In the OCI console, you copy and paste the contents of the public key file, and the private key file you keep with the SDK or with your own client to sign your API requests. Again, very similar to how some of the other web services operate.
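To make the signing-key setup a bit more concrete: when a request is signed, the key is identified in the Authorization header by a keyId built from the tenancy OCID, the user OCID, and the public key's fingerprint. This is a minimal sketch of just that identifier; the OCIDs and fingerprint below are placeholders, and the actual cryptographic signing step is omitted.

```python
def signing_key_id(tenancy_ocid, user_ocid, fingerprint):
    """Assemble the keyId used in OCI's request-signing Authorization
    header: <tenancy OCID>/<user OCID>/<key fingerprint>."""
    return "/".join([tenancy_ocid, user_ocid, fingerprint])

key_id = signing_key_id(
    "ocid1.tenancy.oc1..exampletenancy",
    "ocid1.user.oc1..exampleuser",
    "20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34",
)
```

The fingerprint is what you see in the console next to each uploaded public key, which is how the service knows which of your keys verified the signature.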

The third one is something specific to Oracle Cloud Infrastructure, and it's authentication using auth tokens, authentication tokens. Now, these are Oracle-generated token strings used to authenticate with third-party APIs that do not support the OCI signature-based authentication we just looked into-- the API signing keys.

So what are good examples? Well, a good example is our own autonomous offering, which doesn't support the OCI signature-based authentication. So for example, if you are using an autonomous data warehouse and you want to, let's say, pull data from object storage, you would have to write this small piece of code here so that your autonomous data warehouse can authenticate against object storage.

So you have a username here, and instead of the password, you provide your authentication token, which is provided by OCI. One thing to keep in mind is that authentication tokens do not expire.
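As a hedged sketch of the "username plus auth token instead of a password" pattern: third-party-compatible APIs that accept an auth token typically take it as the password in ordinary HTTP Basic authentication. The username and token below are made up, and no real endpoint is contacted; the snippet only shows how the credential header is formed.

```python
import base64

def basic_auth_header(username, auth_token):
    """Build an HTTP Basic Authorization header value from an OCI
    username and an auth token used in place of a password."""
    creds = f"{username}:{auth_token}".encode("utf-8")
    return "Basic " + base64.b64encode(creds).decode("ascii")

header = basic_auth_header("traininguser1", "example-auth-token")
# This value would be sent as:  Authorization: <header>
```

Because the token never expires, rotating it is a manual step: generate a new token in the console (as shown in the demo below) and delete the old one.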

OK, all right, so we talked about authentication. Let's talk about authorization. Authorization specifies the various actions an authenticated principal can perform. In OCI, authorization is defined by providing specific privileges in these things called policies, and then you associate these policies with the Principals.

Like we saw with users, policies also support the security principle of least privilege. By default, users are not allowed to perform any actions. Policies cannot be attached to users themselves, but only to groups. And we'll look into what that, exactly, means.

Now, policies are comprised of one or more statements which are in a very human-readable format. So what do these policies look like? In the very simplest manner, the simplest policies would be written something like this.
So you allow a group-- as we said, policies operate at the group level, not at the user level. You provide the group name here, and then there is a verb-- we'll talk about what this means in the next module. Then there is the resource type-- what kind of access you want, and you can be very granular here-- and whether you want access in the tenancy or in a sub-section of the tenancy, a compartment. And then, you can also make it more complex by adding things like conditions.

So one thing you're seeing here is there is no "deny" policy. Everything is denied by default. So that's, again, the security principle of least privilege. You have to explicitly write a policy saying, allow this. Otherwise, if you don't write a policy, nothing can be done by your users.
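The default-deny model described here can be sketched in a few lines: access is granted only if some allow statement matches, and there is no deny statement to write. The tuple representation of a statement is a simplification for illustration; real policies have richer matching (verb tiers, compartment hierarchies, conditions).

```python
def is_allowed(statements, group, verb, resource, location):
    """Default deny: grant access only if an allow statement matches."""
    return any(s == (group, verb, resource, location) for s in statements)

policies = [
    ("NetworkAdmins", "manage", "virtual-network-family", "tenancy"),
]

# A matching allow statement grants access...
assert is_allowed(policies, "NetworkAdmins", "manage",
                  "virtual-network-family", "tenancy")
# ...and anything without one is denied by default, like the
# policy-less Traininguser1 in the demo that follows.
assert not is_allowed(policies, "Traininggroup", "manage",
                      "virtual-network-family", "tenancy")
```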

And then, there are also concepts of policy attachment and policy inheritance. And we'll, again, talk about these subsequently in the next modules.

So with that, let me just log into Oracle Cloud Infrastructure, create a user, create a group, not write a policy, and see what the user can do with just that.

So as you can see here, I'm trying to log into Oracle Cloud Infrastructure. The first thing you do here is provide the URL, console.us-ashburn-1.oraclecloud.com. I'm logging into the Ashburn region; you could log into Phoenix or some other region, and your URL might be different.

So the first thing it's asking me for is my cloud tenancy. And the simplest way to think about the cloud tenancy is that it's your cloud account. This is my internal Oracle account, which I use for myself and my team. And as I provided that, I'm logged in, because I had previously provided my user name and password, so it authenticates me into the system.

Now, as you can see here-- this is the Oracle Cloud Infrastructure-- you can see various regions here. And this is just a partial set of regions. If I click on Manage Regions, I can subscribe to the other regions which are operational right now.

So as you can see here, first thing I want to show you is there
is something called a home region. This is where you signed
your contract. This is where you probably got started. I have
been here 3 and 1/2 years. So this was our first region-- US
West (Phoenix). This is my home region. And right now, I'm
logging into US East (Ashburn). And you can see all these
different regions.

Now, you see these buttons saying, Subscribe to This Region. So I click here, and now I'm subscribed to the Australia East region. And I'll subscribe to a couple of other regions as well-- Brazil, Mumbai, Seoul, and Zurich. So in total, I should have 11 regions, because these are the 11 regions which are operational today, and then we have five regions which are US government and Department of Defense regions.

So this is where you subscribe to your regions. Now let me change to Ashburn, because this is where we logged in, right? So I'm in the Ashburn region.

On the left-hand side, you can see the menu. And the menu shows the various services we have available in OCI. So there is core infrastructure; there are databases; there is data and AI; solutions and platform; and then there are services around governance and administration.

We'll look into many of those services in subsequent modules. Right now, let's look into Identity and Access Management. So click on Identity here, and you can see various tabs appear, right? There are Users, Groups, and Dynamic Groups-- dynamic groups are how you make use of instances as principals; this is where you define Instance Principals. There are Policies, Compartments, and so on, and so forth.

So the first thing I'm going to do is click on Users. And you can see here a bunch of people who are on my team, and myself. We are all listed here as users. And so let's go ahead and create a new user here.

Because we are doing these training videos, I'll call this user Traininguser1, and I'll use the description Traininguser. And I need to provide an email ID for password recovery, so let me just provide my Oracle ID here. All right, and then I create this user.

And now this particular user has been created. You can see Traininguser1. Now, for this user, I could do things like multi-factor authentication. And you can see auth tokens here-- I can generate an auth token. As we were discussing, say we want an auth token for Autonomous Data Warehouse: I could do this, right? And I would have to copy this for my own records.

But here is the thing: I created a user, but there is nothing I have created beyond it, right? So if I log in as this user, I would not be able to do anything.
So let me first create a password. And this is a first-time, one-time password. So I'll copy this, and let me go ahead and create a group here as well. I go into my Identity menu, and I click Create Group.

And you can see some groups here, like the Administrators group. When you create your account for the first time, you get this group. Let me create a training group here.

So I'll create a training group, and this is a group for training users. And I'll create this group.

And now what I can do is add my user to this group. So for the user which I just created, Traininguser1, I can come here and add the user to the group.

Now let me open an Incognito window. Until now, I have just created a user and created a group. I have not created anything beyond that. I have not written a policy for that particular group.

So let me log in here using that particular user we just created. The first thing it will ask me for is a username and password, which makes sense because we just created that user, and hopefully I have the correct password with me.

And as I did that-- it was a one-time password, so it's asking me to change my password. And there are certain restrictions around what I can do as far as a new password is concerned. And let me just make sure that I have the same password here.
And now, you can see I changed my password, and I am logged in as Traininguser1 here, right? If I click here on my profile, you can see that I am Traininguser1.

So the thing I want to show you is, if I come here-- and of course, we have not talked about virtual cloud networks, et cetera, et cetera-- you can see that I don't have any ability to create any networks. And the first thing it says is, choose a compartment.

So I chose the root compartment. What does that mean? We'll talk about that in the next module. But you can see here that it says "resource not found" or "authorization failed." And if I click on this and try to create a network, the request will fail, because I have not written a policy-- I have not authorized this particular user, Traininguser1, to do anything. So that is why you can see that it says, "Authorization failed or requested resource could not be found."

So in the next module, what we are going to do is talk about policies, write a policy, and then see what different kinds of activities this particular user can perform.

Thank you for joining for this lecture. If you have time, please
join the next lecture on IAM policies. Thank you.

2. IAM POLICIES
Hello, everyone. Welcome to this module on IAM policies. My
name is Rohit Rahi, and I'm part of the Oracle Cloud
Infrastructure team.
In its simplest format, the policy syntax looks something like this: allow subject to do something-- a verb-- on specific resource types in a location, which can be a tenancy or compartment. And you can also add a condition here to make it more complex. But let's break it down into simpler terms, and then we can look into each part in greater detail.
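The syntax just described can be captured in a tiny helper that assembles a statement string. This is only a sketch of the statement shape from the lecture (allow subject, verb, resource type, location, optional condition); the group and condition values are made up.

```python
def policy_statement(group, verb, resource_type, location, condition=None):
    """Assemble an OCI-style policy statement:
    Allow group <group> to <verb> <resource-type> in <location>
    with an optional trailing 'where <condition>' clause."""
    stmt = f"Allow group {group} to {verb} {resource_type} in {location}"
    if condition:
        stmt += f" where {condition}"
    return stmt

policy_statement("NetworkAdmins", "manage", "virtual-network-family",
                 "tenancy")
# → "Allow group NetworkAdmins to manage virtual-network-family in tenancy"
```

Adding a condition clause is how the "make it more complex" step works, e.g. restricting a statement to requests from a particular region.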

So as you can see here, the first thing is there is no deny syntax, because everything is denied by default. So you have to explicitly allow. Otherwise, as we saw in the previous module, if you don't write a policy, that's as good as writing a deny policy. You are basically locked out. Your users cannot do anything in the Oracle Cloud Infrastructure environment.

So the subject here is your group, your group name. And for the verb, we have basically four types, going all the way from inspect, read, and use to manage. Inspect basically means you can list your resources. Read and inspect are in most cases very similar; read gives you some extra capabilities, like getting the metadata for the actual resources.

And with use-- when you write the verb "use"-- you have the ability to read plus the ability to work with existing resources, like updating the resource, et cetera, depending on the type of resource you are trying to use. And then, manage includes all the permissions for the resource.

So if you're not really sure which verb to use, you can go with manage, or you could go with use. Or, if you want to restrict the access, you could go with something like inspect or read, depending on your use cases.

Now, there are basically two kinds of resource types in OCI. One is the aggregate resource type, and the second one is the individual resource type. As you can guess, aggregate resource types are tied to the various resources you have in OCI. The simplest way to look at it is, if you want to give somebody access to everything in OCI, just use the resource type called all-resources, which means every resource in OCI.

If that is not the access you want to give your users, you can go granular. So if you just want to restrict access to databases, you could say database-family, instance-family, virtual-network-family, and so on, and so forth.
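As a hedged illustration of aggregate versus more granular resource types, statements might look like the following. The group names and the Projects compartment are made up for the example:

```
Allow group CloudAdmins to manage all-resources in tenancy
Allow group DBAdmins to manage database-family in compartment Projects
Allow group NetworkAuditors to inspect virtual-network-family in tenancy
```

The first statement is the broadest possible grant, while the second and third narrow access first by resource family and then by verb.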

If you want to go very granular within, say, the virtual network family, you could have VCNs, subnets, route tables, and so on, and so forth. The whole idea is you can be very granular and provide role-based access control. And you can read more in the documentation about which specific resource types are available, what the individual resource types are, and how you combine the various verbs with these resource types. There are good documentation pages on those.

Now, in reality, what is happening is, when you write a policy giving a group access to a particular verb and resource type, you're actually giving that group access to one or more predefined permissions. We're just making it simpler for you. So what does that look like?

So for example, look at something like the volumes family, the block volume family. The various verbs which are possible-- again, this shouldn't be a surprise-- are inspect, read, use, and manage, the four tiers we had on the previous slide. And behind the scenes, you have permissions tied to each of these verbs, and again, behind the scenes, the permissions are tied to the API operations.
So for example, as you go from INSPECT, to READ, to USE, to
MANAGE, the level of access generally increases. So you see
something like READ plus. So USE has everything here on
INSPECT and READ, plus ability to update, as I was saying in
the previous slide, ability to write, but not delete, for example,
or create. So if you go to MANAGE, you could use everything
in USE, plus you get two capabilities-- to create volumes and
delete volumes.

Now everything in the cloud is you're calling APIs, right? It's
all the web services. Now, each API operation requires the
caller to have access to one or more permissions. So for
example, if you want to call ListVolumes or GetVolumes, these
two APIs, you need to have this permission called
VOLUME_INSPECT. If you want to create a volume, the
CreateVolume API needs access to the VOLUME_CREATE
permission. And how do you get that permission? By writing a
policy which says, allow a specific group to manage
volume-family in either a tenancy, or account, or compartment.

So this is how, behind the scenes, policies work. This portion
here, on permissions, API operations are all abstracted. So
you don't have to go this granular and figure out which APIs
are tied to which permissions and which permissions are tied
to which verbs, because it can get very complex, very soon.
So it just makes life easier by giving you verbs. But
remember, behind the scenes, there are permissions
predefined, and then the APIs require those specific
permissions in order to be executed.

OK, so all right, let's look at some of the common policies.
Very simple policy-- let's start with you want network admins
to manage a cloud network, right? So you would write
something like, allow group-- let's say this is a group here,
group name-- to manage-- this is your verb-- highest level of
access, virtual-network-family.

It means everything which is provided by virtual networks--
your subnets, your route tables, your gateways, your different
kinds of constructs, all part of the virtual-network-family.
And you could write this policy either in a compartment or a
tenancy, right? In this case, I'm doing it in a tenancy, which
means it's at account level, but you could also go very
granular.

But if you want users to launch compute instances, it gets a
little bit more complex, right? First thing when you launch a
compute instance is you need virtual-network-family because
an instance is launched within a subnet. So you need to write
a policy here.

Now, you see in this case, I don't need manage because this
user is just using a subnet. It's not creating a new subnet in
most of the cases, right? So use is fine because the user
doesn't need to create or delete the subnet or the virtual
cloud network. And then, you need these other verbs, right?

So you need a manage here because you can create an
instance, you can delete an instance, right? So you need that
manage. And then you need a couple of other policies. For
example, you probably need block volumes if you're creating
any application.

So you would use the volume-family. And again, see, with
volume-family you are not creating a new volume. Probably,
somebody has created that volume. You are using it, so you
say use here.
So you can see how you can write simple policies and you can
write complex policies. And the link here, if you go to this
link, you can see a collection of various policies. There's a
whole list of something like 50 or 100 policies, different
scenarios which are all listed there.

Lastly, there's the concept of advanced policy syntax, where
you can write a conditional policy. Now, we, again, in the
level-200 videos, we go into more details, but let me just
quickly cover this here. So when you write a condition, you
need to add variables. And when you are adding variables,
there are two kinds of variables you can use. One is called
request. One is called target.

So request is something which is relevant to the request
itself. OK, so what does that mean? If you have an operation,
you are listing, you are doing some API operation, like you're
trying to list users, you could say something like
request.operations.

This represents the API operation which you are executing--
for example, list-users. Target.group.name represents the
name of the group. So you have target here, group.name is
the name of the group, and to specify that, you would use a
keyword like target.group.name.

So a good example is-- and as you see here, a variable name
is prefixed accordingly with either request or target, followed
by a period. So it's request.operation, or target.group.name.

Now, you can see an example here. Allow group-- this is a
group name-- to manage all-resources-- meaning they have
permissions to do everything; basically, they are
administrators for this particular account-- where
request.region-- meaning this is the region where the request
is being made-- is Phoenix. So this means you have an admin
for Phoenix, but they cannot do anything in the other 10
regions which are there.

So again, if you go to this link, you can see the policy
differences, and you can read a lot about advanced policies.
Let me close this module by quickly showing you the demo
where we left in the last module.

So in the last module we created a user, Traininguser1, and
we created a group called TrainingGroup, and we added that
user to the training group. And that's all we did. We didn't
write a policy. We didn't touch anything on the
compartments. None of that, right?

So first thing let's do here is write a policy. Now, you can see
there are three policies here. And I'm still in the root
compartment. What does that mean? We'll talk more in the
next module, so don't worry. But policies need to live
somewhere. You need to attach it either to a compartment or
a tenancy. You just cannot keep-- leave it hanging
somewhere. You have to attach it.

Otherwise, if you create a compartment, for example, and
don't attach a policy, it's not useful. So let's create a policy
here, training policy. And this is my training policy. And I
need to write a statement here, right?

So the first statement I want to write is allow group-- we've
created this group just a while back in the previous module--
manage-- now, I don't want to do all resources. I just wanted
to do virtual network family. And as I said, let's allow this in
the whole tenancy, because-- it shouldn't matter, but let's
just do this.

So we created a policy here, right? So what it's saying is this
particular group and all the users which belong to that group
can create and manage a virtual network family. So they can
create one, they can delete one, they can update one, and all
that, right?

So I'm logged in as traininguser1 here, and earlier in the
previous video, you saw that this user could not do anything
in this account because they had no policies written for their
specific group. But as you can see here, now I see various
networks. And if I click here and I say, I want to create a test
VCN, and root is fine, I want to just create a network and
click Create Virtual Cloud Network. You could see that I'm
able to create a virtual cloud network here. It's that simple,
right? The policy basically unlocks my permissions so I can
execute it and I can create a network.

But if I go to the compute instance and I want to, let's say,
create a compute instance, you'll see it's grayed out here. I
cannot click on Create Instance and create one. Even though
it will show you a menu or something, you probably would
not be able to create an instance here, right? If I click on that,
it will not let me do that. Why? Because I really haven't
written any permissions to allow me to create instances.

Let's change that. If I go back, I can add a policy statement
here. And I have one which I just wrote before I was doing
this class. So I create this policy. It says, allow group
TrainingGroup to manage instance-family. And again,
remember, with instance-family, you can create an instance,
you can delete an instance. You can basically do everything
around the instances.
Now, for the first display, just look here. Earlier, it said I don't
have any permissions. If I refresh this page, you can see that
I would start seeing instances there, and I can actually go
and Create Instance now, right? It would let me do that.

So let me just quickly run it here. Give it a default name,
choose an AD. I'm not even using some of these values there.
This is the network we just created. I'm not giving any SSH
keys or anything. It's just a dummy example. And I click
Create. And you would see, within a few seconds, my
instance would be up and running.

So this is a quick example. We wrote two policies, first to
unlock the virtual cloud network, give the permissions to
create a VCN, and then we couldn't do anything beyond VCN.
We wrote another policy to give me access to create instances,
and now I am able to create instances.

So this is how policies work. It gets more complex with
conditional policies, where you attach the policies, and all
that. We'll cover those in the subsequent lectures.

Thanks for joining this particular lecture. If you have some
time, please join the next lecture, where we talk about
compartments. Thank you.

3. IAM COMPARTMENT
My name is Rohit Rahi, and I'm part of the Oracle Cloud
Infrastructure Team.

So a compartment is a collection of related resources-- like
VCN, compute instances, database instances-- that can be
accessed only by groups that have been given permissions
by-- for example, by an admin in your organization.
Compartments-- the whole idea of compartments is to help
you organize and control access to your resources.

So what are some of the design considerations? Now, as we
saw at the beginning of this lecture series, each resource
belongs to a single compartment, which is the logical place
where the resource lives. So you cannot have a resource
living in two compartments. But resources can be connected,
shared across compartments, right? So a VCN and a subnet,
which is a component of the VCN, can actually live in
different compartments. That's completely supported.

You can delete a compartment after you create it, or you can
rename the compartment. A compartment can have
subcompartments, which can be up to six levels deep. Most
resources can be moved to a different compartment after they
are created, though some restrictions do apply. And again,
you can check the documentation on what some of those
restrictions are.

After creating a compartment, you need to write at least one
policy for it. Otherwise, it cannot be accessed. Of course, the
admins who have created the compartment can access it, or
users who have permissions to the tenancy can access it. But
to really make it useful, you need to write a policy against the
compartment.

And subcompartments, which are-- we said can be six levels
deep-- inherit access permissions from compartments higher
up its hierarchy. And again, we'll look into a module on this,
specifically. And when you create a policy-- we looked into it
in the previous module-- you need to specify which
compartment you have to-- which compartment it attaches
to.

So what happens when you sign up for Oracle Cloud
Infrastructure for the first time, right? So there is this
concept of "tenancy," which is nothing but your account. And
you get this thing called a root compartment. And think
about root compartment as the parent compartment for all
your other compartments you are going to create yourself.

So root is something which we create, and we also create a
default administrator for the account. And there is also a
group called Administrator-- default group called
Administrators. You cannot delete this group, and there must
be at least one user in it. You can add users, you can remove
users, but at any given time, it needs at least one user. That
seems, again, very logical.

If you add more than one user in here, remember that they
are part of the Administrators group, so they have access--
just by virtue of being present there, they have full access to
all the resources.

Now, what kind of policy would you write? And remember, we
talked about users belong to groups, and they are useful
because they have a policy. What kind of policy would you
write for the Administrators group?

This is the policy you would write-- allow group
administrators to manage all resources in tenancy. Again,
you cannot delete this policy. It's by default there. You cannot
change it, also. And this is called a tenancy policy. And we'll
look into this in the console.
Now, as I said, root is something we create the compartment,
and think about this as a parent of every compartment out
there. Now, you could-- nothing stops you from putting all
your resources in the root compartment. You could do that.
But the best practice is to create dedicated compartments
when you need to isolate resources. Because remember, the
whole idea of compartment is to isolate your resources. So if
you put everything in root, it defeats the purpose of creating
compartments in the first place.

Now, there is this new capability called compartment quotas.
Now, compartment quotas are similar to service limits. So
service limits basically means, when you create your
[INAUDIBLE] account-- this is, again, very common to all the
clouds-- we have specific limits in place, like how many
compute instances you can create, and so on, and so forth.
And you can contact Oracle, and if you have a valid need, we
can obviously increase your service limits.

But there is this concept called compartment quotas. Very
similar to service limit, but the difference is that service limits
are set by Oracle, like I said, and compartment quotas are set
by your own admins using policies.

Now, why would you do that? There's a good example here.
Suppose you want to use bare-metal instances, but these are
expensive, so you don't want this to be used in your
tenancy-- meaning, if you have this tenancy across multiple--
you have multiple usage in multiple regions worldwide, you
want to restrict the bare-metal usage, let's say, to a specific
region.

So what you could do is you could write compartment quota
policies here-- something like this-- which allows only people
in-- users in the Phoenix region to use bare-metal instances.
Other regions are zeroed out.

So how do you set these compartment quotas? Very simple.
You have policies, you use quota policies, and you have these
keywords, like Set-- you set the maximum number of
resources you want. Unset-- you reset the quota to the
default service limits. And Zero-- "zero" meaning you remove
access to a cloud resource for a compartment or a tenancy.

So for example here, you say zero compute quotas, and you
have this keyword here, BM, bare-metal, in tenancy-- meaning
you have zeroed out bare-metal instances, so nobody could
create a bare-metal instance in your tenancy. But then, you
are overriding this for the Phoenix region by setting a
compute quota to five for bare-metal instances where
request.region-- and we looked into this in the advanced IAM
policy. If you want to restrict or make policies more
conditional, you could have request and target.

So you could say request.region, and the region is
us-phoenix-1, and this way, only users in us-phoenix-1 can use
bare metal instances. Other users in other regions will have
access to no bare metal instances.
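The two quota statements described above might be sketched roughly as follows. The quota name standard-bm-52-count is an assumption for illustration; the actual quota names for each shape family are listed in the OCI quotas documentation.

```
zero compute quotas in tenancy
set compute quotas standard-bm-52-count to 5 in tenancy
    where request.region = 'us-phoenix-1'
```

The first statement removes bare-metal capacity everywhere; the second statement's where clause then carves out an allowance of five instances for the Phoenix region only.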

So before we go into the next module, let's jump into the
console, and let's look into some of the capabilities around
compartments. So if I come back here, we were using this
user. We had a specific user, and we created a group, and we
created a policy in the previous modules.

So let me just go and talk about compartments here. So first
thing, I'll click on Identity Access Management Menu. I see
the compartment here. So you can see here, this is my root
compartment. And remember, this is my tenancy, right-- [?
INT-Oracle-Rohit. ?]

And if you can see here, first thing it says, subcompartments
15. So what this means is I have 15 compartments here, and
these compartments are all part of my parent compartment
here, the root compartment, right? So that is what it's been
showing here.

Now, most of these have no subcompartments, but this one
here, training compartment, if I click on Training, you can see
it has training subcompartment level one, and it has level
two. So it has, you know, two levels deep. And remember, the
compartments can be six levels deep, right? So these
compartments are there.

Now, there is also this capability called Compartment
Explorer. So if I go into Governance and click on
Compartment Explorer, you can see the various
compartments appear on the left-hand side. And if I click on
this training compartment here, I can bring up specific
resources here.

Now, this is very useful because if I'm-- compartments can
have many resources-- hundreds or even thousands of
resources. This view here is really good because it lets me see
at a glance what is available in my compartments.

If I just click on, let's say, the training subcompartment one, I
can see that it has a virtual cloud network. It has three
subnets, a route table, and a security list, right? And it also
has the training-- the subcompartment two, right? And the
subcompartment two has no items here, right?
So this is sort of-- this view helps. Now, we wrote the policy
here in the previous module. We could actually extend this
policy, and we could write a policy here. The previous policies
were all written for the tenancies, but we could write a policy
in the compartment.

And I need to go back and-- OK, remember, I have a
particular compartment here. So we could say something
like, allow group TrainingGroup to manage instance-family in
compartment Production. And there's a compartment called
Production. And I can add this statement.

Now, if I log in as that particular user, the [? traininguser1, ?]
and I go back to this menu, just look at Compute here,
earlier, I could only see the root compartment, If you guys
remember from the previous module. But now, if I go here, I
can see all the compartments. And if I click on Production, I
could literally create an instance here.

Now, it's likely I don't have a virtual network here, but let's
see if I can see if there's a virtual network here, and click
Create. And this should let me, as-- bingo, there you go. It
lets me create an instance in the production compartment.
And if I remove that policy, I cannot create an instance in the
production compartment.

So this quickly shows you compartments. It's a logical place
where you keep your resources. You write your policies
against those compartments, and then your users can start
using this, the specific compartments.

Thank you so much for joining this lecture. If you have some
time, please join the next lecture where we look into more
complex scenarios around policy inheritance, and
attachment, and what happens when you move resources
across compartments. Thank you.
4. POLICY INHERITANCE AND ATTACHMENT
Hi, everyone. Welcome to this lecture on policy inheritance
and attachment for compartments. And what happens to the
policies when resources are moved or compartments are
moved? My name is Rohit Rahi, and I'm part of the Oracle
Cloud Infrastructure Team.

So there are two important concepts which you need to
understand when creating policies. And we discussed earlier
that policies have to be attached to compartments. So there
are two important concepts there. So one is the policy
inheritance. And what it simply states is, compartments
inherit any policies from their parent compartment.

So for example-- we saw this earlier-- OCI has a built-in
policy for administrators, which is allow group administrators
to manage all resources in tenancy. And again, if you recall
from the previous module just on IAM policies, manage is the
highest level of verb which is available, and all resources
mean admins can manage everything in OCI tenancies--
basically, the account.

So due to the policy inheritance, the administrators group
and users within that group can also do anything in any of
the compartments in the tenancy. In my previous example in
the demo, you saw my tenancy had 15 compartments. And
the users, as part of the Administrators group, could actually
manage resources in any of those compartments because of
this concept of inheritance.

Now let's look at an example. Let's say we have the root
compartment-- which again, as we discussed, is the parent
for all the other compartments in a tenancy. And there are
three other compartments-- A, B, and C. So policies that
apply to resources in compartment A also apply to resources
in compartments B and C.

So what it means is, if you write a policy which says, allow
network-- a group of network admins to manage virtual
network family in compartment A, basically, that also allows
the group network admins to manage networks in
compartments B and C. So you don't have to specifically write
policies specifically for compartments B and C, because they
are inheriting policies from their parent compartment.

Now, the second important concept you need to understand
is the concept of attachment. When you create a policy, you
must attach it to a compartment-- or, for that matter, to the
root compartment tenancy. Where you attach it is really
important because that controls who can modify it or delete
it.

So for example, if you attach it to the tenancy, which is also
the root compartment, then anyone who has access to manage
policies in the tenancy-- like administrators-- can change or
delete it. But if you attach it to a child compartment-- so
here, if it's in compartment A, B, or C-- then anyone with
access to manage the policies in that specific compartment--
you can also define compartment admins, for example-- can
change or delete it.

So you can have compartment A admins here. You can have
compartment B admins here, et cetera, et cetera, right? And
you can attach your policies to these different compartments,
and you can have different policies for each of these admins.

Now, let's look at an example. So let's say you want to create
a policy to allow network admins to manage networks in
compartment C, right here. So you could attach the policy
either here, or you could attach the policy here, and you
could write a policy like this, right? Allow the group network
admins to manage virtual-network-family in compartment C.

So you could definitely write it here. You could also write it
in B and, because of inheritance, the policy will apply to
VCNs in compartment C, right? But if you want to write this
policy in compartment A, how would you do that? Because
compartment A is part of a tree, but it really doesn't know
where C exists.

So the way you will do that is you will say, allow group
network admins to manage virtual-network-family in
compartment. And right here, you would provide a path-- B
colon C. So if you do B colon C, you can put the policy right
here. If you don't do this, then A has no idea of where the C
compartment is, and the system will give an error saying that
this policy could not be attached to compartment A.

Now, let's get a bit clearer on this. If you write a policy here,
only compartment A admins can modify it. Compartment B
admins and compartment C admins might not be able to
modify it. And network admins can still only manage
networks in compartment C because your policy says B colon
C, so it's going all the way to compartment C.

In the same example, if you want to attach this policy in the
root compartment, the tenancy, you will have to give the full
path. So it's A, colon B, colon C, right? And so that way, you
have a complete path, and you could attach this policy right
here in the root compartment.

Now, what happens when you move a compartment to a
different parent compartment? There are certain restrictions.
When you move a compartment, all its contents, including the
subcompartments and resources, are moved with it.

Now, there are a couple of restrictions. You cannot move a
compartment to a destination compartment with the same
name as the compartment being moved. So the compartment
B here cannot be moved here because it shares the name
with the parent here. So it cannot be possible.

Two compartments within the same parent cannot have the
same name. So for example, if there was a compartment C
here, and let's say there was a compartment C here, you
could not move C right here, because then this and this
would have the same name. So there's a couple of restrictions
you have to keep in mind.

So let's look at a couple of examples on policy implications
when moving compartments. Now, this is really important
because this comes up on the exams, several exams we have,
so let's look at this really quickly.

So policies that specify the compartment hierarchy down to
the compartment being moved-- so this is the first condition,
which has to be true-- will automatically be updated when
the policy is attached to a shared ancestor of the current and
target parent. So this is the second condition which has to be
true, right?
So let's look at an example. So in this case, the compartment
which is being moved is compartment A, all right? Now, the
policy has to specify the compartment hierarchy, as we saw
on the earlier slide, down to the compartment being moved.

So you have to write a policy like this in the hierarchy-- Test
colon A, right? So this is going all the way down to the
compartment being moved. Now, where is this policy being
defined? It's being defined at a shared ancestor. So in this
case, Ops is a shared ancestor. This compartment A has to be
moved here.

So Ops is a shared ancestor of both Test and Dev. So the
second condition is met, and the first condition says the
compartment hierarchy is down to the compartment being
moved. So Test colon A is the compartment being moved.

So if you do this, and you write a policy right here, then the
policy automatically changes right here, and it gets updated
to dev colon A. And policy is automatically updated, and this
group G1 doesn't lose its permissions.

Now, let's look at another example. In this case, my shared
ancestor is the root, which is technically possible because the
root is the shared ancestor for all the other compartments
within the tenancy. Now, in this case, you write a policy here,
like the previous example, but you only give the path from
Ops to Test.

So if you do this, Ops to Test, the second condition, shared
ancestor, is met-- so that's good-- but the first condition says
that for the compartment being moved, you have to write the
policy down to that and give the whole hierarchy. In this
case, are we doing that? We are going from Ops to Test, but
not to compartment A here.

So if you do this, and there is a policy defined on this group
G1, G1 loses that policy when the compartment is being
moved, because it has no idea where this compartment A--
the tree, the path, the hierarchy path-- where it is, right?

But in this case, if you had done Ops colon Test colon A, and
you did the move, it would have changed to Ops colon Dev
colon A, as in the previous example, because there's a shared
ancestor. And if you had specified this right here, it would
have done that.

So it's, again, a little bit tricky. You just have to make sure
that you understand the concept. With the compartment
being moved, you have to give the whole hierarchy path.
Otherwise, the policy doesn't get updated.

Now, there's another thing which you have to keep in mind.
If you write a policy and attach it directly to the compartment
getting moved, that policy is not automatically updated. So
we write a policy here on the Test compartment-- so this A
inherits from Test-- and you do this right here on the Test.
The policy is not automatically updated and becomes invalid
because it's directly attached to the compartment.

Thank you for joining this lecture on policy inheritance and
attachment. If you have time, join the next lecture where we
talk about tags. Thank you.

5. IAM – TAGS
Hi, everyone. Welcome to this module on OCI tags. My name
is Rohit Rahi, and I'm part of the Oracle Cloud Infrastructure
Team. So when it comes to tagging in OCI, there are two
kinds of tags which are supported today. The first category
would be familiar to folks who have worked with other cloud
vendors, and that's the concept of free-form tags. So this is a
basic implementation, and it basically consists of a key and a
value.

So right here, you have a compute instance, and you have a
couple of tags which you have applied. The first tag's key is
environment, and the value is production. The second tag's
key is project, and the value is alpha. And you can see these
two tags here, right? It's really simple and free-form. You can
define whatever tags you really like.

Now, with OCI, there's a differentiated feature which is called
defined tags. And this gives you more features, more control.
And let's see how it's implemented.

So the first thing you do is you have this concept of
namespaces. So you have a namespace here which is
Operations. You have a namespace here which is Human
Resources. And within the namespace, you have sort of a
schema where it says, you can define your tag keys. So in
this case, it's an environment. You could say, what kind of
value is supported here? In the second tag keys project, you
could specify a value, and so on, and so forth.

The whole idea is, when you use defined tags, you have sort
of a schema, and you can secure them with policy. Later on,
I'll show you a slide where we talk about how you can secure
them using policies-- OCI policies.
So let's dive a little deeper into the tag namespace. As we
saw, a tag namespace is nothing but a container for a set of
tag keys with tag key definitions. Now, what does that look
like? A tag key definition specifies its key. So in this case, we
defined a namespace called Operations, and we have a key
which is called Environment. And we could specify what kind
of values are supported for this key, right?

So right here, we could say that it's a string, or we could
say it's a text, or a number, and so on, and so forth, right?
The way you would specify the tags now is with
namespace.key, and then you provide a value for that
particular tag.

Now, a tag key definition or a tag namespace cannot be
deleted, but you can retire them. And once you retire these, either the
namespace or the tag keys, you cannot use them, but you
can again reactivate them, and you can use them again.

So let's see how you work with these defined tags. So as we
saw earlier, defined tags consist of a tag namespace, a key,
and a value. We just saw it in the previous two slides. These,
the tag namespace and the tag key definition, must be set up
in your tenancy by someone. Maybe it's a compartment
admin. Maybe it's a tenancy admin who sets it up. So you, as
users, can use them.

A tag key can have a tag value type of string, or it can also
have a list of values from which the user must choose. So
now, you also can provide options where the user only
chooses a specific set of values, gets an option to choose from
that specific set of values.

You can also use a variable to set the value of a tag. When
you add the tag to a resource, the variable resolves to the
data it represents. For example, if you have a namespace called
Operations and a key called Cost Center, for the value, you
could specify something like iam.principal.name at oci.datetime,
where, when you add this tag to a resource, the variable
resolves to your username-- that's the IAM principal name-- and
the date-time stamp for when you added the tag.
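
Spoken aloud, the variable syntax is easy to miss; in written form, a tag value using these variables would look roughly like this. This is just an illustrative sketch-- the namespace and key names are the examples from the slide, not anything pre-defined in your tenancy:

```
# Defined tag in "namespace.key = value" form (illustrative names)
Operations.CostCenter = "${iam.principal.name} at ${oci.datetime}"
```

When the tag is applied to a resource, the two variables resolve to the applying user's name and the timestamp of application.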

So this just makes-- again, gives you the flexibility to use tags
in a variety of ways. So let me just quickly go to-- and I'll be
using this, so let me just complete this quickly. Let me just
quickly go into our console and show a couple of these
things, how you could use them.

So the first thing I want to show is the tag namespaces. Using
just normal, free-form tags is pretty straightforward, so let's
use the namespaces we just looked at.

So let's say you want to create a namespace. You, as an admin,
could define that. So let's say this namespace we want to create
here is called Marketing, and this is a tag namespace for
marketing purposes. You create the namespace definition, right?

And as part of Marketing, then, you could create different tag
keys. So you could say "campaign"-- this is a key to define the
campaign. And right here, you could provide a static value, or
you could provide a list of values.

So let me provide a list of values here, and I could say this is
my campaign for North America, or this is my campaign for EMEA,
and so on, and so forth, right? I could actually define these
values. And now I've created a tag key definition.
Now, let's say an admin already did this, and a user now comes
in and wants to create a resource for this marketing arm. So I
go into my training compartment, which I've been using until
now, and I see a bunch of virtual cloud networks here.

Let's say I want to create a marketing VCN, a virtual cloud
network. Now I can come here and specify a CIDR block. I'm just
creating a simple network without getting into subnets and route
tables, et cetera, right?

Now, right here-- and this is true of all the resources you have
in OCI-- there is a place which shows the tag namespaces. I
could use a free-form tag, or I could pick one from the
namespaces we have in the system, which my admins have created.

So the admins have created this-- the admin has created this
namespace for marketing. So pull that out, and right away,
you can see the tag keys here. And because it was a tag key
which had a set of values, specific values, I could pick these
values from here-- North America, EMEA, and APAC.

So let's say I choose North America. I could create a virtual
cloud network here, right? And if I come down here, I can see
that there is a tag which is marketing.campaign. This is my
namespace, this is my key, and the value is North America.

Now, I could add tags here, right? And let's go on to add a
free-form tag. It's really simple as well. You could just come
here, and you could specify, let's say, cost center as a
free-form tag.
You could specify a value, and you could add a tag like that,
right? So there are various ways you would use tags within OCI.
Hopefully, this gives you a flavor of how defined tags give you
more flexibility.

Now, last thing I want to show here is how you secure these
with policies, because we talked about that, right?

So in the previous slides, we talked about how you define
policies for different users. In this case, let's say there is a
group called InstanceLaunchers, which, just by the name, means
these are users who are creating instances.

And you need to write policies where they can create and delete
instances, and so on, and so forth, right? They are using the
virtual network family, so there's the keyword "use" here. They
are using block volumes, so there's the keyword "use" here. But
they're creating and deleting instances, so there's the keyword
"manage" here.

Now, you could secure your defined tags with policies as well,
right? So for example, if you want these users to use
namespaces, you could just leave it there, or you could make
it conditional and more powerful. So you could say that a
target namespace name-- remember, this is the example of a
complex policy. We talked about request.operation and
target.name. Those are the two keywords you use with
variables.

So you could say target.tag-namespace.name-- the name of the tag
namespace-- is Operations. So now, if the users in this
particular group launch an instance, they can apply tags from
the Operations namespace. And anything which comes under the
Operations namespace-- whether it's a Cost Center key, or a
Project key, or a Region key, or the other keys you defined--
they can apply those tags within that namespace.
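
Putting the statements described above together, the policy set would look roughly like this. This is a sketch, assuming a group actually named InstanceLaunchers and a compartment named ABC; the exact resource-family names and scopes in your tenancy may differ:

```
Allow group InstanceLaunchers to use virtual-network-family in compartment ABC
Allow group InstanceLaunchers to use volume-family in compartment ABC
Allow group InstanceLaunchers to manage instance-family in compartment ABC
Allow group InstanceLaunchers to use tag-namespaces in tenancy
  where target.tag-namespace.name = 'Operations'
```

The last statement is the conditional one: it restricts the group to applying tags only from the Operations namespace.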

So again, this gives you a little bit more power, because it's
not just free-form tags: you have defined tags, you can have
some consistency, users can choose from certain values, and you
can also secure them using policies. So you can control who can
apply the tags.

Well, with that, this concludes our lecture series on OCI
identity and access management. I hope you found it useful.
Thanks for joining this lecture series. If you have time, please
join the next lecture series on Virtual Cloud Networks. Thank
you.

VIRTUAL CLOUD NETWORKS


1. CIDR
Hi, everyone. Welcome to this lecture series on Virtual Cloud
Network-- VCN. My name is Rohit Rahi, and I'm part of the
Oracle Cloud Infrastructure team.

As part of this lecture series, we are going to cover several
topics, including a discussion of CIDR-- what the CIDR notation
means-- and a Virtual Cloud Network intro, where we'll look into
the basics of VCN. We'll look into IP addressing and how that is
done within OCI. We'll look into gateways and routing, we'll
look into peering, and we'll look into a new feature we launched
a few months back, which is really exciting-- transit routing.
We'll look into security, and then we'll finally put all these
pieces together.
So as the first topic in the virtual cloud network lecture
series, let's look into CIDR notations, and why this is
discussed-- why this is important as you are working with
OCI Virtual Cloud Network service. So CIDR stands for
Classless Inter-Domain Routing. And in this routing notation,
IP addresses are described as consisting of two groups of bits
in the address.

The most significant bits-- the left-most bits-- are the network
prefix, which identifies a whole network or a subnet. And the
least significant bits form the host identifier, which specifies
a particular interface of a host on that network. So simply put,
an IP address has two components, as we just talked about-- the
network address and the host address. So you could logically
think about your IP address as network plus host.

Now, there is this concept called a subnet mask, also sometimes
called a netmask. What does that mean? A subnet mask separates
the IP address into the network and host addresses. The whole
split into network and host is actually done using subnet masks.

And you don't have to stop there. Subnetting further divides the
host part of an IP address into a subnet and a host address. So
taking this concept further, you could have a network here.
Earlier we just had network and host, but you could also have a
subnet, because in reality, sometimes you need a bigger network,
and sometimes you need to take that bigger network and subdivide
it into smaller networks to give to your customers.

Now, how does this work? A subnet mask is made by setting the
network bits to all 1s and the host bits to all 0s. Within a
given network, two addresses cannot be assigned to hosts, so
it's very important to keep that in mind. The 0 address is
assigned to the network-- it's the network address-- and 255 is
assigned to the broadcast address.

So 0 and 255 you cannot use, but other than that, you could use
the other addresses for hosts within your network or your
subnetwork. In the next slide, we'll look into this in more
detail, but the notation is actually pretty straightforward. As
you know, IP addresses are 32 bits long, with four octets--
octets meaning 8 bits each. So you have 8 bits here, 8 bits
here, 8, 8.

And you specify the CIDR notation using this slash character
and a decimal number. So you could say something like
this-- 192.168.1.0/24. That slash 24 here is the subnet
mask. Now this is all good in theory-- how does this really
work in practice?

Let's look into a couple of slides and see how you can use this
information. Examples of commonly used netmasks-- subnet masks--
are class A networks, where the first octet is all 1s; class B
networks, where the first two octets are all 1s; and class C
networks, where the first three octets are all 1s.

And we'll look into a class C network, and we'll further divide
it into a subnetwork, and we'll see how this is all done in
reality. So first thing before we get into that is you will have
to have a grasp of the decimal and the binary notation. So
any time you use IP addresses, you use these decimal
numbers here.

So you have 192.168.1, et cetera, et cetera. Now, the way this
works is, as we talked about, IP addresses are nothing but 32
bits long-- IPv4-- and these are four octets of 8 bits each.
Now, for every octet, the way it's translated into binary is:
you have the first position of the bits, then the second
position, the third, the fourth, fifth, sixth, seventh, and
eighth.
So you start with a notation like this, where you have 2 to the
power 0 going all the way to 2 to the power 7. Now, if you
translate this into decimal, you get these values-- 128, 64, 32,
16, 8, 4, 2, and 1. So if you have to represent 192, you do the
mental math and say, OK, to get to 192, I need 128, so that bit
has to be turned to 1. I also need the next bit, which is 64,
because 128 plus 64 gets me to 192.

The remaining bits can all be 0s, because I don't need those
binary bits to be turned on in order to get to my decimal value
of 192. So that is how you translate between decimal and binary.
That's just basics, but it's important to note.
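
If you want to check this decimal-to-binary arithmetic yourself, here is a small, purely illustrative Python sketch-- not part of the course material-- that prints each octet of 192.168.1.0 in binary:

```python
# Decimal <-> binary for one octet: each bit position is a power of 2
# (128, 64, 32, 16, 8, 4, 2, 1, from left to right).
for value in (192, 168, 1, 0):
    print(f"{value:>3} = {value:08b}")
```

Running it shows, for example, that 192 is 11000000-- exactly the 128 and 64 bits turned on, as worked out above.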

So now, if I ask you to represent 192.168.1.0 in binary, this is
what it would look like. So 1 and 1 here, because when you add
them you get to 192-- the rest is 0. For 168, you have a 1,
which is 128. The next bit can be 0-- I don't need that.

I definitely need 32-- 128 plus 32 is 160. The next bit is 16--
I don't need that, so that's a 0. But the next one I need, to
get to 168. So that's 10101000. And then, of course, 1 is
represented by all 0s with a 1 in the last position.

And then 0 is all 0s. Now, when we talk about this slash 24
subnet mask-- remember, in the previous slide we talked about
this-- slash 24 basically means that you turn all the network
bits to 1. So the first octet is all 1s, the second octet is all
1s, and the third octet is all 1s. If you do the math-- 8 plus
8, plus 8-- you get to 24 bits, and that's the slash 24 subnet
mask. Now, what we do is we take the network address and the
subnet mask, and we do a logical AND to get the network and the
host.

And logical AND basically says that if you have two bits, you
basically do a logical AND on them, meaning if the two bits
are 1, you get a 1. In all other cases, it's a 0. 0 and 0 is a 0. 1
and 0 is a 0. 0 and 1 is a 0.

So if you do a logical AND between these bits here, 1 AND 1 is
definitely 1; 1 AND 1 is 1; the rest is all 0, because 0 AND 1
is 0, and so on and so forth. You do the same thing again-- you
get these bits here, and so on and so forth.

So the idea is now you've got this-- you do a logical AND here,
and this is the range you get. So these are the hosts you can
use if I give you a network of 192.168.1.0. Now this first one
is the network address as we talked about, and this last one
is the broadcast.

So you would not be able to use those two, but everything from
dot 1 all the way to dot 254 can be your hosts. So now you saw
how we took a network and, using the subnet mask of 24, divided
it into a network and hosts, and we could have 254 hosts there.
So that's pretty clean.
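
You can verify the 254-host figure with Python's standard ipaddress module-- a quick sanity-check sketch, not something from the course itself:

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
hosts = list(net.hosts())          # excludes network and broadcast
print(len(hosts))                  # 254
print(hosts[0], hosts[-1])         # 192.168.1.1 192.168.1.254
print(net.broadcast_address)       # 192.168.1.255
```

Note that hosts() automatically drops the network address (.0) and the broadcast address (.255), matching the two reserved addresses discussed above.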

And these are, like I said, classful netmasks-- a class C
network is slash 24. Now let's get into a more complex scenario.
Let's say I have this network, but I want to do a little bit
more. I want to take this network and subdivide it into
subnetworks, because maybe I have more customers, and I want to
divide the network into smaller networks.

So how would that work? Now, I take the same network as before,
but say that I have a slash 27 subnet mask. So is it class A?
Not really, because class A is prefix 8, class B is prefix 16,
and class C is prefix 24.

Now, this is not a classful class A, class B, or class C
subnetwork. So how would we go about creating network and host
addresses here? It's relatively straightforward, as we did
earlier. But the thing to realize is that now you are going to
have 8 subnetworks with 32 hosts each, because of this slash 27
subnet mask.

So how did I get to that-- how did I do it? Let's look into it.
First thing is 192.168.1.0, that's the IP I got. The network is
still the same, so nothing changed here.

Now, my subnet mask changed. If I look at the subnet mask here,
the first octet is all 1s, the second octet is all 1s, and the
third octet is all 1s, but in the fourth octet, I also need
three more bits, because 8 plus 8, plus 8, plus 3 gives me 27.
So I need to borrow three bits from this octet for my network,
because otherwise, I cannot get to 27.

These three bits I have marked in red because I borrowed them.
Now, that literally leaves me with five bits for my host, and
three bits are taken by my subnetwork. And if I do a logical
AND, the first network I get is still the one which I got
earlier.

So how do I get this number here, dot 31? From where did I
get that? So now the thing to realize is look at the bits we
borrowed here. And by the way, this 224 is coming because
these three bits are 1. So if you do the math-- 128 plus 64,
plus 32, because these three would be all turned to 1-- you
would get to this 224 number. That's how I got that 224.

So now, if you see the first three bits I borrowed here, that
means-- because this is binary-- I can have 2 times 2 times 2,
or 8, subnetworks, because these bits are now borrowed for my
subnetwork. I have the subnetwork, and now I have the host. And
this piece here-- because, again, these are binary-- five bits
are for my host. So if I multiply, I get to 32.

Another way to think about it is: this number is 128, this one
is 64, and this one is 32. So because my subnetwork boundary is
here, it means I can have 32 hosts in each of my subnetworks.
The way you would write this now is: the first network you had
was 192.168.1.0; the second one would be dot 32; the third one
would be dot 64; and so on and so forth.

And now, if I do this math, my hosts would go from 192.168.1.0
all the way to dot 31 here; they would go from dot 32 all the
way to dot 63 here; and so on and so forth. And again, it starts
from 0, so that's why you see 31 here. And each of these
subnetworks has 32 hosts.

So as you can see here, I took this bigger network, and because
of the slash 27 subnet mask, I have 1.0/27, 1.32/27, 1.64/27,
and so on and so forth. I took that big network, and now I've
divided it into eight smaller networks. How did I get eight? I
borrowed three bits, so I can have 2 times 2 times 2-- eight
subnetworks. Five bits are for my hosts-- remember, these are
all binary-- so I can have 32 hosts.
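
The whole /24-into-eight-/27s exercise above can be double-checked with a short Python sketch-- again just an illustration using the standard library, not part of the course:

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")

# Borrowing 3 bits (/24 -> /27) yields 2*2*2 = 8 subnetworks.
subnets = list(net.subnets(new_prefix=27))
print(len(subnets))                          # 8
for s in subnets:
    print(s, "->", s.num_addresses, "addresses")
```

The printed list runs 192.168.1.0/27, 192.168.1.32/27, ... up to 192.168.1.224/27, each with 32 addresses-- the same boundaries derived by hand on the slide.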
Now this is the basic information you need in order to
operate-- work with the OCI Virtual Cloud Network service,
because everything is sort of in the CIDR notation. So I hope
this is helpful. Thanks for joining this lecture. If you have
time, join the next lecture, where we introduce the Virtual
Cloud Network and some of the core concepts. Thank you.

2. INTRO VCN

Hi, everyone. Welcome to this lecture on Introduction to
Virtual Cloud Network. My name is Rohit Rahi, and I'm part
of the Oracle Cloud Infrastructure team.

So what is a virtual cloud network? An OCI Virtual Cloud Network
is a private network that you set up in Oracle data centers,
with firewall rules and specific types of communication gateways
that you can choose to use. A VCN covers a single, contiguous
IPv4 CIDR block of your choice.

In the previous lecture module, we looked into CIDR and how you
can use the CIDR notation. So if you have not watched that
video, please go back and take a look. But a VCN covers this
single, contiguous IPv4 CIDR block. Today, we just support IPv4,
not IPv6.

A VCN resides within a single region. So how does this work?
We'll look at it in more detail, but first, let's look at how a
VCN is represented. A VCN is simply represented by a CIDR range
here.
And the guidance here says you should use ranges which
don't overlap with your on-premises or other networks you
are using. So first things first here, this is the CIDR notation
we looked into in the previous lecture. This is what is called
RFC 1918 ranges. So our recommendation is to use private IP
address ranges specified in RFC 1918.
Now, what does RFC 1918 mean? RFC 1918 created the standards by
which networking equipment assigns IP addresses in a private
network. So these are for private internets. RFC 1918 reserves
the following ranges of IP addresses, which cannot be routed on
the internet.

So the first one is the 10.0.0.0/8 prefix. The second one is
172.16.0.0/12, and the third range is 192.168.0.0/16. Now, we
looked into these in the previous module. The second one goes
all the way from 172.16.0.0 to 172.31.255.255.

The third one goes all the way from 192.168.0.0 to
192.168.255.255. And of course, the first one, 10.0.0.0/8, goes
all the way from 10.0.0.0 to 10.255.255.255.
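
A quick way to check whether a given address falls inside one of the three RFC 1918 ranges is to test membership directly-- a small illustrative Python sketch:

```python
import ipaddress

# The three RFC 1918 private ranges from the slide.
rfc1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

for addr in ("10.255.255.255", "172.31.0.1", "192.168.5.4", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    private = any(ip in net for net in rfc1918)
    print(addr, "-> RFC 1918 private:", private)
```

The first three addresses fall inside the private ranges; 8.8.8.8 does not, so it is publicly routable.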

Now, one thing to keep in mind is that, again, these are not
addressable on the public internet. You can assign these ranges
within a private network; each address is unique within that
network, but not outside of it. Another thing to keep in mind,
because it comes up a lot, is that within an Oracle Cloud
Infrastructure VCN, the sizes we support range from slash 16 to
slash 30.

So even though we say to use these recommended RFC 1918 ranges,
we don't support a slash 8 range, for example. We only support--
this will come up in the exam also-- slash 16 to slash 30. And
remember, as your subnet mask becomes bigger, your networks
become smaller.

Now why don't we go all the way to slash 31, for example?
And the next bullet actually explains that. In VCN, the first
two IP addresses and the last one are reserved. In a typical
network, the first and the last are reserved. The first is
network, the last is broadcast.

In the case of a VCN, three IP addresses are reserved. That is
the reason we stop at slash 30 networks and don't go to networks
which are smaller than that. So remember again-- this is an exam
question-- three IP addresses are reserved in an OCI VCN.
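
This also explains why /30 is the smallest allowed subnet: with three addresses reserved per subnet (the first two and the last, as described above), a /30 leaves exactly one usable address. A quick illustrative check:

```python
import ipaddress

# OCI reserves three addresses per subnet (the first two and the last),
# versus the usual two (network and broadcast) in a classic network.
for cidr in ("10.0.0.0/24", "10.0.0.0/30"):
    net = ipaddress.ip_network(cidr)
    print(cidr, "->", net.num_addresses - 3, "usable addresses")
```

A /31 would have only two addresses total-- fewer than the three reserved-- which is why the supported range stops at /30.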

Now let's look into this in a little bit more detail. First, you
see this Oracle Cloud region here. In the first regions we
launched, we started with three availability domains. So you
have AD1 here, you have AD2, and you have AD3. Now, what this is
showing is that in a region, irrespective of whether it has
three ADs or one AD, the VCN is a regional service. And how do
you create a VCN? You simply specify a CIDR range-- the
recommendation is to use RFC 1918.

So this one is in the RFC 1918 range, 10.0.0.0/16. You specify
this VCN and, as you can see, it's a regional thing-- it spans
across the ADs. Now, we looked into this in the CIDR notation
module.

You take a network, and then you subdivide it into subnetworks.
So you have this concept of a network, then you have a
subnetwork, and then you have hosts. We looked into this in the
previous module.

So similarly, you take a VCN and you divide it into subnets.
Now, each subnet you create can either be AD-specific or
regional. What do I mean by that? AD-specific basically means
your subnet is contained within the AD.
So you see subnet A, subnet B, and subnet C are all part of
their respective availability domains. They cannot span those
respective ADs. Like I said, in a multi-AD region, if you create
an AD-specific subnet, it is contained within that AD.

Now, there is also the concept of a regional subnet. This is
something which is relatively newer-- it was launched a few
months back-- where your subnet, if you create a regional
subnet, spans all three ADs in a multi-AD region. So as you can
see here, this particular subnet, subnet D, spans all three ADs.

Whether it's a regional or an AD-specific subnet, each subnet
has a contiguous range of IPv4 addresses, described in CIDR
notation, just like the VCN. An important thing to keep in mind
is that subnet IP ranges cannot overlap. So within a network, if
you create subnets, just keep in mind that they cannot overlap.
It seems very logical.
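
The no-overlap rule is easy to check programmatically before you carve up a VCN-- an illustrative sketch using arbitrary example CIDRs:

```python
import ipaddress

vcn = ipaddress.ip_network("10.0.0.0/16")
a = ipaddress.ip_network("10.0.0.0/24")
b = ipaddress.ip_network("10.0.1.0/24")
c = ipaddress.ip_network("10.0.0.128/25")   # falls inside subnet a

print(a.subnet_of(vcn), b.subnet_of(vcn))   # both fit inside the VCN
print(a.overlaps(b))                        # False -> a valid pair
print(a.overlaps(c))                        # True  -> not allowed together
```

Subnets a and b sit side by side, so they are fine; c overlaps a, so the two could not coexist in one VCN.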

Now, why do we create subnets in the first place, you might ask?
Instances are placed in subnets. So you can see an instance
here, an instance here, here, and here. And the instances draw
their internal IP address and network configuration from the
subnet where they belong.
Now, because of that, you can have two different characteristics
in a subnet. The first is, you could designate your subnets as
private, which means the instances get private IP addresses and
no public IP addresses.

These can be your databases-- for security or other reasons, you
might choose to use private subnets. The second class of subnets
is public subnets, which contain both private and public IP
addresses. And there is this concept we are talking about of
virtual network interface cards-- VNICs. A VNIC is a component
that enables a compute instance to connect to a VCN.

The VNIC determines how the instance connects with endpoints
inside and outside the VCN-- specifically, whether the VNIC has
only a private IP address or also has a public IP address. So
literally, these instances have a network card here, and this
network card is really important, because it determines how the
instances communicate both within the subnet and the VCN, and
outside the VCN as well.

Now, with this, let's quickly jump into the Console, and I'll
show you a quick demo of how you create a VCN within Oracle
Cloud Infrastructure. So I'm logged into my OCI Console. If you
click on this hamburger menu icon on the left-hand side, you can
see different tabs, including the Networking tab. And within
Networking, the first link is Virtual Cloud Networks-- VCN.

And there is a whole set of things here-- dynamic routing
gateways, IPSec, load balancers, FastConnect, public IPs, and so
on and so forth. We'll discuss each of them in subsequent
modules. So if I click on Virtual Cloud Networks, the first
thing you need to notice here, which we discussed in our
identity and access management module, is the compartment.
Compartments are logical locations where you create your
resources.

So you need to decide where you are going to create your virtual
cloud network. I have been using the training compartment for
all my training demos, so we'll use that-- you can see the
compartment here. And then there is a button here which says
Create Virtual Cloud Network.
So first thing I can do here is-- I'm in US East Ashburn
region, so I can use this kind of a naming convention, and I
can say this is my first VCN in US East. And there are two
options here-- one says create virtual cloud network only, and
the second one says create a virtual cloud network plus all
the resources. So because I don't know much, let me just
choose this.

And you can see here, it's doing a bunch of things for me. I'll
just click on that, and I'll click on Create here. And within a
second or less, you can see my virtual cloud network is created.
You can see this is in US East, and this is my first VCN.

Now, within this VCN, a couple of things to notice. First is the
CIDR block-- pretty straightforward. It chose one for me; I
didn't provide it. In the next example, we'll actually do it
ourselves. And it created three subnets here.

Now, US East has three availability domains, so you can see that
there is a subnet for each availability domain. AD3 has a
subnet, AD2 has a subnet, and AD1 has a subnet. Everything is
available. And you see that it also chose CIDR blocks for the
subnets.
So 10.0.0/24, 10.0.1.0/24, 10.0.2.0/24. So it took that big
network here and subdivided it into smaller networks. Pretty
straightforward. Now there are a bunch of things within a
virtual cloud network. Route tables, gateways, security lists,
different kinds of gateways, network security groups, et
cetera, et cetera.

We'll discuss all of these in subsequent modules, so for now,
we'll skip that. The second thing I want to show here is-- this
was pretty straightforward. Think about this as a default
network: I really don't care about my IP addresses, I just want
something quick and easy, so I chose that option.

If I didn't want that, I could come here and say, this is my--
let's call it production VCN in US East. Its compartment is
training. I want to create the virtual cloud network only. I
really don't want to go and create all the subnets and all that,
because I want to control what kind of subnets, what kind of
routing, and which CIDR notations I use.

So I chose 192.168.1.0/24-- this is the one we were using in our
slides earlier. You can see things like the DNS label. I can
apply tags here, if you remember from the identity and access
management module. And I can create my network here.

Now, it's really straightforward. I got my network here, but
there are no subnets-- you can see here, no subnets, and none of
the other things got created. So I can do that one by one. The
first thing I want to do is create my subnets.

So I would say this is still US East, and this is my public
subnet. So I say public-- or actually, I can remove "subnet"
here and just say public. That's fine. And now I can choose
between a regional and an availability domain-specific subnet. I
could do that.

I can choose regional-- that's fine, doesn't matter. This region
has multiple ADs, so I could have chosen an AD-specific subnet
as well. And if you remember from the previous module, these are
the CIDR ranges we were using.

So 192.168.1.0/24-- I broke it down into a smaller network. You
can see here, 1.0 goes all the way to 1.31. Now it's asking me
for some other specific things, like a route table. I'll just
choose the default route table.

And now it's asking, is it public or private? I will say it's a
public subnet; I could have chosen private here as well. Because
it's public, I'm going to use public. And then it says, choose a
security list. Both the route table and the security list we can
change later on-- we'll talk about that in subsequent modules.

Let me just go ahead and create a private subnet as well. So I
say private; regional is fine. I need to choose another CIDR
here. Remember, from the previous module, we took a big network
and divided it into specific subnetworks.

So in the first CIDR, I was using 1.0/27. This one is 1.32/27. I
could use 1.64, 1.96, and so on and so forth. So I chose that
network. I'm using the same route table, but in reality, you
would use a different route table, because it's a private
subnet.

And I would choose a private subnet. Here I can choose the same
security list, but this, again, I can change later. And now I
create the subnet. So very simply, I created two subnets,
private and public.

In the next module, we'll talk a little bit about public and
private IP addresses, and then we'll spin up an instance in both
subnets and get into a little more detail on how things work.
Thanks for joining this lecture. If you have some time, join the
next lecture, where we talk about IP addressing within the OCI
VCN service. Thank you.
3. IP ADDRESSES
Hi everyone. Welcome to this module on IP Addressing within
the OCI Virtual Cloud Network Service. My name is Rohit
Rahi, and I'm part of the Oracle Cloud Infrastructure team.

So first, let's look at the different kinds of IP addresses
supported by the service, starting with private IP addresses.
Each instance you place in a subnet has at least one primary
private IP address. That's mandatory.

Each instance can have one or more virtual network interface
cards, as we saw in the previous module. The first one is called
the primary VNIC; the additional ones are called secondary
VNICs.

So how does this work? Remember, we talked about this earlier:
you create a virtual cloud network with a CIDR range, and then
you divide this network into smaller networks-- subnets. Why?
Because you place your instances within the subnets, and the
instances draw their IP addressing and network configuration
from the subnets in which they are placed.
So right here, what I'm showing you is an instance with two
network interface cards-- this is the primary, let's say, and
this is the secondary. And the primary VNIC, as shown here, has
a primary private IP address. Remember, it's mandatory: when you
place instances, they absolutely need to have a private IP
address.

Now, you are not restricted to one private IP address per
instance. The first one is the primary private IP, but you can
have additional private IPs, and these are called secondary
private IPs. So you have a primary private IP, and you can have
secondary private IPs.

How many? You can have 31 additional secondary private IPs. And
this pattern is repeated across the VNICs you have-- in some
cases, you can have something like 52 VNICs. So you can have a
whole set of primary and secondary private IP addresses there.

Now, the private IP is mandatory-- this is how the instances are
configured within the VCN, this is how they communicate, and
this is how we can reach them. Each of these private IPs can
have an optional public IP assigned to it. Why optional? In most
cases, you really don't need a public IP, but in some cases, if
you do, you can assign a public IP to a private IP address.

Now, how does this work? One question you might ask is, how do I
know which VNIC is primary and which is secondary? Every VM has
one primary VNIC. How? You create it when you launch the
instance, and we'll look into this in the demo.

And then you can add one or more secondary VNICs. A new Ethernet
device is added and is recognized by the instance operating
system. So as you can see in this particular graphic, you have
VM1, which has only a single VNIC-- just a primary VNIC. And you
can see here, VNIC 1 is in Subnet A-- just that one.

Now, in virtual machine two, VM2, it has two VNICs. So there is
VNIC 2 here, and there's VNIC 3 here. And the interesting thing
is that the two VNICs are in two different subnets within the
same VCN.
So as you can see here, VNIC 2 is in Subnet A, and VNIC 3 is in
Subnet B. Why would we do that? We would do that because this
particular virtual machine might be a virtual networking
appliance, which is sitting here and monitoring those two
subnets for security, intrusion detection, or other such virtual
appliance scenarios.

In the case of VM3, things get even more interesting. Like
VM2, it has two VNICs, right, these are connected here. But now
these two VNICs live in two separate virtual networks
altogether. So this is Virtual Network 1 and this is Virtual
Network 2. So these are completely different virtual cloud
networks.

Why would you do this? This particular virtual machine is
doing something around management. So if you have many
VCNs, you need a way to manage some things in
there. You would use a configuration like this, where you have
a leg, so to say, in each of these VCNs, and you can reach
them through a management network for isolated access.

Now we looked into private IPs, and we looked into network
interface cards, whether primary or secondary. Let's talk
about public IPs. A public IP is an IPv4 address that is
reachable from the internet.

Now, as we said earlier, a public IP is assigned to a private IP
object on the resource. So whether it's an instance where you
want to assign a public IP, or it's a load balancer, those are
valid use cases. It's possible to assign a given resource
multiple public IPs across one or more VNICs.

So what does that look like? The same example as we had
earlier: you have a VCN, you have multiple subnets, you have
one subnet here. And this has multiple VNICs, so there's a
primary VNIC and there's a secondary VNIC, right?

Now earlier we saw it had a primary private IP address and
secondary private IPs, but each of these can also now have a
public IP. It's optional, but you could have a public IP. In
these cases, there is no public IP, all right, so these are all
just private. So you can have multiple scenarios like that,
where you can have a really complex private/public IP
addressing scheme.
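The private/public split above follows the standard RFC 1918 ranges: addresses like 10.0.0.0/8 and 192.168.0.0/16 are private and never internet-reachable on their own, which is why a public IP must be mapped onto them. A minimal sketch using Python's standard `ipaddress` module (the sample addresses are just illustrations):

```python
import ipaddress

# Sketch (not OCI code): classify an address as private (RFC 1918 and
# similar ranges) or public, the distinction this lecture relies on.
def classify(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    return "private" if ip.is_private else "public"

print(classify("10.0.1.5"))         # a typical VCN instance address
print(classify("192.168.0.10"))     # also private (RFC 1918)
print(classify("129.213.120.162"))  # an internet-routable address
```

Running it prints `private`, `private`, `public`: only the last address could ever be reached from the internet directly.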

Now there are two kinds of public IP addresses within the OCI
Networking Service: one is called ephemeral, the other one is
called reserved. The ephemeral is temporary, existing just
for the lifetime of the instance. So when the instance
terminates, the IP address is gone.

Reserved is persistent and exists beyond the lifetime of the
instance it's assigned to. It can be unassigned and then
reassigned to another instance. So in cases where you don't
want to change your public IPs, because you might have
downstream load balancers or applications using
them, you could assign reserved IP addresses. So even if your
instance dies, you can still take that IP, assign it to
another instance you create, and your application is not
affected.

An ephemeral IP can be assigned to the primary private IP only. So
in a Virtual Network Interface Card, how many primary
private IPs do we have? We have only one.

How many secondary? We have 31. So an ephemeral can only go
to one, the primary private. But with reserved, you can have
32 reserved public IPs, because one primary private plus
31 secondary private IPs can all have reserved public IPs.
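The per-VNIC counts just described reduce to simple arithmetic. This sketch restates the limits quoted in this lecture (treat them as illustrative, not as an official quota reference):

```python
# Limits as stated in this lecture (illustrative, not an official
# quota table).
PRIMARY_PRIVATE_PER_VNIC = 1
SECONDARY_PRIVATE_PER_VNIC = 31

# Each private IP may carry one reserved public IP, while an ephemeral
# public IP may sit only on the primary private IP.
max_reserved_public_per_vnic = (
    PRIMARY_PRIVATE_PER_VNIC + SECONDARY_PRIVATE_PER_VNIC
)
max_ephemeral_public_per_vnic = PRIMARY_PRIVATE_PER_VNIC

print(max_reserved_public_per_vnic)   # 32
print(max_ephemeral_public_per_vnic)  # 1
```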

There is no charge for using public IPs, including for
reserved public IP addresses that are not currently
associated. Even so, you should be careful when you create
public IPs, because these are precious resources. This is
also why we now have IPv6 -- IPv4 is sort of running out of
available address space.

So when should you use public IPs? In most cases, you
probably don't need public IPs. Public IPs can be assigned to
instances, but it's not truly required in most cases.

In some cases, you get a public IP address that Oracle
provides. If you're using a public load balancer, Oracle
provides a public IP. You cannot choose or edit it, but you can
definitely view it. The same goes for the NAT Gateway -- we will
talk about gateways in the next module -- Dynamic Routing
Gateways, and so on and so forth.

If you use managed Kubernetes, you get a public IP. You can
view it, but you cannot choose or edit it. And in some cases, you
cannot choose or edit the public IP, and you cannot even
view it. A good example is the Internet Gateway -- we'll talk
about this in the next module. You cannot see what public IP it
has. There are other services like that, where you get a public
IP but you cannot even view it.

4. ROUTING AND GATEWAYS

In this module, we'll look through the various gateways
provided by the OCI Virtual Cloud Network Service. But
before we get into any of the gateways discussion, we need to
understand the concept of route tables. You saw this earlier
in the demo introducing the Virtual Cloud Network, where we
created and used a route table in the VCN service through the
console. But let's talk about what a route table is.

A route table contains rules about how IP packets can travel
to different IP addresses out of the VCN. So right here you
can see there is a route table which is attached to the subnet.

Now what does the route table consist of? It consists of a set
of route rules. Each rule specifies a destination CIDR block, and
it specifies the route target, the next hop for the traffic that
matches that CIDR. So what exactly do we mean?

So if we look at this particular subnet, it's a public subnet. It
can be regional or it can be AD-specific; in this case, I'm just
using a regional public subnet.
And in the route table there is an entry which says 0.0.0.0/0,
which means any IP address, or all IP addresses. So this is
my destination CIDR: packets destined for any IP address
need to go to the internet gateway. And this is what is
being shown here -- all traffic destined for the internet gateway.

So what I do is I create an internet gateway, a managed service
provided by the OCI Virtual Cloud Network Service. And right
here you can see, because of that particular entry in my route
table, my packets can actually go to the internet, and they
can also come back from the internet. So somebody could actually
access -- if it's a web server, they could access my web server
running in the public subnet.

Now, important considerations to keep in mind. Each subnet
uses a single route table -- each subnet can only have a
single route table. You can specify it when you're creating
the subnet, or, if you're not sure what kind of route table
to use, you can edit it later. The route table is
used only if the destination IP address is not within the VCN's
CIDR block. What that means is you don't require any
route rules in order to enable traffic within the VCN itself.

So as you can see here in this particular graphic, there is no
rule, like a local rule, required here for routing traffic within
the VCN itself. It's actually done implicitly. You really don't
need to write a rule like that.
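The routing decision just described can be sketched roughly like this (a conceptual model, not OCI's actual implementation; when several rules match, the sketch picks the most specific CIDR):

```python
import ipaddress

# Conceptual sketch of subnet routing (not OCI code). Traffic staying
# inside the VCN CIDR is routed implicitly; everything else is matched
# against the route rules, most specific prefix first.
def next_hop(dest, vcn_cidr, rules):
    ip = ipaddress.ip_address(dest)
    if ip in ipaddress.ip_network(vcn_cidr):
        return "local (implicit, no rule needed)"
    matches = [c for c in rules if ip in ipaddress.ip_network(c)]
    if not matches:
        return "no route (packet is dropped)"
    best = max(matches, key=lambda c: ipaddress.ip_network(c).prefixlen)
    return rules[best]

rules = {"0.0.0.0/0": "Internet Gateway"}
print(next_hop("10.0.1.5", "10.0.0.0/16", rules))  # stays inside the VCN
print(next_hop("8.8.8.8", "10.0.0.0/16", rules))   # sent to the gateway
```

With an empty rule set, the second lookup would report "no route": exactly the black-holing behavior mentioned below when a gateway exists but no rule points at it.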

When you add a gateway, whether it's an Internet Gateway, a
NAT Gateway, or one of the other kinds of gateways, you have to
update the route table for the subnets that use those gateways.
Otherwise, you can create a gateway, but the packets will
black hole -- they have no way to get to the gateway. So
again, this seems pretty logical, but that's how the route table
works.
So having looked at route tables briefly, let's talk about the
different gateways which are supported in the OCI VCN
Service. The first gateway is called Internet Gateway. And as
the name specifies, it's a gateway which takes traffic in and
out from a public subnet.

So as you can see here, as we had seen in the previous slides,
we have a public subnet here, it can be regional or AD-specific,
and there is an instance which has a public IP. Now
it can be a web server, or it might be a load balancer you are
running yourself -- those use cases. But it has a public IP.

And of course, if it's a web server, we want users to access it,
or if it's a load balancer, we want users to access it. So we
create this thing called an Internet Gateway. It's a managed
service, so you really don't need to care about the bandwidth
or, you know, [INAUDIBLE]. All of those are taken care of by Oracle.

You create this internet gateway. And then using that
gateway, the packets can go in and out to this instance in the
public subnet. Now, an important thing to keep in mind: you can
only have one internet gateway per VCN.

So it means that if you have different public subnets -- let's say
you have a public subnet where you are hosting your bastion
servers, and you have one for web servers, one for something
else -- and all of those subnets are part of your single VCN,
then all of their traffic goes through the one internet gateway
which is available for the VCN.

As we saw in the previous slide, after creating an internet
gateway, you need to add a rule in the VCN's route table which
sends packets destined for 0.0.0.0/0 -- zero meaning all IP
addresses, every IP address -- to the internet gateway. And if
you do create an internet gateway and add a rule, and you have
a web server, then you can start communicating with the web
server. So that's the first use case, where you have, let's say,
a web server or a load balancer and you need to access it
from the internet.

Now there is another use case: if you have an instance
in a private subnet that does not allow traffic from the
internet to reach it, then there is no way for IP packets to
reach the internet. We need a mechanism for sending those
packets out -- for example, you have a database and you
need to get some patches -- and then also route the replies
back correctly.

In networking lingo this is called Network Address Translation.
In OCI, we do this through a managed service called the
NAT Gateway. The NAT Gateway accepts any IP packets bound for the
internet coming from the private subnets, sends those
packets on to their destination, and then sends the returning
packets back to the source.

So let's see how it works. A similar example setup as before,
but now, instead of a public subnet, we have a private subnet
here. So this can be hosting your database, for example.
And the database needs to be regularly patched and updated
with patches from the internet.

Now because it's a private subnet, as we have said, there is no
way for packets to go to the internet and get a response back.
Because it's a private subnet, you're not using an
internet gateway, so you cannot reach the internet. So you have
this managed service called the NAT Gateway, which gives the
whole private subnet access to the internet without assigning
any public IPs. So this is all private IPs; you really don't
need a public IP, and you don't need an internet gateway.

So what this means is hosts can initiate outbound
connections, and of course, those packets will come back.
But it does not allow inbound connections -- meaning if I'm
on the internet here and I want to ping my database server,
I cannot do that. The NAT Gateway would block those requests.

And again, it's a managed service, so we take care of things
like [INAUDIBLE] and bandwidth. You really don't have to manage
those yourself. And as we have been seeing in the previous
slides, the rule here in the route table is basically saying
all the packets destined for any IP address should go through
the NAT Gateway. So you're sending all the traffic from this
private subnet to the NAT Gateway. And then, if you're getting
patches or updates or something, those packets come back, and
all of that is managed by the NAT Gateway.

An important thing to keep in mind: you can have more than
one NAT Gateway on a VCN, though a given subnet can route
traffic to only a single NAT Gateway. So this is a little
different from the Internet Gateway.
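The outbound-only behavior just described can be sketched as a toy connection tracker (illustrative only -- real NAT also rewrites addresses and ports, which this sketch skips):

```python
# Toy sketch of NAT-gateway semantics: outbound flows are remembered,
# and inbound packets are delivered only when they are replies to a
# flow an inside host initiated.
class ToyNat:
    def __init__(self):
        self.flows = set()  # (inside_ip, outside_ip) pairs we initiated

    def outbound(self, inside_ip, outside_ip):
        self.flows.add((inside_ip, outside_ip))
        return "forwarded"

    def inbound(self, outside_ip, inside_ip):
        if (inside_ip, outside_ip) in self.flows:
            return "reply delivered"
        return "dropped (no inbound-initiated connections)"

nat = ToyNat()
print(nat.outbound("10.0.2.2", "8.8.8.8"))     # DB fetching patches
print(nat.inbound("8.8.8.8", "10.0.2.2"))      # the reply comes back
print(nat.inbound("203.0.113.9", "10.0.2.2"))  # unsolicited probe: dropped
```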

The third use case is around this concept called Service
Gateway. Now what exactly do we mean by that? Let's say,
like in the previous example, you have a database server
which is running here in a private subnet. So this is again a
DB that's running in a private subnet.

But now, instead of getting patches, what you want to do is a
backup. And the best place to back up, let's say, our database
is Object Storage. But the Object Storage Service is a public
service -- it has a public endpoint.

Now from this instance, you cannot reach Object
Storage, because you would need a public IP address. So a
typical workaround many customers use is to assign a public
IP here, and then they can access Object Storage. Now that's
not a secure design; you should never have a public IP
assigned to a database server.

So how do you go about doing the backups while still leveraging
the benefits of Object Storage? What you do is you
create this managed service called a Service Gateway. Again,
we take care of [INAUDIBLE], again we take care of
bandwidth, so you don't have to worry about those. And using
the Service Gateway, any traffic from the VCN that is destined
for any of the supported OCI public services uses the
instance's private IP address for routing -- you don't need a
public IP. The traffic goes over the OCI internal network
fabric; it never traverses the internet, even though you are
accessing the public OCI services. So it's a very secure
design, and you can still leverage all the benefits of public
services.

Now how does this work? Similar to the previous examples,
you have a route table here. But now, instead of giving a
specific CIDR block, you provide a service CIDR label.

There are two kinds of labels available today. For
example, if you're going to use Object Storage, you could
specify the regional Object Storage service here, or you
could specify all services in the region. In the latter case,
if in future there are other services you want to access, you
could actually do that, because you have access to all the
supported OCI services through your service gateway.

The last design pattern is around use cases where you have a
private subnet here -- it might be a database -- but now, instead
of going to the internet, you are going to your own customer
data centers. So this can be, let's say, where you have your DNS
running on-prem and something in the cloud, like your database,
wants to access it. Or you have an on-prem environment from
which you want to migrate data. So you need to connect to that.

So for those use cases, we again have a managed service,
called the Dynamic Routing Gateway. It's a virtual router that
provides a path for private traffic between the VCN and
destinations other than the internet. So you're not going to the
internet, so you're not using the Internet Gateway or the NAT
Gateway -- or, for that matter, the Service Gateway going to OCI
public services -- but you're going to your on-prem
environments.

So in this case, you can use the Dynamic Routing Gateway to
establish a connection. And there are two different
mechanisms for doing that. One is using site-to-site
VPN. And the second is a dedicated private connection called
FastConnect. We'll cover these in subsequent modules on
connectivity.

But as the graphic is showing here, through the DRG, now
your database can communicate with your on-prem
environments. Now, one thing we haven't seen earlier: you create
the DRG, you attach it to the VCN, and then you have to add a
rule here, right? And the rule is very similar to what we have
been discussing earlier -- all the packets destined for any IP
address have to go through the DRG for this particular
subnet, since you're basically sending all the traffic through
the DRG here.

Now the DRG is a little bit different from the other gateways
we have looked at. The DRG is a standalone object. You must
attach it to a VCN after you create the DRG, and a VCN and a DRG
have a one-to-one relationship, meaning a single VCN can only
have one DRG, and one DRG can be attached to a single VCN
at a time.

So let's quickly summarize all the network connectivity
options we saw in this module. The first one is around
letting instances connect to the internet and receive
connections from it -- bidirectional traffic to and from the
internet. For that you would use the Internet Gateway.

If you want instances to reach the internet and of course get
those packets back, for things like updates, but not have
inbound connections initiated from the internet, you have the
NAT Gateway, which is basically doing the network address
translation.

If you want your hosts in the VCN to privately connect to
OCI public services -- for example, Object Storage -- while
bypassing the internet, with traffic all going through Oracle's
network backbone, you would use the Service Gateway. And then
finally, if you want the hosts in your VCN to connect to
your on-prem environment, again for private traffic, you
would use the Dynamic Routing Gateway.
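The four patterns above can be condensed into a small decision helper (a study aid based on this summary, not an official Oracle decision matrix; the destination strings are made up for illustration):

```python
# Study-aid sketch of the four gateway patterns summarized above.
def pick_gateway(destination, inbound_from_internet=False):
    if destination == "internet":
        # Bidirectional traffic: Internet Gateway; outbound-only: NAT.
        return "Internet Gateway" if inbound_from_internet else "NAT Gateway"
    if destination == "oci-public-services":
        return "Service Gateway"  # private path over Oracle's backbone
    if destination == "on-premises":
        return "Dynamic Routing Gateway"  # site-to-site VPN or FastConnect
    raise ValueError(f"unknown destination: {destination}")

print(pick_gateway("internet", inbound_from_internet=True))  # web server
print(pick_gateway("internet"))                              # DB patching
print(pick_gateway("oci-public-services"))                   # backups
print(pick_gateway("on-premises"))                           # hybrid setup
```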

So these are the four gateways, with four distinct design
patterns. In the next module, we'll look at a couple of
demos where we create an internet gateway and a NAT
Gateway. And then subsequently, we'll also talk about
peering and transit routing, which completes the set of all
the network connectivity options available with the OCI Virtual
Cloud Network service. Thanks for watching this lecture. If you
have time, please join the next lecture, where we do a couple
of demos on these gateways. Thank you.

5. VCN DEMO1

This is part one of a two-part demo series. In this one, we
create a public subnet, then we launch a compute instance in
it, install a web server on it, and then communicate with the
web server using an internet gateway. So it's a pretty
straightforward demo. Hopefully it will give you a sense of
how things work with the OCI Virtual Cloud Network Service.

So let me jump to the console. Until now, we have
created a couple of networks within OCI in the US East
region. So we're still in the US East region. Let me create
another network as per the slide I had. So it's a DemoVCN,
and I will choose a CIDR block of 10.0.0.0/16, as it was on
the slide. The remaining parameters I'm not changing, and I
create a virtual cloud network.

Now I just created a VCN. There is no subnet, there are no
internet gateways, et cetera. So the first thing I need to do
is create a subnet. So let me go ahead and create a subnet,
call it SubnetA as it was on the slide. For the CIDR block,
I'm going to choose a smaller block out of that 10.0.0.0/16
range, which is the 10.0.1.0/24 range.

Now right here, I'll choose the default route table. We just
discussed what route tables do: they have rules for routing
the packets out of the VCN. And for subnet access, I will
choose a public subnet, because I'm going to create a compute
instance in here, and then I'm going to run a web server on
it. All right, so public is fine.
And then down below, I'm going to choose a default security
list. Security lists are nothing but virtual firewalls which
determine what kind of traffic can flow in and out of the
subnets and the VCN. For this discussion, we have not really
gone and discussed what security lists do and what they look
like, but for now, let's just go in and create this particular
subnet.

So everything looks good. SubnetA is a regional subnet, even
though it's a multi-AD region; that's fine. The CIDR block, the
default route table, it's a public subnet, and then there's a
default security list. So I click here and I create the subnet.
So I just created a simple subnet in a VCN.

Now let me go to the compute section of the console and
create an instance in this particular subnet. Right here I
can see I have one running instance and a couple of
instances that I have terminated. So I'll create an instance
and call it Web, for web server. I can choose different
operating systems; I'll choose Oracle Linux.

I'll choose AD1 -- it's a multi-AD region. I'll choose a virtual
machine; I could choose bare metal, and we'll talk about all
those in the compute module. I'll choose a machine which has
one core. And right here, it's asking me to choose the VCN I
just created, which is DemoVCN. And the subnet we just created
was SubnetA. So that's all perfect.

Now right below, I could decide not to assign a public IP
address. But since it's a web server, I want to [INAUDIBLE]
into it and install a web server on it, so I am OK with
keeping a public IP address. But in many cases, even if it's a
web server, you might put it behind a load balancer, and you
might not need a public IP address.
So it's just a demo. So I'm going to assign a public IP address
here. If you didn't do that, you could come back later and you
could assign either an ephemeral or a reserved public IP, you
could do that. But I'm going to assign a public IP address
here.

There are some other options, I'm not going to touch them.
And right here, I have to paste SSH keys. Now I already have
my SSH keys, which I am using. So I just paste it there. And
then I can click Create here. And now my instance would be
created. It would take a few seconds and my instance would
be up and running.

As the instance is coming up, there is one thing which I need
to do, because it's a web server: creating an internet
gateway. Remember we talked about the internet gateway -- it
takes your traffic in and out of the VCN to the internet. So we
need to do that, and then we need to add a rule for all
packets to go to the internet gateway.

So let me come to the VCN we just created. And first things
first, I need to create an internet gateway. You can see there
is no internet gateway here. So I click here and I provide a
name; I could say it's Internet Gateway, IGW. And there, my
internet gateway is created.

And if I click on the route table -- remember, I'm using the
default route table -- there are no rules here. So let's go
ahead and add a rule. The first thing it asks is, what is my
target type? Remember we talked about four of these
gateways: Internet Gateway, NAT Gateway, Service Gateway,
and Dynamic Routing Gateway. We have not talked about
Local Peering and Private IP as targets. So let's pick
Internet Gateway as the target type.
And then it's asking what kind of packets can go to this
internet gateway. Basically, I want to route all packets for all
IP addresses to this internet gateway, so I will choose
0.0.0.0/0 as the destination CIDR block. And then I choose the
internet gateway I just created. So it's rather
straightforward, really simple.

So now I've created an internet gateway, and I have added the
rule for sending all the packets to the internet gateway inside
this route table which my subnet is using. So hopefully by
now my compute instance will be up and running. If I come
here, I can see my web server is up and running -- it's in the
running state.

And right here I can see it has a public IP address. So I take
that public IP address. I'm using Windows Subsystem for
Linux on a Windows 10 machine; you could be running Linux or
Mac, or even on Windows you could use something like Git Bash.

So right here, I'm going to SSH into this machine. And
because it's an Oracle Linux instance, my username is opc.
And this is the public IP address I just got. And right here,
it does the SSH into my instance. Now let's go ahead and
install a web server, Apache, here. It shouldn't take a lot of
time.

Oops -- all right. Now another thing which I need to do here is
open port 80, because remember I'm going to install
Apache here. So we need to open port 80 on my Linux
instance; otherwise, it would block the traffic to this
[INAUDIBLE] port. I will say open port 80, and that's TCP.
That's great. Then I will just reload it, just to make sure. All
right.
So this part is all done. I installed Apache, I started the
server, and then I opened port 80. Now, if I use this public
IP address, I should be able to go into the browser and bring
up the web server. But as I am doing this, you can see that it
is connecting and nothing is happening -- there is a spinning
wheel here and nothing is happening.

So one concept we have not yet talked about is the concept of
security lists. As I said earlier, security lists are nothing
but virtual firewalls which decide what kind of traffic is
allowed in and out of our subnet and our VCN.

And for this example, I was using the default security
list. Now, if I click on that -- and we'll discuss this more in
subsequent modules -- you can see that it has a bunch of
ingress and egress rules.

Ingress basically says what kind of traffic is allowed in. And
you can see that for all IP addresses, port 22 is open. So from
anywhere, I can SSH using port 22. And for egress, you can
see that all kinds of traffic are allowed, for all IP addresses
and all protocols. So by default, outbound traffic is always
allowed from the subnet, but you could disable or delete this
rule if you don't want that behavior.

Now for ingress, incoming, I see port 22 is open, but other
ports are not open. There are a couple of rules for ICMP
traffic, so you could ping, et cetera. But there is no rule for
port 80. So let's go ahead and change that.

So the first thing here: the CIDR is fine; I want this rule
to apply to all IP addresses. My source port can be
anything, but my destination is port 80. So this basically says
that traffic coming from anywhere -- any IP address, any
source port -- going to port 80 as the destination, I want to
allow in.

And I could also decide about stateful versus stateless; we'll
talk about that subsequently. But right now it's a stateful
rule, which means if the traffic is coming in, the firewall
remembers the state. Return traffic will always be allowed out
from port 80, so I don't have to explicitly open that.
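The stateful behavior just described can be sketched as a toy rule evaluator (illustrative only, not OCI's actual security-list engine): once inbound traffic matches an allowed ingress rule, the reply for that connection is permitted without an explicit egress rule.

```python
# Toy sketch of a stateful ingress rule: tracked connections get their
# replies out automatically.
class StatefulFirewall:
    def __init__(self, open_ports):
        self.open_ports = set(open_ports)  # allowed ingress dest ports
        self.connections = set()           # tracked (client, port) flows

    def ingress(self, client, dest_port):
        if dest_port in self.open_ports:
            self.connections.add((client, dest_port))
            return "allowed"
        return "blocked"

    def egress_reply(self, client, src_port):
        # No explicit egress rule needed for replies on a tracked flow.
        if (client, src_port) in self.connections:
            return "allowed (stateful)"
        return "needs an egress rule"

fw = StatefulFirewall(open_ports=[22, 80])
print(fw.ingress("203.0.113.7", 80))       # a web request comes in
print(fw.egress_reply("203.0.113.7", 80))  # its response goes back out
print(fw.ingress("203.0.113.7", 443))      # no rule for 443 yet
```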
So I'll click Add Ingress Rule here. And now you will see that
I can bring up my web server in my browser. I could refresh
this and change my homepage, et cetera, but you can see that I
am able to access my web server.

So that's a very simple, quick demo, where we created a VCN,
we created a public subnet, and in it we created this
compute instance and installed Apache on it. We
created an internet gateway. We also added a rule in the
route table to send traffic from this web server to the
internet using the internet gateway. And the last thing, which
we don't have on the slide, is we opened the virtual firewall
port, port 80, so the web server could talk to the outside
internet and back.

Thank you for watching this demo. This is part one. In part
two, we will make it a little bit more advanced: we create a
bastion host, then we create a private subnet, install a
database server, and using a NAT gateway we try to get some
patches to the database server. Thank you.

6. VCN DEMO2
[INAUDIBLE] look into installing a database on an instance in a
private subnet, and using a NAT gateway get that instance
some patches from the internet. Now, from the setup done in our
previous demo, we have a web server running in a public subnet,
SubnetA. In this particular demo, we are going to create a
bastion host, and we are going to cheat a little bit, because
it's just a demo.

We're going to create the bastion host in the same subnet as
the web server. In a production, real environment, you would
have the bastion in its own public subnet so you can secure it
better. But here we're going to have a bastion, then we are
going to have a database running in a private subnet, and
we're going to showcase a NAT gateway.

So let me just jump to the OCI console. Until now, we have
just the one subnet in the VCN. So the first thing I'm going
to do is create a route table and a security list for my
private subnet. Now, when creating a subnet, you need to
provide a route table and a security list. You don't strictly
need them at the time of creation -- you can always update
them later on -- but it's easier for me to do it now because
I'm creating a private subnet.

So right here, I'll come in and provide the name as SubnetB.
The CIDR block is 10.0.2.0/24, as is on the slide. And
right here, I can choose my private route table. And I will
choose private subnet, because that's basically what I'm
planning to use as the place for my database instance. So
private subnet is fine.

And then below here, I will choose a private security list. And
there you go, I just created SubnetB, a private subnet, to
host my database instance. Now I will go into the compute
console, and I will create a database instance here. So I
would call this my db, or database.

Oracle Linux is fine, AD1 is fine, virtual machine is OK. Right
here, you can see that this is the subnet I just created. It's a
private subnet, and it chose that private subnet. I don't need
a public IP, because I'm going to install a database here, so I
really don't need a public IP.

So it's saying do not assign a public IP. That's great. All
these options, I'm not going to touch. I need an SSH key here.
I think I have it here -- let me just copy it. [INAUDIBLE]
just make sure that I have the whole key copied.

And right below, I don't need any of the advanced options, so I
click create instance. Now my database is getting provisioned.
In a few seconds, you will see that it just has a private IP
address -- it doesn't have a public IP, as we designed.

Now as that is happening, let me go to my network and do a
couple of things so that I can SSH into the private instance
and show the NAT gateway in action. The first thing: I created
a security list here, and you will recall that I need to open
certain ports in order for traffic to come into that
particular subnet.

So in this case -- if you look at the slides, I'm connecting to
this private subnet through the bastion host -- at the least, I
need to open SSH ingress from that particular CIDR. Otherwise,
I would not be able to connect from the bastion to the
database instance running here.

Now, because we are cheating a little, I really don't have a
10.0.3.0/24 as is on the slide here -- both the bastion and the
web server are running in the same subnet. So I would open it
for that subnet's CIDR. I could just say TCP port 22 if I'm
only doing SSH, but I will open it for all protocols, so at
least I can do things like ping, et cetera.

I would also leave egress open for everything. That's fine;
basically, I'm allowing all the traffic out here. So I just
changed my security list, and if I go back to the compute
instances, hopefully my database is up and running right
now.

So if I click on the database, I can see that it has a private
IP address, and it doesn't have a public IP, as we had
discussed. Because it's running in a private subnet, we don't
really want a public IP address here.

So right here, you can see that I'm using an SSH proxy
command to go through my bastion host -- the bastion host
public IP, 129.213.120.162 -- to my database instance. And
right here, I'm using the private IP 10.0.2.2. I click yes,
and now you can see that I'm right inside my database instance.

Now if I try pinging Google, it doesn't let me do that.
Because it's in a private subnet, the packets have no way to
go to the internet. Even though in my security list I said all
packets are allowed, it doesn't allow pinging because there is
no path for the packets. It's black holing -- the packets
cannot go out.

So this is where the NAT gateway comes in. Let me just go
back to my VCN. And first things first, I need to create the
NAT gateway. So I click NAT gateway here, and I will give it a
name -- NAT gateway -- and create one. It's that simple.
Now you will see that the NAT gateway is created and it gets a
public IP address. So how do I associate this NAT gateway with
the subnet? That association is done in the route table which
we are using for the subnet.

So if I click on private route table, remember this one is the
one which is used by the private subnet, subnet b. So if I
click on that, I need to add a route rule. So first thing it asks
here is what is my destination. My destination is a NAT
gateway.

What kind of traffic do I want to send to NAT gateway? I want
to send traffic destined for all IP addresses. So I do that, and
then right here, my destination is NAT gateway, and I add
this route. And as soon as I do that, you will start seeing that
now I am able to ping Google here. I can ping because my
NAT gateway is working.
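The route rule we just added can be thought of as a longest-prefix-match lookup: traffic to another host in the VCN matches the narrower local route, while everything else falls through to the 0.0.0.0/0 rule pointing at the NAT gateway. Here is a minimal sketch of that idea in Python-- this is an illustration only, not OCI code, and the rule list and target names are made up for the example:

```python
import ipaddress

# Hypothetical route table for the private subnet in the demo.
ROUTE_RULES = [
    ("10.0.0.0/16", "local VCN routing"),  # intra-VCN traffic
    ("0.0.0.0/0",   "NAT gateway"),        # everything else goes out via NAT
]

def route_target(dest_ip):
    """Return the target of the most specific (longest-prefix) matching rule."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(cidr), target)
               for cidr, target in ROUTE_RULES
               if ip in ipaddress.ip_network(cidr)]
    if not matches:
        return None  # no route: packets are black-holed
    net, target = max(matches, key=lambda m: m[0].prefixlen)
    return target

print(route_target("10.0.2.2"))  # another host in the VCN -> local routing
print(route_target("8.8.8.8"))   # pinging Google -> NAT gateway
```

Before the 0.0.0.0/0 rule existed, an internet-bound destination matched nothing, which is exactly the black-holing behavior in the demo.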

So literally, what is happening is we have this instance here
through the Bastion host. I am connected here, but then this
path is the one I just created. And through this, the traffic is
flowing out. So hopefully, this gives you a good
understanding of how NAT gateway works and how you could
use it. In this case, I'm just pinging Google, but you could use
it to install a database-- let's say, MySQL-- or get patches
from the internet.

Thank you so much for watching this demo. In the next
module, if you have some time, please join the next lecture
where we talk about peering and transit routing. Thank you.

7. PEERING
Hello, everyone. Welcome to this module on peering. My name
is Rohit Rahi, and I'm part of the Oracle Cloud Infrastructure
team.

In this module, we'll look into the two different kinds of
peerings supported by the OCI virtual cloud network service.
Let's start with local peering. As the name suggests there on
the slide, local peering basically means you are connecting
multiple VCNs within the same region.

So the graphic shows here you have an Oracle Cloud data
center region, and we have a VCN 1 with an address space of
10.0.0.0/16, and we have a VCN 2 with an address space of
192.168.0.0/16. And we want to connect these two VCNs.
That kind of peering connection is called local peering.

Why do we do this? The reason we do this is the resources
within these two VCNs-- both in the same region-- can
communicate using private IP addresses so they don't have to
go through your public IP addresses. Now there can be
multiple scenarios for this. You can have a management VCN
which talks to multiple VCNs within the company.

You could have a bastion host talking with multiple VCNs.
You could have a load balancer or a DMZ which is in a
public-- in a separate VCN, and then it could be talking to
other VCNs which are not in the DMZ. So there are various
scenarios why you would use local peering all within the
same region.

Now how does this work? The first thing we do is we create
this gateway, as we saw in the previous module, called a local
peering gateway, which routes the traffic between these two
VCNs. So you can see here there's a local peering gateway which
gets created on each of the individual VCNs.
And as we have seen in the previous modules, once you
create this local peering gateway, you have to specify the
routes for the IP packets, otherwise they don't know where to
go. So for this local peering gateway on this VCN, the target is
192.168/16, so all the packets should go there. And for the
other VCN and the local peering gateway, the address is
10.0/16, so the traffic goes onto the other VCN. Pretty logical.

And the other thing you would do here-- which is not on the slide
because we have not covered it yet-- is you also need to open the
virtual firewalls so you can let the traffic into this VCN from
that VCN, and vice versa. There are a couple of things you
need to understand-- they come up in the exams also. The first
one is the two VCNs in the peering relationship cannot have
overlapping CIDRs.

So if you have an overlapping CIDR here-- let's say a
10.0.0.0/24 address space here-- it would not work because the /24 is
a subset of the /16 there. So the peering-- you will not be
able to establish a peering connection, and so on and so
forth. So that's number one.
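You can check the no-overlap rule up front with a few lines of Python using the standard ipaddress module. This is an illustration only-- OCI itself validates this when you try to establish the peering:

```python
import ipaddress

def can_peer(cidr_a, cidr_b):
    """Two VCNs can peer only if their CIDR blocks do not overlap."""
    return not ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(can_peer("10.0.0.0/16", "192.168.0.0/16"))  # True: safe to peer
print(can_peer("10.0.0.0/16", "10.0.0.0/24"))     # False: the /24 sits inside the /16
```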

Number two is peering is not transitive. What that means is if
you have this VCN peered with this particular VCN, and let's
say there is a VCN 3 which is peered with VCN 2, it doesn't
mean that VCN 1 is peered with VCN 3. It doesn't mean that
VCN 1 can reach VCN 3 by transiting
through VCN 2. That's transitive routing-- it's not supported.

So if you have three VCNs-- 1, 2, and 3-- 1 and 2 are peered,
2 and 3 are peered, and you want to peer 1 to 3, you have to
create an explicit peering connection between 1 and 3.
Transitive is not supported. Now what is remote peering?
Same concept, but now you're peering-- you're connecting
VCNs in different regions. Why would you do that? The most
obvious use case is disaster recovery. You want to do some
kind of DRs. You have your VCNs connected, and so you
could take backups, you could do DR between regions.

So same concept as before. Now you can see two different
regions, Ashburn and Phoenix, let's say, in the US. And you
have VCN 1 here-- same address space as before-- VCN 2, but
now they are in different regions. Now to do this, it's a little
bit more involved process, because remember, anytime your
traffic goes out of the VCN, you have different kinds of
gateways. This time, the gateway we use is the DRG, which
we saw earlier.

And what we do in the Dynamic Routing Gateway is we create
this capability called remote peering connection. So we create
this remote peering connection, and that acts as a connection
point for a remotely peered VCN. And same as before, you
need to provide the path for the IP packets in the route table.
So for this DRG, the path is to go to this particular VCN and
vice versa.

Like local peering, the two VCNs in the peering relationship
cannot have overlapping CIDRs. And again, transitive routing
is not supported with remote peering either. Well, with that, we have
already looked at the different gateways the VCN supports today:
the internet gateway, bidirectional traffic to and from the internet;
the NAT gateway, unidirectional traffic from private subnets to the
internet using network address translation; the service gateway,
traffic going from a private subnet to OCI public services like
Object Storage; and the Dynamic Routing Gateway, taking
private traffic to your on-prem environment. Now there
are two more additional connectivity options we just saw in
this module-- the local peering gateway, where you connect two
VCNs in the same region, and the remote peering connection,
which is part of the DRG, where we privately connect two VCNs
in different regions.

The idea here is you are connecting VCNs privately so
resources can communicate using private IP addresses. So
let's quickly look into a demo of local peering gateway. This is
the setup we had in the previous two demos, where we had a
web server, a bastion server, both in public subnets, and a
database in a private subnet.

And we were using internet gateway to go out, and we were
using a NAT gateway here to get some updates for this
database running here. Anyway, we are not going to touch
any of these subnets and the hosts running within those
subnets for this demo.

What we're going to do is, in the same region, we are going to
create another VCN and a private subnet, because I really
don't need a public subnet here. And we are going to create an
instance within the private subnet, and then we are going to
set up a local peering gateway so we can peer these two
networks. Now for illustration purposes and to save some
time, I already have this instance running.

And it's a public subnet, so I can SSH into it, but this one is
a private subnet. And this VCN and this particular subnet I
have already created, and I've already instantiated this
particular instance so we don't spend time doing these
things, which seem pretty logical-- the way the setup should
be.

So let me jump to the Console. And right here, you can see
DemoVCN is the one we just talked about. So this is 10/16,
and it has subnet A and subnet B as we were discussing in
the slides [INAUDIBLE] subnet A.

This is the web server. I'm going to SSH into the web server.
And then there is no local peering gateway created here, so
you can see there's nothing here. And if I go jump quickly to
the other VCN-- I just created DemoVCN2-- address space of
192.168.1.0/24.

It has a private subnet here, but again, you see no local
peering gateway-- nothing has been created here. Now if I
quickly jump to my compute instances, you can see from the
previous demo, we have a web server running here. So this is
the web server we are going to use to SSH to test the private
connectivity and ping the other instance.

So let me just quickly SSH into this instance. And this is the
web server we have been using for the other demos. Now I
also have my instance running in this private subnet here. So
I already have it created.

You can see that it runs in this private subnet, and this is
part of the other VCN with this 192.168.1.0/24 address
space. So I picked this private IP here. You can see it doesn't
have a public IP.

And if I try to ping this IP right now from this particular
server, I get no response, which seems logical-- I'm trying
to go from here to here. There is no local peering connection.
Here, there are no route tables, security lists opened, so of
course, this is not going to work.

So if I see here, ping is just black holing. The packet is black
holing-- I'm not able to ping that particular server. So let's
change that. So first things first, I'll go to my VCN, and for my
DemoVCN, I need to create-- I need to open my security list
and create a local peering gateway-- change my route table.

So let's do all that. So this is the default security list I'm


using. So let me add an ingress rule here, which says that all
the traffic coming from 168 I'm going to allow. And right here,
this is the address space from my second VCN, and I'll say all
protocol because I'm going to do ping. So let me just open it
here.

Pretty straightforward. If I don't open the firewall rules, I'm
not going to be able to ping the other VCN and the instance
running in the subnet there. Now next thing I'm going to do is
I'm going to create a local peering gateway here, because both
sides, you need local peering gateway in order for the peering
to work right.

So I said local peering gateway for DemoVCN. Pretty
straightforward. And then last thing I need to do is I need to
send the packets to that local peering gateway. So I already
have an entry in my route table here for the internet gateway.

And if you recall, this is basically taking the packets from
here to the internet gateway. So let's quickly go ahead and
add another rule for local peering gateway. So I say local
peering is my destination. And what is my destination CIDR
block I want to reach? It's 192.168.1.0/24.

So this is the path for packets destined for this IP-- I want
them to go through the local peering gateway, which is
straightforward. So I click Add Routes here, and then pretty
much, I am done with this particular VCN. So let me go to the
other VCN and do the same kind of things.
So first things first, I need to open the security list here. So
for this one, I need to open the security list for traffic coming from the
other VCN. So this is the other VCN's address space. I could
actually limit it just to the subnet. I could do that, but right
now just for illustration purposes, let me just open the whole
VCN.

So I add it here, and then I need to create a local peering
gateway here. And then in my route table, you can see there
is no route here, so I need to make this the destination for all
the traffic destined for this particular VCN. So I will pick
this.

And now a couple of things remaining-- I need to come to my
local peering gateway and establish a peering connection.
We created all these assets, but we have not joined the
local peering gateways themselves. So I pick DemoVCN,
and then I pick the DemoVCN local peering gateway, and I
say establish peering connection.

And you will see it says pending. And within a few seconds,
this will change to peered. If I go back to my other DemoVCN,
you can see here that it says connected to a peer. And now if
I come here, bingo.

I can see that I am able to ping my instance using private IP
address, because my local peering gateway is connected here.
So you can see now I'm pinging from web to this instance
using this local peering gateway, and it's working here.

If I go ahead and remove the peering connection-- let's
say I terminate one of these-- to terminate, I have to first
remove the rules. So if I go here and, let's say, just remove this
rule-- I just stop the peering-- you would see that my ping
stopped right away. So I lost that connection and I'm not able
to ping anymore.

So thank you for watching this demo. Hopefully, it gave you a
good flavor of how local peering works. Remote peering works
similarly, but it's a little bit more complex-- you have to
create the DRGs, et cetera, on both sides.
Thank you for watching this demo. If you have time, please
join the next lecture where we talk about security lists and
network security groups. Thank you.

8. SECURITY VCN

My name is Rohit Rahi, and I'm part of the Oracle Cloud
Infrastructure team. In this module, we're looking into
security lists and network security groups, the two
mechanisms by which you can enforce security within the
Virtual Cloud Network service. Now in the previous modules,
we have already looked at security lists.

When we were running a few demos, we actually went
through these and we opened certain ports for some of the
subnets and instances. But let's look into this in more details
now. So as you can see here, I have a VCN with 10.0.0.0/16
address space. And I have three subnets. These can be
regional, or if you're in a multi-AD region, these can be
specific to ADs.

To keep the picture clean, I don't have the ADs shown here,
but these definitely are running in-- whether it's a single AD
region or a multi-AD region. So what is a security list? A
security list is a common set of firewall rules associated with
a subnet and applied to all instances launched inside a
subnet.

Security lists consist of rules that specify the types of traffic
allowed in and out of the subnet. So if you see here-- let me
just use the highlighter here-- if you can see here, there's a
security list here, there's a security list here, and there's a
security list here. First thing to notice here is the security list
is applied at the subnet level. So that's important.

Second thing is if you see the rules themselves, all three security
lists have the same rule-- they could be different rules. It basically
says ingress, meaning incoming traffic, I'm allowing all traffic
to come in on port 80. And egress, meaning outgoing traffic, I'm
only allowing traffic to this particular subnet on port 1521.

This is, again, a sample ingress. And of course, your situation
will be different depending on what your requirements are.
Now to use a security list with a particular subnet, you
associate the security list with the subnet either during the
creation process or later.

We saw this again in several demos. When we were creating a
subnet, we could attach our own security list if we already had
one, or we could use the default one. You can always change
that later on as well.

Security lists apply to a given instance, whether it's talking
with another instance in the VCN or a host outside the VCN.
So this is really important-- if these two guys want to talk to
each other, you would still need to open the security list here
and here. Otherwise, they cannot communicate with each
other, which totally makes sense because it's enforcing more
security. Otherwise, you would be talking about a completely
different security model here.
And then you can decide whether a given rule is stateful or
stateless. We'll talk about this in a little bit more detail in a
couple of slides. So that was all about security lists, and
something which we have seen in the demos. Now let's talk
about another security mechanism, which is called Network
Security Group, or NSG.
NSG provides a virtual firewall like security lists for a set of
cloud resources that all have the same security posture. So
what do I mean by that? Well, what I mean by that is NSG is
applied-- like security lists, it consists of a set of rules that
apply only to a set of virtual network interface cards of your
choice in a single VCN.

So as you can see here, there's an instance-- there's a VNIC
here and there's an instance, there's a VNIC here, there's an
instance, there's a VNIC here. This instance and this instance
have the same network security group. And you can see there
NSG-A which says that I'm allowing traffic incoming on port
80.

This guy here has a different network security group which
says that I'm allowing TCP port 22. So I can SSH into this
particular instance. So now what you are literally doing is,
even though these two instances are in the same subnet,
literally, you have different security rules for them.
And you can allow a different kind of traffic to these instances
even though they are in the same subnet.

How is it different than security lists? If it was a security list,
both these instances needed the same security posture.
Meaning either port 80 you would have to open, and it would
open for all instances, or you would have to lock down port
80. Or in case you wanted to open port 22, it would be open
or closed for both instances, because remember, security list
is applied at the subnet level, and all the instances in that
subnet share the same security posture.
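The scoping difference can be sketched in a few lines: NSG rules attach to the VNIC, so two instances in the same subnet can end up with different effective rules. All names and rule sets below are hypothetical, purely for illustration:

```python
# Ingress ports each (hypothetical) NSG opens.
nsg_rules = {
    "NSG-A": {80},  # web tier: allow ingress on port 80
    "NSG-B": {22},  # admin host: allow SSH only
}

# Two instances in the SAME subnet, with different NSGs on their VNICs.
vnic_nsgs = {"web-1": ["NSG-A"], "admin-1": ["NSG-B"]}

def ingress_allowed(instance, port):
    """A port is open if any NSG attached to the instance's VNIC opens it."""
    return any(port in nsg_rules[nsg] for nsg in vnic_nsgs[instance])

print(ingress_allowed("web-1", 80))    # True
print(ingress_allowed("admin-1", 80))  # False: same subnet, different posture
print(ingress_allowed("admin-1", 22))  # True
```

With a security list instead, the rule set would be keyed by subnet, so both instances would get the same answer for every port.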

So currently, a bunch of services support network security
group. And this list is always expanding, so always check the
documentation as to where we are. Now there is another
difference between this and NSG and security list. When
you're writing rules for an NSG, you can specify an NSG--
Network Security Group-- as the source or destination.

Contrast this with security list rules, where you can specify
only a CIDR. You can also specify a service for both of
them in case you are going to something like a service gateway, but
otherwise, you typically would always go with a CIDR. In
case of NSG, though, you can specify another NSG as the source or
destination. So it just makes life a little bit easier, and it
lets you support more complex scenarios.

Now our recommendation is to use network security groups
because of the precise reason I just talked about. When you
are using network security groups, you can separate the
VCN's subnet architecture from your application security
requirements. Like I was saying here, both instances are on
the same subnet, but they have different security
requirements. So you could do that because it gives you a
little bit more flexibility.

Now you could use security lists alone, like we have done in
the demos. You could use network security groups alone, or
you could use both together, as you can see in this particular
picture here. So it has a couple of security lists, and it has a
couple of network security groups.

If you have security rules that you want to enforce for all VNICs
in a VCN-- all instances in a VCN-- the easiest solution is to
put the rules in one security list, and then associate that
security list with all subnets in the VCN. Pretty
straightforward. We have done this in a couple of demos. If
you remember, we had a demo where we had a web host
instance and a bastion instance. And we said just for
simplicity of the demo, we wanted the same security rules for
both of those.
In real cases, you would separate them out, but in our demo
we did that, and we used the same security list for both of
those instances. Now if you choose to use both security lists
and network security groups-- this is very important-- the set
of rules that apply to a given VNIC is the union of these
items. It's very important-- it gets confusing.

It's always the union of these items, meaning the union of the
security list and the network security group. So let's see what
that means. What this means is whatever security rules you
have in the security list associated with the VNIC's subnet--
meaning the subnet the instance is in-- and the security
rules in all the network security groups, they all apply.

And a packet is allowed in or out if a rule in any of the relevant
lists and groups allows the traffic. So for example, if the security
list has port 80 open and network security group doesn't
have port 80 open, and you apply both of them, your traffic
would still be allowed because port 80 is open. So the easiest
way to do this is if you have both-- you're using both, and
you want traffic to be not allowed, then the easiest way to do
is not have rules in either of them.
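That union behavior can be sketched as a toy check: a packet is allowed if any rule in the subnet's security list, or in any of the VNIC's network security groups, allows it. The rule sets below are hypothetical, for illustration only:

```python
# Hypothetical ingress ports opened by each mechanism.
security_list_ports = {80}  # the subnet's security list opens port 80
nsg_ports = set()           # the attached NSG opens nothing

def packet_allowed(port):
    """Union semantics: one permissive rule anywhere is enough."""
    return port in (security_list_ports | nsg_ports)

print(packet_allowed(80))  # True -- allowed even though the NSG is silent
print(packet_allowed(22))  # False -- neither the list nor the group opens it
```

This is why, when using both mechanisms, closing a port means removing the rule from both places.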

If you have a specific rule, meaning a specific protocol, a
specific destination or source, and a specific port, make sure
you check both. Or if you don't want that complexity, just
pick one and use one. So you either use security list or use
network security groups.
Otherwise, you will be troubleshooting situations where it's
always a union, and the packet is allowed if any of those
rules allow the traffic. So it's really important.
Straightforward, but you need to keep in mind. Now there are
two things we didn't talk about earlier-- one is the stateful
security rules and stateless.
What do these mean? So stateful basically means that if an
instance receives traffic matching a stateful ingress rule,
the response is automatically tracked and automatically
allowed regardless of any egress rules, and vice versa. So what
do I mean by that? So here, look-- there is a port 80-- we
have been looking into this example.

It's a stateful rule. So usually, a source can be anything-- all
IP addresses, protocol TCP. Source port can be anything-- I
don't care. Destination port is 80, so the traffic is coming in
here. Because it is stateful, this traffic is automatically
allowed. So you don't have to do anything. You don't have to
write an egress rule-- the traffic is obviously always allowed.

Default security list rules are always stateful. And so when
you create a rule like this in the Console-- and we'll go in and
look into this-- what it's basically saying is this is stateful.
See, there is no like dropdown or something. Say source
CIDR is any address-- protocol, TCP.

Source port, I don't care. It might be my laptop, it might be
my phone, it might be some other endpoint. And destination
is port 80. Now if port 80 allows the traffic in, it would also
allow the traffic out.

So if you go into the browser, you put the IP address, you can
see a page come up. You're sending the traffic in and you're
receiving the traffic out. You don't have to write an egress
rule specifically.

Stateless is just the opposite. In stateless, response traffic is
not automatically allowed. To allow the response traffic for a
stateless ingress rule, you must create a corresponding
stateless egress rule. Similar example as before-- you allow
traffic at port 80. Now this traffic is not allowed if you don't
have this rule.

So you have to write this rule explicitly, and you will have to
say that now my destination CIDR is going to be any IP. My
source port is now 80 because my traffic is coming from here, and
the destination port can be anything. If you don't write this rule,
you will basically have traffic come in, but traffic will not go
out.

So if you do that, let's say, with a web server, you put the
address in there on your browser and you would not get a
response page back. Basically, what is happening here is
there is a mechanism called connection tracking. And in case
of stateless, you basically are saying that we don't want
connection tracking.
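A toy model of that difference, assuming a simple connection-tracking table. This is an illustration of the concept only, not how OCI implements it:

```python
def response_allowed(stateful, port, egress_rules, tracked_ports):
    """Can the response to an ingress connection leave the instance?"""
    if stateful and port in tracked_ports:
        return True           # connection tracking lets the reply back out
    return port in egress_rules  # stateless: must match an explicit egress rule

tracked = {80}  # a request came in on port 80 and was tracked

# Stateful rule: the response leaves even with no egress rules at all.
print(response_allowed(True, 80, set(), tracked))   # True
# Stateless rule: the response is dropped until you add the egress rule.
print(response_allowed(False, 80, set(), tracked))  # False
print(response_allowed(False, 80, {80}, tracked))   # True
```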

What's the advantage of this? These are better for scenarios
where you have a large number of connections. Because at
the end of day, stateful you are tracking your connections. So
there is a limit to how many connections you can have.

If you have scenarios where you have something like a load balancer,
or you have big compute with lots of connections, it's better
to go with stateless. Of course, the number is pretty big, but
it's good to go with stateless in those cases. So with that, let
me just quickly jump to the console and show you a couple of
quick demos.
So until now, we have been doing a bunch of these demos.
And we have a web server, we have a bastion server, and all
that, and we showed you a demo where we used security lists
without-- we had not covered security lists by that time, but
let's go and look into some more details right now.

So it's a web server-- pretty straightforward. And we want to
access this web server. How am I able to access this web
server? Of course, it's in a public subnet, it has internet
gateway.

If those two things were not there, then, of course, I could not
access it. And third, it also has a public IP. So again, if it
didn't have a public IP, there was no way I could bring it up
in my browser. Really straightforward. Now if I go into my
DemoVCN--

[AUDIO OUT]

--[INAUDIBLE] one of the subnets, I can see security list here.
And the first thing I will show here is the security list has
traffic open for port 80. Really straightforward-- we did this
earlier. Let me just remove this port 80.

Now if I come back here, and now if I click connecting-- bring
up the IP address, you can see there is a button here which
says connecting, and it doesn't come up. There's a spinning
wheel here. Basically, it means that I cannot access that
instance anymore because my traffic was coming at port 80.

I stopped that rule-- I removed that rule, so the packets are
black holing even though internet gateway is allowing. And
you can see that the site cannot be reached. So if I go back to
my slides on the demo, if you see this particular slide, my
internet gateway is there-- so my internet gateway is here.
This is fine.

I'm able to-- this has a public IP here. So there is an IP
address here-- I can bring it up. But because my port 80 is
not open here, I cannot get the traffic in. Really
straightforward-- nothing complex.

Now let me go back and create-- instead of a security list, let
me create a network security group, because we have not
used this until now. So it's rather straightforward. First, I
need to provide a name. So I will say this is NSG1-- or NSG
for DemoVCN.

And then right here, it says stateless or stateful-- same as a
security list-- and ingress or egress-- again, same as a security
list. And right here, you could see I could specify a CIDR, I
could specify a service. So this is for cases where I want to go
to the service gateway.

So things like if I want to use the Object Storage-- all public
services-- all services in this particular region. It's a regional
thing here. Or I could use another network security group.

Now in case of security list, this option doesn't show up. You
cannot use another security list as the source or the
destination. So I pick CIDR. That's fine. I could say use all IP
addresses.

IP protocol-- I will say TCP is fine. Source can be anything,
and destination is port 80. So literally, I'm doing the same
thing, but now I'm going to use it through a network security
group, not through a security list.

So if I go back to my compute instance-- this is the web server we
have been using-- I need to go to the VNIC, and then I need to
attach this network security group to the VNIC. We covered
this in the slides. So if I bring up my training compartment, I
just created this network security group.

Now I'll save my changes. And now if I go back to the browser
and I hit Refresh, you can see that I can get the page back.
So literally, what we did is we removed the security list and
we created a network security group. We attach it to the
VNIC, and now I can see my-- I can bring up my web server.

So hopefully, this gives you a quick overview of how security
lists work, how network security groups work. Those are two
flavors of virtual firewalls that you can use to enforce security
in your VCNs. Thank you for joining this module. If you have
time, join the next lecture where we talk about some of the
features related to DNS, and then we put all these things
together. Thank you.

9. DNS

Welcome to this module on default VCN components and internal DNS. My name is
Rohit Rahi, and I'm part of the Oracle Cloud Infrastructure
team. So first things first-- this comes up in the exam as
well-- your VCN automatically comes with some default
components.

There is a default route table, there is a default security list,
and a default set of DHCP options, and we have been using
some of these until now. We have been using the default
route table, we have been using the default security list. We
have not really looked into the default set of DHCP options.

You cannot delete these default components. However, you
can change their contents. You can change what routes go in
there, and you can create more of each kind of thing. So you
can create more route tables or security lists. And again, we
have done that in the previous modules.

So what is this internal DNS? Well, the VCN private Domain
Name System-- DNS-- or internal DNS, enables instances to
use host names instead of private IP addresses to talk to each
other. So there are two kinds of options which are available.
One is called internet and VCN resolver.

This is the default choice for new VCNs. So if you don't do
anything, you would be using this kind of internal DNS
resolver. If you want to use something else, there's an option
to use custom resolvers. Now a custom resolver lets instances
resolve the hostnames of hosts in your on-premises network
through IPsec VPN or FastConnect.

In many cases, you have your database instances running--
they need to go to your on-prem environments where you're
running your own DNS servers, and do the name
resolution using those DNS servers and not the native ones
which are available in OCI. Now when you create a VCN,
or a subnet, or an instance, you can specify a DNS label. If
you don't specify one, we create one for you. So the way it
works is that a VCN has a VCN DNS label, followed by .oraclevcn.com.
You cannot delete the .oraclevcn.com part-- it always stays-- but of course,
you can change the label.

For a subnet, similarly, you have more options. You can decide
what the subnet DNS label looks like. And the VCN DNS label
comes after it, because a subnet is part of a VCN. And then of
course, the .oraclevcn.com part, you again cannot delete. So a host's fully
qualified domain name is hostname, subnet label, VCN label,
.oraclevcn.com.

Seems pretty logical. Now the instance fully qualified domain
name resolves to the instance's private IP address. So in
previous examples, we had a local peering gateway. We were
pinging a couple of instances using their private IP addresses.
Instead of using the private IP address, we could have used
instance fully qualified domain names. We could have done
that, and we could have pinged those instances using that.
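The naming pattern composes like this-- a small sketch where the labels and the private IP are made up for illustration; the point is simply that the FQDN resolves to the private address:

```python
def instance_fqdn(hostname, subnet_label, vcn_label):
    """Compose the internal DNS name: host.subnet.vcn.oraclevcn.com."""
    return f"{hostname}.{subnet_label}.{vcn_label}.oraclevcn.com"

# A hypothetical lookup table standing in for the VCN resolver.
private_dns = {instance_fqdn("db-1", "subnetb", "demovcn"): "10.0.2.2"}

fqdn = instance_fqdn("db-1", "subnetb", "demovcn")
print(fqdn)               # db-1.subnetb.demovcn.oraclevcn.com
print(private_dns[fqdn])  # the instance's PRIVATE IP, never a public one
```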

Now one thing to really keep in mind is there is no fully
qualified domain name for public IP addresses. So for
example, if you want to do SSH with hostname, subnet
label, VCN label, .oraclevcn.com, and have it resolve to the public IP, that is not
supported. I believe it's on the roadmap, but right now, the
instance fully qualified domain name-- just keep in mind--
resolves to the instance's private IP address.

So with that, let me quickly jump to the console and show
you a couple of these in action. So if I go to my networking
tab and create a new virtual cloud network. So let's say this
is my demoVCN2. Of course, I need to specify my CIDR block.

Right here, you can choose to use DNS host names in this
VCN. This is required for instance hostname assignment if
you plan to use VCN DNS or a third-party DNS. This choice
cannot be changed later on.

Right here, you can see there is a DNS label, and it's derived
from the name I specified, but you could change this. So you
could say this is mydns, and now you will see that the domain
we are using is mydns.oraclevcn.com. So I
created this particular VCN. And right here, if I go into the
DHCP options, you can see that-- because I'm not
using a custom resolver-- it is using the internet and VCN
resolver.

Now I can change that. If I click Edit here, I can choose the
internet and VCN resolver, or I can choose a custom resolver.
If I want a custom resolver-- something like this-- I
can set it here, and I can save this change. Of
course, I'm not actually going to use this resolver.

But it just shows you the options which are available. In
many cases, this would point back to your on-prem
environment where your DNS servers are running. The last
thing I want to show you: if you go to the compute section
and create an instance, the same thing happens. I'm just
going to create a temporary instance. If you scroll all the
way down and click on Advanced Options, you can see some
of these options here. You can specify your hostname,
which goes into your DNS, et cetera.

So you could provide those values here as you are creating
the instance. So hopefully, this gives you a good idea of the
different internal DNS options available. The thing to
keep in mind is you have the option to use a custom resolver
in case you are running your own on-prem DNS servers.

Thank you for joining this lecture. In the next module we'll
bring together all the concepts we have learned in the VCN
and conclude the lecture series. Thank you.

10. PUTTING IT ALL TOGETHER


Welcome to this module on putting together all the pieces we
learned in the VCN lecture series. My name is Rohit Rahi,
and I'm part of the Oracle Cloud Infrastructure team.

So let's review all the concepts we have gone through until
now. First thing: subnets can have one route table-- we
looked into this-- and multiple security lists. These
numbers, of course, are the defaults, but you can always change
them by opening up a request with us.

Route tables define what can be routed out of the VCN. You
don't need a local rule, because the traffic is already allowed
inside the VCN. But it basically decides what kind of traffic
can be routed out of the VCN. Private subnets are
recommended to have individual route tables to control the
flow of traffic outside the VCN.

And not just their own route tables, but also security lists. So
here, things are much cleaner. You don't mix and match
private and public subnets. All hosts within a VCN can route
to all other hosts in a VCN. There is no local route rule
required.

Now this is great, because otherwise you would be writing
local route rules to allow the traffic. The thing to keep in
mind is, even though this is true, the hosts within two
subnets cannot talk to each other unless you open specific
ports by making changes to the security lists.

Security lists manage connectivity north-south-- incoming
and outgoing traffic to the VCN-- and east-west-- internal
VCN traffic between multiple subnets. So this is what I was
just saying: for traffic to flow between subnets, you still
need to operate on the security lists and make changes. OCI
follows a whitelist model, which means that you must
explicitly whitelist the traffic flows you want.

By default, things are locked down. Like I was saying, even if
two instances are in subnets within the same VCN, the traffic
is not automatically allowed between them. In fact, instances
cannot communicate with other instances, even in the same
subnet, unless you permit them to.

So you can test this out. And this goes back to the
whitelisting model we were talking about earlier. Final thing--
we looked into this in the previous module-- Oracle
recommends using network security groups instead of
security lists, because network security groups let you
separate the VCN subnet architecture from your application
security requirements.
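The whitelist model can be sketched as a default-deny rule match. The rule and packet shapes below are invented for illustration-- this is not how OCI represents rules internally-- but it captures the behavior: a flow is allowed only if some ingress rule explicitly matches it.

```python
import ipaddress

# Minimal sketch of a whitelist (default-deny) ingress check.
# Each rule whitelists one protocol/port from a source CIDR.
def ingress_allowed(rules, src_ip, protocol, dst_port):
    for rule in rules:
        if (rule["protocol"] == protocol
                and rule["dst_port"] == dst_port
                and ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["source"])):
            return True
    return False  # nothing matched: dropped, nothing is allowed implicitly

# A web-server-style list: allow TCP 80 from anywhere, nothing else.
rules = [{"source": "0.0.0.0/0", "protocol": "tcp", "dst_port": 80}]
print(ingress_allowed(rules, "203.0.113.7", "tcp", 80))  # True
print(ingress_allowed(rules, "203.0.113.7", "tcp", 22))  # False: not whitelisted
```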

What does that mean? Two instances in the same subnet
with security lists need to have the same security posture,
meaning the same incoming and outgoing traffic on the same
specific ports. In the case of a network security group, that's
not a restriction.

You could have different kinds of traffic-- different ports,
different protocols, different destinations and sources--
supported even though those two instances are in the same
subnet. So let's look into this in a graphical, easy-to-understand
way. We have gone through this in several demos, but it's
good to recap here once more.

So we have a region, and we have a VCN. Again, for the sake
of simplicity, I'm not showing ADs, but it can be a multi-AD
region or a single-AD region. We have two subnets here: a
frontend and a backend. And again, we have seen this in the
previous demo.
The frontend can be a web server, the backend can be a
database server. And of course, the web server is talking to
the database server to store and retrieve data. There could be
more tiers-- again, for the sake of simplicity, I'm keeping it
pretty high-level.

And of course, first thing: because the frontend is a public
subnet, it has its own route table. And the backend is a private
subnet-- it has its own route table. Similarly, the frontend has
its own security list, and the backend has its own security
list. Pretty straightforward.

So what is the frontend's requirement? It's a web server, it
has a public IP, and it needs to reach the internet. People
should be able to ping the web server. So its traffic goes to
the internet gateway.

Now in the case of the backend, we don't want to allow that
kind of traffic-- we don't want traffic to go to the
internet directly. But it could go to a NAT gateway or a
service gateway. Let's say it wants to get updates and patches
from the web-- that goes through the NAT gateway. In the case
of the service gateway, it probably wants to go to Object
Storage to do a backup, for example.

So what do the route table entries look like? For the frontend
route table, basically, we are allowing packets to all
addresses to go to the internet gateway. We've looked into
this in the previous modules-- pretty straightforward.

What kind of security lists are we using here? For ingress, we
are saying traffic from any IP arriving at port 80 is allowed--
it's a web server. And for traffic going out, we are locking
down the default egress to everywhere: we are saying the
traffic can only go to this particular CIDR, and only on port
1521.

Now one thing you will notice here: I'm still using a security
list, but you could have used a network security group here.
There is no requirement that says you have to use a security
list. Now in the case of the backend, again, I'm saying traffic
to any IP address can go to a NAT gateway, a service gateway,
or even a DRG.

If, let's say, this is my on-prem environment and I want this
traffic to go there, I could do that through the DRG. So this is
the kind of traffic which is flowing here. Right now, I'm not
doing any of this, so this section is blank, which
is fine. But you could have traffic going through different
gateways.
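A route table lookup like the ones above can be sketched as a longest-prefix match over the rules. The gateway names and table contents here are illustrative, mirroring the backend example: internet-bound traffic via the NAT gateway, the on-prem CIDR via the DRG.

```python
import ipaddress

# Sketch of route evaluation: the most specific (longest-prefix) matching
# rule decides which gateway the packet is sent to.
def next_hop(route_table, dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    matches = [(ipaddress.ip_network(cidr), gw)
               for cidr, gw in route_table.items()
               if dst in ipaddress.ip_network(cidr)]
    if not matches:
        return None  # no rule: the packet cannot leave the VCN (black-holed)
    return max(matches, key=lambda m: m[0].prefixlen)[1]

backend_routes = {
    "0.0.0.0/0": "nat-gateway",   # updates/patches from the web via NAT
    "10.0.0.0/16": "drg",         # on-prem address space via the DRG
}
print(next_hop(backend_routes, "10.0.4.9"))  # drg
print(next_hop(backend_routes, "8.8.8.8"))   # nat-gateway
```

Traffic to other hosts inside the VCN never consults these rules, since, as noted above, no local route rule is required.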

And for my security list, I'm saying my ingress is this
particular CIDR block: the traffic is coming from the frontend
on port 1521. And again, an important thing to keep in mind:
I'm blocking all the other traffic. By default, the security
list allows all traffic out but nothing in, except for a couple
of ports-- if you're using the default security list, port 22
is open, et cetera.

But in this case, I'm locking down all the egress traffic-- I
don't want any traffic to go out from here except to the
frontend. And because it's all stateful, if my packets are
coming in on port 1521, the replies are also allowed back out
on that connection, so I don't have to write a separate egress
rule. If this were stateless, I would have to do that.
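That stateful-versus-stateless distinction boils down to a simple decision, sketched below with invented names: with a stateful rule, the reply to an established connection is allowed back automatically; with a stateless rule, an explicit rule for the return direction must exist.

```python
# Illustrative sketch (not an OCI API): whether return traffic for a
# connection is allowed under a stateful vs. stateless security rule.
def return_traffic_allowed(stateful, connection_established, return_rule_present):
    if stateful and connection_established:
        return True  # connection tracking lets the reply through
    return return_rule_present  # stateless: only an explicit rule helps

print(return_traffic_allowed(True, True, False))   # True: stateful covers it
print(return_traffic_allowed(False, True, False))  # False: needs its own rule
```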
Really straightforward setup. We have seen this in the
previous demos. So hopefully, it gives you, again, a recap of
some of the concepts we have gone through.

Well, with that, thank you so much for joining this lecture
series on virtual cloud network. Virtual cloud network is one
of the core concepts you'll need to understand in cloud, and
of course, for OCI. I hope this was useful. If you have time,
please join me in the next lecture series on compute. Thank
you.

CONNECTIVITY – VPN CONNECT & FAST CONNECT


1. CONNECTIVITY TO ON-PREMISES NETWORKS

Hello, everyone. Welcome to this module on connectivity to
on-premises networks. My name is Rohit Rahi, and I'm part
of the Oracle Cloud Infrastructure Team.

So until now, we have talked about Virtual Cloud Network,
and we have looked primarily at creating an internet gateway.
If you recall from my previous module on Virtual Cloud
Network, we had a couple of demos, one on the internet
gateway, and the other one on a NAT gateway. And we looked
at things like reserved public IPs, ephemeral public IPs, et
cetera.

So with these, we definitely looked in quite a lot of detail
at how you can connect to the public internet, either using
an internet gateway or a NAT gateway. There are two more
options, which we really didn't discuss in a lot of detail in
the previous module on Virtual Cloud Network. And these
options are VPN Connect and FastConnect.
What do I mean by VPN Connect? VPN Connect is basically
an option where you connect two different sites using IPSec
protocol. So we will look into more details. And there are two
main options which are available here. The first is OCI
managed VPN service, which is offered for free. So you really
don't pay for anything except the underlying resources.

And then, the second option is you can run your own
software VPN. If you have a Linux VM, you could install your
own software like LibreSWAN, and you could run it
yourself. But remember, the first option here, the OCI-
managed VPN service, is offered for free. It's a standard VPN
between two different sites, one site being your Oracle
Cloud environment, the other being your on-premises
environment.

The third option we have is FastConnect. FastConnect, as the
name implies, gives you consistent, fast network performance,
and you can get speeds in 1 Gbps and 10 Gbps increments.
And the whole idea of FastConnect is that it is a private
connection. So think about this as having your own
high-occupancy vehicle lane to the cloud: your traffic
doesn't go through the internet. You get your own direct,
dedicated connectivity-- that's number one. And then, number
two, like I said, you get really fast, consistent performance--
you can go to 10 Gbps and even higher, if you want. And then,
of course, there's an SLA around that.

So the first option, the public internet, we looked into in the
previous module on Virtual Cloud Network. In this particular
module, we are going to look in more detail at VPN Connect
and FastConnect.
So before we get into the details, let's look at some of the
basics of VPN, right? With a VPN, you make an end-to-end
connection between two private networks over a public
network, in a secure fashion, using a standard protocol like
IPsec. So this is basically how the VPN works: you have two
different networks here-- private network one and private
network two-- and they want to create an end-to-end
connection over an unsecured channel like the internet. Using
a VPN, they can do that.
Now, what are some of the key characteristics? The first
thing in a VPN is this thing called a tunnel. A tunnel is
nothing but a way to deliver packets through the internet to
RFC 1918 addresses, meaning private network addresses.
There is authentication, where, basically, you have to prove
who you are. The really important piece here is encryption:
packets need to be encrypted so they cannot be sniffed on
the public internet. So the packets can arrive unencrypted
here, but as soon as they enter this tunnel, they are
encrypted.

And there are two different kinds of routing supported here.
One is static routing, where you configure a router to send
traffic for particular destinations in preconfigured
directions. The other is dynamic routing, where we use a
routing protocol like BGP to figure out what path the traffic
should take.

For a while, OCI supported only static routing, but now we
support both static and dynamic routing. And I'll show this
when we get to the demo.

Now, with IPsec, there are two modes. One is called transport
mode, where IPsec encrypts and authenticates only the actual
payload of the packet, and the header information stays
intact. The other mode is called tunnel mode, which we talked
about here, where IPsec encrypts and authenticates the entire
packet. After encryption, the packet is then encapsulated to
form a new IP packet that has different header information.

Now, in the case of OCI, we only support tunnel mode; we
don't support transport mode. So if this comes up in the
exam or something, just remember that OCI supports tunnel
mode and not transport mode.
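The difference between the two modes can be shown with a toy sketch. This is not real IPsec-- encrypt() below is just a stand-in for ESP encryption, and the packet strings are invented-- but it illustrates what each mode hides: transport mode leaves the original IP header readable, while tunnel mode wraps and hides the entire inner packet behind a new outer header.

```python
# Toy illustration only: string "packets", reversal as fake encryption.
def encrypt(data):
    return data[::-1]  # placeholder for real ESP encryption

def transport_mode(header, payload):
    # Original header stays in the clear; only the payload is protected.
    return header + "|" + encrypt(payload)

def tunnel_mode(header, payload, outer_header):
    # Entire inner packet (header + payload) is protected, then wrapped
    # in a new outer header (e.g. CPE -> DRG endpoint addresses).
    inner = header + "|" + payload
    return outer_header + "|" + encrypt(inner)

pkt = transport_mode("src=10.0.0.5,dst=172.0.0.7", "data")
tun = tunnel_mode("src=10.0.0.5,dst=172.0.0.7", "data", "src=cpe,dst=drg")
print("10.0.0.5" in pkt)  # True: inner addresses visible in transport mode
print("10.0.0.5" in tun)  # False: inner addresses hidden in tunnel mode
```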
Well, that was the basics of VPN. Let's look at some other
details. As you recall from the previous module, we did look
into this gateway called a Dynamic Routing Gateway. So
anytime you have your on-prem environment and you want to
connect to the on-prem environment-- let's say you have a
database running here-- you would use this gateway in OCI
called a Dynamic Routing Gateway. It's a virtual router that
is sitting at the edge, and it provides a path between your
private subnets and your on-prem environment.

So think about this as covering any situation where you want
to connect other than through the internet-- you would use a
DRG. Now, the DRG is used both for VPN and for FastConnect,
so both these options are terminated on the edge. And we'll
look into this as we get into more details in the demo. After
attaching a DRG, you must add a route for the DRG in the
VCN's route table to enable traffic flow. This is pretty
standard-- we saw this earlier in the VCN module. If you
don't add this route, the packets get black-holed: they have
no way to figure out that they have to go to on-premises
through this gateway. So you have to add this rule.

And then, of course, you can have security lists or network
security groups here to secure your subnets and the
resources running in them. The DRG is a standalone object:
you create it separately, and then you must attach it to a
VCN. A VCN and a DRG have a one-to-one relationship-- one
VCN can only have one DRG, and one DRG can be attached to
only a single VCN at a time.

So let's look at some of the details of the VPN Connect
feature itself. Like I said earlier, VPN Connect is a managed
VPN service which securely connects your on-prem
environments to your OCI VCN through an IPsec VPN
connection. VPN Connect ensures secure remote connectivity
via industry-standard IPsec encryption.

Bandwidth is dependent on the customer's access to the
internet and general internet congestion; typically, we have
seen the bandwidth can be quite good. But the typical use of
VPN Connect is to run proofs of concept. If you have any
requirements around running enterprise apps, our guidance
is to use FastConnect rather than VPN Connect. VPN Connect
is offered for free, so there is no charge for using the
service. And like I said, customer proofs of concept usually
start as a VPN, and then they morph into FastConnect designs
as you move into production environments.

Now, an important thing to keep in mind: OCI provisions
redundant VPN tunnels located on physically and logically
isolated tunnel endpoints. So coming out of the DRG, you get
two different tunnels, and we manage those tunnels on
physically and logically redundant hardware-- it's not the
case that both tunnels run on the same hardware, so that if
that hardware goes down, both tunnels go down. The whole
idea of having these redundant tunnels is that you get high
availability, and we manage that on our side. We give you two
tunnels, and our recommendation is for you to use both
tunnels.
Now, when I go into the demo, I'll show-- I'll just provision
one tunnel, just for the lack of time, but you can actually see
that we provide two tunnels when you create IPSec
connection.

Now, how does this whole thing work? Well, let me just run
this animation here. So first thing here is you have your on-
prem environment, right? And we have this particular
address space. In fact, I'll be using the same address space in
my demo.

The first important thing to notice in the on-prem environment
is this thing called the CPE object. The CPE object is the
virtual representation of the actual network device-- which
some people call the CPE, Customer-Premises Equipment--
that terminates the IPsec tunnel. It could be a router, it
could be a firewall, or it could be a virtual appliance
supporting IPsec running on-premises, right?

And on our documentation page there is a long list of
supported devices, along with the configuration for those
devices. So there's a good chance whatever device you are
running today is supported by the OCI VPN Connect service.

Right here in the middle is the internet-- the unsecured
channel. And you want to connect to your resources running
here-- let's say I'm running a database in the on-prem
environment-- using the VPN.

So what you do is create this tunnel, as we talked about
in the previous slide, and you can use either static routing or
dynamic routing. Your resources are here, and you create this
DRG. And of course, you have to set up the route table so the
packets can go from here to the DRG. You also create your
network security groups or security lists to secure your
subnets and your instances, right?

So this is a pretty standard setup. In the previous module, we
had an internet gateway and a NAT gateway, and we went out
to the internet through those gateways, right? In this case, we
are going to the DRG, and then we are going to an on-prem
environment-- not out to the internet.

Now, how does this whole thing work? Well, there are a bunch
of steps, and it's actually rather straightforward, but let me
just quickly run through them. The first thing you do is
create a virtual cloud network. Pretty straightforward--
we have seen that in the previous lecture series on VCN.

Then we create a DRG. We have not created a DRG until now,
but we'll go and do that. Then you have to attach your DRG
to your VCN. Remember, they have a one-to-one relationship,
and the DRG is standalone, so once you create it, you have to
attach it here.

Then you update your route table to send the traffic to
the DRG. Then you create a CPE object, which is basically a
virtual representation of your on-prem router, and you would
get its IP address from the router running on-premises--
whatever router you're running there will have a public IP
address. There are additional considerations, like what to do
if your CPE is behind a NAT device; it gets into more complex
details which we cover in our level 200 module. But the CPE
device, basically, will have a public IP address.
So you create the CPE object in OCI and add that public IP
address. Then, on the DRG, you create your IPsec tunnels
between the CPE and the DRG. And you could choose to use
static routes, or you could choose to use BGP routing-- you
decide what kind of routing you want. You can see there's a
static route here.
And then, the last step is to configure your on-premises
CPE router. I'm going to run through these steps in the next
module, where I will show you a tunnel being created and the
IPsec tunnels being provisioned.
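One prerequisite the workflow above takes for granted: the on-premises CIDR and the VCN CIDR must not overlap, or the static route and the return traffic become ambiguous. The demo uses 10.0.0.0/16 on-prem and 172.0.0.0/16 in OCI, which is fine. A quick check with Python's standard ipaddress module:

```python
import ipaddress

# Returns True if the two CIDR blocks share any addresses.
def cidrs_overlap(a, b):
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

print(cidrs_overlap("10.0.0.0/16", "172.0.0.0/16"))  # False: safe to connect
print(cidrs_overlap("10.0.0.0/16", "10.0.4.0/24"))   # True: would conflict
```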

So thank you for joining this lecture. If you have time, please
join me in the next lecture, where we'll talk about the VPN
Connect demo. Thank you.

2. VPN CONNECT DEMO



Hi, everyone. Welcome to this module on a quick VPN
Connect demo. As we saw in the previous module-- let me
just bring up the setup here-- we have an on-premises setup
with an address space of 10.0.0.0/16, and we have an OCI
setup with a VCN address space of 172.0.0.0/16. We will
create a DRG, update the route tables, change the security
list, create a static route and the IPsec tunnels, and then
show all this working by confirming that the IPsec tunnels
are in an up state-- up and running, right?
So let's go ahead and do the setup. Now, just for this setup, I
am running the whole on-prem side of the world in AWS, so
I'm going to skip that part. I have a LibreSWAN VM which I've
installed in the AWS environment, and I've configured the
software, et cetera. But if you want to follow along, there is
documentation right here.

So if you follow this documentation and scroll down, it
shows you how to set up LibreSWAN, and you can follow this
configuration to make sure that the on-prem environment you
are running has all the requisite parameters, et cetera.

So let me just quickly show you the things you have to do as
the workflow. We had this seven-step workflow for creating
the VPN Connect, so let me just quickly go ahead and show
that. The first thing I need is a Virtual Cloud Network on the
OCI side, right.

So I'll say this is for my VPN Connect demo, and call it
VCN Connect. And right here, I'll just create my VCN. We'll
create some subnets, et cetera, later on.

So right here, I can create my VCN, and the address space
I'm using is 172.0.0.0/16. Let's go ahead and create one
subnet, though I'm not going to spin up any instance in it.
Let me just quickly go ahead and create that subnet.

And I'm going to use the default route table, and I'm going to
use the default security list-- that's fine. And because I'm
going to access it from my on-prem environment, let me make
it a private subnet and click Create here. And now it's
created: a plain, simple VCN with one subnet, using the
default route table and the default security list.

Now, a couple of things I need to do for this demo to work: I
have to open certain ports for VPN Connect. For TCP, I need
port 4500 and port 500, and I need to do the same thing for
UDP.

All right, so we just changed my ingress rules-- nothing
complex. For TCP, we opened port 4500 and port 500. For
UDP, we opened port 4500, and we still need to open 500
here, so let me just edit here and add it.

All right, so pretty straightforward: we made the changes to
the security list. Now let me go ahead, and if I click on
Dynamic Routing Gateways, you can see that there is no DRG
available here. So let's go ahead and create a DRG. It takes
around a minute or so, sometimes less, to create, so it is
going on in the background.
As the DRG is getting created, let me go ahead and create a
CPE. The CPE, if you recall, is the virtual representation of
the actual network device running in your on-premises
environment. So I need a public IP here.

Now, like I said, I'm running this in my AWS environment,
and this is the public IP I have-- 3.230.163.217. This is the
public IP of the LibreSWAN VM which is running in AWS.

So let me just grab that public IP and put it here, because I
will need it, and create the CPE. And now my virtual
representation of that network device is created in OCI right
here. If I check the DRG, it is up and running-- it took less
than a minute.
Now, you can see here, the first thing I need to do is attach
it to a virtual cloud network-- right now, it says it's not
attached to any VCN. So the VCN we just created, let's just
attach it there: VCN Connect, Attach to Virtual Cloud
Network. Remember, they have a one-to-one relationship, so
now it's getting attached here, right?

The next thing I need to do is create my IPsec connection. If
you click on IPSec within your DRG, it points you to IPSec
Connections on the menu here. So this is my networking
menu-- a bunch of things we have already looked at-- and
there are IPSec Connections here, right?

Now, I can see I have no IPsec connections, so let's go ahead
and create one. The first thing it asks is which compartment
you are planning to use, and you have to be careful because
it defaults to the root compartment. So choose your
compartment-- the training compartment, where all my assets
are running. I'll just choose that.

It's asking me to provide a name. I'm really bad at naming, so
I'll say IPSec1. Then it asks, where is my CPE? We just
created this CPE a few moments back, so let's use that. And
then it asks, where is my DRG? We just created this DRG,
right?

And now it's asking for a static route. I could have used
dynamic routing as well, but just to keep the demo simple,
I'll use a static route here. If you click on Advanced Options,
you can see that you could actually pick BGP routing as
well-- it's a new feature.
And you could pick your IKE version-- IKE v1 or v2. Some of
these more complex things we talk about in the level 200
module. But I could have chosen dynamic routing here as
well. I'm going with static-- that's fine.

Let me just make sure that this is the static route I have.
This is my AWS VPC CIDR here, 10.0.0.0/16-- that's my
static route. And I click here, and my IPsec connection will
now be created.

Now, this takes literally a minute or so, and you would see
the connection change to a provisioned state. A couple of
things to note here: I'm using static routing, which we just
provided, and each tunnel has a public IP. The first thing is
that we create two tunnels-- this is tunnel one, and this is
tunnel two.

Then, you can see that each tunnel has a public IP address
here-- 129.213.7.49 and 129.213.6.52, two different public
IP addresses. And the IPsec status is down, of course,
because we have not set up the LibreSWAN end, we have not
done all the configuration, and it's in the provisioning stage
right now, right?

So if I click on this tunnel, I can see my shared secret here,
which we will be using for our configuration. So as this is
getting provisioned, let me go ahead and make the changes to
my LibreSWAN VM, which is running in the AWS environment.

The first thing I'm going to do is grab the IP address here.
I'm already logged into my LibreSWAN VM, which is running
on AWS-- you can see here I'm logged in. I'm using a CentOS
environment to run LibreSWAN, et cetera.

So I'm logged in here. If I check my working directory, right
now I'm in the /etc directory. So let me bring up the couple
of files we are going to use for our demo.

So the first file which we are going to use is the IPSec config
file. And if you can see here, it has certain parameters which
I was testing earlier. So there is a connection here we're
calling connection OCI1, and you can see some parameters.

There is the OCI public IP address for one of the tunnels,
right here. Then there is the OCI VCN CIDR range, which you
can see here. And the three parameters down below are all
related to AWS: this is the local VPC I'm using in the AWS
environment, this is the public IP of the LibreSWAN VM, and
this is my AWS VPC CIDR here. So pretty straightforward. Let
me just go ahead and make changes to this file.

So the only thing we need to change here, it looks like, is
the public IP for the tunnel-- the DRG IPsec tunnel public
IP. The file has .59 at the end, but the tunnel IP is
129.213.7.49, so let me just go ahead and change this to 49
here, and also put the same thing, 49, right here.

My OCI VCN CIDR is the same, my AWS side is the same, my
AWS public IP is the same, and my AWS VPC CIDR range is
the same-- none of those parameters changed. So I went
ahead and saved my file, and I can see that the IP address is
now changed here. But there is one more file we need to
change.

And if I look at my ipsec.secrets file, it's pretty
straightforward: there is just one entry with the public IP of
one of the OCI tunnels, the public IP of the LibreSWAN VM,
and then, right here, the shared secret.
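For reference, the two files being edited have roughly this shape. This is an illustrative sketch with placeholder values-- the file names and the exact parameter set vary by LibreSWAN version and by the OCI documentation you follow-- not a drop-in configuration:

```text
# IPsec config file -- one conn block per OCI tunnel (placeholders throughout)
conn oci-tunnel-1
    left=%defaultroute
    leftid=<LibreSWAN VM public IP>     # the CPE public IP (AWS side here)
    leftsubnet=<AWS VPC CIDR>           # e.g. 10.0.0.0/16
    right=<OCI tunnel 1 public IP>      # from the IPSec connection page
    rightsubnet=<OCI VCN CIDR>          # e.g. 172.0.0.0/16
    authby=secret
    auto=start

# ipsec.secrets -- maps the two endpoint IPs to the pre-shared key
<LibreSWAN VM public IP> <OCI tunnel 1 public IP>: PSK "<shared secret from the tunnel page>"
```

A second conn block with the other tunnel's public IP (and a matching secrets entry) would bring the second tunnel up as well, which is what the recap at the end of this demo points out.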

So let's go ahead and change this file as well. First I'm going
to change the IP here to 49, and then, right here, the shared
secret needs to be the one coming from my tunnel. Let me
just delete this whole thing, go back to my tunnel, grab the
secret, and copy it.

All right, so I got my secrets file updated, and now my
configuration is almost done. I can run ipsec verify, and I
can restart the service with sudo service ipsec restart.

So I restarted my service and verified ipsec again, and I see
that a bunch of these checks are OK-- if there were an error,
it would show me an error here. The thing is, the tunnels are
still being provisioned; if I go back here, I can see the
tunnels are still being provisioned.

So let me pause the video for a couple of minutes. It takes a
few minutes to provision these tunnels, and once a tunnel is
provisioned, you would see the state change from down to
up, at least for one of the tunnels, because we just
configured that particular tunnel. So let me just pause the
video here and come back once the tunnels are provisioned.

All right, so it looks like my tunnels are provisioned-- it
takes literally a few minutes. And you can see the state here:
for both tunnels, the IPsec status still shows as down.

Now, one thing we didn't do earlier when we were running
the demo is create or change the route table entries. If I
come back to the VCN we created: we changed the security
list, but we really didn't change anything in the route table.
You can see that we have a default route table with no entry
in it, so the packets have no way to get to the DRG. So let's
go ahead and change that.

So I can pick my DRG here, and my destination is my AWS
environment, and I can just add this particular rule here. So
let's do that.

And then, as we did earlier while the tunnels were getting
provisioned, let's go ahead and restart IPsec-- clear my
screen and restart the IPsec service.

So we did that, and then let's run sudo ipsec auto --status
and grep the output. You can see that my route is visible,
which shows the tunnels were created correctly: the tunnel
goes from my AWS VPC CIDR and AWS public IP to the OCI
tunnel here-- 129.213.7.49, right?

So the tunnel seems to have been created correctly. If I go back to my DRG or my IPSec connections, my connection will be up in a few seconds. Let me again pause the video-- sometimes it takes a minute or two. And once it's up and running, we'll come back.

All right, so that took a minute or so. And as you can see now, my tunnel IPSec status is up, which means that my tunnel is up and running. Now, if I had an instance running inside that subnet, I could have pinged it from my LibreSWAN VM and vice-versa to show you the connectivity. But you can see the IPSec status up here, and basically, what it means is the tunnel has been established between the on-premises CPE device, which is the LibreSWAN VM running in AWS, and my OCI DRG-- the two tunnels I have, right?

And right here, you can see some of the metrics. I can do this in less than a minute. And there is no data here yet, but you can see the tunnel state, packets with errors, et cetera, right?

A couple of things I want to quickly show here. I just configured one tunnel. If you recall, if I go back to my LibreSWAN VM configuration and bring up the IPsec configuration file, I just have one tunnel configured here, right? If I had both tunnels configured, it would keep both tunnels up and running. One tunnel is down because I just configured one.
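For reference, the single-tunnel Libreswan configuration being edited here looks roughly like the sketch below. This is an illustrative fragment, not the demo's actual file: every address, interface name, and the pre-shared key are placeholders, and the real values come from the tunnel details shown in the OCI console.

```text
# /etc/ipsec.d/oci-ipsec.conf -- one tunnel configured (placeholder values)
conn oci-tunnel-1
    left=10.0.0.5                # private IP of the LibreSWAN VM (the CPE)
    leftid=203.0.113.10          # public IP of the CPE
    right=203.0.113.200          # OCI IPSec tunnel endpoint from the console
    authby=secret
    leftsubnet=0.0.0.0/0
    rightsubnet=0.0.0.0/0
    mark=5/0xffffffff            # must be unique per tunnel
    vti-interface=vti1
    vti-routing=no
    encapsulation=yes
    auto=start

# /etc/ipsec.d/oci-ipsec.secrets -- the shared secret for this peer pair
203.0.113.10 203.0.113.200: PSK "shared-secret-copied-from-the-console"
```

A second conn block with its own mark and vti-interface would bring up the second tunnel, which is why only one tunnel shows as up in this demo.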

So it's really straightforward. This took literally 15 minutes or so. This is a good and easy way for you to test and start with a POC environment. And then, for more complex scenarios, switch over to FastConnect.

So I hope this was a useful demo. Thank you for watching this demo. In the next module, we'll talk about FastConnect. Thank you.
[WHOOSH]

3. FAST CONNECT
[ORACLE UNIVERSITY]

Hi, everyone. Welcome to this module on the Oracle Cloud Infrastructure FastConnect service. My name is Rohit Rahi, and I'm part of the Oracle Cloud Infrastructure team. So what is this service called FastConnect? FastConnect provides a dedicated and private connection with higher bandwidth options and a more reliable and consistent networking experience when compared to internet-based connections.

So the idea here is you can connect to OCI directly or via pre-
integrated network partners. Think about this as having your
own high-occupancy vehicle lane in the internet. So your
traffic doesn't go through the normal internet, which can be
unreliable because internet is a collection of networks which
are all peered together.

So the connectivity might be unreliable. You might get very inconsistent network performance. In contrast, FastConnect gives you that dedicated and private connectivity-- sort of having your own high-occupancy vehicle lane within the internet. You get port speeds in 1 Gbps and 10 Gbps increments. Some providers support only 1 and 10, and some providers support 1, 2, 3, et cetera. And you can even go higher than 10 Gbps.

There are two different ways you can use FastConnect. One is
called Private Peering, where you extend your on-premises
data centers into Oracle by using private connectivity, and
you access services running in a virtual cloud network. Or,
the other model is called public peering, where you can
connect to your on-prem environment with some of the OCI
public services such as object storage. And we'll look into
these in more detail in subsequent slides. There is no charge
for inbound/outbound data transfer, and as you can imagine,
FastConnect uses BGP protocol.

So what are the two scenarios where you would use FastConnect? The first scenario is when you are co-locating with Oracle in a FastConnect location, right? So your existing network is here, right? You are co-located with Oracle, and you just connect the edges together with BGP, right? It's a BGP peering you do here.

The second scenario is when you connect with Oracle through an Oracle provider. So this can be Megaport, or this can be AT&T, or someone else, right? And you have your own connectivity from your existing network to the provider. And then, through the provider, you connect to Oracle Cloud Infrastructure here, right?

And again, you could use Layer 2 or Layer 3 connectivity here, depending on what your provider supports. Basically, you are connecting to a network provider you already might have a connection set up with, and then you connect from that provider to Oracle Cloud Infrastructure. Using this sort of two-hop connection, you are connecting your on-prem environment to Oracle Cloud Infrastructure.

So there is this concept called a Virtual Circuit. A Virtual Circuit is nothing but an isolated network path that runs over one or more physical network connections to provide a single logical connection between the customer's edge router and the DRG in OCI.

So we talked about this here, right? So you create a virtual circuit, and it runs over these-- this might be physical connectivity here-- and it provides this connection from your on-premises environment to the Oracle Cloud Infrastructure DRG.

Each virtual circuit is made up of information shared between the customer, Oracle, and a provider, as you can recall from the previous picture, right? Make sense? It's possible to have multiple virtual circuits to isolate traffic from different parts of the organization, right?

You can have a virtual circuit for this site's CIDR. You can have another for that CIDR, right? So you could have multiple virtual circuits for reaching different parts of your organization, or you could just do multiple virtual circuits for redundancy purposes. And like we said, FastConnect uses BGP, and it can use Layer 2 or Layer 3 connectivity.

So what are these scenarios? We talked about private peering and public peering. So private peering, as we talked about, is you extending your on-prem network to the OCI Virtual Cloud Network, right? So in the Virtual Cloud Network you are running, let's say, your compute or your databases in a private network, and you're extending your on-prem network to reach that OCI Virtual Cloud Network.

The communication happens using private IP addresses, right? So the whole idea here is the emphasis on private. You are basically reaching your resources running in the VCN from on-premises using private connectivity.

Public peering, on the other hand, is for public OCI services such as Object Storage, the console, or the APIs, and these can be reached over a dedicated FastConnect connection. Why would you do that versus just reaching them over a public internet connection? Well, for the same reason why you would use FastConnect, right? It gives you that consistent, dedicated connectivity, so you get a much better experience.

As you can imagine, in public peering, you don't go through a DRG, right? Because again, you're just accessing the public services. The DRG is when you have to go through your connectivity to your VCN, right? Remember, the DRG is the gateway for your on-prem environment.

The other gateways-- we have things like the Internet Gateway, NAT Gateway, Service Gateway, et cetera-- have different use cases, depending on whether you are reaching the internet or reaching Oracle public cloud services from a VCN, where you would use a service gateway, et cetera.

So as you can see here in the picture, this is your on-prem environment, and this is the FastConnect location. And right here, if you're trying to reach your VCN from your on-premises environment through the FastConnect location, you will use a DRG, and then you will reach your VCN here, right? And you could be running compute instances here. You could be running database instances. And all this connectivity will be direct-- you are going through the private IP addresses.

In contrast, if you want to reach Object Storage, you could still do the same. You could go from your on-prem, but now you are just bypassing the whole DRG and VCN, and you're directly going to your OCI public services such as Object Storage, right?

So these are the two models which are supported, and this also comes up in exams. The question might be: if I'm using public peering, which statement is not true? And they will give you four options. And you have to remember that, in public peering, a DRG is not used, right? So just be aware of that-- the DRG is used only in the case of private peering.

All right, and here is a list of all the FastConnect providers. And there's a long list of providers. You should check the online documentation-- we are adding providers pretty frequently-- and take a look at the particular region where you are planning to use a FastConnect location and which providers are supported in that region.

All right, so with that, let's quickly jump onto the console and
show a quick demo. Now, for the demo, one thing which I
want to call out is in the demo, what I'm going to do is I'm
going to show you the connectivity. Sorry, let me just get the
slide back here.

In the demo, I'm going to show you the connectivity from here
to the provider network, to OCI. This connectivity, from your
existing on-premises environment to the provider edge, I'm
assuming that you already have this running.

[WHOOSH]

4. FASTCONNECT DEMO
Hello, everyone. Welcome to a quick demo of the FastConnect
service. So let me jump to the console.

We have been using the OCI console in several of our lecture series. So right here, if I click on the hamburger menu, I can see all the different services, and right here is Networking. And networking has an extensive set of services available. So you can see a bunch of services we have used in other modules. And right here, there is this link on FastConnect.

So let me click on that. And as you can see here, there is no connection which is available right now. So it says, Create FastConnect. And if I click on that, I get a choice between two different options. The first option is colocation with Oracle or using a third-party provider, and the second one is using an Oracle provider.

So for the first one, basically, it means that you are co-
locating with Oracle in a FastConnect location, or you're
using a third-party provider, right? And if you click on that,
you can see things like cross-connect groups, cross-connect,
link aggregation groups, et cetera. If you are interested in more details on how this works, please check out our level 200 module where we cover this and also show you a demo.

For right now, I'm only interested in that Oracle provider option. So I click on that, and it gives me a drop-down, and I can see the different providers which are available-- AT&T, CenturyLink, Digital Realty, et cetera. Even Microsoft Azure is here, so I could use Azure ExpressRoute if I want to connect to an Azure environment, and I can have a circuit running between Azure and Oracle Cloud Infrastructure.
I'm going to pick Megaport here because I have a demo with
them. And right now, it's asking me for a name on my circuit.
I'll say Test. That's fine. I'm just trying to test out certain
things. And then I get a choice between private virtual circuit
and public virtual circuit. Private virtual circuits, as we saw
on the slides, are advertised using RFC1918 addresses. And
of course, you are reaching your VCN through a DRG. So it
uses a DRG.

In the case of a public virtual circuit, you are using public IP addresses-- for example, if you're using Object Storage-- and there is no DRG which is required.

So once I provide my option-- private virtual circuit is fine-- I have to choose a DRG. Now, you will recall from the VPN Connect demo that we had already created a DRG on which we created a couple of IPSec tunnels, and we showed that demo in the VPN Connect module.

So right now, I have the DRG created, so I'll just use that. If
you don't have a DRG, you need to create one for this
purpose, right? And then I need to choose my provision
bandwidth. I'm going to choose 1 Gbps. Some providers
would go 1, 2, 3. Megaport supports 1 and 10, so I'm going to
use 1.

And right now, I need to provide the customer BGP address. I already have it; I was doing this demo earlier. If you don't have this information, you need to get the customer BGP IP address from your customer site. And for Oracle, I would choose an Oracle BGP IP address as well.
And then, right here, I would choose my customer BGP ASN. Again, you would need this from the customer side if you don't have one, right? So I provided the customer BGP IP, the Oracle BGP IP, and then I have the customer BGP ASN, the Autonomous System Number, right? So 64556-- that's the one I have.
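The customer ASN used here, 64556, falls in the 16-bit private-use ASN range, which is what you would typically use for a customer edge that has no registered public ASN. A quick sketch of that range check-- the range boundaries come from RFC 6996, not from the demo:

```python
# RFC 6996 reserves 64512-65534 as 16-bit private-use ASNs
PRIVATE_ASN_16BIT = range(64512, 65535)

def is_private_asn(asn: int) -> bool:
    """True if asn is in the 16-bit private-use range."""
    return asn in PRIVATE_ASN_16BIT

print(is_private_asn(64556))  # customer ASN from the demo → True
print(is_private_asn(31898))  # Oracle's public ASN, used later → False
```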

So I click Create, and now my virtual circuit is created from the OCI console. And you can see here it gives me a message saying: your connection is created in OCI. You need to copy this OCID and give it to the provider to provision the virtual circuit from their end. When the BGP state changes to Up, the virtual circuit is ready to test.

So I'm just going to copy it and then head over to Megaport. This is the Megaport console we have, and you can see they have a bunch of Megaport cloud routers, which is a Layer 3 virtual routing instance provided and hosted by Megaport in locations worldwide, right?

So I have a couple of these already running. We have been doing a bunch of these demos. So I'm going to just pick one here, and I'm going to create a connection. So let me just pick this one and create a connection here.

So the first thing it's asking me is what kind of connection I have, right? So it's a cloud connection. That's fine. And when I click on that, it's giving me a choice of different cloud providers. So I'm going to pick Oracle here, and then I'm just going to paste my virtual circuit ID, the one I got. And it's verifying the key just to make sure that the key is valid, et cetera, right?

And now, it gives me a choice of two ports. So there's a primary port in the US East (Ashburn) region, and there's a secondary port. And remember, in my OCI Console, if I show you, I'm in the US East region, so I'm connecting [INAUDIBLE] from there.

So primary and secondary-- I could use both ports for redundancy. But for now, it's just a demo, so I'm just going to use my primary port. So I say, Ashburn primary port is fine, and click Next. And now it's asking me to provide a name for my connection and also give a speed. So it looks like 100 Mbps is available for the demo environment, so I'll just pick that-- 100 Mbps.

And right now, it's asking me to add my IP address here, right? So let me just provide the IP address I was using for the customer BGP IP, slash 30, and I'll click Add here. And right here, you can see that it's giving me options for static routing or BGP routing.
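The slash 30 used for the BGP session is a point-to-point subnet: it has exactly two usable host addresses, one for the customer BGP peer and one for the Oracle BGP peer. A sketch with a hypothetical /30 (not the address from the demo):

```python
import ipaddress

# Hypothetical point-to-point BGP peering subnet
link = ipaddress.ip_network("10.99.0.0/30")
hosts = list(link.hosts())

print(len(hosts))  # → 2: one address per BGP peer
customer_bgp_ip, oracle_bgp_ip = hosts
print(customer_bgp_ip, oracle_bgp_ip)  # → 10.99.0.1 10.99.0.2
```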

Because I'm going to use BGP routing, I'm going to use this option, right? And Peer IP, in this case, becomes my Oracle BGP address I have on line two. And then, Peer ASN is my Oracle ASN number, right? So I'll give 31898. And you can hover over these links, and you can see some of these values here, right?

The BGP Auth, I'm going to leave it blank. And then, for my
Override MCR ASN, this is the same as the Customer ASN
Number that we had earlier, right? 64556. So I'll click Add
here. And then, I'm going to click Next and add this VXC--
virtual circuit, right?

And as you can see here, I need to order this virtual circuit, so I click Order here. And now, as I order this service, what Megaport is trying to do is provision this virtual circuit on my behalf to my Oracle Cloud Infrastructure location. I specified the US East (Ashburn) region.

And now it's deploying, and this would take a few minutes. It typically takes anywhere from five minutes to 15 minutes, sometimes even shorter than that. And you can see here, it's in the process of deploying. As soon as it's deployed, this will turn green, and when I come back to my console, I can see that my lifecycle state would change to provisioning, and my BGP state-- if the BGP information we have provided is consistent and correct-- would change from Down to Up.

So let me just pause the video here. It's going to take a few
minutes. And I'll come back, and I'll show you these things
working in action.

All right, so that took a few minutes. Let's come back to the
Megaport portal. And as we can see here, we have these
Megaport cloud routers, and this is the circuit we just
provisioned, DemoFC, right? And if I come here I can see
some of the details-- the BGP connection we added-- look at
some of the details.

The service BGP is Up. The service status is Up. I can see some of the logs-- the virtual circuit is up, the BGP session is up-- and I can see some usage, billing, et cetera, right?

So if I go back to my Oracle Cloud Infrastructure Console now, you can see that my lifecycle state is Provisioned and my BGP status is also Up. It said BGP is currently down for a moment-- that probably happened because I was refreshing the page. First the circuit gets provisioned, and then the BGP session comes up as well, right? The BGP status goes from Down to Up.

So you can see here: Provisioned. My state is Up, right? So my FastConnect circuit is now up and running. Now, I would definitely need to set up my CPE and my network devices on-premises, but this was a quick demo showing you how you could provision FastConnect from the OCI Console using a third-party provider like Megaport.

End-to-end, the whole process takes 15 to 30 minutes. I had to pause the video in between-- that part takes literally five to 10 minutes.

So this is how simple it is. I hope you find it useful, and thank you for watching this demo. Thank you.

LOAD BALANCER
1. LOAD BALANCING INTRO
2. LOAD BALANCING DEMO
Welcome to this module on a public load balancer demo. In
here, we will create a public load balancer and see it in
action. So let me jump to the console. We have been using
the OCI console for some of our other modules.

Here you can see the hamburger menu. And if I click on it, I can see the various services. Right now, I'm in the US East region. Right here in Networking, I can bring up a load balancer from the link here.

So the first thing I see here: there is no load balancer which exists yet. Now, to create a load balancer, I need a virtual cloud network. So just before recording this module, I went ahead and created this load balancer VCN. It's actually rather straightforward, but I just wanted to [? save ?] all the steps and save some time in the demo.

So there are three subnets. There is an AD1 subnet, there's an AD2 subnet, and there's a load balancer subnet. Now, the difference is AD1 and AD2 are AD-specific subnets, but the load balancer subnet is a regional subnet. I made all three of them public, but there is no requirement for me to keep the AD subnets public as well.

[INAUDIBLE] the point is that my compute instances can be reached from the load balancer-- that's all I really care about. So right here, I can see my route table-- I just have one route table, and it takes the route to the internet through an internet gateway here. And again, the same route table is used for all three subnets.
Again, like I said, I could keep my backends with a different route table. For security lists, I have two different security lists. So there's a default security list which my AD subnets have. And port 80 is open here, as you can imagine, because I have a web server running there.
For my load balancer security list, I really don't have any rules. So ingress is empty, egress is empty-- I just created a shell, and it's all empty right now. The final thing is, if I click on Compute, I can see that I have two instances running.

So there is a web server 1 running in AD1, and there's a web server 2 running in AD2. And it's pretty straightforward-- if I click here, I can see that this is my web server 1. And if I click here, I can see that I have my web server 2 running here. So rather straightforward.

Now, this is all the pre-work I had to do before I could show you the load balancer in action. What I'm going to do now is put a load balancer in front of those two compute instances. So creating a load balancer is rather straightforward. I click on Create Load Balancer here, and I'm going to use a public load balancer. In the next module, we'll talk about the private load balancer.

So the default name is fine. Right here, I choose the maximum bandwidth available. So small is 100 Mbps, medium is 400 Mbps, and large is 8 Gbps. I'm choosing small-- that's fine. And down here, you can see that it's asking for a virtual cloud network, because the load balancer needs a VCN where it's running.

So I picked the load balancer VCN, and then it's asking me for a subnet for the load balancer itself. And remember, we get an active copy and a standby copy in different ADs because I'm in a multi-AD region. So I chose the load balancer subnet here, and I've chosen the load balancer VCN here.

If I click on Additional Advanced Options, I can do things like tagging, et cetera. And if I was using network security groups, I could use them to control the traffic here. So many times, people ask: do backends have to be in the same VCN, or can they be in a different VCN? They can be in a different VCN-- as long as you have the right security lists, the right network security groups, and the route tables properly configured, there is no need to put all your compute instances and your load balancer in the same VCN.

So I click on Next Step. And right here, it's asking me to choose a load balancing policy. I can use weighted round-robin, IP hash, or least connections. I'll go with weighted round-robin-- that's the default, because it's set here. Right now, there are no backends which are added, so I'm going to add backends here.
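To make the policy choice concrete, here is a minimal sketch of weighted round-robin: each backend appears in the rotation in proportion to its weight. The backend names and weights are made up for illustration; with equal weights, this degenerates to the plain round-robin behavior seen at the end of this demo.

```python
import itertools

def weighted_round_robin(backends):
    """Yield backends cyclically, each appearing `weight` times per cycle.

    backends: list of (name, weight) pairs.
    """
    rotation = [name for name, weight in backends for _ in range(weight)]
    return itertools.cycle(rotation)

# Two backends with equal weight behave like plain round-robin
rr = weighted_round_robin([("web-server-1", 1), ("web-server-2", 1)])
print([next(rr) for _ in range(4)])
# → ['web-server-1', 'web-server-2', 'web-server-1', 'web-server-2']
```

IP hash, by contrast, would pin each client IP to one backend, and least connections would pick whichever backend currently has the fewest open connections.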

And right now, I will choose the web server 1 and the web
server 2 backends which are running in the same VCN in
different subnets. But as you can see here, bastion, database,
web, my auto scaling instance pool all show up here. These
are not in the same VCN-- they exist in some other VCN.

But the reason they all show up is, like I said, I could have a load balancer running in one VCN, and I could have my compute instances running in an altogether different VCN, as long as the security lists, network security groups, and route tables are configured properly. So I choose web server AD1 and web server AD2, and add my selected backends.
Now right here, it's asking me to choose my health check
policy. I will go with TCP because I'm just making a TCP
connection and getting a response back. With HTTP, I'd have to
configure my URL path and all those things, and I'm just
showing a quick demo. So TCP is fine, port 80 is fine, and I
can change some of these options like the interval, et cetera.
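What the TCP health check chosen here actually does can be sketched in a few lines: try to open a connection to the backend's host and port within a timeout, and report OK on success. This is a hedged illustration, not the load balancer's actual code:

```python
# Sketch of a TCP health check: can we connect to host:port in time?
import socket

def tcp_health_check(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True   # connection succeeded -> backend reports OK
    except OSError:
        return False      # refused or timed out -> backend unhealthy

# A local listener we control, just to demonstrate the check.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
print(tcp_health_check("127.0.0.1", port))  # True
server.close()
print(tcp_health_check("127.0.0.1", port))  # False (nothing listening now)
```

An HTTP health check would additionally send a request to the configured URL path and check the status code, which is why it needs the extra configuration mentioned above.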

Now if I look at Advanced Options, under Security List there
is an option to either manually configure security list rules
or automatically add security list rules. And as you can
see here, it's showing the egress rules going to port 80
for the first subnet and the second subnet.

This subnet is AD1 and this subnet is AD2, because right
here are my instances running in AD1 and AD2. So the
system is doing that. If you don't want the system to do that
automatically, you could manually configure it as well. That's
completely up to you. But right now, you can see that the
system is doing that.

And now it's also opening the ingress security rules for my
load balancer's security list. And for this one, let me just
click through. And right here, it's asking me to choose
HTTPS or HTTP. I'll choose HTTP because, again, I'm just
running a quick demo showing the web server running behind
the load balancer. And now I'll go ahead and create the
load balancer.

And within a minute or so, you will see that I get a public
IP address, and I should be able to bring that up in the
browser and see the two web servers alternate in round-robin
fashion. So let me just pause the video here for 15 seconds;
the load balancer will come up, and we'll use the public
IP address. So it looks like my load balancer is up and
running.

And I can click here, and I can see the public IP address
here-- it's available. So if I go ahead and bring this up in my
browser, you can see that my load balancer is working, and
it's sending the traffic in a round-robin fashion. I could
click web server 1 and web server 2, and I can see the traffic
coming through. Well, that was a quick demo of the OCI load
balancer service in action. In the next module, we'll look into
the private load balancer. Thank you.

[SOUND EFFECT]

3. PRIVATE LOAD BALANCER

Hi, everyone. Welcome to this module on the OCI private load


balancer. In the previous module, we have talked about the
public load balancer, and seen it in action, and some of its
key components in action. Let me quickly jump to the console
and show you one thing which we couldn't complete in the
previous module where we demoed the public load balancer,
and that's around health check.

So as we said, the health check has a three-minute
granularity. And when I was doing the demo, I ended at the
public load balancer IP-- I showed the IP, but the health
check was unknown at that time. So if you see here, you can
see the overall health is OK, and the backend set health is
OK. And then I can also see the health check for each of my
backends.

So if I click on my backend set here, I can see that the health
is OK. And if I click here, I can see the health shows OK. And
if I click on the backend, I can see the health is OK for each
of these backends. And if I hover over it, I can see some of
these details, like the health check running on two
instances-- so I can see some more details here.

Now a couple of things I could do here, which again, I didn't
show in the previous demo: if I pick a particular backend, I
have several actions I can take. Of course, I can change the
ports, et cetera-- the weight I give it for weighted
round-robin, if I want to change that.

But I can also do things like drain the state. And if I click
here, basically draining means that I disable new
connections: the load balancer stops forwarding new TCP
connections and new, non-sticky HTTP requests to the backend
server. So this is good for scenarios where I want to do
maintenance-- I want to take a backend out of the rotation.

So I could just click through here and save the changes. I
could also come in here and edit my offline state; basically,
this disables ingress traffic altogether, so the load balancer
forwards no ingress traffic-- no incoming traffic-- to the
backend server. So I could do that as well.

And the third one is edit my backup state. So in this case, I
can set my server as a backup unit, and the load balancer
forwards ingress traffic to this backend server only when all
other backend servers not marked as backup fail the health
check policy.

So this is good for disaster recovery scenarios. If I have
scenarios around that, I could use this option. So there are a
bunch of options I could use here. Remember again, health
checks roll up: for each backend, you have a health check;
then you have a health check aggregated for the backend set;
and then you have a health check for the overall load
balancer.
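The three backend states just described (drain, offline, backup), combined with health, boil down to a simple selection rule for new connections. Here is an illustrative sketch under those assumed semantics-- not OCI's actual implementation:

```python
# Sketch of how drain/offline/backup and health affect which backends
# may receive a NEW connection. Illustrative only.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    healthy: bool = True
    drain: bool = False    # maintenance: stop forwarding new connections
    offline: bool = False  # stop forwarding all ingress traffic
    backup: bool = False   # gets traffic only if all non-backup backends fail

def eligible(backends):
    usable = [b for b in backends if b.healthy and not b.drain and not b.offline]
    primary = [b for b in usable if not b.backup]
    return primary if primary else usable  # fall back to the backups

pool = [Backend("web1"), Backend("web2", drain=True), Backend("dr", backup=True)]
print([b.name for b in eligible(pool)])   # ['web1'] -- web2 drained, dr is backup
pool[0].healthy = False
print([b.name for b in eligible(pool)])   # ['dr'] -- backup takes over
```

Draining web2 keeps its existing connections alive while excluding it from new ones, and the backup backend only receives traffic once every non-backup backend is out-- the disaster-recovery behavior described above.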

And if you don't see all of these as OK, that means
something is going wrong in your load balancer, and you
need to investigate further. There are also a bunch of metrics
here where you can see your inbound traffic, active
connections, et cetera. If I click on my backend set, I can see
some metrics for my backends as well here.

So you can see unhealthy backends and how many backend
servers I have. You can see that both backend servers are
here, and there is nothing which is unhealthy, et cetera. So I
can get some more details if I want to investigate further.
Anyway, let me go back to my slides and talk more about the
private load balancer.

So a private load balancer is assigned a private IP address
from the subnet hosting the load balancer. It makes sense
because we are talking about a private load balancer, so of
course, it will have a private IP and not a public IP. The load
balancer can be regional or AD-specific depending on the
scope of the host subnet.

So you could choose either AD-specific or regional, like you
could with a public load balancer. And rather than primary
and standby, we should really use active and failover as the
terms here. The active and failover load balancers each
require an IP address from that subnet.

Let's see how this works in action, as we saw with the public
load balancer. So I'm using regional subnets here in a
multi-AD region-- I'm only showing two ADs just to keep the
picture a little cleaner.

And right now, I can create a private load balancer in my
AD1. Of course, it's a regional subnet, so the failover copy
gets created in AD2. And like we said, this is a private load
balancer, so it gets a private IP address.

And then, like the public load balancer, you could send the
traffic to the backends in whichever ADs they exist. Now in
the case of AD-specific subnets, there is a change here: my
active and my failover are both in the same AD. So yes, we
still have two copies, but they're both running in the same AD.

And in this case, my subnet is an AD-specific subnet-- it's
not a regional subnet anymore. So hopefully, now you can see
why we advise you to use regional subnets: with a regional
subnet, the failover copy can be created in a different AD. If
you have an AD-specific subnet, the private load balancer's
failover copy gets created in the same AD.

So with that, hopefully, this gave you a quick overview of the
private load balancer. I don't have a demo to show the private
load balancer in action. So why would you use a private load
balancer?

Well, if you have a multi-tier application-- say you have a
web tier, a middle tier, and a backend tier-- you could have a
load balancer sitting between your middle tier and backend
tier. You could have a public load balancer sitting at your
web tier taking the incoming traffic and sending it to your
web servers. And the web servers could talk to the middle tier
using a private load balancer, if you want to distribute that
traffic as well.

And you could have another private load balancer between your
middle tier and your backend tier. So hopefully, this gave you
a quick overview of the private load balancer. Thank you for
watching this lecture series on the OCI load balancing
service. Thank you.

COMPUTE
1. COMPUTE INTRO
Hello, welcome to this module on basics of the OCI compute
service. My name is Rohit Rahi, and I'm part of the Oracle
Cloud Infrastructure Team. In this module, we'll look at the
basics of the OCI compute service. So before we get into lots
of details, let's look at the various form factors that the
service supports today.

The first form factor, which we started with when we launched
the Oracle Cloud Infrastructure platform in late 2016, is the
bare metal shape. And what it means is, as the name implies,
you get direct hardware access. Customers get the full bare
metal machine-- you get the server. Some people also like to
refer to this as the single-tenant model.

So as shown here on the screen, by the bare metal machines,


what we mean is you get access to the full servers. So right
here, you have access to the whole machine. It's a single-
tenant model.
Now, the second flavor is what customers are pretty used to
in the cloud model, sort of this shared, multi-tenant model
where you get virtual machines. So you have a hypervisor
where you can run virtual machines, but much smaller
shapes, much smaller CPU, memory, et cetera. So as you can
see here, in case of Oracle Cloud Infrastructure, we have the
bare metal offerings. And then what we do is we put the
hypervisor on top of that. And then we divide the bare metal
machine into VMs.

So typically, in a multi-tenant, shared model, you don't
have to worry about the bare metal machine and how it
works-- I'm just giving you the details of how this all works
behind the scenes. You get access to virtual machines as you
would with any provider.

And the third model, which we launched recently, is called a
Dedicated VM Host, or DVH. With a Dedicated VM Host,
you can run your VM instances on dedicated servers that are
single-tenant and not shared with other customers. What that
means is you get the bare metal machine here, but then on
top of that, you run these virtual machines. And it's a little
bit different than the bare metal shape, because in this
case, you get the single-tenant model-- your bare metal
server-- and then you also get the ability to run VMs on top
of that.

In the case of the bare metal machine here, you have to manage
your own hypervisor-- you have to install your hypervisor. So
you definitely get more flexibility, but then you have to do
extra work. With a dedicated VM host, we do the extra work for
you: you get the single-tenant model, and then you can run
specific VM shapes on top of it.

Now, the thing to keep in mind is that in the VM compute
instance and dedicated VM host scenarios, the instances run on
the same hardware as a bare metal instance, as we were saying
earlier. So the commonality here is the bare metal machine: it
leverages the same cloud-optimized hardware, firmware,
software, networking, et cetera. This was a departure from
many of the cloud vendors-- what we call gen one versus gen
two-- where we designed this infrastructure starting with bare
metal, and then everything was built as a first-class citizen
on top of it.

So what are the use cases for bare metal? Well, any time you
have the highest security, scalability, or performance
requirements, you would use a bare metal machine. First, if
you have performance-intensive apps, you would probably go
with bare metal. For workloads which are not virtualized-- and
there are still lots of workloads like those-- you would, of
course, go with bare metal. For workloads that require a
specific hypervisor-- you want to install your own hypervisor
and do certain things-- you would go with a bare metal
machine. And in cases where you bring your own licensing, and
there are specific examples of that, you would use a bare
metal machine.

So these are four predominant use cases, but there are other
use cases as well where you would use the bare metal offering.

Now, these are the different shapes which are available today
in Oracle Cloud Infrastructure. And the best place to check
this, because this information keeps changing all the time, is
the documentation site. But you can see some shapes here,
starting with standard shapes, which have only block storage.
You have DenseIO shapes, which have local storage-- you can
see the bigger local storage here.
You have shapes where we support AMD EPYC processors-- those
are denoted by E here. We have HPC shapes. We have a bunch of
GPU shapes-- these are gen one shapes. And again, as I said,
we keep launching new families and instances all the time, so
the best place to check these is the documentation pages.

Now also note we have the various OCPUs listed here, the
memory, and the network bandwidth. You can see some of these
instances have bandwidth going up to 50 Gbps, the number of
virtual NICs you can use, et cetera.

One question which comes up all the time is: what is an
OCPU? In the case of Oracle, an OCPU provides CPU capacity
equivalent to one physical core of a processor with
hyperthreading enabled.
equivalent to one physical core of a processor with
hyperthreading enabled. So again, let me repeat. OCPU
provides CPU capacity equal to one physical core of a
processor with hyperthreading enabled. And again, you can
go and check this and get more details in the documentation.
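In other words, one OCPU corresponds to two hardware threads-- what many other clouds count as two vCPUs. A trivial conversion sketch (the 2x factor assumes the hyperthreaded x86 shapes described above):

```python
# One OCPU = one physical core with hyperthreading enabled,
# i.e. two hardware threads ("vCPUs" in other clouds' terminology).
def ocpu_to_threads(ocpus: int) -> int:
    return ocpus * 2

print(ocpu_to_threads(8))   # an 8-OCPU shape exposes 16 threads
```

This matters when comparing shape sizes or prices across cloud vendors, since a vCPU is usually a single hardware thread rather than a full core.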

Now, there are use cases for AMD EPYC-based instances. The
first thing is these are cheaper-- if you look here, it's 66%
cheaper than all the other options out there, predominantly
[INAUDIBLE]. These are ideal for maximizing price-performance.
So any time you are concerned about price-performance, you can
go with the AMD EPYC instances. All of our apps are certified
to run on AMD, so you can definitely run those.

And then there are various scenarios like big data, et cetera,
where you can run the AMD instances. And you can see some
numbers here for different scenarios-- big data, HPC,
computational fluid dynamics-- where you can test some of
these numbers and see that it really is, in fact, a
price-performance win.
2. COMPUTE DEMO

Hey, everyone. Welcome to this demo of the OCI Compute
service. In this demo, we will quickly look at the various
capabilities supported by the OCI Compute service. So let's
quickly jump to the console.

As you have seen in the previous modules, we have been
running a few compute instances-- a web server, a bastion
host, a database, et cetera. So the first thing, as you recall
from the virtual cloud network lecture, is that a compute
instance needs a VCN and a subnet in order to be
instantiated-- it needs a subnet where the compute instance
would run.

So if you recall from the previous lecture, we were using this
demo VCN. And within the demo VCN, we had two subnets--
subnet A and subnet B. If I click on subnet A, you can see
it's a public subnet. And that's good enough-- I'm going to
use this to create a compute instance.

So I click on the instance, and then I click on Create
Instance here. It gives me a default name, and I'm going to
call this my instance. And then, right here, I can change the
image, so I can see the various images which are supported--
Ubuntu, CentOS, Oracle Linux, Windows Server, et cetera.

There are also a bunch of Oracle Apps images-- JD Edwards,
PeopleSoft, and so on. You don't really have to do all the
work; these images have been pre-provisioned, and you can use
them as templates-- for example, to create a JDE environment.
And there are also partner images which our partners have
published-- from GitLab, to Fortinet, et cetera-- those
images you could find here. And then there are custom images,
et cetera, which we will discuss in subsequent lectures. Let
me just cancel this.

Oracle Linux 7 is fine. I'm in a multi-AD region, so I have
multiple ADs listed here. If you are in a single-AD region,
you will just have one AD listed here. I get a choice between
virtual machines and bare metal. So if I click on bare metal
and change the shapes, you can see here the shapes we were
talking about in the previous modules.

So there is a standard 52-core shape, an AMD shape, a
DenseIO shape, et cetera. I'm going to spin up a VM, and
right here, as you can see, you can change the VM shapes. So
that's possible as well.

Now, as you recall, right here the console is asking me for a
virtual cloud network. I'm in the Training compartment, and
I'm going to choose my VCN. And right here, it asks what kind
of subnet I am planning to use. I'm going to use a public
subnet-- that's subnet A, which I choose here. And now, it
says: do not assign a public IP, or assign a public IP. I'll
assign a public IP address.

And there are a bunch of advanced scenarios here. I'm going
to skip these; we'll talk about them later on.

And right here is the public/private RSA keypair. I have to
paste the public part of my key. So how do I generate it?
Folks who are new to the cloud sometimes get stuck here.
I'm using my Windows Subsystem for Linux machine. If
you're not on a PC and you're using Git, or you're using--
you're on a Mac, you would use similar commands, Git Bash.
But this is Windows Subsystem for Linux. It makes life a little
bit easier on a Windows 10 machine.

So the easiest way to generate an RSA private/public keypair
is to run this ssh-keygen command. I could have specified
parameters like the algorithm I want to use, or the key
length-- 2048 is the minimum, but I could go to 4096. But the
simplest way is to just run the command, ssh-keygen.
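For reference, a fuller, non-interactive form of the same command with the optional parameters mentioned (algorithm and key length) might look like this-- the output path is just an example:

```shell
# Generate a 4096-bit RSA keypair non-interactively (example output path).
# -t selects the algorithm, -b the key length, -f the file, -N the passphrase.
mkdir -p "$HOME/.ssh"
ssh-keygen -q -t rsa -b 4096 -f "$HOME/.ssh/oci_demo" -N ""
# The console asks for the PUBLIC half of the pair:
cat "$HOME/.ssh/oci_demo.pub"
```

Passing `-N ""` sets an empty passphrase, matching what the demo does when it skips the passphrase prompt.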

As you can see here, it says that it wants to generate a
public/private RSA keypair, and it's asking for a location.
The location is fine-- you can change it if you want. Then it
says it already exists: should I overwrite? Select yes, that's
fine. And then it asks for a passphrase.

The passphrase is a second layer of protection: if your keys
get compromised, someone would also need the passphrase in
order to use them. The downside is, every time you use your
keypair, you would also need to provide the passphrase, so
you'll have to remember it.

In this case, it's a demo, so I'm just going to skip it, and my
keys are generated. If I go to my directory here, I can see my
private and public keys: id_rsa is my private key, and
id_rsa.pub is my public key.

So let me just get the public portion of the key and copy it.
I need to provide this value right here in the SSH keys
field-- the public portion of my SSH keys. And then there are
a bunch of advanced options here; we're just going to skip all
these and talk about them in other modules. Then, I'll click
on Create Instance.

Now, within a few seconds, my instance will be up and
running. And as the instance comes up-- it's a public
instance, it has a public IP. And if you recall from the
Virtual Cloud Network demo and modules, port 22 is open, so I
can SSH into this instance. If those things were not true, you
would have to go and open specific ports-- port 22, in this
case-- in order to SSH.

As the instance is coming up-- let me just scroll down-- you
can see the progress here. You can also see things like
metrics. If I click on that, metrics are still not available
because it's coming up. But you can see the VNIC here-- the
VNIC has been attached-- and this is the public IP I get. So
once the instance is up and running-- looks like it's still
provisioning-- I can SSH into it.

So we'll do that. There's also the boot volume-- we'll talk
about boot volumes in the next module-- and some other
capabilities here. So right now, it says Running. Let me just
clear my screen. And if you recall, the user ID for Oracle
Linux is opc. Just make sure that you're using the correct IP
address-- 138 seems like a good IP address.

And if I hit Enter, it asks, do you want to connect to this
machine? I say yes. And right now, you can see that I'm SSHed
into my instance. So this is a quick demo of how you can spin
up a virtual machine-- within literally 15 to 20 seconds, I
was able to bring it up.
Now, let me show you a couple of other things really quickly.
So I also have a dedicated virtual machine host created. Now,
to create one, you could just come here and, with a single
click, you could create a dedicated virtual machine host.

Now, on this host, I can go ahead and create VMs now, right?
This is-- my host is dedicated to me, but I get a chance to
create VMs. See, if I click here, same experience as before.
Default name is fine. Oracle Linux is fine.

Virtual machine shapes-- now I could pick a bigger shape. I
could say I want to spin up, let's say, an eight-core machine
or a four-core machine. Let's see-- eight cores is fine.
Eight-core machine, select shape, and then, right here, all
this stuff is similar to what we had earlier: Demo VCN, I want
to spin this up in subnet A, assign a public IP-- why not--
and the SSH key, which I believe I still have copied.

Let me just go back and grab it-- this is it. I paste my SSH
key here. And then, right here, you can see this is my
dedicated virtual machine host. And I can click Create.

And now, what it will do is create this virtual machine-- the
eight-core machine I just spun up-- on my dedicated host, the
host which is dedicated just to me. So this is the second
flavor you could use with the Compute service. The third one
is, of course, the bare metal machine.

Now, let me just quickly go ahead here and create a bare
metal machine. It takes literally a few minutes for it to come
up, but I will still show you the experience.
So I come here. It's the same experience as a VM. I will just
pick a bare metal machine. And right now, it says-- it gives
me a choice of a 52-core standard machine, and that's fine.

Everything is the same-- the same experience. I can pick the
demo VCN, pick subnet A, and assign a public IP. I have my
SSH keys-- I just copied them in the previous step-- and
right there, I go and create my bare metal machine. This will
take a couple of minutes, but in a couple of minutes, you will
see that my bare metal machine is up and running.

So this was a quick demo where we showed creating a VM,
creating a dedicated virtual machine host, and then creating
a VM on top of that. And finally, we showed you how similar
the experience is to create a bare metal machine.

So hopefully, this gives you a good idea of how the Compute
service works at a very high level. In the next few modules,
we'll dive deeper into specific things around advanced
networking, choosing fault domains, custom images, boot
volumes, et cetera.

Thank you for watching this demo. If you have time, please
join the next module on the Compute service. Thank you.

3. IMAGES

Hi, everyone. Welcome to this module on images, where we
talk about image import-export and things like Bring Your
Own Image. My name is Rohit Rahi, and I'm part of the
Oracle Cloud Infrastructure team.
So in this module, well, first let's talk about Oracle-provided
images. Now, Oracle-provided images are templates of virtual
hard drives that determine the operating system and other
software for an instance. Now, images can be Oracle-
provided, as we said, can be custom images, or you can bring
your own images. Now, we provide several pre-built images
for Oracle Linux, Microsoft Windows, Ubuntu, CentOS. And
again, the best place to look up this information is on our
documentation page, as this information keeps changing all
the time.

Now, a couple of things to keep in mind-- we have been seeing
this in the demos. If you spin up a Linux image, the username
opc is the one you use. We allow SSH access on port 22 in the
default set of firewall rules, so you don't have to open that
explicitly-- though you could start with a firewall rule which
blocks everything, and then you'd have to open it manually.

You can provide a startup script using cloud-init-- we'll
look into this in subsequent demos-- along with some other
things related to Oracle Linux. On Windows, the username opc
is automatically created with a one-time password, which you
will have to change when you log in for the first time. And
you can use things like the Windows Update utility to get the
latest updates from Microsoft.
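As an example of the cloud-init startup script mentioned above, a minimal user-data file might look like the following. This is a hypothetical config fragment assuming an Oracle Linux image (yum and systemd); the package and paths are illustrative:

```shell
#!/bin/bash
# Example cloud-init user-data: runs once at first boot.
# Assumes Oracle Linux (yum, systemd); adjust packages for other distros.
yum -y install httpd
systemctl enable --now httpd
echo "Hello from $(hostname)" > /var/www/html/index.html
```

You would paste a script like this into the advanced options when creating the instance; cloud-init executes it on the first boot only.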

Now, there is this concept of a custom image. What is a
custom image? You can create a custom image of an instance's
boot disk and use it to launch other instances. Instances you
launch from the custom image include the customization,
configuration, and software installed when you created that
image.

So what this means is that customers have these golden
images where they harden the image and install certain kinds
of patches, or a company mandates using the same image across
its divisions, geographies, or regions. You could support that
concept of gold images using a feature in OCI called custom
images.

Now, during this process, when you're creating a custom
image, the instance shuts down and remains unavailable for
several minutes; the instance restarts when the process
completes. Keep in mind, this covers only the boot disk, where
your operating system lives. We'll talk more about this in the
next module.

So if you have block volumes, the data on those block volumes
would not be included. The way to think about this is: you
have a compute instance, and compute instances can have a
boot volume and one or many block volumes. Block volumes are
where you keep your applications and your data; the boot
volume is where your operating system is.

So custom images only capture your boot disk, not your block
volumes. A custom image has some limitations: it cannot
exceed 300 GB, and there are some limitations around Windows
custom images.

Now, what is this import-export capability? We said one of
the key reasons you would use a custom image is to use the
same hardened image-- some people call it a "gold image"--
across accounts, regions, geographies, divisions, et cetera.

So if you have to do that, there has to be an easy way to
share these images across tenancies and across cloud regions.
You could do that using this capability called import-export.
And as you can guess, the import-export capability uses the
OCI Object Storage service-- you use that as a temporary
place to store these images for either importing or exporting.

Both Linux and Windows operating systems are supported, and
then there are certain things related to the launch mode for
import-export. There are three modes supported today: one is
emulation mode, the second one is called paravirtualized, and
the third one is called native mode. What do these mean?

Emulation mode, as the name implies, means that the virtual
machine's I/O devices-- whether disk or network-- along with
CPU and memory are implemented in software; hence the term
"emulation." An emulated VM can support almost any x86
operating system, whether 32-bit or 64-bit, but the downside
is that these VMs are slow, because you're emulating all of
the hardware in software.

The second mode we support is paravirtualized.
Paravirtualized, by its very name, means that the virtual
machine includes a driver specifically designed to enable
virtualization. Many of the instances support
paravirtualization, but remember that it's a specific driver
which is used to enable virtualization.

In the native mode, you get the best performance, without
getting into all the details. Some vendors also call this a
hardware-virtualized machine, or HVM. You would see terms
like single-root I/O virtualization, or SR-IOV. For those
kinds of scenarios where you have maximum pass-through and
want the best performance, you could use the native mode.
And this is the mode you can choose when you are spinning
up the instance. I'll show you this in the demo. And the
industry general term is HVM, hardware-virtualized machine.
We call it native mode.

So these are the three different modes you could use when
you spin up your instances and you create your custom
images. And again, you can find more details here. There is a
white paper, and there are more details around that.

Now, there's also this capability of bringing your own image.
What this lets you do is bring your own versions of operating
systems to the cloud, as long as the underlying hardware
supports them.

Now, why would you do that? Well, if you have lift-and-shift
scenarios, or you want to use old operating systems-- we just
talked about that-- or you want experimentation, flexibility,
et cetera, you can bring your own image.

Now, the way this process works is: you have your on-prem
environment, and you bring the image in qcow2 format. Like we
said, import-export uses object storage, so you store the
image there, and from there we can create a custom image. Or
you could do the vice versa: from an instance, you create a
custom image that you could store in object storage.

Now, when you do that, of course, you have to comply with all
the licensing requirements. This is a topic we will discuss
in greater detail in the level 200 module on compute.
So with that, let me just quickly jump to the console and
show you a quick demo on custom images. If I go back to my
Compute console, you can see a bunch of instances we have
been running and terminating. It's a good idea to terminate
instances which you are not currently using.

So right now, it says there are no images that match-- there
are no custom images available yet. So if I go back to my
instances page and scroll down, I created this web instance
when we were doing the Virtual Cloud Network demo, looking at
some metrics, et cetera.

So the first thing I can do here is come and create a custom
image. Now, creating a custom image, as we said, will involve
a bit of downtime, so just be aware of that point. This will
take another 15 to 20 minutes, and the image will be created.

Now, once the image is created, I can use it to spin up other
instances, and it will include all the customization. So for
example, in this particular instance, I had installed the
Apache server; when I create this image, it will include the
Apache server. So that's one thing to keep in mind.

Now, we were talking about the modes which are available. If
you see here, under Launch Options, you can see the mode in
use-- it looks like this is paravirtualized, which was the
second option we talked about.

So how do we get to a hardware-virtualized machine? If you
click on Create Instance here, we'll do something really
quickly. We have an instance name here. Right here, if you
change the image source, you can bring in custom images. So
once that web image is ready, I could actually spin up an
instance using my custom image-- I could just get it right
here.

Or, I could come to Image OCID, and if that custom image is
stored in object storage, I could actually get the link here, if
the link is available, right? So I choose over here, I choose my
DemoVCN. That's fine. I choose my Subnet A. I've been using
that earlier. Right here-- because I'm not going to SSH
into the machine, so let me just skip that-- right here, if I
click on Networking, it gives me an option to choose my
networking, right?

I could let Oracle choose, and if you click on this page, you
can see the various options which are supported, right? So
you can see the difference between paravirtualized and SR-
IOV, which family supports which shape, et cetera, et cetera,
right? Or I could choose it here, or let Oracle decide.

So I'm going to choose a hardware-assisted SR-IOV. And this
is the same as the HVM we were talking about earlier. And it gives
you this warning saying some instances might not be
supported with the mode I am choosing.

But right here, even though the instance is getting
provisioned, you can see that my NIC attachment type is
VFIO. Now, what that means is I'm using single-root I/O
virtualization here for maximum pass-through. This is
different than the NIC virtual attachment I had for the other
instance.

If I go back to my web instance, you will recall that in that
one, for the NIC attachment, I was using a paravirtualized
mode. So that's the difference between single-root I/O and
the paravirtualized mode. And of course,
I don't have an emulation mode here, but you could certainly
use that as well.

Now, as you can see here, my custom image is created, right?
It literally took less than a minute. And now, here, I can
create an instance, right? Or I could export this custom
image, right?

So if I say Export, basically, it goes to one of the-- I store this
in my object storage. So I have a bunch of these buckets
here. I could actually go to a bucket, or I could just do an
object storage URL. In that case, I'll have to give a URL here.

But right now, let me just pick this bucket. Actually, I have a
bucket called Pictures, but I'll just use that right now. And I'll
Save that and Export Image. And now, what it would do is put
this image in object storage and give me a URL which I could
share with other groups, and they could use that to import
this image and create instances out of it.

So hopefully, this gives you a quick overview of custom
images and image import-export capabilities. Thank you for
joining this lecture. If you have time, join the next lecture
where we talk about boot volumes. Thank you.

4. BOOT VOLUME
Hi, everyone. Welcome to this module on Boot Volume. My
name is Rohit Rahi, and I'm part of the Oracle Cloud
Infrastructure Team.

In this module, let's look at what boot volumes are. A
compute instance is launched using an OS image stored on a
remote boot volume. We looked at this earlier. So let's say you
have a compute instance here, you have this concept, "boot
volume," and block volumes.

And the boot volume is where your OS resides, and you have
block volumes here. A block volume is where you would keep
your application data, and a boot volume is where you would
keep your operating system. A boot volume is a special kind of a
block volume. And we'll talk more about this in the block
volume module, but it just gives you a quick understanding
of what a boot volume is.

A boot volume is created automatically and associated with an
instance until you terminate the instance, because that's
where the operating system is booted from. You can scale
your instances to a larger shape by using boot volumes. You
can preserve the boot volume when you terminate a compute
instance. You get that option. And boot volumes are only
terminated when you manually delete them, if you didn't
delete them when you terminated your instance.

And then, the other thing is boot volume cannot be detached
from a running instance. Again, it makes sense because this
is where your operating system resides. You can do things
like manual backup, assign backup policies, create clones.
We'll talk about all of these in the block volume module. But
boot volume is nothing but a special kind of a block volume.
So everything you could do with block volume, you could do
with boot volumes.

Now, with OCI, when you create an instance, you can specify
a custom boot volume size. So for Linux, the default is 46.6
GB, but as you can see in this picture, we could go to 100
gigs, right? For Windows, the default is 256 GB, but you could
go to a bigger size. You could go all the way up to 32
terabytes, because that's the maximum size supported by a
block volume.
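The sizing rules just described can be sketched as a small validation helper. This is a hypothetical illustration, not an OCI API; the function name and dictionary are my own, with the defaults and maximum taken from the figures above.

```python
# Hypothetical helper illustrating the boot volume sizing rules:
# a custom size must be at least the image default (46.6 GB for
# Linux, 256 GB for Windows) and at most 32 TB, the block volume
# maximum.

DEFAULT_GB = {"linux": 46.6, "windows": 256}
MAX_GB = 32 * 1024  # 32 TB, the maximum block volume size

def validate_boot_volume_size_gb(os_family: str, size_gb: float) -> bool:
    """Return True if a custom boot volume size is allowed."""
    minimum = DEFAULT_GB[os_family.lower()]
    return minimum <= size_gb <= MAX_GB
```

For example, a 100 GB Linux boot volume passes, but 20 GB would be rejected because it is below the image default.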

Now, there are a couple of things which are different between
a custom image and a boot volume. Because if you recall, the
goal of a custom image is to create a gold image, right? So
"gold image" meaning you have the operating system and all
the configurations, customizations, et cetera.

A boot volume is where the operating system resides. So
they're trying to do the same
thing, but what are the differences between custom images
versus using a boot volume and taking a backup of that boot
volume? Because if you make changes to the boot volume
where your operating system is, you could do a backup, and
you could use that to spin and create other instances. So a
few things to keep in mind.

With custom images-- you saw this in the previous module--
you can do import and export across regions and across
tenancies. There is no cost associated. You store them in
object storage, but there is no cost which you have to pay.

The downtime is the instance shutdown. In the previous
module, we saw the instance shut down for a couple of
minutes-- less than a minute, actually. And then, there are
some limits which you have to work within.

But now, in case of boot volume backup, you're doing
something similar. You have a boot volume. You can make
changes. You can take a backup.
First thing is it doesn't require downtime. So you are running,
and then you do a boot volume backup. The advantage is, it
preserves the entire state of your running operating system,
right? Because again, there is no downtime. So if you're
running it, you just take a backup.

The downside is, there's a cost associated with the amount of
object storage you use, right? These images are big, so of
course you're paying for that storage.

And then, the second one is, while you can back up the
instance as it is running and capture the entire state, this
creates only a crash-consistent backup. So it's always a good
practice to shut down your instance and then take a backup,
right? Because if you're running SharePoint, or Exchange, or
something, it's not a good idea to take a backup while your
application is running.

So with that, let me quickly jump to the console and show
you a quick demo of where the boot volumes are. So in the
previous module, using a custom image, we were doing an
export. Let's see if this process is done or still going on.

OK, it looks like it's still going on. So right here, you can see
the boot volumes, right? And if I scroll here, I can see a
bunch of the boot volumes which I have created in the [?
second ?] right?

And some of them are running, like this-- the instances
which are running: the web instance is running, the bastion,
the database, et cetera. So for each compute instance, I can
see the boot volume.
I just created this instance a few minutes back. It's still
running, right? It's a Linux instance. So you can see the boot
volume is 47 gig in size right here, all right? And I could do a
bunch of things like assign a backup policy. I could change
the encryption key, bring you own keys-- I could do all that,
right?

Right here, you can see the detach option, and you can
see that Detach From Instance is grayed out. I cannot detach
it because an instance is still running. And it says it's in a
running state. And I could do things like in-transit encryption
and a bunch of other things, right?

One thing I want to show you is creating a boot volume
backup. So I can come here, and I can create a boot volume
backup here. And I will say it's a full backup. I could do
incremental backup, or I could do a full backup. And there
you go-- I could create a backup of this boot volume. So it's
similar to creating a custom image, but now I'm doing the
Boot Volume Backup, as you saw in the previous slide.

There is also Boot Volume Clone. And I could come here, and
I could do the clone here, right? The thing is, clone and
backup are mutually exclusive, meaning only one can run at
a time. So I could not run both of them at the same time.

So if I come here and click Clone, it would give me an error
that there is currently a backup operation in progress. So
once my backup operation is done, I could come in and create
a clone. So if I go back here, it looks like my backup is still
going on, and right now, you can see the backup size for the
47-gig boot volume I'm backing up here.
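The backup/clone mutual exclusion seen in the console can be modeled as a simple guard: while one operation is in progress on a boot volume, starting the other is rejected. A toy sketch -- the class and method names are hypothetical, not the OCI SDK:

```python
# Toy model of the rule that backup and clone on the same boot
# volume are mutually exclusive: only one may run at a time.

class BootVolume:
    def __init__(self):
        self.operation = None  # None, "backup", or "clone"

    def start(self, op: str) -> bool:
        """Try to start 'backup' or 'clone'; fail if one is running."""
        if self.operation is not None:
            return False  # console shows an "operation in progress" error
        self.operation = op
        return True

    def finish(self):
        self.operation = None
```

Once the running operation finishes, the other can be started, which matches the behavior shown in the demo.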
Now, a couple of other things to see. If I go to my boot
volumes, these are the boot volumes which have been
terminated. And when I was terminating my instance,
probably I went and terminated these boot volumes.

So let me quickly show you that experience. I have a
bunch of these instances running. Probably I'm not using this
one, so I will try to terminate this instance.

And when I do that, it says, "Permanently delete the attached
boot volume?" By default, it keeps it. But if I click this link
here and terminate this instance, now my boot volume which
was associated with this instance is also going to be deleted.

So if I go back to my boot volume link and scroll all the way
down, this one still says available, because it's in the process
of getting deleted, terminated. But if I refresh this page, it
should be grayed out. I think it's still going on. But in a few
seconds, you will see that this is grayed out, because I gave a
command to terminate my boot volume.

If I click on the Boot Volume Backups, you can see the
backups. The one which I just took, you can see the backup
here. The data [INAUDIBLE], right? I just took this backup.

So hopefully, this was a quick-- let me just go back and see if
the boot volume is-- OK, the boot volume should have been
deleted. Let me just go back to my instance, because I chose
to permanently delete it.
So hopefully, this gives you a quick demo. It's still in the
process of terminating, so that's why it was-- sometimes, it
takes a few more seconds to terminate the instance.

So hopefully, this gives you a good understanding of boot
volumes, what they do. The operating system is stored there.
You could do a boot volume backup, similar to custom
images, with some pros and cons, and build it up again.

Whatever you can do with block volumes, you can do with
boot volumes. A boot volume is a special kind of block
volume. We'll talk more about boot volumes when we discuss
the block volume module.

Thank you for joining this lecture. If you have time, please
join the next lecture where we talk about instance pools,
auto-scaling configuration, et cetera. Thank you.


5. AUTOSCALING

Hi, everyone. Welcome to this module on instance
configuration, instance pools, and autoscaling. These are
important concepts, so we'll talk about them, and then we'll
quickly see them in action. We'll also cover these in more
detail in the level 200 module on Compute. So if you want
those extra details, please check out the level-200 module on
Compute.

So first things first-- what is an instance configuration? As
the graphic shows here, you have a running instance, and you
could create a config out of that. What it means is the config
has the operating system image, the metadata, the shape,
and things like your networking configuration, storage, et
cetera.

Why would you do this? Well, you create a config, and it
basically becomes a template, and you could spin up multiple
instances using that template. You could put them in different
availability domains if you have a multi-AD region. You can
manage all of them together: you could stop them, start them,
or terminate them.

And then, the one big advantage is you could attach it to a
load balancer. And then this becomes your unit, and you
work with this as a unit behind a load balancer. So as we
said, you clone an instance, you create a template, save it to a
configuration file, and then you create standardized, baseline
instances from this template. And then you can easily apply
them using the CLI, et cetera. The whole idea is automation of
the provisioning process, right?

In many cases, you have similar instances. To spin up similar
instances, you don't want to copy the same lines of code 10
times, right? That's a bad idea. You don't want to do it 10
times using the CLI or the console, either. That's, again, not a
good idea. So the whole idea of an instance configuration is
you templatize the provisioning, and then you could spin up
multiple instances using that configuration.
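The templatizing idea can be sketched in plain Python: capture the image, shape, and network settings once, then stamp out N identical instances from that template. This is an illustration only, not the OCI SDK; all names here are hypothetical.

```python
import copy

# An "instance configuration" as a reusable template: image, shape,
# and networking are captured once, then stamped out N times.

def make_config(image, shape, subnet):
    return {"image": image, "shape": shape, "subnet": subnet}

def launch_pool(config, size):
    """Create `size` instance records, each a copy of the template."""
    return [
        {**copy.deepcopy(config), "name": f"inst-{i}"}
        for i in range(size)
    ]
```

Every instance in the pool shares the same shape, image, and subnet, which is exactly what makes a pool manageable as a single unit.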

Now, an instance pool is basically the ability to centrally
manage a group of instances that are all configured with a
consistent configuration. So you create an instance
configuration, and then you create a pool out of it, right? The
idea is, again, if you have a multi-AD region, you could
distribute this pool across availability domains, and you
could scale out instances on demand by increasing the
instance size of the pool. So you could either scale out or
scale in using this concept of an instance pool. And if it
doesn't all make sense on the slides, we'll go take a look at
the demo so you'll understand it better.

Instance configurations and instance pools are the basis for
autoscaling. So what do we mean by autoscaling? Autoscaling
enables you to automatically adjust the number of Compute
instances in a pool based on performance metrics such as
CPU or memory thresholds. Today, the only policy we support
is threshold-based, and we only support CPU and memory. In
the future, of course, more things will be supported, but those
are the ones we support today.

So look at this graphic here, at this picture here. It sort of
explains the process. So you have an instance pool before
scaling, you have a minimum size, and you have an initial
size, right? You can see my initial size is bigger than my
minimum size. That's fine, right? And then you define some
scaling rules.

So the threshold rules: CPU greater than 70%, add two
instances; CPU or memory threshold less than 70%, remove
two instances, right? Your initial size is two. If your CPU goes
beyond 70%, you add two, right? If it goes less than 70%, you
remove two instances, right? So if it's four, you would remove
two, and so on, and so forth, right? And the minimum size in
this case is one. So it could go down to one, right?
So you define those scaling policies, and then, depending on
the application you are running or the demand you're seeing,
you could autoscale. You could either scale out or scale in.
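The threshold logic just described can be sketched as a pure function: given the current pool size and a metric reading, it returns the new size, clamped between the minimum and maximum. This is a simplified illustration of the idea (the cool-down timer between actions is ignored), not the actual OCI autoscaling implementation; the function and parameter names are mine.

```python
def autoscale(current, cpu_pct, scale_out_at=70, scale_in_at=70,
              step=2, minimum=1, maximum=8):
    """Threshold autoscaling: add `step` instances above the
    scale-out threshold, remove `step` below the scale-in
    threshold, clamped to [minimum, maximum]."""
    if cpu_pct > scale_out_at:
        return min(current + step, maximum)
    if cpu_pct < scale_in_at:
        return max(current - step, minimum)
    return current
```

Starting from a pool of two, a sustained reading above 70% grows it to four, while a quiet pool shrinks back but never below the minimum of one.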

So with that quick introduction to instance configurations,
instance pools, and autoscaling, let's go and see this in
action. Let me jump to the console, and let me bring up the
Compute console.

So we saw instances spinning up-- a virtual machine, a
dedicated virtual machine host, other virtual machines. We
did see a dedicated virtual machine host, but the other pieces
we still haven't gone and taken a look at, right?

So the first thing I'm going to do is-- I have an instance
running here. I'll [INAUDIBLE] create an instance, right? And
I'll use this as the basis for my autoscaling, right? So I would
say, let's call this autoscaling.

I'll use Oracle Linux 7.7. It's fine. It's a multi-AD region. AD1
is OK. One core machine is fine. Where do I spin up? We have
been using this demo VCN network, and this Subnet A, the
public subnet. That's fine. I assign a public IP address.

Right here, I could do custom boot and all that. I'm probably
just going to skip it. Right here, it asks me to pick up the
SSH keys. Let me just get my SSH keys here, [INAUDIBLE]
private SSH keys here. And below here, you can see some
advanced options, right? I can choose my fault domain, et
cetera, et cetera.

In this example, I'm going to use a cloud-init script. And let
me just paste the script here. It's a really simple script. It
starts with #!/bin/bash, meaning a shell script is coming in
the next couple of lines. I'm installing this utility called
stress, and then I'm running the stress utility, spawning 30
threads with a timeout of seven minutes. What this is doing is
generating some load on my compute instance so I can show
scaling in action.
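When launching through the API or SDK rather than pasting into the console, such a cloud-init script is passed as base64-encoded `user_data` in the instance metadata. A sketch of what the demo's script might look like -- the exact install command is an assumption (the demo doesn't show it; `stress` typically needs the EPEL repository on Oracle Linux):

```python
import base64

# Cloud-init script modeled on the demo: install `stress`, then
# spawn 30 CPU workers for 7 minutes (420 s) to drive up load.
CLOUD_INIT = """#!/bin/bash
yum install -y stress
stress --cpu 30 --timeout 420 &
"""

# OCI expects user_data to be base64-encoded when passed via the API.
user_data = base64.b64encode(CLOUD_INIT.encode()).decode()
```

The encoded string is what you would place in the launch request's metadata so cloud-init runs the script on first boot.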

So I would click Create, and this would create my instance,
right, in 15 seconds or so. And as my instance is being
created, I would be able to create my instance configuration.

So as that is happening, let me just create my instance
configuration. It asks me for a name. I will just say this is
my autoscaling instance configuration. And I would create my
instance configuration-- even though my instance itself is still
in the process of getting configured, right?

So once my instance is configured and the configuration is
done, I could create an instance pool here. So I already had a
couple of pools which I was using earlier, but I could come
here and create a pool of my instances, right? How many
instances do I want in the pool? Let me just start with a very
simple number. I'll start with one.

And then it says, pick an instance configuration. So this is
the instance configuration we just created. And right here, I
can choose my VCN, I can choose my subnet, and I can
choose my AD, et cetera, right? I could attach a load
balancer. So you can see, I could have chosen a bigger
number here as well.

So this is just telling me that I am using a pool based on this
configuration, where the VM is one core, 15 gig of RAM, the
boot volume is this, and then I'm spinning this up in this
particular Virtual Cloud Network, OK? So pretty
straightforward. So I create my instance pool.

Now, we've created an instance, we've created an instance
configuration, and we are creating an instance pool, right?
Pretty straightforward. Now, the last thing for us to do is to
create an autoscaling configuration, right? So if I click on
Autoscaling Configuration, I would come here, and right now,
you can see that my pool appears here, even though it's still
in the process of getting created.

So let's look at a couple of things here, right? So the first
thing is to provide the name. I'm OK with that.

It gives me a cool-down period. This is the minimum period of
time between scaling actions, whether you're scaling out or
scaling in. Right now, it's five minutes. You could change it to
a lower number, but you don't want too fast a scale-out or
scale-in, so you want to have a minimum period before you
either scale out or scale in.

Now, depending on your use case, you might change that, but
it's a good idea to not do frequent scale-in or scale-out, right?
So I keep it at 300. It already picked my autoscale and my
instance pool. So it did that. And then, right here, it says,
what is my policy? So like I said, today, we support CPU and
memory, and the only policy we support is the threshold.

So it says, OK, how do we scale? The minimum number of
instances is one. For the maximum number of instances, I
could set something like three, starting with one. And what is
the scaling rule? As I said, today, we only support thresholds.
So if my CPU utilization goes beyond 50%, I want to add one.
So the first time it goes beyond 50%, it adds one. If it stays
beyond 50% after the five-minute cool-down, it will add one
more, up to a maximum of three. If my CPU falls below 40%, I
want to remove one instance. And, of course, there is a cool-
down period of five minutes, so this would happen over that
period of five minutes, right?

So it would go down from three to two, again, after five
minutes. If it's still less than 40%, it would go down to one,
and it will remain at one. So keep in mind: the minimum is
one, the initial size is one-- so we start with one-- and the
maximum is three, right? And this is it. I can just create my
autoscaling configuration here.

Now, what you would see is, if I go back to my instances, the
first thing you would see here is my pool getting created,
right? This is the instance which I used to spin up this pool,
and you could terminate this instance. It's fine. It's just used
to create the template.

If I click on this instance, the first thing you would see here
is-- of course, we did the public IP, and it's launching in
Subnet A, which is a public subnet, et cetera. If you click on
Metrics, I can see my CPU utilization is something around
66%, right? It's definitely breaching the threshold of 50%.

I can SSH into this instance. And if I run a command like
top, I can see the various stress processes we ran, right?
Remember, we had a startup script where we gave the stress
command with a timeout of seven minutes to spawn threads
on this machine. So you can just do Control-C here, then
something like iostat -c. You can see that the CPU utilization
is 83%, right? And if I go back and refresh this page and go to
my metrics, you can see here it's actually more than 82%. It's
going to 98% right now, right?

So what this could mean is, if it stays like this for five
minutes, this will trigger an autoscaling action, meaning you
would see one more instance get spun up because of this
behavior. Another way to look at it is, if I go to my Monitoring
tab and I click on Service Metrics, you can see the metrics for
the various resources running here, right? So if I go into the
Metrics Explorer, I could actually run a custom query here.

So I will say the compartment is Training. I want to look at
my Compute agent. And CPU utilization is my metric, the
interval is a minute, "mean" is fine. And right here, I can
choose my dimension. So I'm going to choose instance pool
ID. And right here, I'll have to go back and take a look at my
pools, because a couple of them are from the pool I was
running. So if this is the one, let's see if it gives me data, more
data for this time range. Maybe this is the one.

So if I update this chart, you can see right here that my pool
is showing new activity. There is an orange and a blue line.
So it looks like I have spun up another instance. And the way
I can see that is the name starts with INST, meaning my
autoscaling kicked in, and I was actually able to spin up a
couple more instances, because the CPU is constantly staying
beyond 50%.

So if I go back to my Compute instances, it still hasn't spun
up. But in a couple of minutes, you would see a couple more
instances spin up here because of the autoscaling action, as
we saw in the Metrics Explorer. So let me just pause the video
here, wait a couple of minutes, and then I can show you a
couple more instances getting spun up because of the
autoscaling going on.

And we paused the video for a minute or so. As you can see
here, this is my original instance which was running as part
of the pool, and another instance is getting provisioned. It's
been less than five minutes, and another instance is getting
provisioned because my load is more than 50%. This will be
[INAUDIBLE] because of the 20 threads I spawned. And so
that's the reason why it's spinning up another instance.

And you can see the difference between the time stamps, 8:42
and 8:49-- roughly five minutes' difference-- that's the cool-
down period we had, the time period between scaling actions,
scale-in or scale-out. And these instances are showing up
here in the metrics.

It's still not available-- it's still provisioning. But hopefully,
you can see the scaling action: we went from one instance to
two instances because the CPU utilization was greater than
50%. If it stays beyond 50% for another five minutes, I will
have three instances. And then it will stop there, because
that's the maximum I set up in my policy.

If I bring up my policy here and click on this, you can see
the CPU utilization rules: the scale-out rule says greater
than 50%, add one instance. If it falls below 40%, you would
see that we would go from two instances to one. And the
maximum we could go to is three.

So hopefully, this gives you a good idea of instance
configurations and the [INAUDIBLE] instance pool, where you
create a pool and manage these instances as one. And that
also forms the basis of instance autoscaling, where you can
write a policy-- a threshold policy today, on CPU and
memory-- and you could scale out or scale in depending on
your load.

So thank you for watching this module on instance pools,
instance configuration, and autoscaling. If you have time, join
the next lecture, where we talk about some other features of
the Compute service. Thank you.


6. INSTANCE METADATA AND LIFECYCLE


Hi, everyone. Welcome to this module on instance metadata
and lifecycle. So instance metadata includes things like the
instance OCID and name. The OCID is the unique identifier.
Name, compartment, shape, region, availability domain-- all
the values you would associate with the instance. You can
also have custom metadata, such as an SSH public key, as
part of instance metadata.

Now, the instance metadata service runs on every instance
and is an HTTP endpoint listening at this particular IP
address, 169.254.169.254-- really simple to remember. You
can get instance metadata by logging into the instance and
querying the metadata service. That's a very simple way to
use it. And there are some commands listed here, pretty
straightforward commands, where you can get all the
metadata, or you can get specific metadata values.
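As an illustration, the metadata document can be fetched from inside the instance with a plain HTTP GET. The `/opc/v1/instance/` path used here is the commonly documented v1 endpoint; the small parsing helper below is my own, not part of any SDK.

```python
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/opc/v1/instance/"

def fetch_metadata(url=METADATA_URL):
    """GET the instance metadata document.

    Only works from inside an OCI instance, since the link-local
    address 169.254.169.254 is answered by the metadata service.
    """
    with urllib.request.urlopen(url, timeout=2) as resp:
        return json.loads(resp.read())

def get_field(doc: dict, field: str):
    """Pull one value, e.g. 'availabilityDomain' or 'shape'."""
    return doc.get(field)
```

Outside an instance the fetch will simply time out; the field helper works on any metadata document you already have.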

Now, you can also add and update custom metadata for an
instance using the SDK or the CLI. Let me quickly jump to
the instance. We have been running a bunch of things. This
is my instance where we were doing some autoscaling. So if I
just clear my screen and query my instance metadata, you
can see here that I just did a call to this particular IP
address, 169.254.169.254, and I'm getting back all the
metadata values we just discussed.

So I can see that it's in availability domain AD-1. I can see
the fault domain, compartment ID, display name, image, and
so on and so forth. Everything which is on the instance, I can
get here. I can also get the public portion of the SSH key and
all the values. Now, I can also update a few values, if I want
to, using the CLI or the SDK. So it's pretty straightforward,
like with any other cloud product.

Now, there are a couple of things you need to know about the
instance lifecycle. Starting an instance-- if an instance is
stopped, of course, you can start it. Stop means you shut
down the instance, and after the instance is shut down, you
can start it again. Reboot shuts down the instance and then
restarts it, so it sort of combines stop and start.

Terminate is permanently deleting an instance you no longer
need. There are a couple of things to keep in mind. When you
terminate an instance, the public and private IP addresses
are released and become available for other instances to use.
For the boot volume, you have an option to permanently
delete it. However, you can also preserve the boot volume and
attach it to a different instance as a data volume, or use it to
launch a new instance.

Launching a new instance-- pretty straightforward. We looked
into this earlier. You have a boot volume, you can make
changes, and then you can launch other instances using that
boot volume. You would attach it as a data volume to other
instances for troubleshooting. This comes up in the exam:
how do you troubleshoot an instance failing to boot or some
issue with the boot volume? You can attach it as a data
volume to another instance and use it to troubleshoot.

What happens with billing when you stop or start an
instance? For standard shapes, the billing pauses in the
stopped state. Standard shapes have no local storage--
everything is block storage. So if you stop an instance, you
are basically de-provisioning the compute. Your block storage
is on remote storage, so you preserve that. You don't pay
anything for the compute because your billing stops.
Dense I/O shapes have lots of local storage, depending on
whether you're using a VM or bare metal, and the billing
continues even in the stopped state. Even if you stop the
instance, you're still paying for those shapes. GPU shapes--
again, billing continues. And HPC shapes-- again, the billing
continues even in the stopped state.
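These billing rules reduce to a small lookup: only shape families without local NVMe storage pause billing when stopped. A hypothetical helper summarizing the rules just stated (the family keys are my own labels):

```python
# Whether compute billing pauses while an instance is STOPPED,
# by shape family: standard shapes (no local NVMe) pause; dense
# I/O, GPU, and HPC shapes keep billing even when stopped.
BILLING_PAUSES_ON_STOP = {
    "standard": True,
    "dense_io": False,
    "gpu": False,
    "hpc": False,
}

def billing_paused_when_stopped(shape_family: str) -> bool:
    return BILLING_PAUSES_ON_STOP[shape_family.lower()]
```

This makes the exam-relevant distinction easy to remember: local storage attached means billing continues in the stopped state.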

So this was a rather quick module. We talked about instance
metadata, and we talked about the instance [INAUDIBLE].
Hopefully you liked the lecture series on the OCI compute
service. Thank you for watching this lecture series. If you
have time, please join me in the next lecture series where we
talk about the various storage services available on the OCI
platform. Thank you.

BLOCK VOLUME
1. LOCAL NVME
Hello, everyone. Welcome to this lecture series on the OCI
Block Volume Service. Before we dive deeper into block
volume and local NVMe storage, let's look at the gamut of
storage services supported across the OCI platform. So
starting on the left-hand side of this table, we have local
NVMe, block volumes, file storage, object storage, and archive
storage. So this is the whole gamut of storage services
supported by the platform.

Why do we have so many services? Well, these are tied to the
use cases or requirements customers have, and each service
has specific use cases and specific characteristics. So let's
look into this really quickly.

Local NVMe is NVMe SSD-based temporary storage. What it
means is it's non-persistent storage, but it does survive
reboots-- because if you're running, let's say, a database, you
want to be able to reboot the instance, so it does support
that. But the key point here is it's non-persistent, because it's
temporary storage.

The capacity can be in terabytes, and you can see some
numbers here for virtual machines and bare metal instances.
The use cases are applications which require a lot of
throughput-- so big data applications, OLTP-- where you
require a massive amount of very fast storage, lots of IOPS,
lots of throughput. If you want very fast local storage and
don't want to go onto the network, you would use local NVMe.

Block Volume is also NVMe SSD-based block storage, but the
difference between local NVMe and Block Volume is that a
block volume is durable. What does that mean? "Durable"
means that we make multiple copies of the volume in an
availability domain-- we make three copies, so even if one
copy dies, we guarantee that durability.
The capacity can be petabytes, much more than the capacity
supported by local NVMe. And you can see some numbers
right here. The use case here would be applications that
require SAN-like features-- whether it's Oracle Database,
VMware, Exchange, or any of these other applications which
really require SAN-like performance.

The third storage service supported in OCI is file storage. This is an NFSv3-compatible file system. Again, it's durable storage, with multiple copies in an availability domain. The capacity is exabytes, and you can see some numbers here. This is for applications that require a shared file system-- so for Oracle, it would be Oracle's own applications like EBS. It can be HPC. It can be some other scenarios.

And then, the last two storage services are sort of-- you can think about those as storage for the web, right? So if you have a lot of unstructured data, you would store it in object storage. Highly durable-- we maintain multiple copies across the data centers in a multi-AD region. Capacity is petabytes, and you can see some of the numbers here. And then, as I said, this is good for unstructured data.

Archive storage is a class within object storage, and it's suited for long-term archival and backup. Again, highly durable, and the use case is for applications that have a need for long-term archival and backups.

OK, so let's move on and talk a little bit about local NVMe storage. In this section, we are going to cover local NVMe storage, and in the next module, we are going to talk about block volumes.
So what do we mean by local NVMe storage? In OCI, some instances have locally attached NVMe devices. What this means is, if you have applications with very high storage performance requirements-- lots of throughput, lots of IOPS, and local storage where you don't want to go over the network-- you would use these local NVMe devices.

As you can imagine, these are locally attached SSDs, and they are not protected in any way through RAID, snapshots, or backup out of the box. We don't provide any protection out of the box, which means that customers are responsible for the durability of data on these local SSDs.

And you can see some instances here that support local SSDs, right? BM, bare metal, dense IO shapes, [INAUDIBLE] the virtual machine dense IO shapes, and you can see the sizes we support, right? Going from 51 terabytes all the way down to something like 6.4 terabytes for the smallest shape.

So again, depending on your use case, you could either go with a bigger shape or a smaller shape. And if you log into one of these instances-- SSH in and list all the block devices with lsblk-- you can see the NVMe devices appear here, right? You can get the list.

So rather straightforward. Now, one thing which is important and you need to consider is-- like we said, these devices are not protected by us out of the box. So how do you go ahead and protect these devices, right? You can always configure RAID on these local NVMe devices. There are various options. I'm presenting three of them here, but you can do something other than these three options as well.

The simplest is RAID 1, which is basically just a copy or a mirror of a set of data on two or more disks, right? So you'll have disk zero and disk one, and the same data is just copied across these two different disks.

The disadvantage here is there is no parity. As you know, parity is a calculated value used to reconstruct data after a failure. So if both disks fail, there is no redundancy built in.

RAID 10 stripes data across multiple mirrored pairs. As you can see here, it's essentially a combination of RAID 1 pairs with striping. As long as one disk in each mirrored pair is functional, data can still be retrieved, right? It provides that extra protection. RAID 6 is block-level striping with two parity blocks distributed across all member disks. It becomes a little bit more complex-- you take a performance hit, but you get that extra durability.

So again, depending on which RAID configuration you want, you could go with RAID 1, RAID 10, or RAID 6, or you could try something else. But again, keep in mind that this is completely your responsibility as a customer.

Though we don't configure RAID out of the box, we do provide SLAs for NVMe performance. And again, you can get these numbers from our documentation page. But you can see that, going from DenseIO1.4, 1.8, 2.16, and so on and so forth, you can see the minimum IOPS value we support.
So if you go with a bare metal DenseIO shape with [INAUDIBLE], we support 3 million IOPS. And again, there is some finer print you have to read-- there are certain block sizes, the read/write mix for the workload, et cetera, right? But there is an SLA in case you are using local NVMe devices.

So with that, thanks for watching this module on local NVMe


devices. In the next module, we'll introduce block volume
service, and we'll look into some of the details. Thank you.

[WHOOSH]

2. BLOCK VOLUME INTRO


The OCI Block Volume Service. In this module, we'll
introduce the Block Volume Service, and we'll talk a little bit
about some of the inherent features.

So what is the Block Volume Service? The Block Volume Service lets you store data on block volumes independently and beyond the lifespan of compute instances. And the key thing to highlight here is "independently and beyond the lifespan of compute instances." We'll look into that in greater detail.

Block volumes, as the name suggests, operate at the raw storage device level and manage data as a set of numbered, fixed-size blocks using common network storage protocols, such as iSCSI. You can create, attach, connect, and move volumes as needed to meet your storage and application requirements.
Now, why would you use block volume? The first use case is the most important one, and that's providing persistent and durable storage. As you saw in the previous module, local storage-- also sometimes called ephemeral storage-- is temporary. It lives and dies with the instance, so it's not persistent. It's not durable.

So if you have applications where you want to store the data durably-- if you're running, let's say, a database, or SharePoint and these kinds of applications-- you want that durability. So you would go with block volume.

And then, there are some other use cases, like expanding instance storage, instance scaling, et cetera. But the most important and most relevant reason customers use block volume is for the persistence and the durability of the data.

Now, there are some characteristics of block volume you should know. First is the size. We support anywhere from 50 gigs all the way to volumes which can be 32 terabytes in size. That's a pretty massive size.

The disk type, as we said, is NVMe SSD based. And IOPS (Input/Output Operations Per Second) performance varies-- it goes all the way from 2 IOPS per gig up to 75 IOPS per gig. And for IOPS per volume, we support up to 35,000 IOPS per volume. We'll look into these in greater detail.

Some things are interesting here. You could literally attach 32 volumes per instance, which would give you a 1-petabyte storage space. So 32 terabytes per volume times 32 volumes per instance-- you're nearly reaching a petabyte of storage. That's mind-boggling, given the amount of storage you can use.

And then you can see some of the points on security. Data is encrypted at rest. You could bring in your own keys; otherwise, you use the keys provided by us. And you can also do in-transit encryption.

Now, there is a new capability which we introduced recently, which is performance tiers, and there is this characteristic called Volume Performance Units, or VPUs. So what does that mean? Basically, what we are giving here is different performance levels. There are three levels: there is a lower-cost tier, there is a balanced tier, and then there is a higher-performance tier.

So what do these tiers mean? The balanced tier is the default tier you get if you create a new block volume or a boot volume. This is what block volume used to be in the past. You get 60 IOPS per gig, all the way to a maximum of 25,000 IOPS per volume.

If you want to go higher than that-- maybe you have big databases running and you want the best possible performance-- you could go with the higher-performance tier. That would give you 75 IOPS per gig, all the way to 35,000 IOPS per volume. And right here, you can see some of the numbers: the maximum IOPS per volume is 35,000, and IOPS per gig is 75.

If you don't need that kind of performance, you could go with the lower-cost tier. In this tier, you get 2 IOPS per gig, up to 3,000 IOPS per volume. An important thing to keep in mind-- this tier is not available for boot volumes because, as you can imagine, if you are booting up from the volumes, you need higher performance. So that's why boot volumes start with balanced. But you could always go with higher performance for boot as well.

And then, the second thing is, there is no separate volume performance unit charge if you go with the lower-cost tier. If you go with balanced or higher performance, you pay for a specific number of volume performance units. In the case of balanced, you buy 10 VPUs per gig per month; in the case of higher performance, you buy 20 VPUs per gig per month.

All right, so having looked at block volume elastic performance and the concept of volume performance units, let's jump into some of the operations you can perform with block volumes. As you can see on the screen-- and I'll bring up a demo, and we'll get into more details there-- you could create and attach a block volume, right? And there are two different ways you could do that. One is by using something called iSCSI, and the other one is using this mechanism called paravirtualized attachment.

Now, paravirtualization is a lightweight virtualization technique where the VM utilizes hypervisor APIs to access remote storage directly as if it were a local device. iSCSI, on the other hand, utilizes the internal storage stack in the guest operating system and network hardware virtualization to access block volumes.

So in the iSCSI case, the hypervisor is not involved in the attachment process. As you can guess, this gives you better performance, but you have to do a little bit more work to attach the block volumes to the instance, right? And we'll look into this in more detail as we go into the demo.
You could detach and delete block volumes. And that's the reason people use block volume-- it's persistent, it's durable. If you don't need it, or your instance dies, you still have it. You can still keep your block volumes. You can detach a volume and attach it to another instance, and you [INAUDIBLE] right? You can delete it, and all that. And again, we'll look into these in more detail.

Now, one thing to keep in mind-- today, the Block Volume Service supports this capability called Offline Resize. So what do we mean by "offline resize?" What we mean is, if you want to expand an existing volume, you cannot just do a dynamic resize where you go from, let's say, 50 gig to 100 gig. What you have to do is this thing called offline resize: you have to detach the volume first, if it's attached to an instance, and then you can change the size.

So that's one way to do it. The other way is, you can use Volume Backup and restore the backup to a larger volume, or you could do a clone and, again, go with a larger volume. When you do a backup or a clone, you're not restricted to the same size as the original volume-- you can actually go higher, right? Again, we'll look into these in subsequent modules.

But just keep in mind offline resize and the various mechanisms you can use to change the size of the disks. Also keep in mind, you can only increase the size of your volume; you cannot decrease it. It seems logical, but just keep that in mind.

[WHOOSH]

3. BLOCK VOLUME DEMO


Hello, everyone. Welcome to this module on a quick demo of
the OCI Block Volume Service. In the previous module, we
introduced the Block Volume Service and we looked into
some of its details. In this module, let's demo the service and
look at some of those things in action.

Right here, I'm in the OCI console. We have been using the OCI console for some of the other modules. And if I click on the hamburger menu here, I can see the various service links, right? So there's Compute, Block Storage, Object Storage, et cetera.

So I'll click on Block Storage, and the first link here is Block Volumes. And right here, it gives me an option to create a new block volume. So let's create a new block volume-- I have been creating a bunch of these block volumes in my account previously. I'll call this one blockvolume1. The compartment, training, is fine. I'm in a multi-AD region, so it gives me a choice of three different ADs. If I were in a single-AD region, I'd just see one AD here, and that's fine.

It gives me a size. Let's just pick 100 gig, right? Below, you can see that the sizes can go from 50 gigs all the way to 32 terabytes-- we looked into this when we were discussing the service. There are backup policies, et cetera; we'll look into those in a subsequent module.

And right here are the volume performance units. We talked about the whole idea of elastic performance with the Block Volume Service. So I can pick three different performance levels. There is a lower cost, which gives me 2 IOPS per gig. There is the balanced level, which gives me 60 IOPS per gig, and this is the default-- if I hadn't picked anything, it would default to 60 IOPS per gig, both for new block volumes as well as boot volumes.

And then, right here, the third one is the higher performance. If I go with this, I get 75 IOPS per gig. As for use cases: for applications like streaming, data warehouses, or log ingestion, where you need a lot of sequential throughput, you would go with the lower cost.

Balanced is good for any kind of random read or write-- so booting up your disks, running your databases, SharePoint, VMware, et cetera. And then the higher performance is for workloads where you really need the best performance, like your databases, et cetera. So I'll pick balanced here. That's fine.

Right below, I see an option for encryption. This is server-side encryption. I could encrypt using Oracle-managed keys-- we manage the keys for server-side encryption, so this is encryption of data at rest. Or I could bring in my own keys. I could do that.

I am going to let Oracle manage the keys. That's fine. And right here, I could do tagging. So let me just click Create Block Volume, and it's a rather straightforward process to create a block volume. And again, it's flexible-- I can choose any size from 50 gig all the way to 32 terabytes, right?

As it's getting created, you can see here that I have some links for attached instances, metrics, backups, clones, et cetera, right? So if I click on Attached Instances, I can see that there is no instance attached right now.
So I can click Attach Instance, right? And I can attach this block volume to an instance. If you recall from the slide, the whole idea of block volume is to give you durable and persistent storage. So you can attach it to an instance, and then you can detach it. Even if the instance goes away, your data is still persistent and durable.

So I get two options here-- iSCSI and paravirtualized. I'll choose iSCSI. Then I get different access-type options: there's read/write, there's read/write shareable, and there is read-only. Read/write means you can both read and write. Read/write shareable means the same block volume can be shared across multiple compute instances, right? No other cloud vendor does this today except Oracle.

And so I'll choose read/write. Read-only means you want to protect the data-- you just want to read it, not write to it, right?

So I can select the instance here. And I have these four or five instances running. If you recall from the other module we had on Compute, we were running auto-scaling. So let me just pick this auto-scaling instance, and then it's asking me to pick a consistent device path.

And if you scroll here, you can see the device path and get more details. The whole idea is, if you are rebooting your instance and you want your block volumes to mount automatically, it's a good idea to use the consistent device path rather than the raw device names in the entries you'll have in your /etc [INAUDIBLE] file.

Sometimes, you will see inconsistent behavior-- your block volumes will not be mounted, et cetera. So it's good to use the consistent device path. And you can see the device path here-- /dev/oracleoci/oraclevdb. It always ends with oraclevd and then a letter-- whether it's vdb, vdc, vdd, et cetera-- and you can choose any of these values.

So I click Attach here. And what it's doing now is attaching the block volume to the compute instance. As it does that-- let me jump over to the compute instance-- because I chose my attachment type as iSCSI, I have to run some commands to attach my volume.

Because remember, these volumes are running on the [INAUDIBLE] storage. They're running somewhere else-- they're not directly attached to the instance. They're storage servers running over the network. So I need to run those commands.

If I had chosen paravirtualized, it would automatically take care of attaching the volumes as if they were running locally. But the downside with paravirtualization is a little bit of performance overhead. With iSCSI, you don't run into that.

So if I click on the instance here, I can see that my block volume is attached, right? And on this ellipsis menu, there is a list of all the iSCSI commands to attach or detach. So let me just copy my iSCSI commands. And right here, first thing, let me SSH into this instance.

So if I scroll up, I can see the public IP address. This is the instance we were using for auto-scaling, and it is living in a public subnet. We had created the VCN and the subnets, et cetera, in the previous modules.
So I am able to SSH into the instance, and let me now run these iSCSI commands to attach the block volume to the instance. So I run these, and you see the acknowledgment that each is successful. So now, if I do a list of block devices, I can see that this 100-gig block volume we just attached appears here, right?

But what happened to the consistent device path we just talked about? To look at that, we could do a listing and find out where those consistent device paths are, right? And if you see here, I'm just doing a listing of my disks.

You can see that this one here is the one we just attached,
right? /dev/oracleoci/oraclevdb. And to confirm that, if I go
back to my console, you can see the consistent device path
here is the same as what appears on my screen here, right?
Let me just clear my screen.

And then, what I need to do-- and this is typical of any operation you want to do with your block devices-- is create a partition first. Let me run this. And then I need to create a file system here.

And when the file system is created, I can create a mount point, and then I mount this block volume to the mount point, and then I can start using it, right?

So let me just quickly [INAUDIBLE] point [INAUDIBLE]-- this one is fine-- and then sudo mount. And if I run list block devices now, I can see my 100-gig volume is mounted here, right? And now I can go start running my application here and storing my data here, right? It's as you would expect with a block volume service.
A couple of other things I can do here: on my block volume, if I click here, I can change my performance tier, and it's dynamic. So if I want to go to a higher IOPS [INAUDIBLE], I could just click Higher Performance and change my performance.

And you will see that even though I can change the provisioning, it's dynamic provisioning, right? So I don't have to detach my volume, and I don't have to incur downtime to do that. And you can see my performance has now changed, and now I am at the higher performance tier, right? So it's pretty straightforward.

I can also do backups and clones. So I could click Backup here, say whether this is a full backup or an incremental backup, and click Create, and now my backup is getting created.

Backups and clones are mutually exclusive, so I can run only one at a time. So if I come here and try to run a clone, it will give me an error saying that I cannot do a clone because there is a backup going on, right? One thing you should notice here is my original volume was 100 gig, but I could go to a higher-size volume here.

So I could go to 200 gig, and nothing prevents me from doing that. I could do that and create a clone-- though right now the backup is still going on. Otherwise, it would create a clone of 200 gig.

So remember we talked about block volume resize? There are three ways you can resize. One way is you create a clone-- a bigger clone. The other way is you create a backup and restore it to a larger volume, right? You could always go from a 100-gig volume to a backup and then create a 200-gig volume from that backup.

And the third way to resize is-- if I go back to my block volume-- you see the Resize option is right now grayed out, right? And the reason it's grayed out is that this volume is attached to an instance.

So in order for me to resize this volume-- it's an offline resize-- I'll click Detach here, and it detaches the volume from the instance. And now, if I go back to my block volume, I should be able to resize it, right?

So if I come here, it's just in the process of getting detached. Let me just refresh my screen. Now I should be able to resize it, right? And of course, when you resize, you can attach it back to the instance. But when you do that, the operating system has to recognize the new volume size.
You'll have to create partitions and all that stuff. Again, depending on Windows or Linux, the behavior will be slightly different. But right there, you can see I am going from a 100-gig volume to 200 gigs.

So that's pretty much a quick demo on block volumes and some of their characteristics. In the next few modules, we will look into things like backups and restoration, and also a little more detail on cloning and volume groups. Thank you for joining this module on a quick block volume demo.
4. BACKUP AND RESTORATION
Hi, everyone. Welcome to this module on the OCI Block Volume Service backup and restoration capabilities. In this module, we'll look into backing up and restoring block volumes and what those features actually do.

So backup, as the slide shows here, is a complete point-in-time snapshot of your block volumes. What this means is, if you take a backup of a block storage device running here, for example, the backup actually goes into Object Storage, which is a regional service. And the backup is then just stored there in Object Storage.

Now, this is a multi-AD region, right? So you can see Availability Domain 1 and Availability Domain 2. So you could take the backup and restore it as a new volume to any of the ADs within the same region. But if it's a single-AD region, of course, you are limited to the same availability domain from which you took the backup. The thing you could do, even in those single-AD regions, is copy block volume backups from one region to another.

Because one of the common use cases which comes up all the time is [INAUDIBLE] running some application in this region, but I also want to quickly clone that application in, let's say, another region, right? So the easiest way to do that is to copy your block volume backups from one region to another.

Now, backups are done using point-in-time snapshots-- that's why, as we talked about earlier, while the backup is being performed in the background in an asynchronous manner, your applications can continue to access your data without any interruption. Right?
And you can see some numbers here. These are typical numbers-- we don't have an SLA or anything like that. If you're doing a 2-terabyte volume backup, it takes something like 30 minutes, and so on and so forth. Right?

There are two kinds of backups you can do, right? There are on-demand, one-off volume backups, or you could do policy-based backups.

With on-demand, one-off backups, there are two options you get, right? One is incremental, where you take point-in-time snapshots and each backup captures only the changes over the previous one. Or you could just do a full backup-- you don't care about the previous backup; you just do a full one. That's the on-demand, one-off volume backup.

The other option-- this is what we just talked about-- is automated policy-based backups. By automated policy-based backups, what we mean is you take backups on a schedule, based on the selected backup policy. We support three backup policies: Bronze, Silver, and Gold.

And don't worry, I'll show this in the console. But today, you cannot do a customized backup policy-- you could not say, you know, I want to combine Bronze and Silver or do my own sort of policy. It's not supported today. So, with that, let me just quickly jump to the console and show you where the backup policies are.
So the first thing you see here is the backup policies listed right in the console-- Gold, Silver, Bronze. If you click on Gold, you can see that there are different backup types. So there is a daily backup which happens, right?

There is a weekly backup which happens. And, as you can see here, the daily backups are retained for a week. Weekly backups are retained for a month. And then there's a monthly backup, which is retained for a year. And then we also do a full yearly backup, which is retained for five years.

And as I scroll here, it shows me the times when the backups will happen, right? It's showing me the timing for the next three daily backups over the next three days, right? It's showing me, for weekly, the schedule for the next three weeks, and so on and so forth. So you can see these schedules here.

If I go back to the menu here, you can see there's the Silver backup policy, which is a slightly lower frequency than Gold. It has a weekly schedule, a monthly schedule, and a yearly schedule. It doesn't have a daily backup. Weekly is retained for a month, monthly for a year, and yearly for five years, as with Gold.

And then there is Bronze, which is basically monthly and yearly. So there are no daily backups and no weekly backups. Again, depending on what your requirements are, you could go with Gold, Silver, or Bronze.

So how do you apply these backup policies? If I come to a block volume here, the first thing I could do is just a manual backup. This block volume is running here-- I could just click Create Manual Backup, give it a name, backup1, and choose a full backup or an incremental backup. I'll say full backup, right? I could just do that-- pretty straightforward.

I could also assign a backup policy-- because when I created these volumes, I didn't assign one. So I could come here and say, you know, do a Bronze backup, which means there's a monthly backup, and it will also have a yearly backup. I could have gone with Gold, which is, as the name suggests, the highest tier of backups.
But then, again, remember that as you are doing the backups, you are also paying for them, right? So there's a trade-off between the frequency you want and the cost you're willing to spend. So Gold, probably I don't want to do. Bronze is fine, and I can just assign my backup policy here. And, right here, you can see that my backup policy is assigned.

And if I go to all Block Volume Backups, the first backup which I had just created is getting created here-- you can see just this icon here showing it's available. And because the policy I assigned is Bronze, it's going to do a backup on a monthly basis.

For this other volume, let's say I assign Gold-- you will see right away that there is a backup being created for this particular volume. If I go to Block Volume Backups, in a couple of seconds you will see that the policy triggers a backup, and the backup starts there, right? Because Gold takes a daily backup, and then there's a weekly backup.
So, hopefully, this gives you a good idea of how backup works, whether it's on-demand or policy-based, and the different tiers of policy-based backups we support. Thanks for watching this module. If you have time, please join me in the next module, where we talk about cloning and volume groups. Thank you.

5. CLONE AND VOLUME GROUP


Hi, everyone. Welcome to this module on OCI block volume
clone and volume groups capabilities. Let's first look at the
clone functionality and what it does. Clone allows copying an
existing block volume to a new volume without needing to go
through a backup and restore process. As you recall from the
previous module, when you do a backup and restore, you are
doing a point-in-time snapshot. The backup is going to the
OCI Object Storage, and then you restore from there.

If you don't want to do that, you can use this clone capability. As you can see here, a clone is a point-in-time, direct disk-to-disk deep copy of an entire volume. There is no going to Object Storage-- none of that is involved, right? It's done directly in the data center: a point-in-time, disk-to-disk deep copy.

The clone operation is immediate. A backup also happens asynchronously, so you can continue using the volume while the backup is going on-- and a clone is very similar, right? It's immediate, but the actual copying of data happens in the background. You can see some numbers here.
They're very similar numbers to backup and restoration, slightly better than [INAUDIBLE]. For one terabyte, a backup would take, let's say, half an hour; a clone would take half that time, right? Because again, you're not going to the OCI Object Storage service-- you're just doing a direct disk-to-disk deep copy.

A clone can be created in the same AD with no need to detach the source volume. As I said, you can keep using the block volume and just clone it. Clones cannot be copied to another region, unlike block volume backups.

One of the key advantages of backups is that you can copy them to another region; with clones, you cannot do that. And there are different lifecycle states for the cloning process. As soon as the state changes from provisioning to available, which typically happens within a few seconds, you can start using the clone.

Clone and backup operations are mutually exclusive, which means that, at any given time, you can run only one of them; you cannot do both at the same time. As for the number of clones created simultaneously: if the source volume is attached, you can create one clone at a time. If the source volume is detached from the instance, you can create up to 10 clones from the same source volume simultaneously. Depending on your use case, you might want to keep it attached or detached, so you could create more clones if you have a requirement like that.
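As a concrete sketch of how this looks from the OCI CLI: a clone is just a new volume created with `oci bv volume create` pointing `--source-volume-id` at the existing volume. The OCIDs, display name, and size below are placeholders, not values from the demo, and the command is printed rather than executed, since running it needs a configured tenancy.

```shell
# Placeholder OCIDs for illustration only.
SOURCE_VOLUME="ocid1.volume.oc1..exampleuniqueID"
COMPARTMENT="ocid1.compartment.oc1..exampleuniqueID"

# Cloning = creating a new volume whose source is an existing volume.
# The clone may be larger than the source (here: 100 GB -> 200 GB).
clone_cmd="oci bv volume create \
 --compartment-id $COMPARTMENT \
 --source-volume-id $SOURCE_VOLUME \
 --display-name blockvolume1-clone \
 --size-in-gbs 200"

# Printed rather than executed, since it needs a configured tenancy:
echo "$clone_cmd"
```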

There is also a feature called volume groups. With volume groups, you can group together block and boot volumes from multiple compartments, across multiple compute instances, into a single volume group.

Now, why would you do that? In reality, as you work through your applications, you will have many, many block volumes and many boot volumes. If you have to do backup, cloning, and management of those volumes one by one, it becomes cumbersome.

Typically, folks write shell scripts, or they try to automate it using Terraform. That is, obviously, a good way to automate the operations, but OCI provides this capability out of the box through volume groups, in a seamless manner.

Using volume groups, you can create volume group backups and clones that are point-in-time and crash-consistent. You can do full and incremental backups, because the whole set of volumes operates as one unit. All the operations you can do with backups and cloning on individual volumes, you can do with volume groups.

This is good for the protection and lifecycle management of enterprise applications, which typically require multiple volumes across multiple compute instances to function effectively. And there is no charge for using this feature.
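As a sketch, a volume group can also be created from the CLI. The `--source-details` JSON shape shown here is an assumption for illustration (check `oci bv volume-group create --help` on your tenancy), the OCIDs are placeholders, and the command is only printed.

```shell
# Hypothetical OCIDs: two block volumes and one boot volume, as in the demo.
VOLS='["ocid1.volume.oc1..example1","ocid1.volume.oc1..example2","ocid1.bootvolume.oc1..example3"]'

vg_cmd="oci bv volume-group create \
 --compartment-id ocid1.compartment.oc1..example \
 --availability-domain AD-1 \
 --display-name volumegroup1 \
 --source-details '{\"type\":\"volumeIds\",\"volumeIds\":$VOLS}'"

# Printed rather than executed:
echo "$vg_cmd"
```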

So with that, let me quickly jump to the console and show you some of these features in action. First things first: if I come here, I see the block volumes, and I can create clones here. If I say Create Clone, I need to provide a name. And as you can see, the original volume was 100 gig, but I can actually create a higher-size clone; I can go to a 200 gig clone. We talked about this earlier when we discussed online and offline block volume resize. So I can create a clone here.

Now, for the same volume, if I want to start a backup right now, you can see that the option is grayed out, because backup and clone are mutually exclusive; you can only do one at a given point in time.

The second thing I want to show you is, if I click on this block volume, I can see that the clone has been created; it's available right away. And we also did a backup earlier, so you can see that it's available too.

And as you can see here, this volume is attached to an instance, so I can only create one clone at a time. If the volume were not attached to an instance, I could create more than one clone at a time.

Let me quickly show you a couple of other things. If I click on Volume Groups here, I can create a volume group; it's really straightforward. I'll call this one volumegroup1. Because I'm in a multi-AD region, I can see three different ADs, but in a single-AD region, you would only see one.

All right, now it's asking me to select the volumes. So I will select blockvolume1, I will also add blockvolume2, and let me add a boot volume, the autoscaling boot volume. These two block volumes are attached to the instance, and this is the boot volume for the instance. If I were running my application on this instance, I would consider these three together, as a unit.

Now you can see my volume group has two block volumes and one boot volume. The first thing I can do here is create a backup. So I create a backup for my volume group, call it backup1, and click Create. And now you will see the number of backups is three, because there are two block volumes and one boot volume. I could create a clone as well.

And again, these are mutually exclusive, so the clone option is grayed out, because I can only do one operation at a time. Hopefully, this gives you a good idea of the cloning capability and volume groups, and how you can do backups and clones for volume groups.

Thank you for watching this module. I hope that you found
this useful. If you have time, please join me in the next
module, where we'll talk a little bit about boot volumes.
Thank you.


6. BOOT VOLUMES
Hi, everyone. Welcome to this module on boot volumes. We have already covered boot volumes in the compute lecture series, so I'll go through this really fast. But if you haven't yet watched that lecture series, it's good to recap some of the key points here.

A compute instance is launched using an operating system image stored on a remote boot volume. We talked about this earlier: you have a compute instance, you have a block volume where you keep your data and applications, and then you have a boot volume, which is a special kind of block volume where your operating system is stored.

The boot volume is created automatically and stays associated with the instance until you terminate the instance. All the characteristics of block volumes carry over to boot volumes. If you want to launch another instance from a boot volume, you first create a custom image of the boot volume, and then use that custom image to launch the new instance. Alternatively, you can launch a new instance directly from an unattached boot volume if you don't wish to create a custom image. So those two options are available.
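As a sketch, both options map to OCI CLI calls. The OCIDs, shape, and availability domain below are placeholders, and the commands are printed rather than run.

```shell
BOOT_VOL="ocid1.bootvolume.oc1..exampleuniqueID"   # placeholder OCID

# Option 1: create a custom image from an instance, then launch from the image.
echo "oci compute image create --instance-id ocid1.instance.oc1..example --display-name my-image"

# Option 2: launch directly from the unattached boot volume.
launch_cmd="oci compute instance launch \
 --availability-domain AD-1 \
 --compartment-id ocid1.compartment.oc1..example \
 --shape VM.Standard2.1 \
 --subnet-id ocid1.subnet.oc1..example \
 --source-boot-volume-id $BOOT_VOL"
echo "$launch_cmd"
```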

Delete boot volume: you can delete an unattached boot volume. You can also choose to automatically delete the boot volume when terminating an instance by selecting the checkbox in the delete confirmation dialog; we have seen this in some of the earlier demos. If a boot volume is directly attached to an instance, you cannot delete it. [INAUDIBLE] straightforward.

And all the things you can do with block volumes, you can do with boot volumes: you can do manual backups, you can do policy-based backups, and you can create clones of boot volumes.

There are a couple of other things you can do with boot volumes. You can attach a boot volume to an instance as a block volume for troubleshooting. This comes up in the exam: if you have a boot volume that is having an issue, how do you troubleshoot it? You can follow the steps here. I'm not going to go into the details, because we covered them in the compute module. You attach the boot volume to an instance as a block volume and do the troubleshooting there.

We also looked into this earlier: you can create custom-size boot volumes. The default size for Linux is 46 gig; for Windows, it's 256 gig. But nothing stops you from going all the way [INAUDIBLE] to 32 terabytes. You likely don't need that much space, but you can, of course, go well beyond the default sizes.

So with that, let me quickly jump to the console and show you boot volumes in action. We have, again, looked into these in the compute section, but let's quickly look at them here.

If I go into my compute section, you can see boot volumes here. And remember, boot volumes are a special kind of block volume where you keep your operating system and from which you boot. Other than that, everything else is exactly the same as block volumes. So as you can see here, you can assign a backup policy, create a manual backup, and do cloning.

The reason some of these options are grayed out is that these instances are running. So let me find an instance: the first thing I can do is delete the instance and decide to keep the boot volume. If I come here and say Terminate, it asks whether to permanently delete the attached boot volume. I don't want to do that, so let me just terminate this instance and keep the boot volume. And I want to show you a couple of things we can do with it.

If I go into the boot volumes, it will probably take a few seconds, but you will see that this boot volume is now available, and I can create an instance out of it. Right here, I can do things like manual backups. If I want a backup of the boot volume, I can just create one here. And there we go: I have a backup of the boot volume.

Similarly, I can create a clone. It looks like this boot volume is now available because the instance has been terminated, so I can actually create an instance out of this boot volume. Again, we covered this in the compute section, but I can use the boot volume to spin up a new instance.

So hopefully this gives you a quick overview of boot volumes and how they behave. If you haven't watched the compute lecture series, you should probably go watch that; it covers boot volumes in a little more detail. With that, thank you so much for watching this lecture series on block volume. Hopefully you found it useful. If you have time, please join me in the next lecture series, where we talk about the OCI Object Storage service. Thank you.

FILE STORAGE
1. FILE STORAGE INTRO
Hi, everyone. Welcome to this lecture series on the OCI File Storage Service. In this particular module, we are going to introduce the File Storage Service and look at some of its characteristics. My name is Rohit Rahi, and I'm part of the Oracle Cloud Infrastructure team.

We have been using this slide to show you the range of storage services available on OCI: local storage, block storage, file storage, and object storage. These have different storage architectures. In this module, we are going to look into the File Storage Service.

The File Storage Service uses a storage architecture in which you manage data as a file hierarchy. This is in contrast to object storage, where you manage data as objects, and to block storage, where you manage data as blocks within sectors and tracks on physical disk drives.

So that's the main high-level difference: file storage, where you manage data as a file hierarchy, versus block storage, where you manage data as fixed-size blocks, versus object storage, where you manage data as objects. We'll look into the details in subsequent slides.

So what are the use cases for the File Storage Service? There are several, some of which relate to Oracle applications like EBS that have specific file storage requirements. Then you have general-purpose file systems, and there are scenarios in big data analytics, HPC scale-out apps, and several others where a file storage service can be used.

What are some of the features of the File Storage Service? First, the service is AD-local: in a multi-AD region, each file system lives in one AD. It supports the NFS v3 protocol, network lock management for file locking, and full POSIX semantics. For data protection, we support snapshots, and you can create up to 10,000 snapshots per file system.

For security, we support encryption at rest for all file systems and metadata, and very soon we are also going to support encryption in transit for data on the file systems. Of course, you can access the service through the Console, APIs, CLI, SDKs, and so on. You can create 100 file systems and two mount targets per AD per account; these are soft limits, and you can always increase them.

So let's get into the details of what the File Storage Service entails: what a mount target is, what a file system is, what an export path is, et cetera. Before I [INAUDIBLE], let me get the ability to write on the screen.

So what is a mount target? Right now, I'm showing you a region which has two availability domains. It could be a single-AD region as well, and all the concepts I'm going to talk about remain the same. So we have a region with two ADs; in reality, these regions have three ADs, but I'm just showing two for illustration purposes.

We see a VCN which, if you recall from the VCN module, is a regional service, and it has this particular address space. I have two smaller subnets within the VCN, 10.0.0.0/24 and 10.0.1.0/24.

I create this thing called a mount target, which is nothing but an NFS endpoint that lives in the subnet of your choice. You could have the mount target created here, or in some other subnet, or in another AD. It is specific to an AD, as shown here, and a mount target has an IP address and a DNS name that you can use in your mount commands. The simple way to identify a mount target is by the private IP address it gets, which you use with your file systems.

The way your NFS clients access the file system is by going through the mount target. You can see two NFS clients here in two different ADs and two different subnets; they are accessing a file system through this particular mount target.

A mount target requires three private IP addresses in its subnet, so it's good practice not to use a /30 subnet: a /30 subnet has only four IP addresses, and three of the four would be taken up by the mount target. In a scenario where you have a mount target plus an NFS client instance in that subnet, and you add one more instance, you will run out of IP addresses because the subnet is too small. So don't use subnets that are /30 or smaller.
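The arithmetic behind this rule is simple enough to check:

```shell
# A /30 leaves only one address once the mount target takes its three IPs
# (and in practice OCI also reserves addresses in every subnet).
prefix=30
total=$(( 1 << (32 - prefix) ))   # 2^(32-30) = 4 addresses in a /30
mt_ips=3                          # a mount target consumes 3 private IPs
echo "addresses=$total left_for_clients=$(( total - mt_ips ))"
```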

You might ask: why do we require three private IP addresses? Two of the IP addresses are used during mount target creation, and the third IP is used for high availability. We'll talk about how your in-flight TCP connections survive reboots or a failure behind the mount target; the mount target is highly available.

So how do we ensure that the mount target stays highly available? The third IP is used for that. We'll talk about this in more detail when we get to the security list section.

One of the key differences between the previous slide and this one is that previously, the mount target had its own subnet and the client instance had its own subnet, both in AD1; here, I have the mount target and one of the clients in the same subnet. Now, there is no hard requirement that says you cannot do that.

However, placing an NFS client and a mount target in the same subnet can result in IP conflicts. Why? Because when you create the mount target, you are not sure which IP addresses it will use. As I said, you see 10.0.0.6, but two more IP addresses also get used, and we don't know which ones those are. If you don't know them, one of the clients could grab one of those IP addresses.

So it's not a requirement, but it's a good idea to place the File Storage Service mount target in its own subnet, where it can consume IPs as it needs, instead of having a single subnet containing the mount target as well as the instances. But again, keep in mind there is no hard rule that says you cannot do that. You absolutely can; it's just a good best practice to separate them.

So we talked about what a mount target is: a highly available NFS endpoint through which you access your file systems. Now, what is a file system? A file system is the primary resource for storing files in the File Storage Service.

To access your file system, you can use an existing mount target, as you see here, where this file system is exposed through a mount target that was already there. Or, if you don't have a mount target, you create a new one. We'll see in the demo how that works.

As we said at the beginning, you can create up to 100 file systems per mount target. A file system is AD-specific, which seems pretty reasonable. You can access the file system from any of your instances, whether virtual machines or bare metal, and you can also access it from on-premises through FastConnect or VPN.

So we looked at mount targets and file systems. But how do you make all this real? How is a file system made available?

A file system is made available through a mount target using a concept called the export path. The export path is a unique path specified when the file system is associated with a mount target during the creation process.

One thing to keep in mind is that no two file systems associated with the same mount target can have overlapping export paths. What do I mean by that? A path like /example and a path like /example/path are not allowed together, because the first part is the same in both export paths, and the system cannot tell that these are two separate file systems.
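The overlap rule amounts to a path-prefix check, which can be sketched as a small helper (`overlaps` is illustrative, not an OCI tool):

```shell
# Two export paths overlap when they are equal or one is a path-prefix
# of the other, mirroring the rule described above.
overlaps() {
  case "$2" in "$1"|"$1"/*) echo yes; return;; esac
  case "$1" in "$2"/*) echo yes; return;; esac
  echo no
}

overlaps /example  /example/path   # yes: not allowed on one mount target
overlaps /example1 /example2      # no: fine
```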

So let's use a graphic to make this simpler. You create a mount target, which, as we saw a couple of slides back, is nothing but a highly available NFS endpoint. You get a private IP like this, and there are two more IP addresses which are not shown. Export path one can be something like /example1/path1, and behind it is file system one. Export path two can be something like /example2/path2, and behind it is your second file system. Right now I'm showing two, but you can create up to 100 file systems per mount target.

Now, how do you use it? The export path, along with the mount target IP address, is used to mount the file system on an instance. What do I mean by that? You run a typical mount command like this: sudo mount, then your mount target IP and your export path, separated by a colon, and then the directory on the NFS client instance on which the external file system is mounted.

So in this example, I am mounting file system one to this mount point, and file system two to this other mount point. Once you create a mount target and a file system, the next step is to mount them on an instance running in OCI.

You launch an instance, install the NFS utilities if they're not there, and then mount the file system; the process is really straightforward. You just saw the mount command: you create an instance, install the NFS utilities, create a mount point, and then mount the file system, using the mount target IP and export path, onto the local directory on your instance. That's how simple it is to use.
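Putting the pieces together, the client-side steps look like this. The mount target IP 10.0.1.3, export path /fssdemo, and mount point are assumed demo values; the privileged commands are shown as comments, and the final mount command is only printed here.

```shell
MOUNT_TARGET_IP=10.0.1.3   # assumed: the mount target's private IP
EXPORT_PATH=/fssdemo       # assumed: the file system's export path
MOUNT_POINT=/mnt/fssdemo   # local directory on the NFS client instance

# sudo yum install -y nfs-utils    # install the NFS client tools
# sudo mkdir -p "$MOUNT_POINT"     # create the local mount point
echo "sudo mount ${MOUNT_TARGET_IP}:${EXPORT_PATH} ${MOUNT_POINT}"
```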
So with that, let me jump to the console and show you a quick demo of how FSS, the File Storage Service, works in action. Thank you for joining this module.

2. FILE STORAGE DEMO

We'll take a quick look at the File Storage Service and some of its workings in action. My name is Rohit Rahi, and I'm part of the Oracle Cloud Infrastructure team.

Let me first show you the setup for the demo. As you can see, we are going to use a VCN with this particular address space, running in US East, which is a multi-AD region, but I'm just going to use a single AD. I could have used multiple ADs, but since it's a demo, a single AD will do.

In this VCN, I'm going to create a public subnet and a private subnet; you can see their address spaces here. And as we discussed in the Virtual Cloud Network module, the private subnet will have its own route table and security list, and the public subnet will have its own route table and security list. That's a good best practice.

The public subnet, as you would have guessed by now, will have an internet gateway so we can access it via the internet. The reason is that I'm going to spin up a couple of instances and SSH into them.

Now, there is no requirement to do this in real-life situations. You could run all of this in a private subnet and still access those clients using a bastion host or similar. So you absolutely don't have to follow this layout in a real-life situation; it's a demo, and I'm just going to show you something quickly, so I'm using this setup.

I'm going to run my mount target and my file system here in the private subnet, so it's all secure and private. And in the public subnet, I'm going to run two clients.

The NFS clients will access this particular file system through the mount target. What I'm going to show you in the demo is that client one has read/write access, and, of course, client two also has read/write access. Both of them would take in--

[AUDIO OUT]

--a quick example of a shared file system, where data is managed as a file hierarchy and both clients can read from and write to the same file system, access it, and manage it.

So with that, let me quickly jump to the console and start the demo. Right now, I'm in the OCI Console, which we have been using throughout this lecture series. We have the burger menu here; if I click on that, I can see links for the various services, and right here is File Storage. So we're going to use that.

You can see I'm in US East; we could have used another region as well. If I click on File Storage, the first thing you see is that there is no file system here; I can click Create File System and make one. There are also mount targets, and again, there is no mount target here yet.

If I just click Create File System, it creates a file system, it creates a mount target, and so on. And it picks up a VCN here. Now, this is the VCN we were using in our VCN module and the compute module. I don't want to use the same VCN; like I showed you on the slide, I want a new VCN for this.

So let me just cancel this, and first go create a virtual cloud network really quickly. Because I'm doing a File Storage Service demo, I'll call it FSSVCN. I'm going to pick the address space I had in my slide, 10.0.0.0/16, and click Create VCN.

Now the VCN is created. There is no internet gateway, no subnet, et cetera. So the first thing I'm going to do is create a mount target subnet. Regional is fine; I could have made it AD-specific as well, it doesn't matter, because I'm just going to use one AD.

10.0.1.0/24 is the address space I had on the slide. And I'm going to make this a private subnet, because I don't want it exposed to the internet; I want to keep my mount target and my file system secure.

I'll choose the default route table and the default security list, and I'll change them subsequently, because we can edit them later. If we had new ones created, we could have used them here. Let me just make sure it's a private subnet, and create it.

Then I'm going to create another subnet for the clients. I'm calling it computesubnet, and this is where my compute instances will run. This is the address space I had on the slide, and I'm going to make it a public subnet.

Once again, I'm choosing the default route table and the default security list, but we'll change that later. And this is a public subnet. All right, got it.

So I create my computesubnet; now we have the computesubnet and the mounttarget subnet. First things first, let me go ahead and create a private route table, because I don't want my mount target and file system to be reachable through the default route table, which will have a route to the internet gateway. So I created a private route table, and let me also go ahead and create a private security list. Pretty straightforward.

Now, I'm also going to create an internet gateway and add a route for it in the route table. These are things you've seen in the Virtual Cloud Network module, so nothing complex here; these are the basic steps needed for me to SSH into an instance running in a public subnet.

So I do this here. Then, last thing, let me go into my VCN and change my mount target subnet to use the private security list and the private route table. So let me edit the security list here. The console keeps changing, but I think it's right here. If I click Edit, I can choose my private route table. Really straightforward. Now we have the mount target subnet, which has its own route table and security list, and the compute subnet, which has its own route table and security list.

So the basics are done now. Let me go to File Storage and create a file system. I can just come here and create one; pretty straightforward. But I want to walk through it the way we discussed in the previous module.

So first, let me create a mount target. Since we're doing a demonstration, I'll call it mounttargetdemo. Mount targets are AD-specific, so it's picking AD1. I could have chosen another AD; that's fine. My subnet is regional; I could have used an AD-specific one, but this is fine.

And right here, I'll pick the VCN we just created, FSSVCN. Now, for the subnet, I don't want the compute subnet; I want the mount target subnet, because it's private, and that's where I'm going to create my mount target.

There are some advanced options here: I could provide a private IP address and a hostname. I'm just going to leave them blank and have the system populate them automatically.

As I create the mount target, the first thing I want to see is that it gets a private IP address. Remember, the way we identify a mount target is by that IP address, and that IP address, along with the export path, is how we expose a file system to the clients.
So right now, you can see I got 10.0.1.3, which is in the address space 10.0.1.0/24. As you'll recall, the first IP, dot 0, is the network address and cannot be used; the first two IPs and the last IP in a subnet are reserved. So I cannot use those, but 10.0.1.3 is usable.

Right now, I can see my mount target is running, so I can create a file system. When I click on File Systems, it's a one-click thing: I can just click Create, and it will create my file system.

But I don't want to use the default names, so let me call this filesystemdemo. AD1 is fine. I could use Oracle-managed keys for server-side encryption, or I could bring in my own keys; I'm just going to use the Oracle-managed keys.

And right here is the export path. Remember, the export path is how a file system on a given mount target is exposed to the clients: through this path here.

I could choose another name, but never use just the root here, because if you use the root, you cannot have another export path on the same mount target; the export paths have to be mutually exclusive. Other than that, any path is fine. And right here, you can see it chose mounttargetdemo, the only mount target we have. If I wanted one more, I could just come here and create a new mount target. Because I already have this mount target running, I'm just going to use it.

With that, let me just click Create, and now my file system is created, and the export path is there. Now I can click on Mount Commands, and I get the commands to use on my instances in order to mount this file system on my clients, my compute instances.

And using those, I can access the file system. It's a rather straightforward process: we manage all the complexity behind the scenes, and you get a highly available file system service running in the cloud. Pretty amazing, really.

Now, a couple of things to keep in mind. Let me go back to my mount target. One thing I do want to talk about is, if you click here, you can see that before we can mount a file system, we must configure security list rules to allow traffic to the mount target's subnet. If we don't do that, the clients will not be able to reach the mount target and, hence, the file system.

Which ports do we need to open? For stateful ingress: TCP ports 111 and 2048 to 2050, and UDP ports 111 and 2048. And we also need to open certain ports for stateful egress.

Now, you might ask: why both ingress and egress? Didn't we say that security lists are stateful, so for a packet coming in, the response going out is automatically allowed? Yes, that's true. The reason we add egress rules is so that in-flight TCP connections can survive reboots.

Remember, we talked about the fact that the mount target is highly available. If I go back to my slide, you can see this mount target is highly available. Your clients connect to it, and of course the responses go back. But because it's highly available, sometimes your mount target has to be moved to another machine, in case the underlying server has a problem or reboots.

So in case this one moves here, your packets which are


coming from this-- suppose the packets were coming in
flight-- now the packets have to go from here. Now, the
packets have no way to figure out how to go out, because the
source, the place where they were running, is now different.
It's on a different server.

So that's why we say egress here, and we say source is the
port here, whatever the port was-- let's say 111. So that way,
we guarantee that for TCP connections in flight, they can
survive the reboot-- TCP connections coming from that LAN.
So that's why we have both ingress and egress.

So let's go ahead and quickly do this. So we go to the mount
target subnet. Mount target subnet has its own security list.
So let me just go to the private security list, and you can see
that right now, it has no ingress or egress, because we just
created this a while back.

So I could have done this right now. I could have picked the
CIDR for my public subnet here-- 10.0.2.0/24-- but I'm just
going to do it for the whole subnet, for the whole VCN. The
reason being, if there are other subnets, they could just use
the security list. Otherwise, I'll have to go out and open for
each individual subnet if I have more than one.

So again, just a demo. You could have-- in real life, you
would go and open the mount target subnet for specific
instances. And the subnet is where they're running, right?

So TCP source can be anything. Destination has to be 2048
to 2050. I'll add another rule. Same CIDR here. TCP has to be
111.

I'll open another one here. TCP, and this is UDP, and the port
has to be 111. And another one, and it's UDP again, and the
port has to be 2048. So let me make sure I have all the ports.
So IP is fine. The VCN, CIDR, TCP 2048 to 2050, TCP 111,
port is fine. Destination, UDP 111 and UDP 2048, right? So
this is my ingress routes.

Now, I also need to egress. Otherwise, what we just talked
about is not going to happen, right? So I TCP, and now, my
source port is the ports we talked about, right? So 2048 to
2050. We need to add another one. Source is-- TCP sources is
111, and the third one-- it's UDP, and the source is 111,
right?

All right, so egress rules-- TCP 2048-2050, TCP 11-- 111, and
then UDP 111, right? So we add these egress, right? So now
my rules are all down here.
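Pulled together, the rules configured here would look something like this in security list JSON form. This is a sketch based on the demo: 10.0.0.0/16 is assumed to be the VCN CIDR, so adjust it to your own network.

```json
{
  "ingress-security-rules": [
    {"protocol": "6",  "isStateless": false, "source": "10.0.0.0/16",
     "tcpOptions": {"destinationPortRange": {"min": 2048, "max": 2050}}},
    {"protocol": "6",  "isStateless": false, "source": "10.0.0.0/16",
     "tcpOptions": {"destinationPortRange": {"min": 111, "max": 111}}},
    {"protocol": "17", "isStateless": false, "source": "10.0.0.0/16",
     "udpOptions": {"destinationPortRange": {"min": 111, "max": 111}}},
    {"protocol": "17", "isStateless": false, "source": "10.0.0.0/16",
     "udpOptions": {"destinationPortRange": {"min": 2048, "max": 2048}}}
  ],
  "egress-security-rules": [
    {"protocol": "6",  "isStateless": false, "destination": "10.0.0.0/16",
     "tcpOptions": {"sourcePortRange": {"min": 2048, "max": 2050}}},
    {"protocol": "6",  "isStateless": false, "destination": "10.0.0.0/16",
     "tcpOptions": {"sourcePortRange": {"min": 111, "max": 111}}},
    {"protocol": "17", "isStateless": false, "destination": "10.0.0.0/16",
     "udpOptions": {"sourcePortRange": {"min": 111, "max": 111}}}
  ]
}
```

In these rules, protocol "6" is TCP and "17" is UDP, and isStateless set to false makes them stateful, matching the stateful ingress and egress discussed above.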

Now, if I go back to my security list, and if I talk about the
other security list I had-- the default security list-- that is still
allowing SSH. And that's port 22, so I can SSH into that, and
that's pretty much it, right? So that's fine because I'm using
that as part of my public subnet.
So now, last thing I need to do-- my mount target is done, is
created. My file system is created. Now I need to test it out,
right? So let me create a couple of instances really quickly so
I can test out my file system and mount target, right?

So I'll say FSS1. Oracle Linux is fine. The virtual machine
shape is fine. Right now, I see that FSSVCN, and our
compute subnet is chosen here. I can assign a public IP
because I want to use it. And I'm running Windows
Subsystem for Linux.

So let me just get my public key. What did I say?
Public/private key here. That's funny. Let me just copy the
public portion of the key, paste it here-- all right-- and then
click Create.

And then, let me just do that for one more instance. So these
become my two compute instances, which I'm going to use to
connect to my file system and mount target and run my
demo. So FSS2, virtual machine is fine, computer subnet is
fine, assign a public IP address, and right here, I can paste
my SSH keys. Do that, and click Create.

And you will see that these instances will be up and running
in a few seconds. Unfortunately, I missed supplying our
public IP here. We'll solve that. We'll go into the instance and we'll
assign a public IP there. It takes literally a few seconds to do
that.

So let's see if the instance is up and running. It's still coming
up. Let me go back to my file system, and let me make sure
that the ports are all correctly open. So if I click on Mount
Target, click here, click on this. So stateful ingress, TCP, yes,
we opened those-- UDP 111. Stateful egress-- now the source
port is 111, 2048, 2049, 2050, UDP source port 111.

All right, so the ports are all open. Let's see if one of the
compute instances is up and running. They're still getting
provisioned. All right, it looks like the compute instances are
running. So let me just copy the public IP address, clear the
screen. Now it's Oracle Linux, so the user name is "opc", and
let me SSH into the first instance.

All right, so right now, I'm in the instance, and I could go and
I could access the file system. And I'll do the same thing for
the other instance as well. So right here, if I click on File
Storage or File Systems and click on the Export Path--
remember export path is how you expose a file system to your
clients.

So right now, I need to run some NFS utility in Oracle Linux.
If you were using something else, like Ubuntu or Windows
Server, you would have a different set of commands. Just
follow those commands.
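For reference, on Oracle Linux the commands the console hands you follow this general shape. This is a sketch: the mount target IP 10.0.1.10 and export path /FSS100 below are placeholders, not the demo's actual values, so substitute whatever your console shows.

```shell
# Install the NFS client utilities (Oracle Linux / yum-based distros)
sudo yum install -y nfs-utils

# Create a local mount point
sudo mkdir -p /mnt/fss

# Mount the file system: <mount-target-private-ip>:<export-path> <mount-point>
sudo mount 10.0.1.10:/FSS100 /mnt/fss
```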

So it looks like it's done. Let me just clear the screen. Let me
just create a local mount point-- pretty straightforward. And
right here, I'm going to mount the file system. And bingo,
there, that's-- there we go, right?

So now, if I cd into my local mount point, right now, I can see
there is no file here, right? So I could go ahead and create a
file. File from FSS1. And I could save this file, right? It's as
simple as that. Now, I am actually writing this file in the local
file system.
The other instance we had, let's go ahead and do the same,
run the same instructions on the other instance. The issue is
the other instance, we forgot to assign a public IP address
here, right? You can see-- uh-oh. Actually, we have a public
IP, sorry. I thought for some reason we didn't have.

So it's actually, then, really straightforward. Let me go to the
other instance, just log into the other instance, SSH. And
right here, let me just go back to my file system and run
those commands again.

Create a local mount point, and then mount the file system to
that local mount point. OK, really straightforward. Now, if I
come and cd to that local mount point and run an ls
command, I can see this file is existing, right? And if I do a
cat, I can see that the file has this content, which we just
wrote earlier.

So if I want to open this file-- and let's say, now, I want to
change it-- [INAUDIBLE]. Let's save this file. And now, if I go
back to my first instance and do a cat, there we go. I can see
that the changes appear here, right?

So very quickly what this showed you are two instances
running in the same subnet, right? You could have chosen
different subnets. That's fine. But literally you have two
subnets running here-- two instances running here-- and
these instances are accessing this common file system and
they're reading and writing to the file system.

I can control the kind of access these have, and I could create
another file. I could create a second file system here. I could
create a third file system here, and so on, and so forth, right?
So hopefully, this gives you a quick overview of how the File
Storage System works. It's the highly available file system in
the cloud, with massive capacity, scalable, elastic,
and it's really, really simple to use. Thank you for watching
this demo. In the next module, we'll talk about FSS security.

[WHOOSH]

3. FILE STORAGE SECURITY


My name is Rohit Rahi, and I'm part of the Oracle Cloud
Infrastructure Team. In this particular module, we'll talk
about the various separate and distinct layers of security
available with File Storage Service. So as you can see on the
slide here, there are various distinct and separate layers of
security which you could leverage in order to secure your file
systems and mount targets, starting with IAM service.

And again, every service we have talked about until now and every
service in OCI, you could leverage the Identity and Access
Management Service to control actions like who can create
instances, client instances, who can create the FSS VCN, and
even who can create, list, and associate file systems and
mount targets.

So all those activities, the control plane activities, can be
controlled by your identity service, right? So you create your
users, you create-- add the users to the group, and write
policies. So you do authentication and authorization, and if
these users don't have the correct level of permissions, they
cannot create file systems, mount targets, compute instances,
and virtual cloud networks. So pretty straightforward. We
talked about that previously.
There is also a concept of security lists, which are associated
with your virtual cloud network, and network security groups,
which, I must add, are also available now.
So you could use security lists, as well as you could use
network security groups. We'll talk about those in the next
slide.

There is also something which is called Export Options,
which basically is applying control, access control, per file
system based on source IP CIDR blocks that bridges the
security list layer and NFS UNIX security layer, right? And
we'll talk more about what, exactly, we mean by that. It
adds-- it gives you that additional security layer which you
could leverage.

And then, finally, you could, of course, leverage the NFS UNIX
security. So when you mount your system, you read and
write the files, you could use different options. Againm, that
caveat goes here-- when mounting file systems, don't use
mount option such as nolock, resize, rsize, or wsize. These
options can cause issues with file locking and performance.
And again, if you go on documentation, you can read all
about these.

But there are four different distinct layers-- Identity and
Access Management, security list and network security
groups, export options, and NFS UNIX security. In this
particular module, we'll look into these two in greater detail.

So we looked into this earlier. In the previous demo, we had
another client running here in the same subnet. But what
you saw was that, for these clients to access the file system
and the mount target, you had to open certain ports, right?
So ingress, you had to open this port-- these, your TCP and
UDP ports. And for egress, you had to open these ports-- but
this becomes the source ports for egress, right?

And we talked about this briefly. We do this for the TCP
connections to survive the reboots, right? And we talked
about the fact that you're running mount target here. It's
highly available, right? So if, let's say, the server on which the
mount target is running has to fail over and you have a client
here which has a TCP connection in flight, now, the response,
if everything is fine, the response goes here, you really don't
need this piece here, right? It's not needed.

But suppose the response is coming from here, from here,
and then, suddenly, the failover happens because, end of the
day, it's highly available. Now the response has to go from
here, right? So that's why you have to open specific source
ports. Otherwise, this particular TCP connection cannot
survive the reboot. The packets will get lost if you don't write
these other routes. So that's the reason why we have those
rules available there, right?

And the way this works is-- let me see if I can discard those
comments. So the way this works is-- we saw this in the
previous demo. In this case, we have to open certain ports for
ingress, certain ports for egress, as we just talked about.

Ingress-- just right now, only this client is accessing the file
server. So this is the IP address here, right? And I have to
open TCP ports, the destination ports, these ports. And for egress,
I have to open these specific ports as source ports, right?
Exactly the scenario we just talked about.

If you want all subnets within a VCN to access the file
system, just change the CIDR for the VCN. Then, all the
subnets can access the mount target. And we did that,
precisely, in the previous demo.

So we looked into that. We actually did the demo. I talked
about the logic, why we do that. Now, let's look at export
options, what these are.

Security list is all-or-nothing approach, right? The client
either can or cannot access the mount target and, therefore,
all the file systems associated with it. Pretty straightforward.
You write those security lists. If you don't have them, your
clients cannot access the mount targets.

In a multi-tenant environment, using NFS export option, you
can limit clients' ability to connect to the file system and view
or write data. And we'll look into this into more detail as to
how this works.

When you create a file system and associated mount target,
the NFS export option gets created. So you don't have to
create this. So this is automatically created when you create
your mount target, you create your file system, and associate
that to the mount target.

But the default entry there is that file
systems are set to allow full access for all NFS clients. So it
has full access, and you really-- all the clients can just fully
access all the file systems, right? And basically, this is the
rule because the source is all IP addresses, require privileged
source port is false, read-write access is there, so you can
read and write, and identity squash is none, right?

So what does it look like? If we just quickly go to my console,
I can show you if I go to my console here, I have the File
Storage Service here, right? So this is the File Storage
Service, or File System we created. And right here is the
mount target. Right here is the mount target.

And if I click on my file system, I can see that this is the
export path which got created. If I look here, you can see that
the export options already got created, right? So it allows
access for every IP address. The access is read-write, and
other options are just set to no. The [INAUDIBLE] is open for
any port, right? So this is the default option you get when you
create a file system and an associated mount target, right? So
pretty straightforward.

Now, what does the export option really do? So now, let's look
at a scenario. The previous scenario, we had something like
this, and, of course, we were running both instances in the
same subnet in our demo. But right now, those instances are
running in different subnets.

Now, let's say we create a mount target, and we create a file
system A, and we create a file system B. Pretty
straightforward, because a single mount
target can support up to 100 file systems, right? So of course, when
you create a mount target, you're going to have multiple file
system running on it.

But Client X here has Read/Write access requirement to file
system A but no access to file system B. It shouldn't be able
to access file system B. On the other hand, Client Y has Read
access to file system B but no access to file system A, right?

So if this export option was not available, you could either
open this client access here, or could open this client access
here, right? And they could access every file system, right?
There is no granular set of access permissions you could
specify with your security list, right?

But now, in this case, because we have this export option
feature available, I could do something like this where I
specify that, for my Client X, I have the Read/Write access,
right? And to whom, I have the Read/Write access to my file
system A. And for Client Y, I have access to file system B, but
I only have Read-only access to file system B, right? And so I
am meeting this requirement and I'm meeting this
requirement using the capability around export option.
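In API/CLI terms, each file system's export carries a list of NFS export options along the lines of the sketch below. The field names follow the export options JSON; the client addresses are made-up examples, and any source that matches no entry gets no access at all.

```json
{
  "file-system-A-export": [
    {"source": "10.0.1.5/32", "access": "READ_WRITE",
     "requirePrivilegedSourcePort": false, "identitySquash": "NONE"}
  ],
  "file-system-B-export": [
    {"source": "10.0.2.5/32", "access": "READ_ONLY",
     "requirePrivilegedSourcePort": false, "identitySquash": "NONE"}
  ]
}
```

Because Client X's address appears only in file system A's export and Client Y's only in file system B's, each client sees exactly the access level the scenario calls for.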

So how does this really work in action? So let me go to my
console and show you this really quickly. So as you can recall
from the previous demo, we had two instances running here,
two clients-- FSS1 and FSS2. They are running-- both are
running in the same subnet, so that's not a problem. And
right now, again, both of them can read and write to this file,
right?

So we saw this earlier. I can come to this from FSS1 here
again, and it allows me access, right? So I can actually access
my file and I can write it, right? Pretty straightforward.
Nothing different than what we just did in the demo.

Now, if I come here, I could change this, edit this NFS
options. I'm able to do this because every client has full
access to all the file systems running here. So if I click on
Edit here, first thing I could do is I could give only read-only
access, right? So for all the sources. Or I could just do it for
the specific CIDR which I just had. So I just updated it here,
right?

And go back to my file system one, my client one, and try to
write something here. You would see that I don't have write
permissions anymore, right? It's saying, cannot write to this
file, because I just changed my access level here, right? So I
just said, read-only, and so I cannot go ahead and access the
file system in a Read/Write mode anymore.

Now, we could do some more things here, right? So for that
particular instance, the IP address is 10.0.2.2, and I could
say Read-only access to that particular instance. And the
other instance which we are running, the second client
instance, the private IP is 10.0.2.3. And right here, I could
say, give it Read/Write access, right? And I can update my
permissions like this.

So now, if I go back to my first client, FSS1, which is
10.0.2.2, I have only Read-only access, right? So if I bring
this file again and I want to change things here-- change and
save the file-- I cannot, right? Because it gives me-- it's giving
me, I have Read-only access, so I cannot change the file. But
if I go to my second instance, which has Read/Write access--
which has both Read and Write access-- bingo-- I can go
ahead and change it, right?

So what we just did is, for the first client, we changed our
permission to Read-only, and for the second client, we
changed our permission to Read and Write. And so we could
access it even though they belong to the same subnet. Right
now, I have only one file system running, but if I had many,
many file systems running, I could control granular access
using a capability like this.

And if you come to this on our documentation page, you can
read a lot more about export options. This is a more complex
topic, so we cover this in greater detail in our level 200
modules, 200 videos. But you can see different scenarios
here, right?
The first scenario, if I scroll down, is the control host-based
access. And this is exactly what we just showed in the slide
here, right? So it talks about Client A, Client B, how you can
control the access, and it gives you some [INAUDIBLE]
examples, [INAUDIBLE] example, et cetera.

The second, the one we just did, is limiting the ability to write
data for specific IP addresses. So if my client is running this
IP, private IP which we just did, not Read-only, we did that,
Read-only for one client. And for the other client, we did Read
and Write, Read and Write both. Both were running in the
same subnet.

And the third one is, we can have more secure access to limit
the root's user privilege and things like that, right? So in our
200 module, we talk more about what a privilege code is and
what identity squash are, et cetera, right? But this one, we
can skip all the details. But this is the page where you can
find all the information.

All right, so with this, this module, again, we talked a little bit
about security lists. And we didn't really cover network
security groups, but their behavior is very similar to how we
did open certain ports for TCP and UDP, both ingress and
egress.

And then, we saw that, look, that is more high-level. Either
you open access, or you close access for instances, and then
every instance has access to every file system. So if you want
to go a little bit more granular, you'll use this option-- this
capability called "export option." It gets automatically created
when you create a file system and associate it with a mount
target, but you can go granular. And you can have things
like, one of your clients can have Read-only access, the other
client can have Read/Write access, and so on, and so forth.
And you can make it, really, more detailed and have more
granular security controls.

I hope that was useful. Thank you for joining this module. In
the next module, we'll talk about snapshots. Thank you.

[WHOOSH]

4. FILE STORAGE SNAPSHOTS

Hi, everyone. Welcome to this module on file storage service
snapshots. This is a really quick module where we'll talk
about snapshots. Snapshots provide a read-only, space
efficient point-in-time backup of a file system. Snapshots are
created under the root folder of file system in a hidden
directory named dot snapshot, and we'll actually see this in
our demo.

So how do you create a snapshot using the console? It's really
simple. You can come to your file system and just hit Create
Snapshot, and you would be able to create one. And now,
when you go to your-- cd to this .snapshot hidden directory
in your file system, you could access and see all the
snapshots there.

Now, you can take up to 10,000 snapshots per file system,
and I believe this is a soft limit. You can even increase it. You
can restore a file within the snapshot or an entire directory
using the cp or rsync command. So as it's shown here, you
could run a command like this, copy snapshot.
These snapshots are stored in this snapshot directory, as we
said, right? And the names are something like
snapshot_name, and then you will have a date or something,
unique identifier. Or you could change to add your own
custom value there. And then, you could say the destination
directory name.
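The restore is just an ordinary copy out of the hidden .snapshot directory. Here is a small self-contained simulation of that pattern; the directory layout imitates an FSS mount, and the paths and names are made up for illustration.

```shell
# Simulate an FSS mount containing a point-in-time snapshot
workdir=$(mktemp -d)
mkdir -p "$workdir/mnt/.snapshot/snapshot1" "$workdir/restore"
echo "hello from snapshot" > "$workdir/mnt/.snapshot/snapshot1/file1"

# Restore everything in the snapshot into a destination directory.
# On a real mount this would be:  cp -r /mnt/fss/.snapshot/<name>/. <dest>/
cp -r "$workdir/mnt/.snapshot/snapshot1/." "$workdir/restore/"
ls "$workdir/restore"
```

The same cp -r (or rsync) pattern works whether you restore a single file or a whole directory tree out of the snapshot.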

So if you have a lot of files, and people are writing, and
reading, and changing stuff, and you will want to just take a
snapshot, many files, you could just do the restoration using
a command like this here. If nothing has changed within the
target file system and you take a snapshot, it does not
consume any additional storage, right?

So it has a pointer, and it keeps track of what has changed. If
you regularly keep taking snapshots and there is no change,
it does not consume additional storage for you, because you
pay for that storage. So it's not like you're constantly paying
for additional storage.

So let me go here and show you how the snapshot
functionality works. So if I go to my file system, right here, we
looked into Export Paths, and Export Options, and all that. If I
click on Snapshot, right, I can see there is no snapshot here.
So I can create a snapshot.

And right here, it's giving me-- you know, it's giving me a
default name. But I can change that, right? So I can call this
Snapshot One and then hit Create here, right? Then my
snapshot would be created. I just have one file in my file
system, so it would create a snapshot of that file. If I had a
directory with multiple files, it would actually create a
snapshot for all of them, right? And
you can see, it's active.
So, if I go to my instance clients, I have two running here from
the previous demos-- FSS1 and FSS2. And if I cd to the
snapshot directory, I can see my snapshot1 here, right? And
if I-- if I cd to the snapshot1 directory, I can see my file is
available here, right?

And if I go to client two, let's go ahead and create
another file here. This is my second client. And there you go,
right? We created another file. If I go to my snapshot1 now,
you will see that it's still one file, right? Because I took a
snapshot. Snapshot is nothing but point-in-time backup,
right?

Now I added another file after that. So of course, I'll have to
create another snapshot if I want to have that file, which we
just created, be available here, right? So I create my
snapshot2, and if I go from my second client, if I access the
snapshot directory now, I can see that I have the snapshot2
directory created, right? And if I click the snapshot2
directory, I can see that I have my file2 created, available
here, right?

So really straightforward. Nothing complex. You can do
10,000 snapshots per file system. And this is how you create
snapshots. I'm using console. Otherwise, you can use it
through CLI or whatnot, right? And if you have to restore
these files, you can run a command like copy, and you could
copy everything in that snapshot directory with different
snapshots, and you could give it a destination
directory.

So thank you for watching this module. I hope you found it
useful. And thank you for watching this lecture series on file
storage service.
[WHOOSH]

OBJECT STORAGE

1. OBJECT STORAGE INTRO


Hello, everyone. Welcome to this module and lecture series on
Object Storage. In the first module, we will introduce OCI
Object Storage and look at some of its capabilities. My name
is Rohit Rahi. And I'm part of the Oracle Cloud Infrastructure
team.

So as we have been looking into this slide earlier, we have
looked at the local storage. We have looked at the block
storage. And in the next lecture series, we'll look at the file
storage.

Object Storage is the kind of storage architecture where you
manage data as objects. This is in contrast to other storage
architectures, like file storage, where you manage data as a
file hierarchy, and block storage, where you manage data as
blocks within sectors and tracks on a disk. So that's the main
difference between Object Storage and other storage
architectures.

And you have different classes within Object Storage. And
one of the classes is Archive Storage. And we'll look into more
details on the Archive Storage, as well.

So what is Object Storage? Again, as we just saw in the
previous slide, let me recap some of the key characteristics.
Its internet-scale, high performance storage platform where
you manage data as objects. This is ideal for storing
unlimited amounts of unstructured data. And there's a huge
explosion of unstructured data nowadays, whether it's
images, media files, logs, backups, et cetera.

Like I said, data is managed as objects. And you use APIs
built on standard HTTP verbs, like get object, when you want
to read an object from a bucket, put object, when you want to
write an object to a bucket, et cetera. This is very different
than using NFS protocol, which you would use with file
system, where you are managing data in a file hierarchy, or
iSCSI, which you would use in a block storage to access data
as fixed-size blocks on the disk.

So that's the main difference, data being managed as objects
using standard HTTP verbs. Object Storage is a regional
service. And again, unlike the file storage and the block
storage, you are not really tying this to compute instances.
It's not like you mount this disk and you use it, or you use it
as a disk to store your data and applications for your
compute instances.

There are two distinct storage classes. They
address the need for performant, frequently accessed "hot"
storage. And there is also less frequently accessed "cold"
storage, which is also called archive storage. So we'll talk
about those.

And you can have private access from Oracle Cloud
Infrastructure resources in a VCN, using a concept called
Service Gateway, which we looked into when we were
discussing the VCN module. OCI Object Storage supports
advanced features, such as cross-region copy, pre-
authenticated requests, lifecycle rules, and multipart upload.
And we'll look into each of these in greater detail
subsequently.
So what are some of the scenarios for Object Storage?
Content repository is a big one, where you want to store a lot
of unstructured data, whether it's images, logs, videos, et
cetera. We looked into archive/backup. Object storage seldom
is used as a backup location. We saw this in the block
volume module, where for block volumes, if you want to do
backup, the backup actually is stored in OCI Object Storage.

Object Storage can also be used for long term archival, to
reduce the cost. It's a good place to store your log data or
large data sets, whether you are running any kind of DNA
genome data, or Internet of Things (IoT) data sets. Where
you have lots of data, you could store that in Object Storage.
And again, you can read through a bunch of these big data
scenarios. And we have connectors, and so on, and so forth,
where Object Storage is that is a good candidate for those use
cases.

Now, what are some of the key features of the OCI Object
Storage Service? The first one is this concept of strong
consistency. Strong consistency means that Object Storage
Service always serves the most recent copy of the data when
retrieved. So what happens is if you write a data, and then
you update that data, sometimes, if your service-- there's a
concept called eventual consistency. If your service is based
on that, and you try to write a data. And then you update it
subsequently. And you try to retrieve it. Sometimes, it will
return the stale data. It will return the old copy, not the
updated copy.

Strong consistency means that it will not return your data
unless it has committed it everywhere. And as you can
imagine, these are distributed systems. So the data is
actually written across multiple ADs if it's a multi-AD region;
in a single-AD region, it's still written to multiple storage servers.
Strong consistency means you are always guaranteed the
most recent copy of the data, unlike eventual consistency,
where eventually your data might be consistent with the
latest, but you might still get stale data in the time between.

As far as durability is concerned, like I said, data is stored
redundantly across multiple storage servers, across multiple
ADs. If it's not multiple ADs-- a single AD-- data is stored
redundantly across fault domains. Earlier, we only talked
about fault domains for compute and databases. But some of
the storage services also leverage them internally.

So the data gets stored across multiple fault domains, so
that even if one of the fault domains goes down, the other
fault domains are still up and running. Data integrity is
actively monitored, and corrupt data detected and auto
repaired. So service takes care of that. So it's a highly durable
service.

So strong consistency, high durability, and also high
performance-- Compute and Object Storage Services are co-
located on the same fast network. So if your Compute
instances are reaching out to Object Storage, we guarantee a
big, fat pipe between them, so
that they get very fast performance.

And you have several features, like you can define your own
metadata. There is server side encryption. And we also allow
you to bring your own keys, if you want to encrypt data using
your own keys.

So let's look at some of the things in a little bit more detail.

First, we said data is managed as objects. So whether it's
logs, videos, whatever, regardless of the content type, you
manage all the data as an object. Now each object is
composed of the object itself and the object metadata, which
describes what the object is. And it has some more details,
things like an identifier, et cetera.

Bucket is a logical container for storing objects. So each
object is stored in a bucket. Namespace is a logical entity
that serves as a top-level container for all buckets and
objects. So objects go in buckets. And then buckets are
placed in namespaces.

Now each tenancy, when you create a tenancy, your account,
is provided one unique namespace. That is global, spanning
all compartments and regions. So it means you have one
namespace, which is global. But you can have bucket names,
which can be repeated across tenancies.

So bucket names must be unique within your own tenancy,
but can be repeated across tenancies. Because the real
unique identifier here is the namespace. Now, this is different
than, let's say, Amazon S3, where your bucket names have to
be globally unique. In the case of OCI, bucket names have to
be unique within the tenancy. Because that unique identifier
here is the namespace, which is tied to your tenancy.

Within a namespace, buckets and objects exist in a flat
hierarchy. But you can simulate a directory structure using
prefixes and hierarchies. And we'll look into this in the next
slide.

So how do you name the objects? Well, the service prepends
the Object Storage namespace string and the bucket name to
the object name. So if you see right here, this is my namespace,
which comes from my tenancy. This is my bucket name. And
this is my object name.
So this is how the service creates the object naming. So let's
say you have an object, databases.dbf. It looks like a
database backup file. If you upload it to Object Storage using
the put object API, this is the URL you would get. And this is
how the object would be named.

So you have the namespace here. You have the bucket here.
And then you have the object here. This is the fully qualified
domain name, or the fully qualified string, if you will, which
you'll need.
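As a rough sketch, the naming scheme described above can be expressed as a small helper. The region, namespace, bucket, and object names below are the illustrative values from the lecture, not real resources:

```python
# Sketch of the native Object Storage URL scheme described above:
# the regional endpoint, then /n/<namespace>/b/<bucket>/o/<object>.

def object_url(region: str, namespace: str, bucket: str, obj: str) -> str:
    """Build the fully qualified Object Storage URL for an object."""
    return (f"https://objectstorage.{region}.oraclecloud.com"
            f"/n/{namespace}/b/{bucket}/o/{obj}")

url = object_url("us-ashburn-1", "intoraclerohit", "test-bucket", "databases.dbf")
print(url)
```

The same helper works for any object: swap in your own tenancy's namespace and the bucket you created.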

So we've talked about how objects are stored in a flat
hierarchy. Now we are used to directory structures, where we
store data in nested directories. So how does that happen in
Object Storage, given that it's a flat hierarchy to begin with?

For a large number of similar objects, you can use
prefixes and hierarchies. So what do we mean by that? If you
look at this example here, there is a prefix, marathon. And
there's another prefix, which is marathon/participants. So
you have two different prefixes here.

Now you can use CLI to perform bulk downloads and bulk
delete of all objects at a specified level of the hierarchy,
without affecting objects in levels above or below. So what do
I mean by that? So look into this example here. You can
download or delete all objects at this level, at the Marathon
level, without downloading or deleting objects at the
Marathon Participants level.
So even though it looks like Marathon Participants is a
child directory of Marathon, if you create prefixes like these,
you can operate on them independently. And you can have
other objects here, like Start Line and Finish Line, et cetera.

And if you want to operate on all those objects as one, you
could use the marathon prefix. If you want just the
participants-- say you have 100 participants and a bunch of
those objects-- you could operate on them using the
marathon/participants prefix. So that's one of the ways
you can operate on a large number of objects, particularly
because Object Storage itself doesn't have any kind of
hierarchy. It's a flat structure.
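Since the bucket namespace is flat, prefix-scoped operations amount to string matching on object names. A minimal sketch, with object names made up to mirror the marathon example on the slide:

```python
# The bucket holds a flat list of object names; "directories" are only prefixes.
objects = [
    "marathon/startline.jpg",
    "marathon/finishline.jpg",
    "marathon/participants/p1.jpg",
    "marathon/participants/p2.jpg",
]

def with_prefix(names, prefix):
    """All objects a bulk operation scoped to `prefix` would touch."""
    return [n for n in names if n.startswith(prefix)]

def at_level(names, prefix):
    """Only the objects directly at `prefix`, excluding deeper levels."""
    return [n for n in names
            if n.startswith(prefix) and "/" not in n[len(prefix):]]

print(at_level(objects, "marathon/"))     # top-level marathon objects only
print(with_prefix(objects, "marathon/"))  # everything under marathon/
```

This mirrors what the lecture describes: acting at the Marathon level without touching Marathon Participants, or using the broader prefix to cover both.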

So with that, let's just complete one more slide. And we'll
quickly jump into our demo. So we talked about the Object
Storage tiers. So what are the tiers which OCI Object Storage
supports today?

So the first tier is the standard storage tier. It's also sometimes
referred to as the hot tier. This is where you store data that
needs fast, immediate, and frequent access. You can retrieve
your data instantaneously.

It always serves the most recent copy of data when retrieved.
Why? Because we support strong consistency. That is the
defining characteristic of strong consistency. As we said,
data retrieval is instantaneous. So you upload it, download it--
it's really instantaneous. Standard buckets cannot be
changed. So once you create a bucket as standard, you have
to keep it at the standard tier.
So there is another tier, which is called archive storage.
People also refer to it as cold storage. This is for use cases
where you seldom or rarely access data, but you have to
retain and preserve it for long periods of time.

What are the use cases? This might be compliance. This
might be audit logs. This might just be long-term backup
and retention. You have lots of data you just want to retain
for a specific period of time.

There is a minimum retention requirement of 90 days. If you
restore your data before that, there is a cost which you have
to incur. And you can look at the pricing and see how that
works.

One restriction here is you cannot instantaneously retrieve
data. Objects need to be restored before you can download
or retrieve them. The time to first byte after an archive
storage restore request is made is four hours.

So you upload the data. Let's say 90 days have passed. You
want to get it back. You make a request. It takes at least four
hours before you can download your data.

And like we saw with the standard tier, once you designate a
bucket as an archive bucket, you cannot upgrade it to the
standard storage tier, and vice versa. Standard cannot be
downgraded to archive. Archive cannot be upgraded to
standard. And right here, you can see when you create a
bucket, you get a choice of either the standard tier or the
archive tier.

So thank you for watching this lecture on a quick
introduction to the OCI Object Storage Service. In the next
module, we will do a quick demo of the service and see some
of its key characteristics in action. Thank you.

2. OBJECT STORAGE DEMO



Hi, everyone. Welcome to this module on a quick demo
of the OCI Object Storage service. In the previous
module, we looked into the service and some of its key
characteristics, like strong consistency, high
performance, high durability, et cetera.

In this one, let's quickly do a demo of the service. We
have been using the OCI Console in our last few
lectures. Right now, I'm logged into the OCI Console. You
can see I'm in the Ashburn, US East region. And if I
click on this hamburger menu here, I can see links to
different services. So there's compute, block
storage, networking, file storage, et cetera.

Object Storage is where you would find the Object
Storage service. So if you click on Object Storage here,
the first thing you'll see is an option to create a bucket.
Now before we do that, there is a compartment here
which you have to choose.

Right now, we have been using the training compartment for
all our demos, so we'll just use that. But just keep in
mind that the buckets you are creating are also in your
compartments, and compartments are the logical
isolation. We talked about that in the IAM module.

So let me just go ahead here and create a bucket. I'll
say this is my test-bucket. And right here, I get a choice
of whether it's a standard tier or an archive tier.
I'll choose the standard tier, and we'll go create an
archive tier bucket also and take a look. And down here,
you can see that I have server-side encryption, with an
option to let Oracle do server-side encryption using
Oracle-managed keys, or I could bring my own keys--
customer-managed keys. Right now, I'm just going to let
Oracle do the server-side encryption using Oracle-managed
keys.

And I can, of course, do tagging. So I create a bucket
here. And you can see the bucket is created, and there
is nothing in the bucket right now. We'll discuss this in
the next modules, but a few things you see here-- first,
the visibility is private, meaning this bucket is not open
to the world, and that's the default behavior.

But I can come here and I can edit the visibility. I can
make it public. Now there is a checkbox here which
allows users to list objects from this bucket, and I'm OK
with that. I'll say it's a public bucket. And it gives me a
warning that enabling public visibility will let
anonymous and unauthenticated users access data
stored in the bucket.

So you should only do this if you have a need like this,
where you are sharing something like a webpage and you
really don't care if it's open to everyone. If not, you
should always keep a bucket as a private bucket, not
make it public.

So let's go ahead and upload a couple of objects here.
I have been recording a bunch of these videos-- those are
pretty big, so I'm probably not going to upload them. I
have this picture of Mt. Rainier-- let me just upload it.
It's a pretty small file, 100 KB or so. So I see the file is
uploaded here.
Now again, keep in mind what we said in the first
module-- you're managing data as objects. So whether
it's a JPG, it's a video, it's a log, Object Storage doesn't
really care-- regardless of the content type, it manages
them as objects. And you can see the object here.

So if I click on View Object Details, I can see some of the
details here. The first thing I see is the URL path. So if
I have to access this object, I can click on it and I can
access it.

You can see here, this is my service URL,
objectstorage.us-ashburn-1.oraclecloud.com, because
Object Storage is a regional service, so it's tied to that
region. Then there is a namespace. In my case, my
namespace is intoraclerohit, and that's the same as my
tenancy name.

So remember, every tenancy gets a unique namespace,
and you can create buckets within that namespace. And
the same bucket name, test-bucket, can exist in another
tenancy, because the unique identifier is not the bucket
name but the namespace. And then the object is rainier,
and you can see these delimiters here.

Slash n for namespace, slash b for bucket, and slash o
for object. I can see some other values here. I can see
the size of the file. I can see it's in the standard tier. I
can see things like the ETag-- the Entity Tag.

If you're doing a multipart upload, you can match your
ETags-- ETags are essentially an MD5 hash. Sometimes
the values will be similar, sometimes they will be
different. Again, we're not getting into a lot of those
details, but you can see some of these characteristics
here.
So if I click on this link here, I can see my object. This is
the object which I have-- it's available here. So the
service is really straightforward. I can click Download
and I can download this object, and I can get that.

Now a couple of things I want to show here. If I change
my visibility, make it private, and save changes, and
then I come here and try to refresh this page, you will
see that the page gives me an error message, saying that
either the bucket does not exist or you're not
authorized to access it. It definitely exists, because
we created it. So the second statement has to be true--
we are not authorized to access it.

So this is how your default behavior should be for
objects you don't want to release to the world. You
should keep them in a private bucket so that only people
who have the requisite access permissions can access
them. Let me go ahead here and create another bucket,
and this time call it archivebucket.

And the behavior is very similar. I can upload an object
here like we did earlier. But remember, it's an archive
bucket, so the data has to be restored before we can
download it. So the file is available here, and you can
see the Download button is grayed out, because I cannot
download it right away. But the Restore button here is
available.

So if I click on Restore, it asks for the time available for
download in hours-- it's an optional value. I'll just go
with the default. And now I'm in the process of restoring
the data. Now if I do this before 90 days, there is a cost
which is incurred.

But like I said, time to first byte is typically four hours.
And in four hours' time, I would be able to get this data.
Because the whole use case for archive storage is long-
term retention and backup. So if you want to access
your data in an instantaneous fashion, you should go
with a standard bucket.

Hopefully that gave you a good, quick overview of how
the OCI Object Storage service works. In the next
module, we'll look into some of the more advanced
details, like cross-region copy, pre-authenticated
requests, et cetera. Thank you for joining this demo. I
hope to see you in the next module. Thank you.


3. OBJECT STORAGE CAPABILITIES



Hi, everyone. Welcome to this module on Object Storage
capabilities. In this module, we will look into various
advanced features, such as pre-authenticated requests,
cross-region copy, multipart upload, et cetera.

So let's start with managing access and authentication. In the
previous demo, we uploaded an object to a standard bucket,
and we were able to retrieve the object by changing the access
level from private to public. So let's look at some of the other
things we can do with objects in the OCI Object Storage
service. A pre-authenticated request is a way to let users
access a bucket or an object without having their own
credentials.

So as you can see here, creating a pre-authenticated request
is pretty straightforward. You can create a pre-authenticated
request either on the bucket or on the object. And you have a
variety of options: whether you just want reads on the object,
writes on the object, or both read and write. And we'll
actually go and show this in a quick demo.

Once you create a pre-authenticated request, users can
access the object-- let's say you're creating this for an
object-- using a URL like the one shown here. So you can
see here, this portion gets appended to the URL, and that
shows that it's a pre-authenticated request.

And you can see the prefix here, slash p. If you remember
from the previous module, slash n is the namespace, slash b
is the bucket, slash o is the object, and slash p here shows
that this object is being accessed using a pre-authenticated
request. You can revoke the links at any time. So suppose
you give users access to a bucket or an object without
having their own credentials, and their job is done-- you can
always revoke the links, and they will lose access to the
object or the bucket going forward.
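The two URL shapes, native and pre-authenticated, can be told apart mechanically from their path markers. A small illustrative parser follows; the PAR token shown is fabricated, and the marker-value layout is a simplification of the real URL grammar:

```python
def parse_object_path(path: str) -> dict:
    """Split an Object Storage URL path into its /p, /n, /b, /o components."""
    parts = path.strip("/").split("/")
    fields = {}
    i = 0
    while i + 1 < len(parts):
        if parts[i] in ("p", "n", "b"):
            # Markers alternate with values: p <token> n <ns> b <bucket> ...
            fields[parts[i]] = parts[i + 1]
            i += 2
        elif parts[i] == "o":
            fields["o"] = "/".join(parts[i + 1:])  # object names may contain '/'
            break
        else:
            i += 1
    return fields

# A PAR URL carries an extra /p/<token>/ segment before /n/, /b/, /o/.
fields = parse_object_path("/p/TOKEN123/n/intoraclerohit/b/test-bucket/o/rainier.jpg")
print(fields)
```

The presence of the `p` key is what distinguishes a pre-authenticated link from the native object path.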

Now we looked into this a little bit in the previous demo as
well-- the concept of public buckets. When you create a
bucket, a bucket at the time of creation is considered private,
and access to a bucket requires authentication and
authorization.

We created a bucket, we uploaded an object, and then we
could not access the object, because it said the bucket
doesn't exist or you're not authorized to access it. So that's
the behavior when you create a bucket. But you have an
option to change that and have your users access the
bucket and objects using anonymous, unauthenticated
access. So they don't have to be authenticated, and they
don't have to have any kind of authorization policies
implemented.
They can just go, and in an anonymous fashion, they can
read the contents of that bucket. When you do that, again,
the thing to keep in mind is it can be a security nightmare--
you have been seeing some of these reports in the press. So
unless you really have a need, you should not change the
visibility of a bucket from private to public.

And another thing to keep in mind is that changing the
type of access doesn't affect existing pre-authenticated
requests. So if you have an existing pre-authenticated
request, it will still work. If you go from a private bucket to
a public bucket, public will give unauthenticated,
anonymous access.

But if you have given pre-authenticated requests to your
users, they can still use them, and they will still work fine. So
let me quickly jump to the Console and, before I talk about
the next feature, show a pre-authenticated request in action.
We were in the Object Storage part of the OCI Console, and
we had created two buckets-- archivebucket, which is for
archival, long-term retention and backup, and test-bucket,
which is a standard tier bucket.

Right now you can see it's private. So if I want to access this
particular object which we uploaded in the previous demo,
you can see that it gives me an error, saying the bucket
doesn't exist-- we know the bucket exists-- or you're not
authorized to access it. So we don't have authorization--
that's the reason we are not able to access it, because it's in a
private bucket, and it doesn't allow anonymous,
unauthenticated access.

So what we could do here is create a pre-authenticated
request. So if I come here, I create the request-- the default
name it picked is fine. I could create it at the bucket level or
at the object level. So if I want to go more granular, I do it
at the object level.

And now I can also say what kind of access I want, whether
it's read, write, or both read and write. Read is fine. And then
I can also choose the time for which this link will be valid.
You have to choose this time-- you cannot create a pre-
authenticated request for an infinite amount of time. You
have to have time-bound access.

So by default, it picks a week's worth of time, but you could go
even longer. A week is fine, and I create this pre-
authenticated request. And now I need to copy this link,
because it goes away after that, and it's not shown again for
security reasons.

So I copy this. And if I go back to the link I had earlier and
paste this new link now-- you can see the part where the
pre-authenticated request comes in, with this slash p and
everything which follows after it. That shows that this
object is being accessed using a pre-authenticated request.
So there you go-- I can see my object here, the Mt. Rainier
picture I had uploaded earlier.

The pre-authenticated requests you have created are
available right here. I can come here and delete one if I
don't want it anymore. And if I delete this one and go back
to the earlier link, if my users have it and they try to bring
it up, you can see that it will not work, because it's gone.

So it's as simple as that. It's also pretty straightforward to
create a pre-authenticated request on the bucket. The way
you would do that is to click on the bucket here.

You have an option to create a pre-authenticated request on
the bucket itself. And similar to the object, if you do this,
you get a URL, and you can list objects in your bucket if
you use this URL, and you can do certain operations using
it. So hopefully, that gives you a quick overview of how pre-
authenticated requests work.

Let's talk about the second feature, cross-region copy. One of
the key requirements for object storage is to copy objects to
other buckets in the same region, and to buckets in other
regions. The use cases can be: you're taking a backup, and
you have a DR situation where you want to create your
database from that backup in another region, or you want to
create a compute instance from a custom image you have in
another region.

For those kinds of scenarios, you'll need to copy your
objects from one region to another region. Because Object
Storage is a regional service, you have access within the
region, but not outside the region. So this feature lets you do
that.

So creating this is really straightforward. Of course, the
namespace is the unique identifier, so you have to provide
the namespace. You provide the destination region where you
want to copy your objects. And you have to provide a
destination bucket.

And then, very importantly, one thing you have to keep in
mind before the copy happens: you must authorize the
service to manage objects on your behalf. So you have to
write a policy for every region you are copying from.
Otherwise, the cross-region copy doesn't work.

So you can see here, there is a policy-- if you don't write this
policy, cross-region copy isn't going to work. And you also
need to specify an existing target bucket. If you don't do that,
it will not let you do the copy.
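The authorization mentioned here is an IAM policy letting the Object Storage service in the source region manage objects for you. A sketch of what such a statement looks like, with the region identifier and compartment name as placeholders (check the current OCI documentation for the exact required form):

```
Allow service objectstorage-us-ashburn-1 to manage object-family in compartment training
```

You would write one such statement per source region you copy from, substituting that region's identifier.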

And today, there is a restriction: bulk copy is not supported.
So it's a little bit tedious, because you have to copy objects
one by one. It's on the roadmap, where you would be able to
take lots of objects and copy them within the same region or
to another region. That feature is coming in the next few
months.

And the last thing here: objects cannot be copied from
archive storage. Because, as we looked into in the previous
module, you cannot change the tier from standard to archive
and vice versa, this feature is only applicable to the standard
tier. You cannot use it for archive. There is also the four-hour
minimum restore period there, so you cannot do cross-region
copy using archive storage.

So first things first-- I'm in the Ashburn region. Let me go to
Phoenix and create a bucket there, and we will copy objects
over to the Phoenix region. I'm in the same compartment.
For the name-- let me just call it test.

And it's a standard tier-- if I chose archive, copy would not
be supported. And I just create a bucket here. Now this
bucket is empty right now-- there is nothing inside. So let me
switch back to Ashburn. I have my test-bucket here, and I
can actually copy this object.

Copying is really straightforward. I click Copy and pick my
destination region. It can be the same region or another
region. And the bucket I just created is called test, so I'll
just pick that.

And then there are various options here. I could choose to
overwrite the destination object if the destination object
exists. I could choose not to overwrite. I could choose to
overwrite only if it matches the specified Entity Tag-- the
ETag.

ETag matching rules allow you to control the copying or
overwriting based on ETag values, so I could do that, among
other options. So let me just copy this object right now. And
you can see this kicks off an asynchronous process in the
background.

And if I look at my work request here, you can see that my
object has been copied. So if I go back to the Phoenix region
and click on the test bucket, you can see that I have an
object there. So let's switch back to the slides and talk about
another key capability, object lifecycle management.

With object lifecycle management, you can define lifecycle
rules to automatically archive or delete objects after a
specified number of days. Now, as we saw with cross-region
copy, you have to authorize the service to manage objects on
your behalf. So you have to write a policy; otherwise this
doesn't work.
And it's pretty straightforward to create a lifecycle rule. You
can create a rule like this, and then you can apply the rule at
the bucket level or at the object name prefix level. If no prefix
is specified, the rule applies to all the objects in the bucket.
So what do we mean by that?

If you see here, we have a couple of objects, and they all
share a prefix. If you don't want this lifecycle management
rule to apply at the bucket level, you can apply it at the
prefix level. And in this case, the prefix is gloves_27.
So you could use this kind of prefix to apply a rule.
Otherwise, you can apply the rule to all the objects in the
bucket. A rule that deletes an object always takes priority
over a rule that would archive the same object.
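The precedence behavior described here (delete beats archive for the same object) can be sketched as a tiny evaluator. The rule structure below is invented for illustration and is not the service's actual API:

```python
# Hypothetical in-memory model of lifecycle rules: each rule has an action,
# a day threshold, and an optional name prefix it applies to.
RULES = [
    {"action": "ARCHIVE", "days": 30, "prefix": ""},          # whole bucket
    {"action": "DELETE",  "days": 30, "prefix": "gloves_27"}, # prefix-scoped
]

def effective_action(object_name: str, age_days: int):
    """Pick the action for an object; DELETE takes priority over ARCHIVE."""
    matches = [r["action"] for r in RULES
               if object_name.startswith(r["prefix"]) and age_days >= r["days"]]
    if "DELETE" in matches:
        return "DELETE"
    return "ARCHIVE" if "ARCHIVE" in matches else None

print(effective_action("gloves_27_red.jpg", 45))  # both rules match -> DELETE
print(effective_action("hats_03_blue.jpg", 45))   # only bucket rule -> ARCHIVE
```

The point of the sketch is the tie-break: when an object falls under both a delete rule and an archive rule, the delete wins.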

And you can always enable or disable a rule to make it
inactive or active. It's pretty straightforward. So let me just
quickly jump to the Console and show this in action. If I go
back to my Console, I have the standard bucket here, and I
have an object here. To create a lifecycle policy rule, we'll
come down to this link here and create a rule.

It picks up a default name, and then I choose what I want to
do as an action: do I want to delete the object or do I want
to archive it? Archive is fine, and delete is fine-- let me just
pick delete.

And how many days do I need to keep it before deleting? 30
days is fine. And then enable or disable-- right now, of
course, I'm creating the rule, so it's enabled. And now the
policy is in place, and the object will be deleted after 30
days.

Now, I didn't apply any filter here in particular, like a prefix.
If I had done that, I could pick and choose individual
objects-- I don't have to apply it to all the objects. If I don't
want this rule to apply anymore, I can just disable it instead
of deleting it. So now it's disabled, but it's still in the
history, so I can get some more information here.

So it's as simple as that. And it's really for managing cost
and managing your objects, because you'll be managing
literally hundreds of objects. It's a good way to manage the
lifecycle of objects in various stages, whether you want to
delete them to save some cost, or move them to archive
storage to, again, reduce the cost and keep them for
long-term backup and retention.

The last feature we are going to look into is multipart upload.
With multipart upload, individual parts of an object can be
uploaded in parallel to reduce the amount of time you spend
uploading. In fact, yesterday, when I was recording the
compute section, I was uploading a custom image which was
two gigs in size.

That's a fairly good size. Of course, we support objects up
to 10 terabytes, so in that perspective, it's not that big. But
it's still a fairly large file. So when you upload a file like
that-- a 2-gigabyte file, or even something smaller-- you will
see that the service itself uses multipart upload behind the
scenes.

You don't see that in action, but the service is actually doing
it. Now, you could also do this yourself using the CLI or an
SDK. The way it happens is, first, you create the object
parts. You can see some numbers here-- individual parts can
be as large as 50 gigs or as small as 10 MB.

So you could do that using the CLI. The CLI does that for
you, and it assigns a part number. Then it initiates an
upload, and you can see the API call it makes to initiate an
upload. Then it uploads the object parts and makes sure that
all the parts are uploaded. You can restart a failed upload for
an individual part, et cetera. And then you commit the
upload.

And I just want to quickly show you the documentation.
There is a webpage here-- I am not showing this demo live
right now-- but you can see all the steps listed: initiate the
upload, upload the parts, and then commit the upload. And
if you scroll down, there's a nice example here which shows
this in action using the CLI.

So I'm uploading this file-- it has multiple parts. And you
can see the part size, the count, et cetera, that it shows you.
It's splitting the file into 12 parts for upload, and then it's
uploading the file. You can list the parts of unfinished or
failed uploads, if there are parts which failed to upload.

And then you can remove them also if there were parts which
could not be uploaded. So the service takes care of breaking
down the files, uploading them, committing them, doing the
checksums, making sure that it's all good. And as I said, if
you're uploading some large files, the service actually does
this internally. But you could, as an end user, do this as well.

So hopefully, this module gave you a good overview of the
four features we talked about: pre-authenticated requests,
cross-region copy, object lifecycle management, and multipart
upload. Thank you for watching this module. I hope you
found it useful. Thank you.


ORACLE DATABASE IN OCI


1. DATABASE PART 1
Hi. Welcome to part one of the Database level 100 lesson.
My name is Sanjay Narvekar. I'm a product manager on the
Oracle Cloud Infrastructure team. Here's a quick look at the
safe harbor statement.

In today's lesson, you'll be learning about the various options
for deploying database systems in Oracle Cloud Infrastructure.
You'll learn about the different features of the Database
service, and you'll also learn how to launch a one-node
database system in Oracle Cloud Infrastructure.

Oracle Cloud Infrastructure Database Service is a mission-
critical, enterprise-grade cloud database service with
comprehensive offerings to cover all enterprise database
needs. We have different services, like Exadata, Bare Metal,
and virtual machine database systems. You can deploy Real
Application Clusters on virtual machines, and Exadata also
gives customers the ability to deploy Real Application
Clusters.

We provide complete lifecycle automation, which includes
provisioning databases at the click of a button, patching
databases at the click of a button, and the ability to restore
your database from backups at the click of a button. If you
want high availability and scalability for your databases, we
have Real Application Clusters and Oracle Data Guard. For
customers with workloads that need CPUs to scale up and
down, you can do that with dynamic CPU scaling. And you
can also scale up storage for your databases.

In terms of security, the OCI Database Service is well
integrated with the infrastructure's Identity and Access
Management service. You can use security lists in the VCNs
to control the flow of traffic, and OCI DB is also integrated
with the Audit log service.

For customers that want to know more about how security is
done in the database: all the database editions in the Oracle
Database Service, from Standard Edition to Enterprise
Edition Extreme Performance, have Transparent Data
Encryption built in, which means that the database files are
encrypted at rest. You can also encrypt the RMAN backups,
and we have encryption for the block volumes as well.

The OCI Database Service is also integrated into the OCI
platform for tagging, limits, and usage integration. Customers
can bring their own license or use the license-included model
when deploying databases in the OCI Database Service.

Let us now look at the VM Database System. There are two
types of database systems on virtual machines. The first one
is a one-node VM DB System, which consists of one virtual
machine. The second one is a two-node VM DB System,
which consists of two VMs clustered with Real Application
Clusters, or RAC, enabled. VM DB Systems can have only a
single database home, which in turn can have only a single
database. The amount of memory allocated to the VM DB
System depends on the VM shape selected during the
provisioning process.
The size of storage is specified when you launch a VM DB
System, and you can scale up the storage as needed at any
time. However, note that the number of CPU cores on an
existing VM DB system cannot be changed at this time.

If you are launching a DB system with a virtual machine
shape, you have the option of selecting an older database
version. You have to check Display All Database Versions to
include older database versions in the drop-down list of
database version choices during the provisioning process.
When a two-node RAC VM DB System is provisioned, the
system assigns each node to a different fault domain by
default. Data Guard within and across availability domains is
available for VM DB Systems. This requires Database
Enterprise Edition.

Let's now look at the VM DB Systems storage architecture.
VM DB Systems use the Block Storage service for database
storage. They use ASM on top of OCI Block Volumes for
mirroring data. Block volumes are mounted to the VM using
iSCSI, and ASM uses external redundancy, relying on the
triple mirroring of the block storage. When the ASM disk
groups are carved out, different block storage volumes are
used for the DATA and RECO disk groups.
Let's now look at the VM DB systems Storage Architecture for


the Fast Provisioning option. This also uses the block storage
service for the disks required for the VM shapes. The Linux
Logical Volume Manager manages the file systems used by
the database for storing database files, redo logs, et cetera.
Block volumes are mounted using iSCSI.

And note that the available storage value you specify during
provisioning determines the maximum storage available
through scaling. And I have a note down there at the bottom
of the slide for more information in this regard. VM RAC DB
systems cannot be deployed using this option because we
need the grid infrastructure for deploying Real Application
Clusters. Currently, we support Oracle Database 18c and 19c
releases when you use the Fast Provisioning option for
deploying your VM DB system.

Let's now look at the Bare Metal DB Systems. Bare Metal DB


Systems rely on the Bare Metal servers running Oracle Linux.
It's a one-node database system which runs on a single Bare
Metal server and has locally attached 51 terabyte NVMe
storage. And you start with two cores, and you can scale up
or down OCPUs based on requirements. For a Bare Metal
Server X7 shape, you have 52 CPU cores available with 768
gigabytes of RAM.

If you're running Database Enterprise Edition, you can use


Data Guard within and across availability domains. If a single
node fails, you'll have to launch another system and restore
the databases from current backups.

Let us now look at the Bare Metal DB Systems Storage


Architecture. As I mentioned in the previous slide, the Bare
Metal DB System relies on locally attached NVMe SSD
drives. So ASM uses these drives to carve out the DATA
and RECO disk groups. And ASM manages the mirroring of the
NVMe disks. ASM will monitor the disks for hard and soft
failures, and it will proactively offline disks that have failed,
are predicted to fail, or are performing poorly, and performs
corrective actions if possible.
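The offlining behavior described above can be sketched as a toy health check. This is purely illustrative Python; the disk fields, the latency threshold, and the function name are invented for the example and are not part of ASM or OCI.

```python
# Toy sketch of the behavior described above: offline disks that have
# failed, are predicted to fail, or are performing poorly. All field
# names and the threshold are made up for illustration.

def evaluate_disks(disks, latency_threshold_ms=50):
    """Return the IDs of disks that should be taken offline."""
    offline = []
    for disk in disks:
        if disk["hard_failure"] or disk["predicted_failure"]:
            offline.append(disk["id"])
        elif disk["avg_latency_ms"] > latency_threshold_ms:
            # Poor performer: proactively offline it as well.
            offline.append(disk["id"])
    return offline

disks = [
    {"id": "nvme0", "hard_failure": False, "predicted_failure": False, "avg_latency_ms": 2},
    {"id": "nvme1", "hard_failure": True,  "predicted_failure": False, "avg_latency_ms": 3},
    {"id": "nvme2", "hard_failure": False, "predicted_failure": True,  "avg_latency_ms": 4},
    {"id": "nvme3", "hard_failure": False, "predicted_failure": False, "avg_latency_ms": 120},
]
print(evaluate_disks(disks))  # ['nvme1', 'nvme2', 'nvme3']
```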

In the case of disk failure, the DB system automatically


creates an internal ticket and notifies the internal team to
contact the customer. These actions ensure the highest level
of availability and performance at all times.

Let's now look at the Exadata DB Systems. When you do


deploy an Exadata DB System in Oracle Cloud Infrastructure,
you get Oracle Database Enterprise Edition with all of the
advanced options. Exadata DB System is Oracle's fastest and
most available database cloud platform. You can Scale-Out
Compute, Scale-Out Storage, and it has the InfiniBand
switches with InfiniBand networking between the database
servers and the storage servers. It also has PCIe flash cards
attached to the storage servers for better performance.

Exadata system gives complete isolation of tenants with no


overprovisioning. It gives customers all the benefits of the
public cloud-- namely, it is fast to provision, it's elastic, and it
has web-driven provisioning. Oracle experts deploy and manage
the infrastructure.

Oracle manages Exadata infrastructure-- namely the servers,


the storage, networking, firmware, hypervisor, et cetera. You
can specify zero cores when you launch Exadata. This will
provision and immediately stop the Exadata service. You are
billed for the Exadata infrastructure for the first month, and
then by the hour after that. Each OCPU you add to the
system is billed by the hour from the time you add it. Scaling
from 1/4 rack to 1/2 rack or from 1/2 to a full rack requires
that the data associated with the database deployment is backed
up and restored on a different Exadata DB System.

So the table below gives you a comparison between the


different Exadata DB System offerings on Oracle Cloud
Infrastructure.
The Exadata DB Systems use the local storage for the ASM.
When backups are provisioned on Exadata storage, 40% of
the available storage space is allocated to DATA disk group
and 60% is allocated to the RECO disk group. When backups
are not provisioned on Exadata storage, 80% of the available
storage space is allocated to the DATA disk group and 20% is
allocated to the RECO disk group. After the storage is
configured, the only way to adjust allocation without
reconfiguring the whole environment is by submitting a
service request to Oracle.
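The DATA/RECO split above can be expressed as a small calculation. This is an illustrative, hypothetical helper (not an Oracle API) encoding the two default allocation percentages just described.

```python
def exadata_disk_group_split(available_tb, backups_on_exadata):
    """Split available Exadata storage between the DATA and RECO disk groups.

    Uses the default percentages described above: 40/60 when backups are
    kept on Exadata storage, 80/20 when they are not. Hypothetical helper
    for illustration only.
    """
    data_pct = 40 if backups_on_exadata else 80
    data_tb = available_tb * data_pct / 100
    reco_tb = available_tb - data_tb
    return data_tb, reco_tb

print(exadata_disk_group_split(100, backups_on_exadata=True))   # (40.0, 60.0)
print(exadata_disk_group_split(100, backups_on_exadata=False))  # (80.0, 20.0)
```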

This slide shows you the comparison between the three


different offerings of Virtual Machine, Bare Metal, and
Exadata on Oracle Cloud Infrastructure when it comes to the
various capabilities of scaling, storage, cores, the ability to
have multiple database homes and databases in the service,
as well as the high availability and disaster recovery features.

In this slide, you see the various DB system offerings in


Oracle Cloud Infrastructure as well as the different database
editions and also the Bring Your Own License model support
for the different database systems in Oracle Cloud
Infrastructure.

In this slide, you will see that transparent data encryption is


available in all the database editions that are available in
Oracle Cloud Infrastructure. Also note that as you go from
left to right, with Enterprise Edition, you get all the standard
Enterprise Edition features. But in addition to that, you get
data masking and subsetting pack, diagnostics and tuning
pack, as well as real application testing.

And for customers that are used to deploying Oracle


Database Enterprise Edition on premises, you are probably
aware that you need to license the additional Enterprise
Manager packs like data masking and subsetting, diagnostics
and tuning, real application testing separately before you
start using it on premises. But on OCI, you get this additional
functionality bundled during the deployment of the Database
Enterprise Edition on OCI.

Now from Enterprise Edition, as you go into the Enterprise


Edition High Performance package, you'll also get all of those
features of Enterprise Edition which you saw on the left side.
And in addition, you get the multitenant option, which is
available in database 12c and higher. You also get
partitioning, advanced compression, advanced security, label
security, database vault, as well as OLAP, advanced
analytics, spatial and graph, and the remaining management
packs for enterprise management.

And then finally, once you get to the Enterprise Edition


Extreme Performance, this will give you everything that is
there in Enterprise Edition High Performance, but you would
also get Real Application Clusters, the in-memory option--
which is available in database edition 12c and higher-- and
you also get Active Data Guard. Thanks for watching this
video.

2. DATABASE PART 2

Welcome back. In this lesson, we will look at the lifecycle


management activities that you can do on DB Systems in
Oracle Cloud Infrastructure. You can use the OCI console to
perform the following tasks, launching your database system.
You can do a status check of the database creation once you
launch your database. And after that, you can view the
runtime status of the database.
You can start, stop, or reboot DB Systems in the OCI console.
Note that billing continues in the stopped state for bare metal
DB systems, but not for VM DB systems.

You can scale CPU cores, scale up the number of enabled


CPU cores in the system for bare metal DB systems only. In
the case of VM database systems, you can increase the
amount of block storage with no impact.

Also note that terminating your DB system permanently


deletes it and any databases running on it. So if for some
reason, you want to terminate a DB system, if you have any
data that you would want to preserve from the database, you
can do two things. One is to take a one-off database backup,
which will preserve the backup in the event you terminate
the DB system. Or, before you terminate the DB system, you
can use Data Pump to export the database into object
storage, and then you can delete the DB
system. And after a few days, if you think you no longer
require the backups, you can delete it. And then you'll be
fine.

Let us now look at patching database systems as part of


lifecycle management of OCI databases. OCI will
automatically provide you patches on the console. At any
given point you can have n minus 1 patches available for you
to apply on the console. You can run pre-check on the
existing patches that are available. And once those pre-check
processes run successfully, you can patch your database at
the click of a button.
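The pre-check-then-apply flow just described can be modeled as a tiny guard function. This is a conceptual sketch only; the function, exception, and callables are invented stand-ins for the console/CLI actions, not OCI APIs.

```python
# Conceptual model of the patch flow described above: a patch may only
# be applied after its pre-check succeeds. Names are invented.

class PatchError(Exception):
    pass

def apply_patch(patch, run_precheck, run_apply):
    """Enforce the pre-check-then-apply ordering."""
    if not run_precheck(patch):
        raise PatchError(f"pre-check failed for {patch}")
    return run_apply(patch)

# The lambdas stand in for the real console/CLI actions.
result = apply_patch(
    "19.7.0.0",
    run_precheck=lambda p: True,
    run_apply=lambda p: f"{p} applied",
)
print(result)  # 19.7.0.0 applied
```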

In the case of Exadata and VM RAC shapes, patches are


applied in a rolling fashion. However, for single-node DB
systems, if Data Guard is configured, it can be leveraged by
the patching service. Otherwise, you will have downtime for
single-node DB systems when you are applying
a patch.

Patching is a two-step process. You first patch the DB


system. And then you patch the database. The screenshot at
the bottom of the slide shows you how you can run pre-check
and apply patches for your DB systems and the database in
OCI console.

You can use OCI's identity and access management controls


to control who can list patches, apply them, et cetera. This is
useful when you have many database administrators in your
organization, and you want to give the ability to apply
patches to only a select few. So basically, you will create
multiple groups in identity and access management, and put
the database administrators who can apply patches into the
IAM group which has the permissions for applying patches.
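The group-based control just described can be illustrated with a toy authorization check. This is not the real OCI IAM policy engine or its syntax; the group names, action names, and lookup function are all invented for the example.

```python
# Toy model of group-based authorization: only members of a group whose
# policy grants "apply-patch" may patch. Invented names; not OCI IAM.

group_members = {
    "DBPatchAdmins": {"alice"},
    "DBReadOnly": {"bob"},
}
group_actions = {
    "DBPatchAdmins": {"list-patches", "apply-patch"},
    "DBReadOnly": {"list-patches"},
}

def can(user, action):
    """Return True if any group the user belongs to grants the action."""
    return any(
        user in members and action in group_actions[group]
        for group, members in group_members.items()
    )

print(can("alice", "apply-patch"))  # True
print(can("bob", "apply-patch"))    # False
```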

We will now look at the database backup and restore


functionality for OCI Database Systems. This is a managed
backup and restore feature for virtual machines and bare
metal DB systems. Exadata backup process requires creating
a backup config file. Backups can be stored in object or local
storage. However, we recommend that you store backups in
object storage for high durability.

Database systems in private subnets can leverage the service


gateway for storing backups in object storage. In the case of
backup options, we have automatic incremental backups,
which run once per day. And by default, these backups are
retained for 30 days.

On-demand standalone full backups are stored until the point
you decide to delete them from the OCI console. You can restore
a database to the latest backup, or you can restore the
database to a timestamp.

And finally, you can also restore the database to a particular


system change number, or SCN. Let's now look at some
information on automatic backups.

By default, automatic backups are stored in Oracle-owned


object storage. Customers will not be able to view the object
storage backups. The default policy cannot be changed at this
time.

Automatic backups, enabled for the first time after November


20, 2018 on any database, will run between midnight and
6:00 AM in the time zone of the DB Systems region. You can
optionally specify a two hour scheduling window for your
database, during which the automatic backup process will
begin.

These are the preset retention periods for automatic backups,


7 days, 15 days, 30 days, 45 days, and 60 days. Backup jobs
are designed to be automatically retried. Also note that Oracle
automatically gets notified if a backup job is stuck.
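The retention behavior above can be sketched as a date filter. This is an illustrative calculation only; the function name and the assumption of exactly one backup per day are mine, not a description of the actual backup service internals.

```python
from datetime import date, timedelta

# Illustrative: which daily automatic backups are still retained, given
# one of the preset retention periods (7/15/30/45/60 days; default 30).

def retained_backups(backup_dates, today, retention_days=30):
    """Keep only backups taken within the retention window."""
    cutoff = today - timedelta(days=retention_days)
    return [d for d in backup_dates if d >= cutoff]

today = date(2020, 3, 31)
# Assume one backup per day for the last 40 days.
backups = [today - timedelta(days=n) for n in range(0, 40)]
print(len(retained_backups(backups, today, retention_days=30)))  # 31
print(len(retained_backups(backups, today, retention_days=7)))   # 8
```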

All backups to cloud object storage are encrypted. There's a


link at the bottom of the slide which will give you more
information on troubleshooting backups in case a backup job
fails.

Let's now look at high availability and scalability. OCI has


robust infrastructure. It has regions with a three availability
domains architecture. It has a fully redundant and
non-blocking networking fabric. And you have the option of
two-way or three-way mirrored storage for databases.
In the case of Exadata, it has redundant InfiniBand fabric
for cluster networking. In the case of high availability, Oracle
Cloud Infrastructure DB Systems has two options. One is the
database RAC option in virtual machines and Exadata. And
the other one is the automated Data Guard deployment
within and across availability domains for VMs and bare
metal shapes.

As I mentioned in the previous lesson, OCI DB Systems also


have dynamic CPU scaling for bare metal shapes and storage
scaling for VM DB Systems.

Oracle Data Guard is supported on both virtual machine and


bare metal DB systems. It is limited to one standby database
per primary database on OCI. If the customer has a database
license for Active Data Guard or deploys the Oracle
Enterprise Extreme Performance package, they can use the
standby database in a Data Guard setup for queries,
reporting, running tests, or backing up the database from the
standby.

You can do switchover, which has planned role reversal


without any data loss. In this case, no database re-
instantiation is required. And it's typically used for database
upgrades, tech refresh, data center moves, et cetera. This can
be manually invoked via Enterprise Manager, DGMGRL, or
SQL Plus.

For failover you can do unplanned failover of primary. And


the flashback database is used to reinstate the original
primary database. It can be manually invoked via Enterprise
Manager, DGMGRL, or SQL Plus. It can also be done
automatically using fast start failover.
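The two role transitions described above can be contrasted with a toy model: switchover is a planned, lossless role swap, while failover promotes the standby and leaves the old primary needing reinstatement (for example via Flashback Database). These functions are invented for illustration and do not reflect real DGMGRL behavior.

```python
# Toy contrast of the two Data Guard role transitions described above.
# Invented functions; not DGMGRL.

def switchover(primary, standby):
    # Planned: roles simply swap; no reinstatement is required.
    return {"primary": standby, "standby": primary, "needs_reinstate": None}

def failover(primary, standby):
    # Unplanned: the standby becomes primary; the old primary must be
    # reinstated (e.g. via Flashback Database) before it can serve again.
    return {"primary": standby, "standby": None, "needs_reinstate": primary}

print(switchover("dbA", "dbB"))
print(failover("dbA", "dbB"))
```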
Let us now look at the security features for the database service.
For customers looking for instance isolation, OCI
provides the Bare Metal DB Systems.

In the case of customers who want to ensure that their DB


systems are running in a very secure fashion, they can use
the features of the OCI network infrastructure to deploy a
virtual cloud network and configure security lists and rules
so that the databases are deployed properly in private
subnets, and traffic is isolated only to the applications that
are deployed on the OCI infrastructure.

Customers can securely connect from their on premises


environment to OCI using VPN and FastConnect, which uses
the dynamic routing gateways for VCNs in Oracle Cloud
Infrastructure.

In the case of user authentication and authorization, OCI


segregates each customer into their own tenancy. And each
tenancy can be further divided into compartments to isolate
workloads for different departments, or different phases of a
particular project, like test, dev, or production. Customers can
configure identity and access management policies to
determine which user gets access to a particular
compartment.

Customers can also control access to the console using the


identity and access management, user IDs, and group
permissions. Finally, if customers are going to use APIs or
Terraform to access OCI, then they will require an API signing
key, which can also be controlled and shared with only the
folks who are spinning up infrastructure using APIs.
All access to the DB systems running in OCI will require a
private SSH key, which goes with the public SSH key used
during the deployment of the DB system.

In the case of data encryption, transparent data encryption
is included with all the database editions that are
provisioned on Oracle Cloud Infrastructure DB Systems.
Customers have the ability to encrypt RMAN backups, and
the local storage and object storage are also encrypted at rest.

If customers want to have end-to-end TLS for their


applications, they can consider using the load balancer
service with TLS 1.2. The customer will have to provide the
certificates for this. And finally, the auditing service logs all
the activities that happen on the console, or via the API. And
auditors can look at this auditing service to look at who
creates a particular resource, or who deletes a particular
resource in Oracle Cloud Infrastructure.

Here's a quick look at the pricing information for DB Systems


on Oracle Cloud Infrastructure. I won't go through all these
numbers, but you can look at this information on
www.Oracle.com/database/VM-cloud-pricing.html.

So to summarize, over the course of these lessons, you


learned about the database service offerings for Oracle
Database in OCI. You learned about Exadata, real application
clusters, the bare metal and VM shapes that allow customers
to deploy every kind of enterprise application. These
databases provide lifecycle automation for customers, from
provisioning, patching, backup, to restore.

Customers can scale from one core VM to Exadata and have


high availability options, namely Data Guard and real
application clusters. OCI DB Systems also provides
customers with robust security controls, and it allows
customers to leverage their on-premises licenses to deploy
Oracle Cloud Infrastructure DB Systems. Thanks for
watching this lesson.
3. DATABASE DEMO 1
Hi. Welcome to this demo of Oracle Database System
Deployment on Oracle Cloud Infrastructure. My name is
Sanjay Narvekar, and I'm a Product Manager in Oracle Cloud
Infrastructure team. And I will be doing this demo for you
today.

I am logging into my Oracle Cloud Infrastructure Console.


And I will navigate to the Bare Metal, VM, and Exadata
link under Databases, and I'll click on Create DB
System. And on this screen, I have a choice of selecting the
compartment that I wish to deploy the database system in. I
will stick to my sandbox one. And I can rename this DB
system if I want to. I'll go with the default.

My region has three availability domains, so I can select one of
these availability domains. And in certain regions, we just
have one availability domain. So if you are in a region which
has only one availability domain, please note that you don't
have this option of selecting between these three availability
domains. You'll just have one, [INAUDIBLE].

And in this demo today, I will be deploying a virtual machine
DB system using the Fast Provisioning option. So I'll click
on Virtual Machine. And I can change the shape that I will be
using for the deployment. I'll go with the VM standard 2.2
shape, and I'll click on Select a Shape.

And my node count would be one. And I have a choice of going
between the Standard Edition, Enterprise Edition High
Performance, or Extreme Performance Enterprise Edition
packages. I'll select the Enterprise Edition Extreme Performance
package, and I will go with the Logical Volume Manager-based
deployment option, which provisions the database in under
15 minutes.

From a storage configuration perspective, I can start with 256
gigabytes of storage, or I can go all the way up to 8,190
gigabytes of storage. I will choose the least
available for me, which is 256.

And under the total storage, it gives me the total storage used
by the database once it's deployed. I need to provide a public
SSH key here, which is needed if I want to SSH into the
database system. So I select one which I already have. And
I will choose a license type of License Included for this demo.
But if you are a customer looking to bring your own license,
you will select this option.

I already have virtual cloud networks in my compartment,
so I'll select the demo VCN. But if I have a VCN in another
compartment, I can choose it from there as well.

And for the subnet, I will select a private subnet that I have,
which is a regional subnet. But I also have a choice of
selecting a subnet from a different compartment if I need it.
And I can also assign a network security group to control
traffic, but I won't do that for this demo. I'll give a
hostname prefix of testdb. And let me show you the advanced
options available for me here. I can choose a fault domain if I
want, to make sure that my database system gets deployed
in one of these fault domains.

And I'll click Next. And this is where I can change the
database name if I want to. Note that the database name
cannot be longer than eight characters here. I can select
between 18c and 19c for the database version choice. And since
it's 19c, I can optionally provide a pluggable database name. I
will leave that alone. And this is where I provide the password
for the SYS user.

Note that you have to follow certain password creation rules


while specifying the password. I can choose between OLTP or
DSS types for the workload. I will stay with OLTP. And I will
select Enable Automatic Backups.

And I can choose between 7 and 60 days for my backups. I'll


keep it at 7 days. And if I want to, I can choose one of these
options for the backups scheduling, which means that the
database will be backed up sometime between 4:00 AM and
6:00 AM UTC. And clicking on Advanced Options allows me
to select a different character set if I want to, but I will go
with default. Then I would click on Create DB System.

The database provisioning process has now started, and this


is going to take roughly under 15 minutes, which is quite
fast, because this is using the LVM-based storage option
for the DB system. I will pause this video and I'll get
back to it once the database has been provisioned. And I will
just quickly navigate between the various tabs over here on
the left.

Welcome back. The database system has now been


provisioned. As you can see, the status says it's available
here. And I can click on the database name and show you
that it has been configured for backups by clicking on this
button. The backup retention period is seven days, and the
backup scheduling window is between 4:00 AM and 6:00
AM UTC.
I'm going to click on Cancel because I don't want to
back up the database right now. And something to note:
as soon as the database gets provisioned, the first automatic
backup kicks in, and it'll be available for you to restore
later on.

So in the case of other functionality here, clicking on Patches
shows you the patch that is available for you to apply. And
you will do this by running pre-check first. And once that
process completes, you will go here and click on Apply.

I won't do that now because the database is backing
up, so I won't be able to show you this. And clicking on Patch
History will show you the list of patches that have been
applied. And if I click on Data Guard Associations, I will be
able to see the peer database in case a Data Guard standby
is enabled for this database.

So that was looking at the details. And I can click on DB
Connection here to see the connection string for my
database. And I can add tags if I want to associate it with
a certain project or cost center. That's what the
tagging functionality is for.

And I can click on DB Systems here and look at the node
information. So my VM server name is called testdb, and
this is the private IP address. And this is the fault domain,
which we selected when the database was provisioned.

Notice that there is no public IP address. So if I have to


access this particular database server IP using SSH, I'll have
to ensure that there is a bastion server which I can connect
to first and then I can access this database.
Or optionally, if I have a VPN connection or FastConnect from
my on-premises environment to OCI, I will also be able to
access this. But I won't be able to access this over the public
internet without first connecting to a bastion server. And
clicking on Patches here shows me that there are no patches
available for the DB system. And the patch history is obviously
empty because I haven't applied any patches here.

I can add additional SSH keys here by pasting the public SSH
key here and clicking on Add SSH Keys. And if I
wish to move this resource from this compartment to another,
I can select a target compartment here and click on Move
Resource. Note that the person performing this action needs
to have access to the target compartment. Otherwise, they
won't be able to move this resource to the target
compartment.

And this is a VM shape, so I can scale the storage. And in the case
of the Logical Volume Manager-based database systems, I
can scale up a 256-gigabyte database up to 2,560 gigabytes,
[INAUDIBLE] 10 times the size. I won't do that right now, so
I'm going to cancel. And this completes this demonstration.
Thank you for watching.

4. DATABASE DEMO 2

Hi, my name is [INAUDIBLE]. I'm a product manager in the


Oracle Cloud Infrastructure team. I'm going to show you a
demonstration of restoring an Oracle Database backup in
Oracle Cloud Infrastructure. I already have a database named
DSS19 and a db system in Oracle Cloud Infrastructure.

I'm going to click on DSS19, and you can see that this
database has backups occurring automatically every night.
To demonstrate restore database backup process, I'm going to
click on restore, and I will just select the restore to the latest
backup, and click on restore database. And this process will
kick in, and while this backup restore is happening, the
database goes from available to updating state, which means
that database won't be available for access at this point of
database restoration.

Once the database restore activity completes, then this will go


back to green, and the status will say available. I will pause
this demo for now and get back to you once the database
restore has completed. Welcome back. When I left off earlier,
the database was being restored from a backup.

The backup restore has now completed successfully as you


can see, and the db system is now available for use. This
concludes the demo of showing you the database backup
restore functionality in [? VM ?] db system on Oracle Cloud
Infrastructure. Thanks for watching this demo.

[SOUND EFFECT]

ORACLE AUTONOMOUS DATABASE


1. AUTONOMOUS DATABASE PART 1
Hi, welcome to the "Autonomous Database Level 100" lesson.
My name is Sanjay Narvekar from the Oracle Cloud
Infrastructure Product Management team. And I will be
walking you through this lesson today.

Here is a quick look at the safe harbor statement. And let's


look at the objectives for today's lesson. After completing
today's lesson, you will be able to compare autonomous
database with the standard database system cloud offerings
in Oracle Cloud Infrastructure. You'll be able to describe the
features of Autonomous Data Warehouse Cloud, both the
serverless and the dedicated offerings, as well as describe the
features of Autonomous Transaction Processing, serverless
and dedicated. You'll learn how to deploy, use, and manage
autonomous database.

This slide shows you the two kinds of offerings that Oracle
Cloud Infrastructure has for Oracle database customers. One
is the traditional automated database services model, wherein
customers get to manage their Oracle database, but Oracle
incorporates database lifecycle automation into the services.
Customers will have DBA and operating system root access.
And they can run older database versions like [INAUDIBLE]
Release 2.

These automated database services include all database


features. And this line of services has Exadata Cloud service
for scalability, performance, and availability, as well as a
Database Cloud Service with virtual machine or bare metal, a
single server offering. Or you can also cluster virtual
machines as real application clusters.

The other service that I'm going to talk about today is the
autonomous database service. In this service, all database
operations are fully automated. The user runs SQL with no
access to operating system or the container database.

This service is built on top of the Exadata platform, so they


will get Exadata performance and availability. This service
can be customizable for data warehouse or transaction
processing workload. And in this service, we have two flavors.
One is the serverless model, where usability is ultra simple.
And it is very elastic. The second offering is the dedicated
model where customers can build a private cloud on Oracle
Cloud Infrastructure.
So let's now look at the use cases for these services that I just
talked about. Autonomous Database is the world's best fully
self-driving database. Oracle builds and operates Exadata
infrastructure and databases.

The user runs SQL without any access to the operating


system or to container database. This is pretty useful for
customers who want the elasticity of the cloud, customers
with the workloads or machine learning, and customers who
do not want to spend too much time in tuning the code and
can have use cases for instant provisioning. Basically, this
supports all kinds of workloads, with support for JSON
documents, graphs, and more.

When it comes to Oracle Database Cloud Services, this is the


world's best automated database cloud infrastructure. Oracle
builds and operates the infrastructure. The user, however,
operates databases using provided lifecycle automation. The
user will have full control including database administration
and root access.

This is pretty useful for use cases where customers want to


have high availability. They want to deploy older version and
want to have all the features of the Oracle database. And
customers can start off with small workloads and scale up to
large databases. And customer also has ability to deploy
single instance or real application clusters with automated
backup patching functionality with full access to the
database infrastructure.

The next one here is the Exadata, which is the world's best
database platform. In this, Oracle will build, optimize, and
automate the infrastructure deployment. All in-database
automation features are included.
The customer is responsible for provisioning the databases
and managing the databases here as well. As far as use cases
go, this is a great place for customers to build a private cloud,
either on premise or on Oracle Cloud Infrastructure. This is
perfect for consolidation use cases, where customers have a
lot of databases. And they are looking to consolidate on one
platform. So Exadata works well there. And it's very ideal for
highest performance workloads, and has a lot of scalability
features, which is ideal for mission critical workloads.

And the last section of the slide talks about the Oracle
database, which is the world's best database. And it can run
anywhere. The user builds and operates databases and
infrastructure. And this is ideal for customers who have small
to big database, transactional, as well as data warehousing
workloads, and ideal for customer data center do-it-yourself
model.

Let us now look at the Autonomous Optimizations, which are
part of Autonomous Data Warehouse and Autonomous
Transaction Processing. Autonomous Data Warehouse stores
data in the columnar format and creates data summaries to
speed up joins and aggregates, whereas Autonomous
Transaction Processing stores data in the row format. Indexes
are created automatically. And memory is used for caching to
avoid IO. In both these services, statistics are updated in real
time, while preventing plan regressions.
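The columnar-versus-row difference above can be illustrated with plain Python structures. This is a conceptual sketch only, not Oracle storage internals: the point is that an aggregate over one column touches only that column's data in a columnar layout, instead of every full row.

```python
# Illustrative contrast of row vs. columnar layout using plain Python
# structures. Not Oracle internals; invented sample data.

rows = [  # row format: each record is stored together
    {"id": 1, "region": "east", "amount": 10},
    {"id": 2, "region": "west", "amount": 20},
    {"id": 3, "region": "east", "amount": 30},
]

columns = {  # columnar format: each column is stored together
    "id": [1, 2, 3],
    "region": ["east", "west", "east"],
    "amount": [10, 20, 30],
}

# Same aggregate: the row layout walks every record, while the columnar
# layout reads only the "amount" column.
print(sum(r["amount"] for r in rows))  # 60
print(sum(columns["amount"]))          # 60
```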

This slide shows you the various cloud deployment models for
database on Oracle Cloud Infrastructure. We can start off
with Database as a Service, virtual machine or bare
metal, or deploy Exadata Cloud Service on Oracle Cloud
Infrastructure or at the customer site. And then finally, we
have the Autonomous Serverless and Autonomous Dedicated
offerings. I won't go through all of this slide, but I will just
pause briefly here for you to read it before moving
on.

Let's now look at the different deployment options for


Autonomous Database Cloud Service. Autonomous Database
can be deployed in two ways. One is the dedicated. And the
other one is the serverless model.

Dedicated deployment is a deployment choice that enables


you to provision autonomous databases into your own
dedicated Exadata cloud infrastructure instead of a shared
infrastructure with other tenants. So basically, what this
means is customer will log in into the OCI console, provision
an Exadata infrastructure. And once this Exadata
infrastructure is provisioned, they will create container
databases and then deploy the Autonomous Data
Warehouse or Autonomous Transaction Processing dedicated
services on top of the dedicated deployment.

With serverless deployment, the simplest configuration, you
share the resources of an Exadata cloud infrastructure. You
can quickly get started with no minimum commitment,
enjoying quick database provisioning and independent
scalability of compute and storage. And both deployment
options are available for Autonomous Transaction Processing
and Autonomous Data Warehouse.

Now let us dive deeper into the serverless offering. This is a
fully-managed service. Oracle automates end-to-end
management of the autonomous database. It manages the
provisioning of the new databases, growing, shrinking
storage, and/or compute, patching and upgrades, backup
and recovery, et cetera. So for customers who are
traditionally used to doing all these activities, you know how
labor-intensive and time-consuming these tasks are. Oracle
automates all this, so it frees up your time to do other tasks.

Customers can use the Service Console to do full lifecycle
management of the service, from launching the service,
stopping the service, backing up, restoring, et cetera.
Alternatively, this can also be managed by a command line
interface or REST API.

Let us now look at automated tuning in autonomous
database. When it comes to loading data, customers can just
load and go, which means they just define their tables, load
data, and then start running queries. Customers don't have
to worry about any tuning activities. They don't need any
special database expertise to perform this task. They also
don't have to worry about creating, managing tablespaces,
partitioning compression, in-memory, indexes, or parallel
execution.

Oracle Autonomous Database gives fast performance out of
the box with zero tuning. And we also provide a simple
web-based monitoring console for customers to look at the
database activity, CPU utilization, running statements,
queued statements, et cetera. And it has built-in resource
management plans.

Autonomous database is fully elastic. Customers can size the
database to the exact compute and storage required. They are
not constrained by the fixed building blocks and don't have to
worry about predefined shapes for spinning up the service.

They can scale the database on demand, independently
scaling compute or storage. These resizing operations occur
instantly with the database being fully online. There is no
downtime when the scaling operations occur.
Customers can shut off idle compute to save money. And they
can restart instantly whenever they need to access the
database service. Customers can also enable auto scaling to
allow autonomous database to use more CPU and IO
resources automatically when the workload requires it.

Autonomous database service supports the existing tools
which are running on premises or in the cloud. And these
tools can range from third-party business intelligence tools,
third-party data integration tools, or Oracle Business
Intelligence and data integration tools like BIEE, ODI, et
cetera. This service also supports Analytics Cloud Service,
Golden Gate Cloud Service, Integration Cloud Service, and
others. Customers can connect to autonomous database
service via SQLNet, JDBC, and ODBC.

This slide shows the architecture of Autonomous
Data Warehouse. As you can see here, at the heart of this
service, you have the autonomous database, which can be
managed using a service console. And it has built-in query
and application development tools like machine learning, SQL
Developer Web access. It has Oracle Application Express for
customers who want to deploy APEX applications. And it also
supports Oracle REST Data Services.

As far as data loading goes, customers can copy files into
object storage cloud service. And they can load data files by
using APIs into the autonomous database service. And on the
left here, you see that Oracle Autonomous Datawarehouse
supports SQL Developer for developers.

And it also has data integration services support for Oracle
Data Integration Platform Cloud and also third-party data
integration services which run on Oracle Cloud
Infrastructure or on premises. And this service also supports
Business Intelligence services like Oracle Analytics Cloud,
Oracle Data Visualization Desktop, and any third-party
Business Intelligence tools running on Oracle Cloud
Infrastructure or on premises.

Autonomous Transaction Processing architecture is pretty
much similar. In addition to the services and functionality
that we saw earlier in the previous slide, Autonomous
Transaction Processing also has support for developer
services, namely Oracle Java Cloud Service, Developer Cloud
Service, Oracle Container Clusters, or OKE, and the registry
service. This concludes the lesson. Thanks for watching.

2. AUTONOMOUS DATABASE PART 2

Hi. Welcome back. Let us now look at provisioning an
autonomous database in Oracle Cloud Infrastructure.
Provisioning an autonomous database requires only answers
to seven simple questions. You pick a data center
region, name the database, and select how
many CPU cores to allocate to this database and
how much storage capacity you need. You then select
the license type, choose whether to enable autoscaling or
not, and give a password for the admin account. Once you do
this and click on Create Autonomous Database, a new service
will be created in a few minutes. And after that, the database
is open and ready for connections.

Let us now look at the autoscaling functionality of
autonomous database. Autoscaling allows autonomous
database to automatically increase the number of CPU cores
by up to three times the assigned CPU core count value
depending on demand for processing. The autoscaling feature
reduces the number of CPU cores when additional cores are
not needed. You can enable or disable autoscaling at any
time. For billing purposes, the database service determines
the average number of CPUs used per hour.
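
For billing, then, what matters is the hourly average of OCPUs consumed. Purely as an illustrative sketch of that idea (the sample values and the 10-minute sampling interval are hypothetical, not Oracle's actual metering logic):

```python
# Illustrative sketch: averaging sampled OCPU usage over one hour,
# mirroring how auto-scaling billing is described in the text.
# Sample values and the 10-minute interval are hypothetical.

def average_ocpus(samples):
    """Return the average OCPU count across equally spaced samples."""
    if not samples:
        raise ValueError("need at least one sample")
    return sum(samples) / len(samples)

# A database provisioned with 1 OCPU that auto-scales up to 3x under load:
hourly_samples = [1, 1, 2, 3, 3, 2]  # hypothetical per-10-minute readings
print(average_ocpus(hourly_samples))  # 2.0
```

So in this hypothetical hour, the customer would be billed for 2 OCPUs even though the database briefly peaked at 3.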

So on the slide in the right side of the screen, you'll see that
this particular service has automatically scaled up OCPUs
when there is a demand for more computing power, and then
scales them down once the demand goes down.

Let us now look
at securing Oracle Autonomous Database. The autonomous
database stores all data in encrypted format in the Oracle
database. Only authenticated users and applications can
access the data when they connect to the database.

Database clients use SSL/TLS 1.2 encrypted and mutually
authenticated connections. This ensures that there is no
unauthorized access to the Autonomous Database Cloud, and
that communications between the client and server are fully
encrypted and cannot be intercepted or altered. Certificate-
based authentication uses an encrypted key stored in a wallet
on both the client and the server. The key on the client must
match the key on the server to make a connection. A wallet
contains a collection of files, including the key and other
information needed to connect to your database service in the
Autonomous Database Cloud.
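
The wallet mentioned above bundles, among other files, a tnsnames.ora that defines the predefined connection services. A trimmed, hypothetical example of what one entry might look like (the host, region, and service name here are placeholders, not a real endpoint):

```
# Hypothetical tnsnames.ora entry from a downloaded wallet.
# Host, region, and service_name are placeholders.
mydb_high = (description=
  (retry_count=20)(retry_delay=3)
  (address=(protocol=tcps)(port=1522)
    (host=adb.us-phoenix-1.oraclecloud.com))
  (connect_data=(service_name=abcdefg_mydb_high.adwc.oraclecloud.com))
  (security=(ssl_server_dn_match=yes)))
```

The matching sqlnet.ora in the wallet points WALLET_LOCATION at the directory where the wallet files were unzipped.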

You can specify IP addresses allowed to access the
autonomous database using the access control list. This
access control list will block all IP addresses that are not in
the list from accessing the autonomous database. So let us
now look at this example here. So we have an autonomous
database deployed in this example, and this database is being
accessed from a private subnet, as well as from a public
subnet on Oracle Cloud Infrastructure. And it's also being
accessed by a server over the public internet.
So how you would secure this databases? You can use the
access control list to specify the CIDR block. Basically, this is
needed if you're accessing the autonomous database from the
server running on a private subnet, and the access happens
over the service gateway. And to make things simple, you
can also use a NAT gateway and specify the public IP of the
NAT gateway. This will also be another option for connecting
to the autonomous database from a private subnet. And for
connections to autonomous database from a compute
instance running on a public subnet of Oracle Cloud
Infrastructure, you will just take the public IP and you'll add
it to the access control list. And, finally, for the access
over the public internet from a computer running
on-premises, you will grab the public IP and you'll add it into
this access control list.

Once the access control list is populated, if a user tries to
access autonomous database, autonomous database service
will look at the access control list and determine whether this
particular connection is valid or not. If the IP address doesn't
fall in this list, then the connection gets rejected. And if it
matches one of these entries, the connection goes through.
And, also, the customer or the user needs to have the wallet,
and they need to have the user ID and password to access
the autonomous database. So security is pretty slick here
when it comes to accessing autonomous database.
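
The ACL check just described can be sketched with Python's standard-library ipaddress module; the ACL entries and client addresses below are hypothetical examples, not real OCI addresses:

```python
# Sketch of the access-control-list check described above: a connection
# is allowed only if the client IP matches an entry in the ACL.
import ipaddress

def is_allowed(client_ip, acl_entries):
    """Return True if client_ip matches any IP or CIDR entry in the ACL."""
    addr = ipaddress.ip_address(client_ip)
    for entry in acl_entries:
        if addr in ipaddress.ip_network(entry, strict=False):
            return True
    return False

acl = ["203.0.113.7", "10.0.1.0/24"]   # public IP + private-subnet CIDR
print(is_allowed("10.0.1.25", acl))    # True: inside the CIDR block
print(is_allowed("198.51.100.9", acl)) # False: connection rejected
```

A single IP like 203.0.113.7 is treated as a /32 network, which mirrors adding one whitelisted public IP to the list.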

Let us now look at troubleshooting connectivity issues. You
need to ensure that the access control list for autonomous
database has the necessary entries for CIDR block ranges
and IP addresses as your use case dictates. When connecting
to autonomous database from a client computer behind a
firewall, the firewall must permit the use of the port specified
in the database connection when connecting to the servers in
the connection. The default port number for Autonomous
Data Warehouse is 1522. Your firewall must allow access to
servers within the .oraclecloud.com domain using port 1522.
When connecting to autonomous database from a server
running on a private subnet, ensure that you have a service
gateway or NAT gateway attached to the VCN. The route table
for the subnet needs to have the appropriate routing rules for
the service gateway or NAT gateway. The security list for the
subnet will need to have the right egress rules. For
connections originating from a server running on a public
subnet, ensure that route table and security lists are
appropriately configured.

We will now learn how to scale your autonomous database in
Oracle Cloud Infrastructure. You can always scale your
database on demand without tedious manual steps. You can
independently scale compute or storage, and this resizing
occurs instantly with the database fully online. Memory, IO
bandwidth, and concurrency scales linearly with the CPU.
You can stop your database to save money when not used,
and you can restart instantly.

As far as monitoring your database goes, you have a couple of
choices. You can use the Service Console based monitoring,
where you have a simplified monitoring capability using the
web-based service console. You can look at historical and
real-time database and CPU utilization. You can do real-time
SQL monitoring to monitor running and past SQL
statements. You can also look at the CPU allocation chart to
view the number of CPUs utilized by the service, and this can
be pretty handy if you want to figure out how you're getting
billed for your CPU utilization for the Autonomous Database
Service.

We also have the Performance Hub based monitoring. This is
natively integrated in the OCI console, and available with a
single click from the Autonomous Database detail page. And
this has active session history analytics, and also real-time
SQL monitoring capability.

We'll now look at backup and recovery for the Autonomous
Database Cloud Service. Autonomous Database Cloud
automatically backs up your database for you. The retention
period for backups is 60 days. You can restore and recover
your database to any point in time in this retention period.
Autonomous Database Cloud automatic backups provide
weekly full backups and daily incremental backups. Manual
backups for your Autonomous Database Cloud are not needed,
but you can do manual backups using the cloud console if
you want to take backups before any major changes, for
example before ETL processing, to make restore and recovery
faster. The manual backups are put in your cloud object
storage bucket.

When you initiate a point in time recovery, Autonomous
Database Cloud decides which backup to use for faster
recovery. You can initiate recovery for your Autonomous
Database Cloud database using the cloud console.
Autonomous Database Cloud automatically restores and
recovers your database to the point in time you specify.
Network access control lists are stored in the database with
other database metadata. If the database is restored to a
point in time, the network ACLs are reverted back to the list
as of that point in time.
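
As the text notes, the service itself decides which backup to use; purely as an illustration of "latest backup taken at or before the target time", here is a sketch with hypothetical backup names and timestamps:

```python
# Hedged sketch of picking a backup for a point-in-time restore:
# choose the most recent backup taken at or before the target time.
# Backup names and timestamps are hypothetical; the real service
# makes this decision internally.
from datetime import datetime

backups = [
    ("weekly-full",  datetime(2020, 3, 1)),
    ("daily-incr-1", datetime(2020, 3, 2)),
    ("daily-incr-2", datetime(2020, 3, 3)),
]

def pick_backup(backups, target_time):
    """Return the latest backup taken at or before target_time."""
    candidates = [b for b in backups if b[1] <= target_time]
    if not candidates:
        raise ValueError("no backup covers the requested point in time")
    return max(candidates, key=lambda b: b[1])

print(pick_backup(backups, datetime(2020, 3, 2, 12))[0])  # daily-incr-1
```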

Let us now look at the cloning feature of Autonomous
Database Cloud. Autonomous database provides cloning,
where you can choose to clone either the full database or only
the database metadata. When you do a full clone, it creates a
new database with the source database's data and metadata.
If you choose to do a metadata clone, a new database is
created with the source database's metadata without the
data.
When creating a full clone database, the minimum storage
that you can specify is the source database's actual used
space rounded to the next terabyte. You can only clone an
autonomous database instance to the same tenancy and the
same region as the source database. During the provisioning
for either a full clone or a metadata clone, the optimizer
statistics are copied from the source database to the clone
database.
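
The full-clone sizing rule above can be illustrated in a couple of lines; this is a sketch of the rule as stated, not Oracle's actual sizing code, and the input values are hypothetical:

```python
# Sketch of the rule: minimum storage for a full clone is the source
# database's actual used space rounded up to the next terabyte.
import math

def min_clone_storage_tb(used_space_tb):
    """Round the source's used space up to the next whole terabyte."""
    return max(1, math.ceil(used_space_tb))

print(min_clone_storage_tb(2.3))  # 3
print(min_clone_storage_tb(0.4))  # 1
```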

The following applies for optimizer statistics for tables in a
cloned database. When you do a full clone, loads into tables
behave the same as loading into a table with statistics
already in place. When you do a metadata clone, the first load
into a table after the clone clears the statistics for that table
and updates the statistics with the new load.

In this slide, you see screenshots for cloning in autonomous
database. So you can select a clone type of full clone or
metadata clone. And you specify which compartment you
want to clone to, select the source database name, and
provide a display name and a database name for the target
database, and also specify the CPU core count and the
storage, as well as click on whether you want to do
autoscaling for your database or not. And then specify the
admin password. And, finally, you will select a license type of
bring your own license or license included model, and then
click on Create Autonomous Database clone.

Let us now look at the predefined services which are used
for accessing Autonomous Data Warehouse. The three
predefined database services are identifiable as high,
medium, and low. And this gives you a choice of performance
and concurrency for Autonomous Data Warehouse. The first
service is high, which gives the highest resources and the
lowest concurrency, and queries run in parallel when
connected using this service.

The second service is medium. When a user connects to the
Autonomous Data Warehouse using this service, they get
fewer resources compared to high, but higher
concurrency, and queries run in parallel here. And the last
one here is the low service. For connections to the
Autonomous Data Warehouse using this service, they will get
the least number of resources but the highest concurrency,
and queries run serially here.

So on the slide in the right side of the page, you'll see an
example for a database with 16 OCPUs. It can have three
connections using the high service, 20 using medium, and 32
using low. And the maximum idle time for the high and medium
services is 5 minutes. So, basically, if somebody connects to
the data warehouse using the high or medium service and the
session remains idle for 5 minutes, then the session gets
disconnected.
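
These connection limits can be expressed as a small lookup. The high limit is fixed, but the medium and low multipliers below are inferred from the 16-OCPU example in the text (20 and 32 connections) and may differ across service versions, so treat this as a sketch:

```python
# Sketch of the predefined-service connection limits for Autonomous
# Data Warehouse. Medium and low multipliers are inferred from the
# 16-OCPU example in the text, not taken from a published formula.

def max_connections(service, ocpus):
    """Approximate concurrent-connection limit per predefined service."""
    if service == "high":
        return 3                  # highest resources, lowest concurrency
    if service == "medium":
        return int(1.25 * ocpus)  # 16 OCPUs -> 20
    if service == "low":
        return int(2 * ocpus)     # 16 OCPUs -> 32
    raise ValueError("unknown service: " + service)

for svc in ("high", "medium", "low"):
    print(svc, max_connections(svc, 16))
```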

Note that when connecting for replication purposes, we
recommend that customers use the low database service
name. For example, if you want to use Oracle GoldenGate for
data replication, then you need to configure the database
connect parameter for GoldenGate to use the low database
service.

Let's now look at the predefined services for Autonomous
Transaction Processing connectivity. There are five predefined
database services controlling priority and parallelism. There
are different services defined for transactions and reporting.
As you saw in the previous slide, we have the same kind of
services, namely high, medium, and low, with similar
characteristics as the Autonomous Data Warehouse.
In addition to these three services, we also have TPURGENT
and TP service. These two services, TPURGENT and TP are for
transaction processing. And the other three, high, medium,
and low, can be used for reporting or batch processing. Also
note that, when you use high and medium, the operations
run in parallel and are subject to queuing. And there is no
parallelism when you use the low or TP service.

Let's quickly look at the Autonomous Database Dedicated
service offering in Oracle Cloud Infrastructure. The
autonomous dedicated database service provides a private
database cloud running on dedicated Exadata infrastructure
in the public cloud. It has multiple levels of isolation, which
protects you from noisy or hostile neighbors. Customizable
operational policies give you control of provisioning, software
updates, availability, and density.

This slide shows you the physical characteristics and
constraints. As it stands now, a quarter rack X7 Exadata
infrastructure has two servers, 92 OCPUs, with 1.44 terabytes
of RAM. It has three storage servers, which will give you 76.8
terabytes of flash and 107 terabytes of disk space. And there
is one cluster per quarter rack, and you can have a maximum
of four autonomous container databases per cluster. And if
you choose to have high availability SLA for your autonomous
databases, you can have a maximum of 100 databases. Or if
you have extreme availability SLA for your autonomous
database, you can have a maximum of 25 databases.

Here's a quick look at the high-level deployment flow for
Autonomous Transaction Processing, Dedicated. The same
thing holds good for Autonomous Data Warehouse Dedicated
as well. The first step is to create a virtual cloud network.
And then you will provision Autonomous Exadata
Infrastructure. And once that is done, you will create
autonomous container database. And then, finally, you will
create your autonomous database.

Let us now look at security functionality in Autonomous
Transaction Processing, Dedicated. This applies to
Autonomous Data Warehouse Dedicated as well. Databases
are always encrypted in this service, and you get a reduced
attack surface. We automatically protect the customer data
from Oracle operations staff. We use Database Vault's new
operations control feature. Oracle automatically applies
security updates for the entire stack. We also apply quarterly
or off-cycle patches for high-impact security vulnerabilities.
Customers can separately use Database Vault for their own
user data isolation.

In the course of these lessons, you learned about the
difference between autonomous database and the DB system
cloud offerings in Oracle Cloud Infrastructure. You learned the
features of Autonomous Data Warehouse Cloud, serverless
and dedicated, as well as Autonomous Transaction Processing,
serverless and dedicated. And you also learned how to deploy, use, and
manage autonomous database. Here are some additional
resources that you can use to get more information on the
Autonomous Data Warehouse and the Transaction Processing
Services. Thanks for watching this lesson.

3. AUTONOMOUS DATABASE DEMO 1


Hi. My name is Sanjay Narvekar. I'm a Product Manager in
the Oracle Cloud Infrastructure team. I'll be showing you a
demo of Creating an Oracle Autonomous Data Warehouse in
Oracle Cloud Infrastructure. I'm logging into my console, and
I will select this menu, and then click on Autonomous Data
Warehouse, which takes me to the page, which has a few
instances already running.
To create a new Oracle Autonomous Database, I'll click on
Create Autonomous Database. I can change this
compartment if I want to select another one, but I
will keep the same compartment for this demo. And
I'll rename the data warehouse to Test Data Warehouse and
stick to the same database name. Note that this database
name can be a maximum of 14 characters. And I'll keep the
workload type of Data Warehouse.

For the deployment type, I will select Serverless, and I will go
with the defaults of OCPU count of 1 and storage of 1
terabyte. I'll keep auto-scaling checked. So basically the
Autonomous Data Warehouse will automatically scale up the
cores from 1 to a maximum of three cores if the workload
increases. And once the workload reduces, it'll bring it back
to 1. So I'll keep this checked here.

I don't want this [INAUDIBLE] Preview More Servers. Let's
skip that. And I'll provide a password. This is the
password for the admin user. I will select a license type of
BYOL.

I want to quickly show you the advanced options that are
available. Basically, this is needed if you want to tag your
database. I will not do any tagging at this point.

I'll click on Create Autonomous Database. And I don't want to
save the password here. So this process should take a few
minutes to complete, and once the provisioning process
completes, you'll see that this color changes to green and the
state will say Available. So right now it is being provisioned,
and it will soon become available. Let me pause this demo and I will
come back once this database has been provisioned.

Welcome back. The Autonomous Data Warehouse
Provisioning Process has completed and the database is now
available. So once the data warehouse is provisioned, you can
look at some of the details that you provided during the
provisioning process. You can see that auto-scaling is
enabled and the database version it was provisioned with. And
since the database was just provisioned, you won't see any
active backups for this database.

And over here you can see the metrics information, and this
kind of becomes very useful as you start using this database.
You'll see the CPU Utilization, Storage Utilization, the Session
Information, Running Statements, Queued Statements, et
cetera. To find information needed to connect to this test data
warehouse, click on the DB Connection button here. And from
here, you can download the client credentials, or the wallet,
that you can use for establishing connectivity from your
computer to the Autonomous Data Warehouse.

And here are the TNS names that you can use for the
connections. You can go with high, medium, or low. Click on
Close. And clicking on Performance Hub gets you to this page
here, which shows you the activities that are currently
happening.

You can look at ASH Analytics as well as SQL Monitoring.
Currently there is not a lot going on in this database because
I just provisioned it. But this is where you can look at some of
this information. Click Close.

And as far as Action goes, I can Create Clone. I can access
the access control list. And this way I can type the IP address
or the CIDR block of the whitelisted IP or CIDR range for this
Autonomous Data Warehouse.
And clicking on Admin Password lets me change the
password. I can click on Update License Type, and that will
change the license type for my deployment. And I can also
move this resource from the current compartment to a new
compartment, if I want to. So that is the list of things that
you can do in the Actions link here.

And I can also stop the Autonomous Data Warehouse by
clicking on Stop. Clicking on Scale Up/Down gets me to this
pop-up window. This is where I can scale up the CPUs and
also scale up the storage if I want to.

And if I want to turn off auto-scaling, I can always do that by
checking this radio box here and clicking on Update. But I
will not do that for now. So that's Scale Up, Scale Down
process.

And then clicking on Service Console will take me to the Service
Console for this particular Autonomous Data Warehouse.
This is where I can see the storage utilization, CPU utilization,
the running SQL statements, the number of OCPUs allocated,
et cetera. There's not much going on in this database right
now, so you don't see any data to display.

Clicking on Activity brings me to this page where I can
monitor database activity, running statements, the CPU
utilization, queued statements, et cetera. And clicking on
Monitor SQL allows me to monitor the SQL statements and
look at the plan information and stuff.

Clicking on Administration gives me this page where I can
download the client credentials again. And if I want to, I can
set Resource Management Rules for the high, medium, and
low. I can set the query time and also the amount of IO.
And I can change the CPU/IO shares as well, if I want to. And
this is one of the places that I can set the administrator
password. And this is where I can manage the machine
learning users. Currently, there is just one user, which is
the Admin user. But if I want to add additional ones, I can
click on Create and add new users. I won't do that here.

And the last tab I want to show you here is the Development
tab. I'll click on this. This is where I can access Oracle APEX.
As you know, Oracle APEX is now included with Autonomous
Data Warehouse and Autonomous Transaction Processing.

But this is where you would go to launch Autonomous Data
Warehouse APEX functionality. And it'll ask me for my admin
password. I won't get into this because this can be quite a
long demo if I get into APEX, so I'll skip that.

I'll also show you SQL Developer Web. So this is web-based
access for running queries against the Autonomous Database
that I just provisioned. I won't save the password, so I'll
uncheck it.

So this is what it looks like. It looks a lot like the SQL Developer
desktop version, but this is more of a web-based version of
SQL Developer. Again, I won't run any queries here. And this
is where you can access your machine learning notebooks.
And to do this, you'll need the user ID and password. I won't
get into that, but I just want to show you that these are the
links for doing all those activities.

And clicking on Download Oracle Instant Client will let you
download the Oracle Instant Client, which you can use for
configuring your compute instances to access Autonomous
Data Warehouse, or transaction processing.

And finally, there's the RESTful services
information, which you'll need if you want to develop
applications with RESTful access to the Autonomous
Database.

So this is all of the Service Console that I wanted to show
you. And that's pretty much it for this demo. Thanks for
watching this demo on provisioning Autonomous Data
Warehouse in Oracle Cloud Infrastructure.

4. AUTONOMOUS DATABASE DEMO 2

Hi. My name is Sanjay Narvekar. I'm a product manager
in the Oracle Cloud Infrastructure team. In today's video, I
will be showing you a demo of how to recover an Autonomous
Data Warehouse from a backup.

I've already logged into my console. I will click on this menu
icon and then click on Autonomous Data Warehouse. And
that will bring me to this page.

I'll select an existing Autonomous Data Warehouse and click
on Backups. And just to show you how easy it is to recover
from a backup, I'll select the latest backup and click on
Restore. And click on the Restore button.

So once the restore process starts, this database state will
change from Available to Restore in Progress. And this
process should take a few minutes, depending on how large
your database is. I will pause this demo for a few minutes and
get back once this backup restore has completed successfully.

Welcome back. You can see that the autonomous database
now has been restored successfully from a backup. This
concludes today's demo of restoring an Autonomous
Data Warehouse from a backup. Thanks for watching.

5. AUTONOMOUS DATABASE DEMO 3

Hi. My name is Sanjay Narvekar. I'm a product manager in
the Oracle Cloud Infrastructure team. Today, I'll be doing a
demonstration of connecting to an Autonomous Data
Warehouse in Oracle Cloud Infrastructure using the SQL
Developer application running on your computer. I've already
logged into my tenancy for Oracle Cloud Infrastructure.

And on the console, I'm going to click on Autonomous Data
Warehouse here. And I will then click on a database that I
created earlier today. And then click on DB Connection. And
I'm going to download the client credentials file. This is a file I
will need for making the connection from SQL Developer to
the Autonomous Data Warehouse.

I'll provide a password for the wallet and download the file.
I'm now in Oracle SQL Developer. To create a new connection
in SQL Developer, I'll click on New Connection here. And I'll
change the connection type to Cloud Wallet. I will browse to
the wallet file that I just downloaded.

And I'll have to select the corresponding service for that
database, which is over here. I'm going to use the high
service. I'll give the database connection a name. This is
the first time I'm connecting to this database, so the admin user
is what I'll be using for making this connection.

I'll then provide the password, and I'll click on Test. And the
database connection succeeded. So I'm going to save this and
now connect to the database. Since I did not save the
password, I'll be prompted for this password.

And it should take a few seconds for this connection. And
there we go. The SQL Developer has made a connection to the
Autonomous Data Warehouse. I can just type a quick query
to show you that the connection is active. And you can see
that the query returned today's date.

That concludes the demo. Thanks for watching.
